content
stringlengths
86
994k
meta
stringlengths
288
619
OpenStax College Physics, Chapter 28, Problem 8 (Problems & Exercises) If relativistic effects are to be less than 3%, then γ must be less than 1.03. At what relative velocity is $\gamma = 1.03$? Question by is licensed under CC BY 4.0 Final Answer $7.183\times 10^{7}\textrm{ m/s}$ Solution video OpenStax College Physics, Chapter 28, Problem 8 (Problems & Exercises) vote with a rating of votes with an average rating of. Video Transcript This is College Physics Answers with Shaun Dychko. What is the relative speed when the Lorentz factor—γ—is 1.03? So γ is 1 over the square root of 1 minus the relative speed squared over c squared and we can multiply both sides by 1 minus v squared over c squared and divide both sides by γ and we get square root 1 minus v squared over c squared is 1 over γ and then we can square both sides and get 1 minus v squared over c squared is 1 over γ squared and then add v squared over c squared to both sides and subtract 1 over γ squared from both sides and we get v squared over c squared is 1 minus 1 over γ squared then multiply both sides by c squared and you get v squared is c squared times 1 minus 1 over γ squared then square root both sides and finally we have a formula for the speed v in terms of speed of light and γ so we have speed of light times square root 1 minus 1 over γ squared. So that's 2.998 times 10 to the 8 meters per second times square root 1 minus 1 over 1.03 squared and that's 7.183 times 10 to the 7 meters per second. So at speeds greater than this, you will have a Lorentz factor that is creating a relativistic effect that exceeds 3 percent.
{"url":"https://collegephysicsanswers.com/openstax-solutions/if-relativistic-effects-are-be-less-3-then-g-must-be-less-103-what-relative","timestamp":"2024-11-04T02:20:47Z","content_type":"text/html","content_length":"163778","record_id":"<urn:uuid:af6fb6dd-cb16-4066-bc56-ca9c87ae6c19>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00825.warc.gz"}
An Introduction to Computer Simulation Methods Third Edition (revised) An Introduction to Computer Simulation Methods Third Edition (revised) Documents This material has 23 associated documents. Select a document title to view a document's information. Main Document written by Harvey Gould, Jan Tobochnik, and Wolfgang Christian The complete revised Third Edition for An Introduction to Computer Simulation Methods Third Edition by Harvey Gould, Jan Tobochnik, and Wolfgang Christian. Last Modified September 15, 2016 Supplemental Documents (22) Frontmatter for an Introduction to Computer Simulation Methods. Last Modified September 15, 2016 The importance of computers in physics and the nature of computer simulation is discussed. The nature of object-oriented programming and various computer languages also is considered. Last Modified February 11, 2011 We introduce some of the core syntax of Java in the context of simulating the motion of falling particles near the Earth's surface. A simple algorithm for solving first-order differential equations numerically also is discussed. Last Modified February 11, 2011 We discuss several numerical methods needed to simulate the motion of particles using Newton's laws and introduce interfaces, an important Java construct that makes it possible for unrelated objects to declare that they perform the same methods. Last Modified February 11, 2011 We explore the behavior of oscillatory systems, including the simple harmonic oscillator, a simple pendulum, electrical circuits, and introduce the concept of phase space. Last Modified February 11, 2011 We apply Newton's laws of motion to planetary motion and other systems of a few particles and explore some of the counter-intuitive consequences of Newton's laws. Last Modified February 11, 2011 We study simple nonlinear deterministic models that exhibit chaotic behavior. We will find that the use of the computer to do numerical experiments will help us gain insight into the nature of chaos. Last Modified February 11, 2011 Random processes are introduced in the context of several simple physical systems, including random walks on a lattice, polymers, and diffusion controlled chemical reactions. The generation of random number sequences also is discussed. Last Modified February 11, 2011 We simulate the dynamical behavior of many particle systems such as dense gases, liquids, and solids and observe their qualitative features. Some of the basic ideas of equilibrium statistical mechanics and kinetic theory are introduced. Last Modified February 11, 2011 We discuss the physics of wave phenomena and the motivation and use of Fourier transforms. Last Modified February 11, 2011 We compute the electric fields due to static and moving charges, describe methods for computing the electric potential in boundary value problems, and solve Maxwell's equations numerically. Last Modified February 11, 2011 Simple classical and Monte Carlo methods including importance sampling are illustrated in the context of the numerical evaluation of definite integrals. Last Modified February 11, 2011 We introduce several geometrical concepts associated with percolation, including the percolation threshold, clusters, and cluster finding algorithms. We also introduce the ideas of critical phenomena in the context of the percolation transition, including critical exponents, scaling relations, and the renormalization group. Last Modified February 11, 2011 We introduce the concept of fractal dimension and discuss several processes that generate fractal objects. Last Modified February 11, 2011 We introduce cellular automata, neural networks, genetic algorithms, and growing networks to explore the concepts of self-organization and complexity. Applications to sandpiles, fluids, earthquakes, and other areas are discussed. Last Modified February 11, 2011 We discuss how to simulate thermal systems using a variety of Monte Carlo methods including the traditional Metropolis algorithm. Applications to the Ising model and various particle systems are discussed and more efficient Monte Carlo algorithms are introduced. Last Modified February 11, 2011 We discuss numerical solutions of the time-independent and time-dependent Schroedinger equation and describe several Monte Carlo methods for estimating the ground state of quantum systems. Last Modified February 11, 2011 We study affine transformations in order to visualize objects in three dimensions. We then solve Euler's equation of motion for rigid body dynamics using the quaternion representation of rotations. Last Modified February 11, 2011 We compute how objects appear at relativistic speeds and in the vicinity of a large spherically symmetric mass. Last Modified February 11, 2011 We emphasize that the methods we have discussed can be applied to a wide variety of natural phenomena and contexts. Last Modified February 11, 2011 Updates and errata to An Introduction to Computer Simulation Methods Third Edition. Last Modified November 24, 2013 Computing in Science and Engineering book review of An Introduction to Computer Simulation Methods. Last Modified February 11, 2011
{"url":"https://www.compadre.org/Repository/document/ServeFile.cfm?ID=7375&DocID=4514&DocFID=9492&Attachment=1","timestamp":"2024-11-12T22:30:05Z","content_type":"application/xhtml+xml","content_length":"45395","record_id":"<urn:uuid:056038b8-ce30-437d-988c-a17c57cedda5>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00011.warc.gz"}
Jousting Finals - Page 16 [NEW MILLETIANS] Please note that all new forum users have to be approved before posting. This process can take up to 24 hours, and we appreciate your patience. If this is your first visit, be sure to check out the Nexon Forums Code of Conduct . You have to register before you can post, so you can log in or create a forum name above to proceed. Thank you for your visit! • 2021 Feb 14 Alexina had 10-1 and its' final can be found , while Nao had 10-2 and its' final can be found • 2021 Feb 21 Alexina had 9-1 and its' final can be found , while Nao had 9-2 and its' final can be found • 2021 Feb 28 Alexina had 8-1 and its' final can be found , while Nao had 11-2 and its' final can be found • 2021 Mar 7 Alexina had 6, so no finals there, this time, while Nao had 12-2 and its' final can be found • 2021 Mar 14 Alexina had 10-1 and its' final can be found , while Nao had 10-2 and its' final can be found • 2021 Mar 21 Alexina had 9-1 and its' final can be found , while Nao had 9-2 and its' final can be found • 2021 Mar 28 Alexina had 10-1 and its' final can be found , while Nao had 11-2 and its' final can be found • 2021 Apr 4 Alexina had 4, so no finals there, this time, while Nao had 9-2 and its' final can be found • 2021 Apr 11 Alexina had 6, so no finals there, this time, while Nao had 8-2 and its' final can be found • 2021 Apr 18 Alexina had 8-1 and its' final can be found , while Nao had 11-2 and its' final can be found • 2021 Apr 25 Alexina had 9-1 and its' final can be found , while Nao had 11-2 and its' final can be found • Here's a dumb question for you; why not use a tournament organizer software? Here's a dumb question for you; why not use a tournament organizer software? The weekly Sunday tournament already has a built-in elimination format. If there is an odd number of players in a round then the odd-one-out faces one of the NPC jousters. I've seen a few Guild organized jousting tournaments that used such software however, to organise what are, at the heart of it, practice matches. If you are thinking of taking up a lance there are now sufficiently large enough communities on both servers that finals do happen much more often than not. If you are on Alexina, you face more of an up-hill fight as it's community is very experienced. The community on Nao has gotten significantly better since the merge to the degree where if they squared-off against Alexina, Las Vegas would still favour Alexina but couldn't guarantee it. • 2021 May 2 Alexina had 8-1 and its' final can be found , while Nao had 10-2 and its' final can be found • 2021 May 9 Alexina had 9-1 and its' final can be found , while Nao had 11-2 and its' final can be found • 2021 May 16 Alexina had 9-1 and its' final can be found , while Nao had 8-2 and its' final can be found Here's a dumb question for you; why not use a tournament organizer software? The weekly Sunday tournament already has a built-in elimination format. If there is an odd number of players in a round then the odd-one-out faces one of the NPC jousters. I've seen a few Guild organized jousting tournaments that used such software however, to organise what are, at the heart of it, practice matches. If you are thinking of taking up a lance there are now sufficiently large enough communities on both servers that finals do happen much more often than not. If you are on Alexina, you face more of an up-hill fight as it's community is very experienced. The community on Nao has gotten significantly better since the merge to the degree where if they squared-off against Alexina, Las Vegas would still favour Alexina but couldn't guarantee it. That's interesting to note. Almost makes me curious to try it from the Alexina side. Here's a dumb question for you; why not use a tournament organizer software? The weekly Sunday tournament already has a built-in elimination format. If there is an odd number of players in a round then the odd-one-out faces one of the NPC jousters. I've seen a few Guild organized jousting tournaments that used such software however, to organise what are, at the heart of it, practice matches. If you are thinking of taking up a lance there are now sufficiently large enough communities on both servers that finals do happen much more often than not. If you are on Alexina, you face more of an up-hill fight as it's community is very experienced. The community on Nao has gotten significantly better since the merge to the degree where if they squared-off against Alexina, Las Vegas would still favour Alexina but couldn't guarantee it. That's interesting to note. Almost makes me curious to try it from the Alexina side. You should. The communities on both servers are very open and welcoming. Were there to be a final server merge, I would expect them both to integrate without issue. You can also make use of my YouTube channel to come up to speed faster. This has been a huge factor in closing the gap between Alexina and Nao. • 2021 May 23 Alexina had 8-1 and its' final can be found , while Nao had 11-2 and its' final can be found Here's a dumb question for you; why not use a tournament organizer software? The weekly Sunday tournament already has a built-in elimination format. If there is an odd number of players in a round then the odd-one-out faces one of the NPC jousters. I've seen a few Guild organized jousting tournaments that used such software however, to organise what are, at the heart of it, practice matches. If you are thinking of taking up a lance there are now sufficiently large enough communities on both servers that finals do happen much more often than not. If you are on Alexina, you face more of an up-hill fight as it's community is very experienced. The community on Nao has gotten significantly better since the merge to the degree where if they squared-off against Alexina, Las Vegas would still favour Alexina but couldn't guarantee it. That's interesting to note. Almost makes me curious to try it from the Alexina side. You should. The communities on both servers are very open and welcoming. Were there to be a final server merge, I would expect them both to integrate without issue. You can also make use of my YouTube channel to come up to speed faster. This has been a huge factor in closing the gap between Alexina and Nao. I may if there's time.....I'm technically on break between semesters right now but its been a royal mess working on transfer stuffs. Haven't had time Alexina or Nao past week and possibly this one, so... may not be happening for a bit. I'll need to find some time soon though. Nao's is nice, would be interesting to see it from Alexina as well.
{"url":"https://forums.mabinogi.nexon.net/discussion/20732/jousting-finals/p16","timestamp":"2024-11-12T13:43:42Z","content_type":"text/html","content_length":"82424","record_id":"<urn:uuid:9eb3923e-de81-4819-b93f-fee865db1df1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00281.warc.gz"}
How do you simplify (64x^3+1)/( 4x^2-100) * (4x+20)/(64x^2-16x+4)? | HIX Tutor How do you simplify #(64x^3+1)/( 4x^2-100) * (4x+20)/(64x^2-16x+4)#? Answer 1 $\frac{64 {x}^{3} + 1}{4 {x}^{2} - 100} \cdot \frac{4 x + 20}{64 {x}^{2} - 16 x + 4}$ Use sum of cubes identity: #a^3+b^3 = (a+b)(a^2-ab+b^2)# Use difference of squares identity: #a^2-b^2 = (a-b)(a+b)# with exclusion #x != -5# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To simplify the expression (64x^3+1)/(4x^2-100) * (4x+20)/(64x^2-16x+4), we can start by factoring the numerator and denominator of each fraction separately. The numerator of the first fraction, 64x^3+1, is a sum of cubes and can be factored as (4x+1)(16x^2-4x+1). The denominator, 4x^2-100, is a difference of squares and can be factored as (2x+10)(2x-10). The numerator of the second fraction, 4x+20, can be factored out the common factor of 4, resulting in 4(x+5). The denominator, 64x^2-16x+4, cannot be factored further. Now, we can cancel out any common factors between the numerators and denominators. In this case, we have a common factor of (4x+1) in the first fraction and (4x) in the second fraction. After canceling out the common factors, the expression simplifies to: [(4x+1)(16x^2-4x+1)] / [(2x+10)(2x-10)] * [4(x+5)] / (64x^2-16x+4) Simplifying further is not possible without additional information or specific instructions. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-simplify-64x-3-1-4x-2-100-4x-20-64x-2-16x-4-8f9af9bff3","timestamp":"2024-11-09T22:31:05Z","content_type":"text/html","content_length":"578304","record_id":"<urn:uuid:03ddc87d-a5cd-475e-8a9f-374af24873e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00366.warc.gz"}
Stress Results Written in Stress Results Written in HyperMesh .res Format Static, Eigenvalue, Transient and Multibody Loadcases ALL, DIRECT and TENSOR Options von Mises Stress Maximum Principal Stress von Mises Stress (Z1) von Mises Stress (Z2) von Mises Stress (mid) P1 (major) Stress (Z1) P1 (major) Stress (Z2) P1 (major) Stress (mid) P1 (major) Stress (max) P3 (minor) Stress (Z1) P3 (minor) Stress (Z2) P3 (minor) Stress (mid) P3 (minor) Stress (min) Normal X Stress (Z1) Normal X Stress (Z2) Normal X Stress (mid) Normal Y Stress (Z1) Normal Y Stress (Z2) Normal Y Stress (mid) Shear XY Stress (Z1) Shear XY Stress (Z2) Shear XY Stress (mid) Principal Stress Angle (Z1) Principal Stress Angle (Z2) Principal Stress Angle (mid) Signed von Mises Stress (solid) P1 (major) Stress (solid) P2 ( mid ) Stress (solid) P3 (minor) Stress (solid) Normal X Stress (solid) Normal Y Stress (solid) Normal Z Stress (solid) Shear XY Stress (solid) Shear YZ Stress (solid) Shear XZ Stress (solid) VON Option von Mises Stress PRINC Option von Mises Stress Maximum Principal Stress Frequency Response Loadcases Normal X Stress (Z1) (comp) Normal X Stress (Z2) (comp) Normal Y Stress (Z1) (comp) Normal Y Stress (Z2) (comp) Shear XY Stress (Z1) (comp) Shear XY Stress (Z2) (comp) Normal X Stress (solid) (comp) Normal Y Stress (solid) (comp) Normal Z Stress (solid) (comp) Shear XY Stress (solid) (comp) Shear YZ Stress (solid) (comp) Shear XZ Stress (solid) (comp) 1. "von Mises Stress" and "Maximum Principal Stress" apply to 1D, 2D, and 3D elements simultaneously. Other results apply to 2D or 3D elements exclusively. There are no specific results for 1D 2. For frequency response loadcases, (comp) may be replaced by (real) (imag) (magn) and/or (phas), depending on the complex format request. 3. "Maximum Principal Stress" is the maximum absolute principal stress: max(abs(P1(Z1)),abs(P1(Z2)),abs(P3(Z1)),abs(P3(Z2))) for shells. max(abs(P1),abs(P2),abs(P3)) for solids. 4. "P1 (major) Stress (max)" is the maximum major principal stress: max(P1(Z1),P1(Z2)). 5. "P3 (minor) Stress (min)" is the minimum minor principal stress: min(P3(Z1),P3(Z2)). 6. "Signed von Mises Stress" is the von Mises stress with traction/compression sign: sign(P1+P2+P3) * VonMises.
{"url":"https://help.altair.com/hwsolvers/os/topics/solvers/os/stress_results_written_in_hm_bulk_res_r.htm","timestamp":"2024-11-06T08:47:03Z","content_type":"application/xhtml+xml","content_length":"37926","record_id":"<urn:uuid:cbe1727f-975c-4ab3-b421-01e870df04ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00705.warc.gz"}
Primordial Regular Black Holes: Thermodynamics and Dark Matter Observatoire de la Côte d’Azur, Laboratoire Lagrange, 06304 Nice CEDEX, France Submission received: 2 March 2018 / Revised: 2 May 2018 / Accepted: 2 May 2018 / Published: 3 May 2018 The possibility that dark matter particles could be constituted by extreme regular primordial black holes is discussed. Extreme black holes have zero surface temperature, and are not subjected to the Hawking evaporation process. Assuming that the common horizon radius of these black holes is fixed by the minimum distance that is derived from the Riemann invariant computed from loop quantum gravity, the masses of these non-singular stable black holes are of the order of the Planck mass. However, if they are formed just after inflation, during reheating, their initial masses are about six orders of magnitude higher. After a short period of growth by the accretion of relativistic matter, they evaporate until reaching the extreme solution. Only a fraction of 3.8 × 10^−22 of relativistic matter is required to be converted into primordial black holes (PBHs) in order to explain the present abundance of dark matter particles. 1. Introduction The detection of gravitational waves emitted during the merger of two black holes [ ] represents a robust demonstration of the reality of these objects. Previously, the study of the motion of several individual stars around Sgr A*, a radio source located in the galactic center, led to the conclusion that the orbits of those stars are controlled by the gravitation of a “black” object having a mass of about 4 × 10 ]. These observations strongly suggest the presence of a supermassive black hole in the center of the Milky Way, since no other adequate alternative for the nature of such a massive object was proposed up until now. Thus, the existence of “stellar” black holes with masses of few tens of the solar mass, or supermassive black holes with masses of six up to nine orders of magnitude the mass of the Sun, seems to be well established. From a simple mathematical point of view, a black hole represents a region of the space–time causally disconnected from observers located at arbitrarily large distances. The surface separating both regions of the space–time is the event horizon that, in fact, is a one-way membrane, since observers inside the horizon can receive signals from outside, but the reverse is not true. A massive object whose mass has been reduced to a “point singularity” of the space–time after it underwent gravitational collapse, represents an unpleasant physical situation because singularities denote points of the space–time where the classical theory breaks down. Einstein equations admit both past and future singularities hidden by an event horizon [ ], but this theory in incomplete, because it ignores the presence of quantum effects. Such effects are expected to become significant in the high curvature regions existing near the singularity, which modify the space–time structure and make unreliable general relativity predictions. Loop quantum gravity (LQG) has emerged in the past decades as a possible candidate for a quantum gravity theory, and investigations of the interior inside the event horizon based on LQG led to solutions free of the classical singularity. This is a consequence of the space–time continuum of general relativity being replaced by a discrete quantum geometry, which remains regular at the classical singularity. A complete treatment of the space–time of a black hole in LQG is still lacking, but different studies suggest that a singularity is not formed at the end of the gravitational collapse [ ]. However, there are many uncertainties about the solution after the “crossing” of the classical singularity, or even about what replaces the black hole singularity. Some investigations [ ] indicate that the structure of the solution allows the existence of a Cauchy horizon near r = Thus, after the gravitational collapse, LQG solutions suggest that the interior inside of a black hole can be described by a singularity-free Reissner–Nordström space–time, including a Cauchy and an event horizon. The LQG picture described above has similar analogs in general relativity, where regular metrics describing black holes have been proposed by Bardeen [ ] and Hayward [ ], among others. These solutions are non-singular, and both have an inner Cauchy horizon as well as an outer event horizon. Inside the Cauchy horizon, both geometries behave like a de Sitter space–time. It is interesting to mention that already in the sixties, Andrei Sakharov [ ] had the intuition that at very high densities, the matter approaches a vacuum state with a finite density. Such a non-divergence implies that the local geometry should be described by a de Sitter In this paper, some aspects of these regular solutions will be reviewed, as well as, in particular, their associated thermodynamic properties. Both Bardeen and Hayward space–times are described by metrics characterized by free parameters that define the mass density distribution. Here, these scale parameters are fixed under the assumption that the radius of the Cauchy horizon is equivalent to the minimum distance derived from LQG. Under this condition, extremal black holes have masses of the order of the Planck mass. Then, based on thermodynamic principles, it will be shown that primordial regular black holes of about 10 times the Planck mass can be formed at the end of the inflationary epoch, when the oscillations of the inflaton field are intense, and reheating occurs. These newly formed black holes have a short period of growth, and then they evaporate until reaching masses close to the extremal case. Such black hole remnants are possible candidates for dark matter particles. It will be shown that only a small fraction of relativistic matter (3.8 × 10 ) needs to collapse into black holes in order to explain the present dark matter abundance. The paper is organized as follows. In Section 2 , the main properties of the Bardeen and the Hayward black holes are reviewed, while the thermodynamic properties of these objects are discussed in Section 3 . Then, in Section 4 , the formation of regular black holes in the early universe is considered, and finally, in Section 5 , the main results are discussed. 2. Regular Black Holes Since the original investigation of regular black holes by Bardeen [ ], different studies have been addressed to the analysis of non-singular space–times [ ]. In the case of static space–times, the considered general metric (in geometric units) is: $d s 2 = − f ( r ) d t 2 + f − 1 ( r ) d r 2 + r 2 d Ω 2$ $f ( r )$ is the lapse function. The zeros of the lapse function define the position of the event horizon and the inner Cauchy horizon, when it exists. In the case of the Bardeen geometry, the lapse function is defined by: $f ( r ) = 1 − 2 M r 2 ( r 2 + g 2 ) 3 / 2$ In the above equation, is the mass of the black hole, and is a suitable scale. Ayon-Beato and Garcia [ ] interpreted the scale parameter as the monopole charge of a magnetic field in the context of a non-linear electrodynamic theory. Here, such a parameter is considered simply as a scale defining the mass distribution derived from Einstein equations, that is: $ρ ( r ) = 3 M g 2 4 π ( r 2 + g 2 ) 5 / 2$ Notice that when $r ≫ g$ , the lapse function reduces to the Schwarzschild geometry, while when $r → 0$ , the metric becomes essentially de Sitter, namely, regular at r = 0. Define now two dimensionless variables $x = r 2 M and γ = g 2 M$ It worth mentioning that the quantities above are dimensionless in geometric units, but their real physical dimensions are always recovered when necessary. Studying the zeros of the lapse function, the existence or not of horizons depends on the following condition: if $γ < 2 / 27$, two horizons exist, while if the inequality is not satisfied, there are no horizons. In the case of equality, the Cauchy and the event horizons coincide at the dimensionless coordinate $x H = 8 / 27$. This corresponds to the case of an extreme Bardeen black hole. The existence of horizons depends on the ratio between the scale parameter and the mass of the black hole. Is it possible to estimate the scale parameter in an independent way? A positive answer can be given within the following context: we will consider the extreme case for reasons that will be explained later, and we will consider that the common horizon radius $r H$ is fixed by the minimum distance from the origin derived from the Riemann curvature invariant computed in terms of the volume operator in LQG [ ]. Hence: $l P$ is the Planck distance scale. From the relation above and the previous results, one obtains trivially that $g = ( π / 4 ) l P$ , and that the mass of an extreme Bardeen black hole is: $M P$ is the Planck mass. Thus, our hypothesis concerning the scale parameter leads to a mass of the order of the Planck mass for an extreme Bardeen black hole. Using these results and Equation (2), the central density can be estimated (here the physical constants were recovered): $ρ o = 3 27 4 π 2 ℏ c l P 4$ This result should be compared with the expected density derived from loop quantum cosmology (LQC) at the bounce (see, for instance, Craig [ $ρ 0 = 3 32 π 2 γ I 3 ℏ c l P 4$ In the equation above, $γ I ≃ 0.2375$ is the so-called Barbero–Immirzi parameter. The numerical coefficient in Equation (7) is 0.395, while in Equation (8), it is about 0.409. Thus, under our assumptions, the expected central energy density for the extreme Bardeen black hole is comparable to the maximum density of the universe, which was attained at the instant of the bounce within the LQC scenario. Similar computations can be performed in the case of the Hayward [ ] metric that is characterized by the lapse function: $f ( r ) = 1 − 2 M r 2 ( r 3 + 2 M L 2 )$ is a scale parameter. In this case, the energy density distribution resulting from Einstein equations is: $ρ ( r ) = 3 M 2 L 2 2 π ( r 3 + 2 M L 2 ) 2$ Introducing as before the dimensionless quantities $x = r / 2 M$ and $β = L / 2 M$, the analysis of the roots of the lapse function indicate that there a critical value for the scale parameter $β * = 2 / 27$, corresponding to the critical coordinate $x * = 2 / 3$. If b is smaller than the critical value, two horizons exist while in the opposite situation, there is no black hole solution. The critical value defines the extreme case when both horizons coincide. The scale parameter is fixed by assuming as before that the radial coordinate of the critical solution is equal to the minimum distance derived from LQG. In this case, one obtains: From this result, the mass of the extreme Hayward black hole is: which is a value slightly greater than the Planck mass. The central energy density in this case is: $ρ 0 = 9 4 π 2 ℏ c l P 4$ which is smaller than that derived for a Bardeen black hole approximately by factor of two. 3. Thermodynamics of Regular Black Holes An important breakthrough in the theory of black holes was the recognition that the laws of the mechanics governing the structure of these objects are analogous to the laws of thermodynamics, when the gravity at the horizon and the surface of the horizon are associated respectively to temperature and entropy [ ]. Such an analogy was reinforced by the discovery by Hawking [ ] that black holes can emit radiation as a grey body at the temperature defined by the horizon gravity. In fact, the emission spectrum, including particles other than photons, has a Planckian form only if the black hole is uncharged and non-rotating [ ]. As a consequence of such a radiation, small black holes “evaporate”, and primordial black holes with masses less than 3 × 10 g have already disappeared by now. It is worth mentioning that the Hawking radiation reinforces the connection between mechanic and thermodynamic laws, suggesting that the horizon surface should be interpreted as the physical entropy and the surface gravity as the physical temperature of the black hole. The evaporation process raises some questions as, for instance: do black holes evaporate completely without leaving a remnant? Is the singularity suppressed at the end of the evaporation process [ ]? String theory suggests a possible modification of the Heisenberg uncertainty principle, such as: $Δ x → Δ p → ≈ ℏ 2 ( 1 + α 2 l P 2 Δ p → 2 ℏ 2 )$ is a constant of the order of the unity representing the string tension. Using the above relation, it is possible from first principles to estimate the associated black hole temperature at the horizon (see, for instance, Adler et al. [ $k T = M c 2 π α 2 [ 1 − 1 − α 2 M P 2 4 M 2 ]$ For masses greater than the Planck mass, the usual result is recovered. However, Equation (15) indicates that during the evaporation process, the black hole mass reaches a minimum value $M min = α M P / 2$ ; otherwise, the temperature becomes imaginary. In this case, the remnant of the evaporation process has a mass of the order of the Planck mass. However, according to Equation (15), such a remnant has a finite temperature, which is in fact a maximum, given by: This is an unpleasant physical situation, since the surface temperature of the remnant is not zero, and no radiation is allowed, since the mass cannot decrease. However, a thermally stable situation exists for extreme black holes, since the horizon temperature is zero. This is the case for an extreme Reissner–Nordström black hole, as well as for extreme Bardeen and Hayward black holes, as we shall see below. The horizon temperature is given by the well-known relation: Using the lapse function corresponding to the Bardeen metric and the dimensionless variables defined previously, one obtains for the temperature: $T H = 1 8 π M ( 1 − 2 γ 2 x H 2 ) x H − 1 / 3$ Recalling that for the extreme case $γ * = 2 / 27$ $x * = 8 / 27$ , it is trivial to verify from Equation (18) that the horizon temperature is zero. For non-extreme Bardeen black holes, the variation of the temperature as a function of the horizon radius is shown Figure 1 An inspection of Figure 1 reveals that for $( r H / l P ) ≫ 1$ , the horizon temperature of the Bardeen black hole approaches the Schwarzschild behavior, as expected, decaying proportionally to the inverse of the horizon radius. For smaller masses, deviations between both temperatures become important when the horizon radius is of the order of $80 l P$ . Contrary to the Schwarzschild case, the Bardeen black hole has a maximum temperature at the horizon radius $r H ≃ 2.39 l P$ . For still smaller masses, the temperature drops quite fast, reaching the zero value (extreme case) at the horizon radius $r H = ( π / 2 ) l P$ Similar calculations can be performed for the case of the Hayward metric, which permits obtaining the horizon temperature in terms of the dimensionless quantities defined previously, that is: $T H = 1 8 π M ( 1 − 3 β 2 x H 2 ) x H − 1$ Since for an extreme Hayward black hole $x * = 2 / 3$ $β * = 2 / 27$ , it is easy to verify that the temperature is zero, as expected. The temperature for Hayward black holes of different masses was computed numerically, and it is shown in Figure 2 Notice that already for black holes with a horizon radius higher than $7 l P$ the temperatures for both black holes are practically indistinguishable. For lower masses, the Schwarzschild temperature diverges, while for the Hayward black hole, the temperature reaches a maximum near $r H ≃ 2.17 l P$ and becomes zero again at $r H = ( π / 2 ) l P$, which is the extreme case. This later value is equal to the precedent case, since, by assumption, the horizon radius of both black holes was assumed to be equal to the minimum distance derived from LQG. It is interesting to evaluate the specific heat of these regular black hole solutions, which give some additional insight on their thermal properties. Define the quantity: which represents the dimensionless specific heat. In order to evaluate the derivative above, it is necessary to specify a prescription for the energy (including the contribution of the gravitational field), which is an ill defined quantity in the general relativity theory. A compilation of energy-momentum complexes for different prescriptions was performed by Virbhadra [ ], and here, the Einstein formulation was adopted. In this case, for the metric defined by Equation (1), the energy enclosed by a spherical surface of area A and radius $r = ( A / 4 π ) 1 / 2$ $E = c 4 r 2 G [ 1 − f ( r ) ]$ If the considered surface is the horizon, $f ( r H ) = 0$ and the energy is proportional to the horizon radius. Consequently, in the case of the Schwarzschild metric, the energy is simply given by the mass of the black hole, that is, $E = M c 2$ . Since for a Schwarzschild black hole, the horizon temperature is inversely proportional to the mass, from Equation (20), one obtains a negative specific heat, which is a well-known result. This means that if the black hole absorbs energy, its temperature decreases. However, the situation is not exactly the same for the regular black holes here discussed. Firstly, since the energy enclosing the horizon depends directly on the horizon radius, its trivial to show that: where the variable was defined by Equation (4), and its value is derived from the zeros of the lapse function. For an extremal Hayward black hole, $x = 2 / 3$ , while for a Bardeen black hole, $x = 8 / 27$ . This means that these extremal black holes have energies smaller than a Schwarzschild black hole of the same mass. For larger masses $x → 1$ , all of the black holes of a given mass have the same energy inside the horizon. Secondly, since the energy and the temperature are functions of the horizon radius, the specific heat can be computed $c V = 1 k ∂ E ∂ r H ∂ r H ∂ T$ Computing the derivatives using the expressions for the temperature derived above, for the Hayward regular black hole, one obtains: $c V = 2 π r H 2 k l P 2 ( 9 L 2 r H 2 − 1 ) − 1$ This result indicates that the specific heat for Hayward black holes is positive if the horizon radius is in the range $3 L$ (extremal black hole) and 3L. The upper limit corresponds to the temperature maximum and the specific heat diverges here. Beyond this critical value, the specific heat becomes negative. This behavior is consistent with the trend observed in the horizon temperature curve (see Figure 2 ). The extremal case has T = 0, and the temperature increases as the mass increases, since the specific heat is positive. The maximum temperature occurs for $r H = 3 L = 3 π / 2 l P ≈ 2.17 l P$ , as mentioned previously. Once the specific heat becomes negative, the temperature decreases as the mass of the black hole increases, following the behavior observed for Schwarzschild black holes. A similar behavior occurs for the Bardeen case. It is interesting to compute also the emission rate (luminosity) due to the Hawking process for these regular black holes. Assuming that the horizon radiates like a black body, the luminosity of the Bardeen black hole is given by: $L B = K B γ 2 x 2 / 3 [ 1 − 2 γ 2 x 2 ] 4$ where the quantities have the same meaning as before, and are derived by computing the zeros of the lapse function. The constant is defined as: $K B = σ s 8 π 4 l P 2 ( ℏ c k ) 4$ $σ s$ is the Stephan radiation constant and is the Boltzmann constant. Similar computations can be performed for the Hayward metric, and one obtains: $L H = K H β 2 x − 2 [ 1 − 3 β 2 x ] 4$ The constant in the equation above satisfies $K H = 3 K B / 4$ , and the dimensionless quantities are again as before. Figure 3 shows a plot of the luminosities normalized either to as a function of the horizon radius. A simple inspection of Figure 3 indicates that the luminosity for both regular black holes does not diverge as in the Schwarzschild case, since they are zero for the extreme case, when $r * = ( π / 2 ) l P$ . In both cases, the luminosity reaches a maximum respectively at $r H ≈ 2.8 l P$ for the Hayward black hole, and at $r H ≈ 3.6 l P$ for the Bardeen black hole. 4. Primordial Regular Black Holes In the previous sections, we have seen that the Bardeen and the Hayward regular black holes have a geometric structure similar to that derived from investigations of the gravitational collapse based on LQG; in other words, a space–time including two horizons without any singularity. The extreme case has zero surface temperature, and it is thermally stable if the black hole is isolated, which is in agreement with the detailed investigations performed in the case of the Reissner–Nordström metric [ ]. Under the assumption that the horizon radius of these extreme regular black holes are equal to the minimal distance derived from LQG, it is possible to conclude that their masses are comparable to the Planck mass. These objects could be possible candidates to be identified with dark matter particles if they were produced in the early universe. This possibility was already suggested by MacGibbon [ ] in the late eighties. He postulated the existence of black holes with Planck masses, which would be relics of the Hawking process. The formation of black holes in the early universe was already considered either by Zeldovich and Novikov [ ], or by Hawking [ ]. These black holes are expected to be formed by the gravitational collapse of primordial density fluctuations in the radiation-dominated phase of the early universe. In order to collapse against matter pressure, the collapsing region must be larger than the Jeans length at maximum expansion. Moreover, the condition that the gravitational radius should be smaller than the particle horizon fixes the maximum mass of the black hole that can be formed in a given instant of time. Two aspects play a central role in the formation of primordial black holes (PBHs): first, for each horizon-sized region, there exists a critical threshold density contrast , above which the collapse occurs. Comparing the Jeans and the horizon lengths at the time when the collapsing region breaks away from the Hubble expansion, one finds that the critical density contrast must be of the order of the unity. The second key assumption concerns the final mass of the black hole, which is commonly supposed to be approximately close to the horizon mass at the epoch of formation. Investigations using a variety of initial density perturbation profiles, based on self-similarity and scaling led to a relation between the PBH mass and the horizon-scale of the form [ $M = K M H ( δ − δ c ) η$ is the horizon mass scale, and are constants. Since the PBH mass goes to zero as the density contrast is close to , the existence of critical phenomena suggests the possibility that masses at formation could be much smaller than the horizon scale. Recent studies indicate that PBHs with a broad mass spectrum can be formed in the high peaks of the co-moving curvature power spectrum resulting from single field inflation [ ]. However, the converted mass fraction into black holes is sensitive to possible non-Gaussianities in the amplitude distribution of such large and rare density fluctuations [ Astronomical observations can put severe constraints on the mass of PBHs that are able to explain the observed cosmological dark matter abundance. Data on extragalactic γ-rays, the femtolensing of γ-ray bursts, white-dwarf explosions, neutron-star captures, and quasar microlensing have been reviewed by Kühnel and Freese [ ], and no severe limits exist either for PBH masses less than 10 or higher than 10 M . Hence, present astronomical data do not impose any constraint on the existence of Planck mass black holes, and on the interpretation of these objects as dark matter particles. However, an opposite direction has been taken by some authors, who have suggested that dark matter candidates are more massive black holes either with masses of about 30 M ] or in the range of 10–10 Lower Limits for PHBs Masses from Thermodynamics As we have seen, PBH masses cannot be larger than the horizon scale. On the low side of the mass spectrum, Equation (28) suggests that PBH with very small masses can be formed. We will assume that these PBHs are formed just after inflation during reheating. At end of the inflationary period, the inflaton field is subjected to strong oscillations and decay. Those oscillations can be the origin of density fluctuations that satisfy the conditions fixed by Equation (28), and thereby forming black holes. However, the PBH masses cannot be arbitrarily small, and limits are fixed by thermodynamics. Consider a small spherical perturbation with a co-moving volume V. The entropy of such a perturbation if constituted by relativistic matter is: $α < 1$ be the efficiency of the gravitational collapse. In this case, the mass of the resulting black hole is $M = α ( ε V ) / c 2$ . Since the entropy of the collapsing matter cannot be larger than that of the resulting black hole, the following condition must be satisfied: $4 M c 2 3 α k T < π r H 2 l P 2$ Using the dimensionless quantities defined in the previous section, in the case of the Bardeen space–time, the following condition relates the matter temperature with the scale parameter and the black hole horizon $x H$ $k T ≥ 4 3 α π 3 γ x H 2 M P c 2$ For numerical purposes, the collapse efficiency is taken as $α = 0.5$ , and the reheating temperature is taken as 10 GeV. Recalling that $x H$ are connected by the equation $f ( x , γ ) = 0$ , the numerical solution of these equations gives $γ ≈ 1.7 × 10 − 7$ . The associated black hole mass is: $M M P = π 4 γ ≈ 2.6 × 10 6$ Similarly, for the Hayward case, the following condition must be satisfied: $k T ≥ 4 β α 3 π 3 M P c 2 x H 2$ Assuming the same conditions as before, one obtains $β ≈ 10 − 7$ , and a black hole mass: $M M P = π 12 1 β ≈ 5 × 10 6$ These results indicate that PBHs formed at reheating have masses that are a few million times higher than the masses of the extreme case. Hence, a natural question arises: do these PBHs evaporate until the stable state is attained, or do they grow by accretion of relativistic matter? In order to answer this question, the evaporation and the accretion timescales must be compared. The evaporation timescale can be estimated from the relations for the Hawking luminosity derived previously, since $t e v a p = M c 2 / L$ . At formation, the Hawking luminosity of a Bardeen black hole is $L = 1.25 × 10 42$ erg/s, implying an evaporation timescale of $4.0 × 10 − 20$ s. A similar calculation for a Hayward black hole leads to a timescale of $1.7 × 10 − 19$ s. On the other side, the accretion timescale can be defined as $t a c c = M / ( d M / d t )$ . In order to estimate the accretion rate, we have followed the same procedure as that by de Freitas Pacheco [ ] concerning the accretion of relativistic matter by a Reissner–Nordström black hole. Since a detailed analysis of the accretion flow is beyond the purposes of this paper, only the main points are recalled here. The position of the critical point and of the radial velocity there can be generalized for a metric defined by Equation (1), and are given respectively by: $r c = 4 f ( r c ) f ′ ( r c ) V 2 ( 1 − V 2 )$ $f ( r )$ $f ′ ( r )$ are respectively the lapse function and its derivative with respect to the radial coordinate taken at the critical point. The generalized sound velocity is defined by: $V 2 = d lg ( P + ε ) d lg n − 1$ In the case of a relativistic fluid $( P + ε ) ∝ n 4 / 3$ , and one obtains trivially that $V = 1 / 3$ . The radial component of the four-velocity at the critical point is given: $u c 2 = V 2 ( 1 − V 2 ) f ( r c )$ A numerical solution of these equations for the case of the Hayward metric is shown in Figure 4 These calculations indicate significant deviations with respect to the Schwarzschild space–time for masses close to the Planck value. For larger masses, the critical radius and the radial velocity at this point approach asymptotically the Schwarzschild results, that is, $r c → 3 r g / 2$ and $u c → 1 / 6$. Notice that the crossing of the critical point occurs always subsonically. Once the critical radius and the radial velocity at this point were computed, the accretion rate can be estimated from the equation: $d M d t = 4 π c r c 2 T t r$ $T i k$ is the stress-energy tensor of the accreting fluid evaluated at the critical point. For the Hayward black hole formed at reheating, the estimated accretion rate is $9.1 × 10 26$ , which corresponds to an accretion timescale of $1.2 × 10 − 25 s − 1$ . This value is six orders of magnitude smaller than the evaporation timescale derived above, indicating that the newly formed black hole will grow. The scenario is the same for the Bardeen case. However, the accretion rate either for the Hayward or Bardeen black holes depends on the energy density of the cosmic relativistic matter, which varies with the temperature as $ε ∝ T 4$ . Due to the fast expansion of the universe, the temperature decreases with a timescale $t c o l = T / | d T / d t | = 1 / H$ . At reheating, this is about $4.7 × 10 − 31$ s, which is several orders of magnitude smaller than the two other timescales. This means that these PBHs have initially a very short phase of growth, and then, the evaporation process dominates until the extreme situation is reached. If the density fluctuations giving origin to PBHs are assumed to be spherically symmetric with a Gaussian distribution and are scale invariant, the mass spectrum of the formed black holes is of the form [ This implies that the average mass of PBHs is $< M > = 3 M min$, where $M min$ is the minimum mass estimated previously on the basis of thermodynamic arguments. Consequently, most of the formed PBHs have masses approximately of the order of the minimum value, which will evolve according to the picture above. Finally, it is useful to estimate the energy fraction of PBHs formed at reheating required to explain the present dark matter abundance. A simple calculation gives the ratio between the PBH energy density and the relativistic matter energy density at formation: $ρ r e h ε r e h = Ω d m Ω γ ( g 0 g r e h ) 1 / 3 ( T 0 T r e h )$ $Ω d m$ $Ω γ$ are the present density parameters of dark matter and radiation, the $g i$ s are the number of degrees of freedom at the present epoch and at reheating, and the Ts are the temperatures at the same considered cosmological times. Numerically, Equation (40) gives the initial fraction between dark matter and radiation $3.8 × 10 − 22$ . Hence, only a very small fraction of the relativistic matter needs to undergo the gravitational collapse in order to explain the observed amount of dark matter. Notice that such a small value is a consequence of the different dilution factors for non-relativistic and relativistic matter as the universe expands as well as the huge mass of the particles (~10 ), implying presently that only a low-particle density is required to explain the observations. 5. Discussions Up until now, there has been no direct or indirect evidence for dark matter, which would be respectively the consequence of collisions with baryons in the laboratory, or the annihilation resulting from collisions between particles and antiparticles of dark matter. Gravitational effects remain the only source of inference concerning the existence of such a form of exotic matter. In particular, primordial density fluctuations in a universe constituted only by baryons, whose amount is fixed by primordial nucleosynthesis and the density of relic photons, will never reach the non-linear regime in timescales of 13–14 Gyr. Consequently, galaxies could not presently exist [ ]. However, among particles issued from the Standard Model (SM), there are no relic candidates with the required DM abundance that is able to explain the observations. Neutrinos are generally excluded because they are not massive enough, and decouple relativistically from the cosmic plasma, constituting a model for “hot” dark matter, which has problems with the matter power spectrum at scales smaller than $10 14 − 10 15 M ⊙$ . As a result of these difficulties, modifications of the Standard Model have been proposed, in particular a minimal supersymmetric extension (MSSM). In this model, the neutralino, the lightest supersymmetric particle, is the “preferred” candidate [ ]. However, from the experimental side, there are many difficulties with this theory, since up to now, no signal of supersymmetry has been seen in experiments related to the decay of B mesons, which do not indicate the presence of “exotic” particles such as charginos and/or neutralinos [ These tensions between astronomical and physical data led to alternative proposals such as modifications of general relativity theory [ ], dark matter particles having masses of about few MeV/c ], or on the contrary, having masses around few TeV/c , resulting from the SO(10) breaking [ ]. In the present work, the possibility that primordial regular black holes could be identified with dark matter particles was investigated. This possibility is not new, and past studies always had difficulties with the existence or not of remnants left by the evaporation process. Investigations of the gravitational collapse based on LQG suggest the appearance of a non-singular space–time with a Reissner–Nordström-like metric or, in other words, including two horizons. This behavior is well reproduced by regular black holes, whose geometry is described either by the Bardeen or the Hayward The Bardeen and the Hayward space–times approach the de Sitter solution in the region inside the Cauchy (or inner) horizon, implying that the equation of state of matter is similar to that of the vacuum ( $P = − ρ$ ). In both metrics, there is a critical solution in which the two horizons coincide, representing the case of an extreme black hole. Extreme black holes have zero surface temperature, and consequently, no Hawking emission is present. These objects are thermally stable, and can be imagined as being in its “ground” state. The basic assumption of the present investigation is to consider that the horizon radius of these extreme black holes is equal to the minimum distance derived from the Riemann invariant computed from LQG. Under this hypothesis, the extreme black hole mass either in the Bardeen or in the Hayward case is of the order of Planck mass. Consequently, our hypothetical dark matter particle does not evaporate, and does not hide any space–time singularity. It is worth mentioning the work by Dymnikova and Khoplov [ ], who considered regular black holes whose space–time is asymptotically de Sitter instead of Minkowski. These regular black holes have three horizons: the inner or Cauchy, the event, and the cosmological. According to these authors, regular PBHs with de Sitter interiors are formed when the collapse of a primordial fluctuation does not lead to a central singularity, stopping at a given very high density with a vacuum equation of state, as conjectured by Sakharov [ ] many years ago. In this case, the de Sitter vacuum is formed with an energy density corresponding to that of the GUT symmetry restoration scale. Under these conditions, the Dymnikova–Khoplov regular black hole has a mass of about $2.7 × 10 7 M P$ If our considered regular black holes are formed at the end of inflation, when the strong oscillations and decay of the inflaton field occur, their minimum mass is fixed by the condition that the entropy of the collapsing matter must be lower than the resulting black hole entropy. If the reheating temperature is 10 GeV, the minimum masses are respectively $2.6 × 10 6 M P$ for the Bardeen solution, and $5.0 × 10 6 M P$ for the Hayward case. Recently, a similar scenario has been investigated [ ], in which black hole formation occurs during the oscillatory phase after inflation in conditions of slow reheating. The authors have estimated that the minimum black hole mass at formation is $M min = 4 π M P ( M P / H * ) ∼ 10 6 M P$ , since the expansion rate during inflation derived from Planck 2015 is $H * ≈ 10 14$ GeV. Notice that this value compares quite well with our own estimates based on thermodynamic arguments. As we have shown above, the newly formed black holes have initially a short phase of growth, followed by an evaporation phase, in which they lose mass until the stable extreme condition is reached. If the initial mass spectrum has the form given by Equation (39), most of the PBHs have masses that are few times the minimum value, and therefore, all of these objects will follow the same evolutionary path leading to the same end point. In conclusion, dark matter particles could be constituted by regular PBHs with masses around 10 , and only a very small fraction of the relativistic matter at reheating is needed to be converted into PBHs in order to explain the observed dark matter abundance, in agreement with the estimates made by Carr et al. [ Conflicts of Interest The authors declare no conflict of interest. 1. Abbott, B.P.; Abbott, R.; Abbott, T.D.; Abernathy, M.R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.X.; et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 2016, 116, 061102. [Google Scholar] [CrossRef] [PubMed] [Green Version] 2. Abbott, B.P.; Abbott, R.; Abbott, T.D.; Abbott, F.; Acernese, K.; Ackley, C.; Adams, T.; Adams, P.; Addesso, R.X.; Adhikari, V.B.; et al. GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence. Phys. Rev. Lett. 2017, 119, 141101. [Google Scholar] [CrossRef] [PubMed] 3. Parsa, M.; Eckart, A.; Shahzamanian, B.; Karas, V.; Zajaček, M.; Zensus, J.A.; Straubmeier, C. Investigating the Relativistic Motion of the Stars Near the Supermassive Black Hole in the Galactic Center. ApJ 2017, 845, 22. [Google Scholar] [CrossRef] 4. Hawking, S. Singularities in the Universe. Phys. Rev. Lett. 1966, 17, 444–445. [Google Scholar] [CrossRef] 5. Hawking, S.; Penrose, R. The Singularities of Gravitational Collapse and Cosmology. Proc. R. Soc. Lond. Ser. A 1970, 314, 529–548. [Google Scholar] [CrossRef] 6. Modesto, L. Disappearance of black hole singularity in quantum gravity. Phys. Rev. D 2004, 70, 124009. [Google Scholar] [CrossRef] 7. Böhmer, C.G.; Vandersloot, K. Loop Quantum Dynamics of the Schwarzschild Interior. Phys. Rev. D 2007, 76, 104030. [Google Scholar] [CrossRef] 8. Gambini, R.; Pullin, J. Black holes in Loop Quantum Gravity: The complete space-time. Phys. Rev. Lett. 2008, 101, 161301. [Google Scholar] [CrossRef] [PubMed] 9. Modesto, L. Black hole interior from Loop Quantum Gravity. Adv. High Energy Phys. 2008, 459290. [Google Scholar] [CrossRef] 10. Bardeen, J.M. Non-singular general relativistic gravitational collapse. In Proceedings of the International Conference GR5, Tbilisi, Georgia; 1968; p. 174. [Google Scholar] 11. Hayward, S.A. Formation and Evaporation of Nonsingular Black Holes. Phys. Rev. Lett. 2006, 96, 031103. [Google Scholar] [CrossRef] [PubMed] 12. Sakharov, A.D. The Initial Stage of an Expanding Universe and the Appearance of a Nonuniform Distribution of Matter. Sov. Phys. 1966, 22, 241. [Google Scholar] 13. Bambi, C.; Modesto, L. Rotating regular black holes. Phys. Lett. B 2013, 721, 329–334. [Google Scholar] [CrossRef] 14. Toshmatov, B.; Ahmedov, B.; Abdujabbarov, A.; Stuchlik, Z. Rotating Regular Black Hole Solution. Phys. Rev. D 2014, 89, 104017. [Google Scholar] [CrossRef] 15. Azreg-Aïnou, M. Generating rotating regular black hole solutions without complexification. Phys. Rev. D 2014, 90, 064041. [Google Scholar] [CrossRef] 16. Ayon–Beato, E.; Garcıa, A. The Bardeen Model as a Nonlinear Magnetic Monopole. Phys. Lett. B 2000, 493, 149–152. [Google Scholar] [CrossRef] 17. Craig, D.A. Dynamical eigenfunctions and critical density in loop quantum cosmology. Class. Quantum Gravity 2013, 30, 035010. [Google Scholar] [CrossRef] 18. Bardeen, J.M.; Carter, B.; Hawking, S. The four laws of black hole mechanics. Commun. Math. Phys. 1973, 31, 161–170. [Google Scholar] [CrossRef] 19. Bekenstein, J.D. Black holes and entropy. Phys. Rev. D 1973, 7, 2333–2346. [Google Scholar] [CrossRef] 20. Hawking, S. Particle creation by black holes. Commun. Math. Phys. 1975, 43, 199–220. [Google Scholar] [CrossRef] 21. Page, D. Particle emission rates from a black hole: Massless particles from an uncharged and nonrotating hole. Phys. Rev. D. 1976, 13, 198–206. [Google Scholar] [CrossRef] 22. Hossenfelder, S.; Koch, B.; Bleicher, M. Trapping black hole remnants. arXiv, 2005; arXiv:hep-ph/0507140. [Google Scholar] 23. Bonanno, A.; Reuters, M. Space-time structure of an evaporating black hole in quantum gravity. Phys. Rev. D 2006, 73, 083005. [Google Scholar] [CrossRef] 24. Adler, R.J.; Chen, P.; Santiago, D.I. The Generalized Uncertainty Principle and Black Hole Remnants. Gen. Relativ. Gravit. 2001, 31, 2101–2108. [Google Scholar] [CrossRef] 25. Virbhadra, K.S. Naked singularities and Seifert’s conjecture. Phys. Rev. D 1999, 60, 104041. [Google Scholar] [CrossRef] 26. Anderson, P.R.; Hiscock, W.A.; Loranz, D.J. Semi-classical Stability of Extreme Reissner-Nordström Black Hole. Phys. Rev. Lett. 1995, 74, 4365–4368. [Google Scholar] [CrossRef] [PubMed] 27. Moretti, V. Wightman functions’ behavior on the event horizon of an extremal Reissner-Nordström black hole. Class. Quantum Gravity 1996, 13, 985–1006. [Google Scholar] [CrossRef] 28. MacGibbon, J.H. Can Planck-mass relics of evaporating black holes close the universe? Nature 1987, 329, 308–309. [Google Scholar] [CrossRef] 29. Zeldovich, Y.B.; Novikov, I.D. The Hypothesis of Cores Retarded during Expansion and the Hot Cosmological Model. Sov. Astron. 1967, 10, 602–603. [Google Scholar] 30. Hawking, S. Gravitationally collapsed objects of very low mass. Month. Not. R. Astron. Soc. 1971, 152, 75–78. [Google Scholar] [CrossRef] 31. Evans, C.R.; Coleman, J.S. Critical phenomena and self-similarity in the gravitational collapse of radiation fluid. Phys. Rev. Lett. 1994, 72, 1782–1785. [Google Scholar] [CrossRef] [PubMed] 32. Niemeyer, J.C.; Jedamzik, K. Near-Critical Gravitational Collapse and the Initial Mass Function of Primordial Black Holes. Phys. Rev. Lett. 1998, 80, 5481–5484. [Google Scholar] [CrossRef] 33. Ezquiaga, J.M.; Garcia-Bellido, J.; Morales, E.R. Primordial Black Hole production in Critical Higgs Inflation. arXiv, 2017; arXiv:astro-ph/1705.04861. [Google Scholar] 34. Franciolini, G.; Kehagias, A.; Matarrese, S.; Riotto, A. Primordial Black Holes from Inflation and non-Gaussianity. arXiv, 2018; arXiv:astro-ph/1801.09415. [Google Scholar] 35. Kühnel, F.; Freese, K. Constraints on Primordial Black Holes with Extended Mass Functions. Phys. Rev. D 2017, 95, 083508. [Google Scholar] [CrossRef] 36. Nishikawa, H.; Kovetz, E.D.; Kamionkowski, M.; Silk, J. Primordial black hole mergers in dark-matter spikes. arXiv, 2017; arXiv:astro-ph/1708.08449. [Google Scholar] 37. Frampton, P.H. Theory of dark matter. arXiv, 2017; arXiv:hep-ph/1705.04373. [Google Scholar] 38. De Freitas Pacheco, J.A. Relativistic accretion into a Reissner-Nordström black hole revisited. J. Thermodyn. 2012, 2012, 791870. [Google Scholar] [CrossRef] 39. Carr, B. Primordial Black Holes: Do They Exist and Are They Useful? arXiv, 2005; arXiv:astro-ph/0511743. [Google Scholar] 40. Blumenthal, G.R.; Faber, S.; Primack, J.R.; Rees, M.J. Formation fo galaxies and large-scale structure with cold dark matter. Nature 1984, 311, 517–525. [Google Scholar] [CrossRef] 41. Davis, M.; Efstathiou, G.; Frenk, C.S.; White, S.D. The evolution of large-structure in a universe dominated by dark matter. ApJ 1985, 292, 371–394. [Google Scholar] [CrossRef] 42. Bertone, G.; Hooper, D.; Silk, J. Particle dark matter: Evidence, candidates and constraints. Phys. Rep. 2005, 405, 279–390. [Google Scholar] [CrossRef] 43. Aaij, R.; Abellan Beteta, C.; Adametz, A.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; Albrecht, J.; Alessio, F. Strong constraints on rare decays $B S 0 → μ + μ −$ and $B 0 → μ + μ −$. Phys. Rev. Lett. 2012, 108, 231801. [Google Scholar] [CrossRef] [PubMed] 44. Bechtle, P.; Bringmann, T.; Desh, K.; Dreiner, H.; Hamer, M.; Hensel, C.; Krämer, M.; Nguyen, N.; Porod, W.; Prudent, X.; et al. Constrained supersymmetry after two years of LHC data: A global view with Fittino. JHEP 2012, 1206, 098. [Google Scholar] [CrossRef] 45. De Felice, A.; Tsujikawa, S. Cosmology of a Covariant Galileon Field. Phys. Rev. Lett. 2010, 105, 111301. [Google Scholar] [CrossRef] [PubMed] 46. Fayet, P. U-boson production in e^+e^− annihilations, Ψ and Υ decays and light dark matter. Phys. Rev. D 2007, 75, 115017. [Google Scholar] [CrossRef] 47. Olive, K. Supersymmetric Dark Matter or Not. arXiv, 2016; arXiv:hep-ph/1604.07336. [Google Scholar] 48. Dymnikova, I.; Khoplov, M. Regular black hole remnants and graviatoms with de Sitter interior as heavy dark matter candidates probing inhomogeneity of early Universe. arXiv, 2015; arXiv:gr-qc/ 1510.01351. [Google Scholar] 49. Carr, B.; Dimopoulos, K.; Owen, C.; Tenkanen, T. Primordial Black Hole Formation during Slow Reheating after Inflation. arXiv, 2018; arXiv:astro-ph/1804.08639. [Google Scholar] Figure 1. Normalized horizon temperature for Bardeen (black curve) and Schwarzschild (red curve) black holes as a function of the horizon radius in units of the Planck scale distance. Figure 2. Normalized horizon temperature for Hayward (black curve) and Schwarzschild (red curve) black holes as a function of the horizon radius in units of the Planck scale distance. Figure 3. Normalized luminosities for the Bardeen (left panel) black hole and the Hayward (right panel) black hole as a function of the horizon radius in units of the Planck distance scale. Figure 4. Variation of the critical radius in units of the gravitational radius as a function of the Hayward black hole mass in units of the Planck mass (black curve). The radial component of the four-velocity of the flow is also shown as a function of the black hole mass (blue curve). © 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:// Share and Cite MDPI and ACS Style Pacheco, J.A.d.F. Primordial Regular Black Holes: Thermodynamics and Dark Matter. Universe 2018, 4, 62. https://doi.org/10.3390/universe4050062 AMA Style Pacheco JAdF. Primordial Regular Black Holes: Thermodynamics and Dark Matter. Universe. 2018; 4(5):62. https://doi.org/10.3390/universe4050062 Chicago/Turabian Style Pacheco, José Antonio de Freitas. 2018. "Primordial Regular Black Holes: Thermodynamics and Dark Matter" Universe 4, no. 5: 62. https://doi.org/10.3390/universe4050062 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2218-1997/4/5/62","timestamp":"2024-11-01T18:58:12Z","content_type":"text/html","content_length":"483587","record_id":"<urn:uuid:5385ff38-a42f-4878-b857-2452e3ed14a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00117.warc.gz"}
Man Doth Not Invest by Earnings Yield Alone Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives. The big question The most popular indicator of the attractiveness of the stock market – Shiller’s Cyclically-Adjusted Price Earnings ratio (CAPE) — is currently at 39x in the US, higher than it’s been 98% of the time for the past 120 years. What’s a thinking investor to make of this? Should he stay clear of the US stock market, or stick to some pre-set strategic allocation to equities, or is there something else going on? In this note, we’ll argue that CAPE is far from irrelevant, but on its own, it doesn’t tell an investor how much stock exposure to have. CAPE basics When the CAPE ratio is high, the prospective return of the stock market is low. This finding makes logical and intuitive sense, and is borne out in historical data. We can say something more specific and powerful: 1/CAPE is a pretty good, though imperfect, predictor of the inflation-adjusted return of the stock market.^2 The measure of 1/CAPE is known as the Cyclically-Adjusted Earnings Yield (or just “Earnings Yield”), because it’s calculated as Earnings divided by Price. If you invest in the stock market when the Earnings Yield is 6%, your best expectation is that you’ll earn a long-term return (after inflation) of 6%. This is telling us that, contrary to popular belief, when Earnings Yield is low we shouldn’t expect to lose money from the Earnings Yield reverting to some average higher level, and vice versa. In other words, the predictive power of Earnings Yield over a long horizon is not improved by assuming that it is mean-reverting (For a deeper dive, see our 2017 article “Market Multiple Mean-Reversion: Red Light or Red Herring?”). The chart below illustrates this, using a horizon of ten years: Chart 1: Next 10-Year Real Return vs. Earnings Yield at Start US Equities, 1900 – 2021’ The first reaction of everyone who has seen this chart — your authors included — is: “Ooooeeee! Investor Hall of Fame, here I come!” Alas, a backtest – as shown in the chart below – pours ice-cold water on these dreams: a simple dynamic approach based on Earnings Yield failed to deliver a higher Sharpe Ratio^3 than a static allocation over the entire 120-year period for which we have data, and has actually under-performed since 1943.^4 This result is one reason you will find so few investment products that offer a dynamic asset allocation strategy based on Earnings Yield or similar metrics. See Asness et al. “Market Timing: Sin a Little” (2017) for a lively and detailed description of the wellspring of this cold water. Chart 2: Static vs. Conventional Dynamic Asset Allocation: US Stocks and T-Bills, 1900 – 2021 Worth another look? This result is puzzling though, because it just seems like basic common sense that it should be better to have more exposure when the market is offering higher expected returns, and we’ve seen that Earnings Yield has some power as an indicator of when to expect those higher returns. The rest of this note is devoted to exploring an alternative, more internally consistent approach to dynamic asset allocation using Earnings Yield as the driver, which has historically delivered the improved performance we’d expect. When Occam’s razor shaves too close The historical analysis of the use of Earnings Yield to dynamically allocate between US stocks and US T-bills presented in Chart 2 is done in Occam’s spirit of maximum simplicity. The rule sets the equity allocation to: 1. Be proportional to the Earnings Yield at each point in time, 2. Average 65% over the whole sample, the same as for the benchmark Static Strategy, and 3. Never be negative or in excess of 100% (i.e. no shorting and no leverage).^5 We’ll call this the “Conventional Dynamic Strategy.” There are a number of problems with this approach, including: 1. The asset allocation decision in this strategy is comparing the attractiveness of equities to the attractiveness of T-bills. However, Earnings Yield is not a predictor of the relative attractiveness of stocks versus T-bills; it is only a predictor of the future real return of equities.^6 2. Changes in riskiness of the stock market are ignored. It is intuitive that all else equal, an investor would want to have less allocated to equities when they are expected to be more volatile. 3. For the equity allocation to average 65% over the period requires knowing what the Earnings Yield of the stock market was over the whole sample, which a non-clairvoyant investor who wanted to follow this strategy could not have known. A more consistent asset allocation rule based on excess earnings yield If we want to use Earnings Yield to decide how much to invest in the stock market, to be consistent we also need to evaluate alternatives to stocks in terms of their expected real return. It seems natural to turn to US Inflation-indexed bonds (TIPS) as the relevant low-risk alternative to stocks, since the yield on TIPS is a measure of their expected real return, and so provides a directly comparable measurement to the Earnings Yield of equities. Indeed, there are strong arguments that “the (inflation) indexed perpetuity is the riskless asset for a long-term investor, since it finances a constant consumption stream over time,” as suggested by Harvard professors Campbell and Viceira in “Who Should Buy Long-Term Bonds” (2001).^7 For practical purposes, long-term TIPS are pretty close to the inflation-indexed perpetuity they suggest. It seems more natural to think about the attractiveness of stocks relative to inflation-protected bonds, rather than just by considering the level of Earnings Yield in isolation. For example, if the Earnings Yield of the stock market were 4% and the real yield on TIPS were also 4%, why would we want to own any equities?^8 Or for a historical illustration, consider that the Earnings Yield of the US stock market was about 2.7% at the end of 2000 and also at the end of 2021 – but the ten-year TIPS yield was 3.6% back in 2000 and -0.7% at the end of 2021. Would a rational investor choosing between equities and TIPS want to have the same exposure to equities at both points in time, just because the Earnings Yield was the same? We think most investors would agree that they should want to own more equities at the end of 2021 than 21 years earlier. And yet, the conventional analysis that uses the market’s Earnings Yield without reference to the real return offered by safe assets suggests owning the same amount of equities in both cases. We propose three changes to the (disappointing) Conventional Dynamic Strategy presented in Chart 2, setting the allocation to equities at each point in time to be: 1. Proportional to the excess of the stock market’s Earnings Yield above the real yield of inflation-protected bonds (US TIPS). We’ll refer to this measure as “Excess Earnings Yield.” 2. Inversely proportional to the risk (measured as variance) of the stock market as might reasonably have been estimated by an investor at the point in time of the asset allocation decision. 3. At the level a Utility-Maximizing investor, with a typical and stable degree of Constant Relative Risk-Aversion (CRRA), would choose based on the estimates of Excess Expected Return and Risk set out in 1 and 2 above.^9 By constructing the allocation decision from first principles in this way, we address the problem in the Conventional Strategy of needing to know the average level of Earnings Yield over the whole period. We will call this the “Excess Earnings Yield Dynamic Strategy.” Determining exposure to equities as a function of expected excess return, risk, and investor risk-aversion to maximize Expected Utility under standard assumptions is known as the Merton Rule (see footnote 9). For this analysis, we’ll assume an investor with a degree of risk-aversion we found typical in a survey we conducted in 2018. This level of risk-aversion is such that an investor would choose to allocate 62.5% to equities if faced with an excess expected equity market return of 5% per annum, and equity riskiness of 20% per annum.^10 The chart below shows the allocation to equities resulting from the Merton Rule, which we use in our historical analysis from the end of 1997 (the start of the TIPS market) to the end of 2021.^11 In the first 3 1/2 years of the study, the desired allocation to equities was zero, because the Earnings Yield of the stock market was relatively low, TIPS yields were high, and therefore the Excess Earnings Yield was negative. Chart 3: Allocation to US Equities Based on Merton Share vs. 10-Year TIPS 1997 – 2021 The chart below shows the performance that these allocations would have generated, compared to a static allocation of 65% in stocks and 35% in bonds (TIPS being the type of bond). The Excess Earnings Yield Dynamic Strategy performed much better, delivering a return about 2% pa higher than the Static Strategy, with lower risk and a nearly 50% higher Sharpe Ratio.^12 By contrast, over the same period starting in 1997, the Conventional Dynamic Strategy illustrated in Chart 2 generated a return about 1.5% pa lower than the comparable Static Strategy but with roughly the same Sharpe Ratio. Of course, this is a very short window, and we certainly are not suggesting that you should follow this approach or avoid the conventional approach solely based on this back-test. Chart 4: Excess Earnings Yield, Dynamic vs. Static Allocation US Equities and 10-Year TIPS 1997-2021, Logarithmic Scale It would be nice to see this analysis taken back further – unfortunately, the US Treasury has only been issuing TIPS since 1997. However, we do think it’s possible to construct a decent hypothetical history of long-term US real interest rates going all the way back to 1900, which we present in the chart below.^13 Chart 5: Ten-Year Real Yield Series Actual & Hypothetical Using this hypothetical history of US real rates, we get the chart below, which shows the performance results back to 1900. Bottom line: the Excess Earnings Yield Dynamic Strategy did a lot better than a Static Strategy. Not only did the Excess Earnings Yield Dynamic Strategy do much better in terms of absolute return and quality of return than the 65/35 Static Strategy, but perhaps even more remarkable, it outperformed being 100% in US equities over the entire period, which generated a lower total return of 10.0% with 40% more risk. Chart 6: Excess Earnings Yield, Dynamic vs. Static Allocation US Equities and 10-Year TIPS 1900-2021, Logarithmic Scale One more thing to consider is that it’s hard to say how an investor would have decided on a 65/35 stock/bond asset allocation to begin with, without use of some sort of framework, such as the Merton Rule, that put a price on risk. If an investor in 1900 were thinking about how much to invest in equities based on some other objective – such as maximizing Expected Wealth – he would have tried to invest the most that he could in equities with maximum leverage. Following such an approach, the investor would have likely gone bust in the 1929-1933 stock market meltdown of over 85%, and possibly in the several other greater-than-50% market declines experienced over this period. In the Appendix, we provide details of all our assumptions and sources of data, and find that the Base-Case historical result just outlined is robust to changes in many of the assumptions. We also show the significant improvement delivered historically from including Time-Series Momentum as an additional indicator of the expected risk and/or the expected return of equities. Improvement in Sharpe ratio is a twofer (squared!) Since 1900, the Excess Earnings Yield Dynamic Strategy has generated a Sharpe Ratio about 25% higher than that of the Static Strategy. Just how big a deal is a one-quarter increase in the Sharpe Ratio on one’s investment portfolio?^14 A very big deal indeed! A one-quarter increase, whether it comes from a higher expected excess return or lower risk, delivers a compound benefit to an investor in that it: 1. Provides a one-quarter higher return per unit of risk, and, 2. It also increases the optimal allocation to equities by one-quarter, generating an additional one-quarter improvement. So, a one-quarter increase in Sharpe Ratio generates roughly double that improvement (a 56% improvement, to be exact!) in the Risk-Adjusted Return of the investor’s portfolio.^15 An improvement of this magnitude in expected Risk-Adjusted Return, compounded over the long-term horizons over which individuals typically save and invest for retirement, can make truly life-changing enhancements to investor outcomes. Can everyone be a dynamic asset allocator? An Excess Earnings Yield Dynamic Strategy is not an approach that all investors can pursue at the same time. Economists would say that such a strategy is not macro-consistent. This is a pretty stringent test of an investment approach. Even a static asset allocation strategy that aims to keep a fixed fraction of wealth in equities would fall foul of this test. In fact, the only strategy that all investors can pursue at the same time is buy-and-hold at global market-cap weights. Whenever an investor is considering pursuing a strategy that not everyone can follow, he needs to have a good look in the mirror and ask why he is different from the average investor.^16 An Excess Earnings Yield Dynamic Strategy is probably a good fit for long-term investors who expect their risk-aversion to remain steady through time, and who are willing and able to estimate expected real returns and risk offered by their investments. We believe it’s never a good idea to adopt an investment strategy based primarily on historical simulations. However, when you believe a strategy makes sense a priori, it is worthwhile to challenge and update the strength of that belief with a look at the empirical evidence. Before looking at the historical record, we firmly believed it made intuitive sense to dynamically change one’s allocation to equities based on their expected return relative to the appropriate safe asset, and the empirical record reinforced that belief. It is time to correct the record regarding the efficacy of Dynamic Asset Allocation using the market’s Earnings Yield as a key input. And it is also time to differentiate this disciplined approach grounded in theory from the many seat-of-the-pants dynamic approaches that go under the pejorative heading of “Market Timing.” The magnitude of improvement in welfare that is available to investors who are willing and well-suited to vary their exposure to equities as their expected excess real return and risk change over time is too big to be left on the table. Data and sources S&P 500 Stock Index Prices (1870 – 2021 monthly) Standard and Poors, Online Data: Robert Shiller S&P 500 Earnings and Dividends Online Data: Robert Shiller US T-Bill Rates Online Data: Robert Shiller, St. Louis Federal Reserve US Ten-Year Treasury Yield St. Louis Federal Reserve, US Department of the Treasury UK Ten-Year Inflation-Linked Bond Yield (1985-1997) King and Low (2014) US Ten-Year TIPS Yield (1997 – 2021) St. Louis Federal Reserve, US Department of the Treasury US CPI Inflation (1880 – 2021) Online Data: Robert Shiller Implied Inflation Forecasts (1955 – 1970) Kozicki-Tinsley (2006), Ilmanen (2011) Survey-based Inflation Forecasts (May 1970 – November 1984) Philadelphia Fed, Cleveland Fed, Blue Chip Economic Indicators, Livingston Survey Construction of US Ten-Year TIPS Yields and Total Return Series from 1900 to 2021 From 1997 to 2021, we use Ten-Year TIPS yields directly. From 1985 to 1997, we use Ten-Year UK Inflation-Linked Bond yields. From 1900 to 1984, we use the US nominal Ten-Year bond yield minus an estimate for the ten years of US inflation, and we subtract a further 0.5% from the resultant real yield as a representation of a risk-premium that investors are likely to have demanded to bear inflation risk. The prospective inflation forecast we use from 1900 to 1984 is the average of survey data from 1970 to 1984, implied inflation forecasts from Kozicki-Tinsley (2006) from 1955 to 1970, a weighted average of realized 20-year, 10-year, 5-year and 1-year inflation with weights of 40%, 30%, 20% and 10% respectively from 1933 to 1955, and the weighted average of realized inflation itself averaged with 0 from 1900 to 1933 to represent some bounding of expectations at 0 inflation during the period the US was on the gold standard. Our approach to constructing this series owes a debt to Antti Ilmanen (2011). Construction of Equity Market Volatility Forecast 1900 – 2021 We calculate a series of rolling 10-year equity volatility and rolling 2-year equity volatility from monthly closing prices of the S&P500. We then take a weighted average of the life-to-date average of the rolling 10-year volatility and the most recent 2-year volatility. We put 75% and 25% weight on the 10-year and 2-year volatility measures, both expressed as variances, and then take the square root of that weighted average to arrive at the spot estimate of equity volatility that an investor might reasonably have used in deciding how much equity exposure to take using the Merton Rule. The chart below shows the volatility estimate we used in the historical simulation. Chart 7: Equity Market Volatility Estimate Used in Historical Simulation Shiller Cyclically-Adjusted Earnings Yield and Excess Earnings Yield Histories Chart 8: Shiller Cyclically-Adjusted Earnings Yield US, Dec. 1899 – Dec. 2021 Just a Good Draw? We cannot say whether or not the past 120 years were just a favorable period of time for dynamic asset allocation. However, we can answer the question of how much better we would have expected dynamic asset allocation to perform given the range of expected excess returns equities offered at different times. To do this, we ran a simulation in which half the time, the excess expected return of equities was 1% and the other half of the time it was 9%, which roughly matched the spread of expected excess returns experienced in the past 120 years.^17 We found that dynamically scaling the exposure to equities over many simulated histories delivered a roughly 30% average improvement in the Sharpe Ratio versus a static strategy. Against this backdrop, the historical experience of the past 120 years appears to be just a little bit worse than we’d have expected. The simulation also suggests that over a shorter horizon of 40 years, the dynamic asset allocation has an 85% probability of generating a higher return and a 65% chance of resulting in a higher Sharpe Ratio than a static weight strategy. Robustness of Simulation Results to Different Assumptions There are many other popular metrics used in dynamic asset allocation strategies, such as Tobin’s Q, Equity Market Value to GDP and Aggregate Investor Allocation to Equities (AIAE), to name a few. We prefer Earnings Yield because it directly gives an estimate for the long-term real return of the equity market, whereas all the other metrics need to be regressed against their historical averages in order to provide a return estimate. A survey-based forecast of future earnings may be better than using the past ten years of inflation-adjusted earnings as done in the Cyclically-Adjusted Earnings Yield, but we do not have that survey data going back very far and so could not run the historical simulation on that basis. Another metric, Cyclically-Adjusted Dividend yield plus dividend growth, closely relates to Earnings Yield and might be effectively used in conjunction with it, but this metric suffers from requiring an estimate of growth and being more sensitive to changes over time in corporate earnings payout policies. We consider Time Series Momentum an indicator of prospective risk (it can also be thought of as a return indicator with much the same practical effect), which can be effectively used in combination with Earnings Yield to significantly improve risk-adjusted returns. We give results for the joint application of Excess Earnings Yield and Momentum in the table below, and also in the chart below. We explored a range of different assumptions applied to the historical simulation. Below we describe each change in assumptions and the resultant Sharpe Ratio for the dynamic and static strategies over the entire period and the period since the introduction of inflation-protected bonds in 1985 in the UK. Chart 9: Excess Earnings Yield, Dynamic vs. Static Allocation Using Momentum as Risk Proxy US Equities and 10-Year TIPS 1900 – 2021, Logarithmic Scale Decade by Decade Results The Excess Earnings Yield Dynamic Strategy experienced a lower Sharpe ratio in 3 of the 12 decades examined. Higher Turnover A dynamic strategy is likely to experience higher turnover than a static strategy, and hence will incur higher transactions costs and possibly a higher tax cost as well. In our simulation with monthly rebalancing, the average turnover of the dynamic strategy was 29% per annum, versus 10% for the static weight strategy. Both of these turnover figures could be reduced by rebalancing less frequently and less fully to targets. Implementing a dynamic strategy is more complex and takes more of an investor’s attention, although on the other hand, a rules-based dynamic approach may be easier for an investor to stick with as it can scratch the investor’s itch to feel responsive in the face of a changing world. If Expected Equity Returns Are Inversely Related to Changes in Market Level By construction, when the market drops over a short period of time, the Cyclically-Adjusted Earnings Yield will go up, because Cyclically-Adjusted Earnings is based on the past ten years of earnings, which hardly changes from day to day. If an investor believes that the Expected Return of the stock market goes up when the market falls, then he should want a higher allocation to equities than suggested by the basic Merton Rule. This extra amount of equities was called “hedging demand” by Merton (1971), because it represents a hedge against the investment opportunity set faced by the investor. When the market goes down, the investor’s portfolio value goes down, but the increase in attractiveness of his investment opportunities offsets some of that loss in value, and so he can afford to own more equities. Pushing in the opposite direction of this hedging demand is the tendency for the market to be more volatile when it falls, which the market for options exhibits through the volatility “skew.” We view these phenomena as important, but not changing the basic conclusion that dynamic asset allocation driven by estimated expected return and risk is a sensible approach to As per the assumptions in the Merton Rule described above, risk-adjusted return is calculated by subtracting from the expected or realized excess return of a portfolio the cost of risk defined as: Where ƒ is the fraction of the portfolio allocated to the risky asset, and Victor Haghani is founder & CIO and James White is CEO of Elm Partners Management, LLC, a Philadelphia-based asset manager. Further Reading and References: • Asness, C., Moskowitz, T., and Pedersen, L. (2013). “Value and Momentum Everywhere.” Journal of Finance 68 (3): 929–985. • Asness, C., Ilmanen, A., and Maloney, T. (2017). “Market Timing: Sin a Little.” Journal of Investment Management 15 (3): 23-40. • Campbell, J. and Shiller, R. (1988). “Stock Prices, Earnings and Expected Dividends.” Journal of Finance 43 (3): 661-676. • Campbell, J. and Viceira, L. (2001). “Who Should Buy Long-Term Bonds?” American Economic Review 91 (1): 99-127. • Cochrane, J. (2022). “Portfolios for Long-Term Investors.” Review of Finance 26 (1): 1-42. • Cochrane, J. (2011). “Presidential address: Discount rates.” Journal of Finance 66 (4): 1047–1108. • Haghani, V. and White, J. (2018). “Measuring the Fabric of Felicity.” Elm Wealth. https://elmwealth.com/measuring-the-fabric-of-felicity/ • Haghani, V. and White, J. (2017). “What if High Stock Values Revert to Normal Levels?” Bloomberg. https://www.bloomberg.com/opinion/articles/2017-10-02/ • Haghani, V. and White, J. (2017). “What Our Market Return Forecasts Really Mean: Equity Convexity and Investment Sizing.” Elm Wealth. https://elmwealth.com/ • Haghani, V. and White, J. (2017). “Market Multiple Mean-Reversion: Red Light or Red Herring?” Elm Wealth. https://elmwealth.com/market-multiple-mean-reversion-red-light-red-herring/ • Haghani, V. and White, J. (2020). “Taking Stock.” Elm Wealth. https://elmwealth.com/taking-stock/ • Ilmanen, A. (2011) Expected Returns. London: Wiley. • Keimling, N. (2016). “Predicting Stock Market Returns Using the Shiller CAPE — An Improvement Towards Traditional Value Indicators?” SSRN Electronic Journal. https://dx.doi.org/10.2139/ • King, M. and Low, D. (2014) “Measuring the `World’ Real Interest Rate.” NBER Working Paper Series 19887. https://www.nber.org/papers/w19887 • Kozicki, S. and Tinsley, P. (2006). “Survey-Based Estimates of the Term Structure of Expected U.S. Inflation.” Bank of Canada Working Paper No. 2006-46. https://dx.doi.org/10.2139/ssrn.953959 • Merton, R. (1969). “Lifetime Portfolio Selection under Uncertainty: The Continuous-Time Case.” Review of Economics and Statistics 51 (3): 247-257. • Merton, R. (1971). “Optimum Consumption and Portfolio Rules in a Continuous-Time Model.” Journal of Economic Theory 3 (4): 373-413. • Merton, R. (1973). “An Intertemporal Capital Asset Pricing Model.” Econometrica 41 (5): 867-887. • Micaletti, R. (2021) “Market Timing Using Aggregate Equity Allocation Signals.” Alpha Architect. https://alphaarchitect.com/2021/04/29/market-timing-using-aggregate-equity-allocation-signals/ • Rintamaki, P. (2021) “Total Wealth Portfolio Compositioin and Stock Market Returns.” SSRN Electronic Journal. papers.ssrn.com/sol3/papers.cfm?abstract_id=3924180 • Samuelson, P. (1994). “The long-term case for equities.” Journal of Portfolio Management 21 (1): 15-24. • Jivraj, F. and Shiller, R. (2018). “The Many Colours of CAPE.” Yale ICF Working Paper No. 2018-22. http://dx.doi.org/10.2139/ssrn.3258404 1. ^1This is not an offer or solicitation to invest, nor should this be construed in any way as tax advice. Past returns are not indicative of future performance. ^2A variety of corporate-growth models can produce the result that real equity returns will be centered around the earnings yield. One basic condition under which real returns will equal the earnings yield would be if company earnings can grow with inflation with all earnings paid out currently to shareholders. While these models are all caricatures of the real world in a variety of ways, they nonetheless provide a solid starting point for thinking about expected stock market returns and making sense of long-term historical data. For a more up-to-date evaluation of CAPE as a predictor of real equity returns, particularly assessed in non-US equity markets, see Keimling (2016). They conclude: “Existing research indicates that the cyclically adjusted Shiller CAPE has predicted long-term returns in the S&P500 since 1881 fairly reliably for periods of more than 10 years. Furthermore, the results of this paper indicate that this was also the case for 16 other international equity markets in the period from 1979 to 2015.” ^3A measure of risk-adjusted return. ^4Unless otherwise stated, all historical analyses presented in this note are exclusive of trading costs and taxes. ^5The simplest asset allocation rule that meets these three requirements is: k* is the allocation to equities, EY is the Earnings Yield of the stock market at the time of the asset allocation decision, and ^6Comparing Earnings Yield to the yield on T-bills also would not be consistent, as Earnings Yield is a real return estimate while the yield on T-bills is a nominal return estimate. Using Earnings Yield minus the T-bill rate as the asset allocation driver results in the same conclusion conveyed by Chart 2. A Dynamic Asset allocation rule based solely on the Earnings Yield of the stock market would make sense if the expected real return of T-bills was constant through time, but we know this is not the case. ^7As Stanford economist John Cochrane further elaborates in “Portfolios for Long-Term Investors” (2021), “Their (Campbell and Viceira’s) proposition is obvious if you look at the payoffs. An (inflation) indexed perpetuity gives a perfectly steady stream of real income, which can finance a steady risk-free stream of consumption. It is the risk-free payoff stream.” ^8This statement ignores taxes, which generally favors holding equities for taxable US investors. There are other reasons an investor may want to own some equities under these circumstances, such as to avoid putting 100% faith in the Earnings Yield metric, as a form of hedging demand as described in Merton (1971) or as a partial hedge of an affluent investor’s consumption basket. ^9The formula we use is known as the Merton Rule and is: ^10Using the Merton Rule with ^11We assume that the Earnings Yield is an indicator of the real Arithmetic return of equities, although there is a good argument that Earnings Yield is predicting the real Geometric return. See Haghani and White, “What Our Market Return Forecasts Really Mean: Equity Convexity and Investment Sizing,” (2017). We constrain the allocation to equities to be between 0% and 100%, i.e. no shorting, no leverage. Relaxing the no-shorting and no-leverage constraints does not change the results materially. ^12In calculating the Sharpe Ratio, we are adding ^13Of particular note is the decade following WWII, during which we estimate ten-year TIPS would have traded at an average yield of -1.5%. During this period, the ten-year nominal Treasury bond yield averaged 2.5% and inflation ran at about 5%, touching 20% in the years directly following the end of the war. ^14A long-term investor may choose to measure risk in terms of the long-term real annuity value of his wealth – for example, using a perpetual inflation-protected bond as his numeraire. See Appendix for Sharpe Ratio of the Excess Earnings Yield Dynamic Strategy with returns measured relative to 10-year TIPS, which also shows a roughly one-quarter improvement versus a static strategy. ^15The improvement is (5/4)^^2 – 1 = 9/16 = 56%. ^16See John Cochrane’s “Portfolios for Long-Term Investors” (2021), pp 19-20 for a deeper discussion of the “Average Investor” theorem and the “Look-in-the-Mirror” test.
{"url":"https://www.advisorperspectives.com/articles/2022/02/25/man-doth-not-invest-by-earnings-yield-alone","timestamp":"2024-11-03T03:19:00Z","content_type":"text/html","content_length":"170081","record_id":"<urn:uuid:7be8f15f-2d98-4e75-bda6-be8a8d71e04c>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00476.warc.gz"}
Rotating tensiometer for the measurement of the elastic modulus of deformable particles In a spinning drop tensiometer, the interfacial tension between two immiscible fluids can be inferred from the equilibrium shape of a drop suspended in a denser rotating immiscible liquid [B. Vonnegut, Rev. Sci. Instrum. 13, 6 (1942)RSINAK0034-674810.1063/1.1769937]. For small deformations of the droplet, an analytical solution for the droplet's shape exists [H. A. Stone and J. W. M. Bush, Q. Appl. Math. 54, 551 (1996)QAMAAY0033-569X10.1090/qam/1402409]. Similarly, we derive an analytical solution for the deformation dynamics of an initially spherical elastic particle suspended in a denser viscous rotating liquid. At long times, the particle attains a steady-state deformed shape that depends on the rotational Bond number, from which it is possible to get a measurement of the particle's elastic modulus, thus giving a proof of concept for a rotating tensiometer. Direct numerical simulations are used to validate the theory and identify its limits of applicability. All Science Journal Classification (ASJC) codes • Computational Mechanics • Modeling and Simulation • Fluid Flow and Transfer Processes Dive into the research topics of 'Rotating tensiometer for the measurement of the elastic modulus of deformable particles'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/rotating-tensiometer-for-the-measurement-of-the-elastic-modulus-o","timestamp":"2024-11-14T14:26:17Z","content_type":"text/html","content_length":"51474","record_id":"<urn:uuid:a9848f90-64c1-4f6c-be40-1368c736f22e>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00630.warc.gz"}
Professor McGuckian's answer to Percentiles of the Normal Curve Question #131 Hi Mr. McGuckian, I have a question. Why on the example 88.5 from chapter 5.3, which asks what the smallest 5% change in waist circumference has to draw the line on the bell curve on the right (due to the fact that the people who lost more would be on the negative, and on homework question #36 the top 2% of dieters (which I assume would be the ones who had more waist reduction) are drawn in the bell curve as the line on the right? Shouldn't this problem follow the same logic than example 88.5 and the people who had more waist reduction be on the left? I am confused. Thank you for your help. Adriana Nava See the professor's answer below. Hi Adriana, It is because, in problem 36, they have described the loss as a positive amount. They say that they had an average decrease of 4 cm. In problem 88.5, they describe the waist change as a negative amount. They say the average was -4.69 inches. If you are on the left hand side of the curve in problem 36, you are dealing with numbers like: 1 cm, 2 cm, 3 cm, .... These amounts represent a smaller decrease. However, in example 88.5, the values on the left of the curve would be numbers like: -7 inches, -6 inches, -5 inches, .... These values would actually represent greater losses because a change of -6 inches is better than a change of only -4.69 inches. Thus, the smaller changes actually sit on the right of the mean for example 88.5. It is all about the scale being negative. Hopefully that makes sense, Professor McGuckian
{"url":"https://www.statsprofessor.com/solution/131","timestamp":"2024-11-14T05:34:26Z","content_type":"text/html","content_length":"47592","record_id":"<urn:uuid:8e699a60-a951-40ad-9567-93489441b2a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00370.warc.gz"}
E vs. M - What's the Difference? | This vs. That E vs. M What's the Difference? E and M are both essential components of the electromagnetic spectrum, with E representing electric fields and M representing magnetic fields. While electric fields are created by stationary charges, magnetic fields are generated by moving charges. Both fields are interconnected and can influence each other, leading to the propagation of electromagnetic waves. E and M are fundamental in understanding the behavior of light, radio waves, and other forms of electromagnetic radiation. Photo by Lennie Schmutz on Unsplash Attribute E M Definition Energy Matter Units Joules (J) Kilograms (kg) Forms Chemical, kinetic, potential, etc. Solid, liquid, gas, plasma Conservation Law of conservation of energy Law of conservation of mass Transfer Transferred through work or heat Transferred through physical contact Photo by Annie Spratt on Unsplash Further Detail When it comes to comparing the attributes of E and M, there are several key factors to consider. E and M are both important concepts in the field of physics, with E representing energy and M representing mass. Understanding the differences and similarities between these two attributes can help us gain a deeper understanding of the fundamental principles that govern the universe. Energy, denoted by the symbol E, is a scalar quantity that represents the ability of a system to do work. It can exist in various forms such as kinetic energy, potential energy, and thermal energy. On the other hand, mass, denoted by the symbol M, is a scalar quantity that represents the amount of matter in an object. It is a fundamental property of matter and is often measured in kilograms. The unit of energy in the International System of Units (SI) is the joule (J), which is defined as the work done by a force of one newton acting over a distance of one meter. Mass, on the other hand, is typically measured in kilograms (kg) in the SI system. While energy is a measure of the ability to do work, mass is a measure of the amount of matter present in an object. One of the key relationships between energy and mass is described by Einstein's famous equation, E=mc^2. This equation shows that energy and mass are interchangeable and can be converted into one another. In other words, mass can be converted into energy and vice versa. This relationship has profound implications for our understanding of the universe and has been confirmed through experiments such as nuclear reactions. Another important aspect to consider when comparing energy and mass is the principle of conservation. Energy is a conserved quantity, meaning that it cannot be created or destroyed, only transferred from one form to another. Mass is also a conserved quantity, meaning that the total mass of a closed system remains constant over time. This conservation principle plays a crucial role in many physical phenomena and is a fundamental law of nature. Energy and mass can be transformed from one form to another through various processes. For example, when an object is in motion, it possesses kinetic energy, which can be converted into potential energy when the object is lifted to a higher position. Similarly, mass can be converted into energy through nuclear reactions, as demonstrated by the equation E=mc^2. These transformations highlight the dynamic nature of energy and mass in the universe. Energy and mass have numerous applications in various fields of science and technology. Energy is essential for powering machines, generating electricity, and fueling our daily activities. Mass plays a crucial role in determining the gravitational force between objects, as described by Newton's law of universal gravitation. Understanding the properties of energy and mass is essential for advancing our knowledge and technology. In conclusion, the attributes of energy and mass, represented by E and M respectively, are fundamental concepts in physics that play a crucial role in our understanding of the universe. While energy represents the ability to do work, mass represents the amount of matter in an object. The relationship between energy and mass, as described by Einstein's equation E=mc^2, highlights the interconnected nature of these two attributes. By studying and comparing the properties of energy and mass, we can gain a deeper insight into the fundamental principles that govern the physical Comparisons may contain inaccurate information about people, places, or facts. Please report any issues.
{"url":"https://thisvsthat.io/e-vs-m","timestamp":"2024-11-13T11:19:26Z","content_type":"text/html","content_length":"13189","record_id":"<urn:uuid:b052eeee-0707-4b0b-86fe-64eabc7d1485>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00886.warc.gz"}
Microbiology - Online Tutor, Practice Problems & Exam Prep In this video, we're going to begin our lesson on generation times. Scientists can actually measure the growth rate of a microbial population by calculating its generation time. The generation time, also sometimes referred to as the doubling time, is the amount of time it takes for a population to double in the number of cells. In other words, the generation time or the doubling time represents how long it takes for binary fission to occur and for binary fission to create a new generation of cells. Recall from our previous lesson videos that binary fission is the process by which prokaryotic cells divide. The generation time is simply how long binary fission takes. Different microbes tend to have different generation times. Some microbes will divide really slowly, whereas other microbes will divide very fast. If we look at our image below, we can get a better understanding of the different generation times. On the left, notice we are showing you a microbe that divides very slowly, like this turtle, which we know moves very slow. You can see that over a period of 30 minutes, this prokaryotic cell is able to divide into 2 cells. The generation time for this microbe is 30 minutes. On the right side of the image, we're showing you a microbe that divides very fast, like this bunny rabbit you see here. Notice that in half the time, in just 15 minutes, this microbe is able to divide to create a new generation of cells. Over a period of 30 minutes, these cells are able to divide once again. This microbe on the right is going to have a much faster generation time. You can see that shorter times represent faster binary fission, whereas longer times represent more extended binary fission processes. This concludes our brief introduction to generation times. Later, in our next video, we'll discuss how scientists can use these generation times to predict how many cells there will be after a given amount of time. So I'll see you all in that next video to talk about that.
{"url":"https://www.pearson.com/channels/microbiology/learn/jason/ch-7-prokaryotic-cell-structures-functions/generation-times?chapterId=49adbb94","timestamp":"2024-11-12T10:04:57Z","content_type":"text/html","content_length":"466270","record_id":"<urn:uuid:5a0a21f8-2578-4894-b96f-d95fac5a8fdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00327.warc.gz"}
Identifying an Unknown Force in a System of Unbalanced Forces Question Video: Identifying an Unknown Force in a System of Unbalanced Forces Physics A cyclist supplies a force of 250 N to her bicycle. She and the bicycle together have a mass of 130 kg. The bicycle accelerates at 1.5 m/s² as it travels into a headwind that applies a 15 N force in the opposite direction to the bicycle’s velocity, and friction acts on the bicycle in the same direction as the wind. How much force, in newtons, is supplied by friction? Video Transcript A cyclist supplies a force of 250 newtons to her bicycle. She and the bicycle together have a mass of 130 kilograms. The bicycle accelerates at 1.5 meters per second squared as it travels into a headwind that applies a 15-newton force in the opposite direction to the bicycle’s velocity. And friction acts on the bicycle in the same direction as the wind. How much force, in newtons, is supplied by friction? Okay, so let’s start by underlining all of the important parts of the question, so we don’t miss anything out. Firstly, we know that we’ve got a cycle and a cyclist. And the cyclist applies a force of 250 newtons to her bicycle. She and the bicycle together have a mass of 130 kilograms. We’re told that the bicycle accelerates at 1.5 meters per second squared. Unfortunately for the cyclist, she’s travelling into a headwind that applies a 15-newton force in the opposite direction to the bicycle’s velocity. And there’s also the force of friction which acts on the bicycle in the same direction as the wind. We’re asked to work out how much force, in newtons, is supplied by friction. So now that we’ve underlined all the important parts, let’s label some quantities. Let’s start with the force that the cyclist applies to her bicycle. We’ll call this 𝐹 sub cyc. This force is 250 newtons. And of course, this force is going to act in the direction of the bicycle’s velocity because on a bicycle, when you push with your legs, you want to move forward, right? Now, when we come to drawing a diagram later, we’ll set that the cyclist is travelling to the right. This is an arbitrary choice, but it’s important to be consistent. So let’s just say the cyclist is trying to travel to the right. In that case, the force 𝐹 sub cyc is also acting towards the right. This will become important later. For now, let’s move on to the mass of the bicycle and the cyclist together. Let’s call this mass 𝑚, and this happens to be 130 kilograms. We could then also look at the acceleration of the bicycle. The acceleration is 1.5 meters per second squared. And from contacts, we can realize that this is towards the right as well, in the direction of travel. Because remember, the cyclist is pushing with a force of 250 newtons. Unless she’s going up against a massive headwind or trying to climb up a hill, she’s not going to be accelerating in the opposite direction to her travel. In other words, she is not going to be slowing down because it’s really rare on a bike to be pedaling along hard and slowing down. You can probably tell I’m not a cyclist, right? Anyway, she would only be slowing down if there was a massive force of gravity because she was trying to go up hill or something. Or, if there was a huge headwind, which there isn’t. We know that the headwind force is 15 newtons. The other option for her to be slowing down as if friction was massive. But generally, in everyday life, we can pedal hard enough to overcome friction. Otherwise, cycling wouldn’t be possible. So anyway, she’s accelerating to the right as well. Now, the next thing that we know is that the force applied by the headwind, we’ll call this 𝐹 sub wind, is 15 newtons. Now, we also know that this force is in the opposite direction to the bicycle’s velocity. So this force is trying to slow her down. And hence, it acts towards the left. But it’s a relatively small force. It’s only 15 newtons, compared to the 250 that she’s putting into the bike so that she can move forward. And finally, the force that we’re trying to actually find out, we’ll call this 𝐹 sub fric, for the frictional force. And we don’t know what this is. But we do know that the frictional force acts in the same direction as the wind. So it acts towards the left. Now, it’s all well and good discussing all of these directions of travel. But, we don’t actually have a diagram to show us what’s going on properly. So let’s draw one. So here it is, our slightly surrealist interpretation of a cyclist and a bicycle. But the important thing is that she is wearing a helmet for safety reasons. So anyway, let’s get to labelling all the forces on the bike and the cyclist. We’ve said the 𝐹 sub cyc acts to the right and 𝐹 sub wind and 𝐹 sub fric are acting towards the left. As well as this, we know the mass of the bicycle and the cyclist combined. And we know the acceleration as well. And actually, these last two quantities are quite useful. Knowing the mass and the acceleration is going to allow us to use Newton’s second law of motion. This law tells us that the resultant force on an object, 𝐹, is equal to the mass of the object, 𝑚, multiplied by the acceleration of the object, 𝑎. And since we already know the mass and the acceleration, we can therefore work out the resultant force on the bike. So let’s say that the resultant force on the bike — 𝐹 sub res, that’s what we’ll call it — is equal to the mass of the cyclist and the bicycle together multiplied by the acceleration of that whole object. Now, this is a really useful expression. But we can find another one using the values 𝐹 sub wind, 𝐹 sub fric, and 𝐹 sub cyc. Because remember, the resultant force on any object is simply the sum of all those forces, when you take into account the directions in which they’re acting. In other words, 𝐹 sub res, the resultant force, is equal to 𝐹 sub cyc, the force exerted by the cyclist towards the right, minus 𝐹 sub wind — the force trying to hold back the cyclist due to the headwind, and it’s negative because it’s acting in the opposite direction — and also minus 𝐹 sub fric, the other force trying to hold back the cyclist. And at that point, we’ve accounted for all of the forces and all of the directions. But, oh look! We’ve got two expressions for 𝐹 sub res. Why don’t we equate the two. 𝐹 sub res is equal to 𝑚𝑎, as we’ve seen here. But also, this is equal to, 𝐹 sub res is equal to 𝐹 sub cyc minus 𝐹 sub wind minus 𝐹 sub fric. And at this point, we don’t need to worry about the resultant force anymore. So let’s get rid of it, bye bye. And now we’re left with a really useful expression because we already know what 𝑚 is. We already know what 𝑎 is, same with 𝐹 sub cyc, same with 𝐹 sub wind. And we’re trying to find out what 𝐹 sub fric is. So we know all of the quantities in this equation apart from the one we’re trying to find out. Brilliant news! Let’s substitute some stuff in then. 𝑚 times 𝑎 becomes 130 times 1.5. And the right-hand side becomes 250 minus 15 minus 𝐹 sub fric. And 250 minus 15 becomes 235. And the left-hand side becomes 195. At this point, we can rearrange. Add 𝐹 sub fric to both sides. So it cancels on the right-hand side, which leaves us with 𝐹 sub fric plus 195 is equal to 235. And then subtract 195 from both sides as well. It cancels on the left-hand side. So we’re left with 𝐹 sub fric is equal to 40. 40 is the value of the right-hand side. And remember, we need to put the units of newtons. The question wanted us to give an answer in newtons. And luckily, all of the values that we used in our working out — newtons, kilograms, meters per second squared, and newtons — happened to be in the standard units. So 𝐹 sub fric is also going to be in the standard units of newtons. And so, this becomes our final answer. The force supplied by friction is 40 newtons.
{"url":"https://www.nagwa.com/en/videos/970169138940/","timestamp":"2024-11-11T11:36:45Z","content_type":"text/html","content_length":"256068","record_id":"<urn:uuid:557297e3-cbd3-4b03-a407-4b6010d07883>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00132.warc.gz"}
Large Block DES Newsletter [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Large Block DES Newsletter Large Block DES Newsletter Vol. I, No. 1 Feb. 28, 1994 Terry Ritter, Ed. Current Standings for the Large-Block DES Proposals: I. NxM DES: A B v v k1 -> DES1 k2 -> DES2 v v C D Exchange Right 4 Bytes E F v v k3 -> DES3 k4 -> DES4 v v G H Falls to meet-in-the-middle like double-DES. Falls to a practical attack by Biham, now called "fix-in-the-middle." II. NxM DES Found Weak Announcement of above. III. Isolated Double-DES 2x construct found weak in original article. The 1x construct: k1 -> DES1 km -> XOR k2 -> DES2 was found weak by Chris Dodd <[email protected]> who pointed out that two different blocks of known-plaintext (A,D) and (A',D') will allow matching (B XOR B') and (C xor X'). (This is similar to Biham's "fix-in-the-middle.") Good going Chris! Also found by Stefan Lucks <[email protected]>. IV. Ladder-DES A B | k1 | v v | XOR <- DES1-----| | | | k2 | | v v |---- DES2 -> XOR | | | k3 | v v | XOR <- DES3 ----| | | | k4 | | v v |---- DES4 -> XOR | | v v C D Joseph C. Konczal <[email protected]> points out that the construct is indeed vulnerable to meet-in-the-middle. I agree, but note that this seems to imply a 112-bit search. Since we don't need more than 112 or 120 bits of strength, I don't see it as a problem. (Indeed, if we could get more strength, we might want to trade it for speed anyway.) 112 bits (or so) is the design goal, which should be enough for a couple of decades. In a normal cipher design, I would expect each key bit to contribute toward strength, but these are hardly normal cipher designs. Especially when we try to expand block size, extra keys may simply provide another small block with the same strength as a previous small block. Keys will be delivered electronically, so the relatively rare delivery of 2x or 4x or even 8x the expected key material should not pose a serious However, Biham reports: "ladder DES is not more secure than 2**88 steps and 2**64 chosen plaintexts." Now, 2^88 cipherings is 2^32 times as strong as the 2^56 currently in DES (and larger than Skipjack), but hardly the 2^112 intended. For the current design the current options 1) live with the 2^88 strength (so far!), 2) design the rest of the system to prevent chosen plaintexts, or 3) prevent more than, say, 2^32 block cipherings under a single key. Actually, we need to know exactly what the problem is, and the limits of it, before we can propose a fix, or decide whether the ladder-DES scheme is unfixable. Three substantially different constructs proposed; of these, two fall, and one is wounded. To review, the intent is to find some relatively-simple construct which builds on the assumed strength of DES to deliver wide blocks and something like 112 bits of strength, with less processing than triple-DES. (I see no need for super-strength, unless it is free.) We still do not know whether or not this is possible.
{"url":"https://cypherpunks.venona.com/date/1994/03/msg00057.html","timestamp":"2024-11-12T03:42:35Z","content_type":"text/html","content_length":"8225","record_id":"<urn:uuid:91ed3cb1-31de-4ee0-ac47-118baebec85d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00124.warc.gz"}
Transfer function model with identifiable parameters An idtf model represents a system as a continuous-time or discrete-time transfer function with identifiable (estimable) coefficients. Use idtf to create a transfer function model, or to convert Dynamic System Models to transfer function form. A SISO transfer function is a ratio of polynomials with an exponential term. In continuous time, $G\left(s\right)={e}^{-\tau s}\frac{{b}_{n}{s}^{n}+{b}_{n-1}{s}^{n-1}+...+{b}_{0}}{{s}^{m}+{a}_{m-1}{s}^{m-1}+...+{a}_{0}}.$ In discrete time, In discrete time, z^–k represents a time delay of kT[s], where T[s] is the sample time. For idtf models, the denominator coefficients a[0],...,a[m–1] and the numerator coefficients b[0],...,b[n] can be estimable parameters. (The leading denominator coefficient is always fixed to 1.) The time delay τ (or k in discrete time) can also be an estimable parameter. The idtf model stores the polynomial coefficients a[0],...,a[m–1] and b[0],...,b[n] in the Denominator and Numerator properties of the model, respectively. The time delay τ or k is stored in the IODelay property of the model. Unlike idss and idpoly, idtf fixes the noise parameter to 1 rather than parameterizing it. So, in $y=Gu+He$, H = 1. A MIMO transfer function contains a SISO transfer function corresponding to each input-output pair in the system. For idtf models, the polynomial coefficients and transport delays of each input-output pair are independently estimable parameters. You can obtain an idtf model object in one of three ways. • Estimate the idtf model based on input-output measurements of a system using tfest. The tfest command estimates the values of the transfer function coefficients and transport delays. The estimated values are stored in the Numerator, Denominator, and IODelay properties of the resulting idtf model. When you reference numerator and denominator properties, you can use the shortcuts num and den. The Report property of the resulting model stores information about the estimation, such as handling of initial conditions and options used in estimation. For example, you can use the following commands to estimate and get information about a transfer function. sys = tfest(data,nx); num = sys.Numerator; den = sys.den; For more examples of estimating an idtf model, see tfest. When you obtain an idtf model by estimation, you can extract estimated coefficients and their uncertainties from the model. To do so, use commands such as tfdata, getpar, or getcov. • Create an idtf model using the idtf command. For example, create an idtf model with the numerator and denominator that you specify. You can create an idtf model to configure an initial parameterization for estimation of a transfer function to fit measured response data. When you do so, you can specify constraints on such values as the numerator and denominator coefficients and transport delays. For example, you can fix the values of some parameters, or specify minimum or maximum values for the free parameters. You can then use the configured model as an input argument to tfest to estimate parameter values with those constraints. For examples, see Create Continuous-Time Transfer Function Model and Create Discrete-Time Transfer Function. • Convert an existing dynamic system model to an idtf model using the idtf command. For example, convert the state-space model sys_ss to a transfer function. For a more detailed example, see Convert Identifiable State-Space Model to Identifiable Transfer Function For information on functions you can use to extract information from or transform idtf model objects, see Object Functions. Create Transfer Function Model sys = idtf(numerator,denominator) creates a continuous-time transfer function model with identifiable parameters. numerator specifies the current values of the transfer function numerator coefficients. denominator specifies the current values of the transfer function denominator coefficients. sys = idtf(numerator,denominator,Ts) creates a discrete-time transfer function model with sample time Ts. sys = idtf(___,Name,Value) creates a transfer function with the properties specified by one or more Name,Value pair arguments. Specify name-value pair arguments after any of the input argument combinations in the previous syntaxes. Convert Dynamic System Model to Transfer Function Model sys = idtf(sys0) converts any dynamic system model sys0 to idtf model form. Input Arguments sys0 — Dynamic system dynamic system model Any dynamic system to convert to an idtf model. When sys0 is an identified model, its estimated parameter covariance is lost during conversion. If you want to translate the estimated parameter covariance during the conversion, use translatecov. Numerator — Values of transfer function numerator coefficients vector | cell array Values of transfer function numerator coefficients, specified as a row vector or a cell array. For SISO transfer functions, the values of the numerator coefficients are stored as a row vector in the following order: • Descending powers of s or p (for continuous-time transfer functions) • Ascending powers of z^–1 or q^–1 (for discrete-time transfer functions) Any coefficient whose initial value is not known is stored as NaN. For MIMO transfer functions with Ny outputs and Nu inputs, Numerator is a Ny-by-Nu cell array of numerator coefficients for each input/output pair. For an example of a MIMO transfer function, see Create MIMO Discrete-Time Transfer Function. If you create an idtf model sys using the idtf command, sys.Numerator contains the initial values of numerator coefficients that you specify with the numerator input argument. If you obtain an idtf model by identification using tfest, then sys.Numerator contains the estimated values of the numerator coefficients. For an idtf model sys, the property sys.Numerator is an alias for the value of the property sys.Structure.Numerator.Value. Denominator — Values of transfer function denominator coefficients vector | cell array Values of transfer function denominator coefficients, specified as a row vector or a cell array. For SISO transfer functions, the values of the denominator coefficients are stored as a row vector in the following order: • Descending powers of s or p (for continuous-time transfer functions) • Ascending powers of z^–1 or q^–1 (for discrete-time transfer functions) The leading coefficient in Denominator is fixed to 1. Any coefficient whose initial value is not known is stored as NaN. For MIMO transfer functions with Ny outputs and Nu inputs, Denominator is an Ny-by-Nu cell array of denominator coefficients for each input-output pair. For an example of a MIMO transfer function, see Create MIMO Discrete-Time Transfer Function. If you create an idtf model sys using theidtf command, sys.Denominator contains the initial values of denominator coefficients that you specify with the denominator input argument. If you obtain an idtf model sys by identification using tfest, then sys.Denominator contains the estimated values of the denominator coefficients. For an idtf model sys, the property sys.Denominator is an alias for the value of the property sys.Structure.Denominator.Value. Variable — Transfer function display variable 's' (default) | 'p' | 'z^-1' | 'q^-1' Transfer function display variable, specified as one of the following values: • 's' — Default for continuous-time models • 'p' — Equivalent to 's' • 'z^-1' — Default for discrete-time models • 'q^-1' — Equivalent to 'z^-1' The value of Variable is reflected in the display, and also affects the interpretation of the Numerator and Denominator coefficient vectors for discrete-time models. When Variable is set to 'z^-1' or 'q^-1', the coefficient vectors are ordered as ascending powers of the variable. For an example of using the Variable property, see Specify Transfer Function Display Variable. IODelay — Transport delays 0 (default) | numeric array Transport delays, specified as a numeric array containing a separate transport delay for each input-output pair. For continuous-time systems, transport delays are expressed in the time unit stored in the TimeUnit property. For discrete-time systems, transport delays are expressed as integers denoting delay of a multiple of the sample time Ts. For a MIMO system with Ny outputs and Nu inputs, set IODelay as an Ny-by-Nu array. Each entry of this array is a numerical value representing the transport delay for the corresponding input-output pair. You can set IODelay to a scalar value to apply the same delay to all input-output pairs. If you create an idtf model sys using the idtf command, then sys.IODelay contains the initial values of the transport delay that you specify with a name-value pair argument. If you obtain an idtf model sys by identification using tfest, then sys.IODelay contains the estimated values of the transport delay. For an idtf model sys, the property sys.IODelay is an alias for the value of the property sys.Structure.IODelay.Value. Structure — Information about estimable parameters structure property values | array of structure property values Property-specific information about the estimable parameters of the idtf model, specified as a single structure or an array of structures. • SISO system — Single structure. • MIMO system with Ny outputs and Nu inputs — Ny-by-Nu array. The element Structure(i,j) contains information corresponding to the transfer function for the (i,j) input-output pair. Structure.Numerator, Structure.Denominator, and Structure.IODelay contain information about the numerator coefficients, denominator coefficients, and transport delay, respectively. Each parameter in Structure contains the following fields. Field Description Examples sys.Structure.Numerator.Value contains the initial or estimated values of SISO Value Parameter values. Each property is an alias of the corresponding Value entry in the Structure property of numerator coefficients. sys.Numerator is an alias of the value of this property. sys.NaN represents unknown parameter values. sys.Numerator{i,j} is the alias of the MIMO property sys.Structure sys.Structure.IODelay.Minimum = 0.1 constrains the transport delay to values Minimum Minimum value that the parameter can assume during estimation. greater than or equal to 0.1. sys.Structure.IODelay.Minimum must be greater than or equal to zero. sys.Structure.IODelay.Maximum = 0.5 constrains the transport delay to values less Maximum Maximum value that the parameter can assume during estimation. than or equal to 0.5. sys.Structure.IODelay.Maximum must be greater than or equal to zero. Boolean specifying whether the parameter is a free estimation variable. If you want to fix the value of a Free parameter during estimation, set the corresponding Free value to false. For denominators, the value of Free sys.Structure.Denominator.Free = false fixes all of the denominator coefficients for the leading coefficient, specified by sys.Structure.Denominator.Free(1), is always false (the leading in sys to the values specified in sys.Structure.Denominator.Value. denominator coefficient is always fixed to 1). Scale Scale of the value of the parameter. The estimation algorithm does not use Scale. Info Structure array that contains the fields Label and Unit for storing parameter labels and units. Specify 'Time' parameter labels and units as character vectors. NoiseVariance — Variance of model innovations scalar | matrix Variance (covariance matrix) of the model innovations e, specified as a scalar or matrix. • SISO model — Scalar • MIMO model with N[y] outputs — N[y]-by-N[y] matrix An identified model includes a white Gaussian noise component e(t). NoiseVariance is the variance of this noise component. Typically, the model estimation function (such as tfest) determines this Report — Summary report report field values This property is read-only. Summary report that contains information about the estimation options and results for a transfer function model obtained using estimation commands, such as tfest and impulseest. Use Report to find estimation information for the identified model, including: • Estimation method • Estimation options • Search termination conditions • Estimation data fit and other quality metrics If you create the model by construction, the contents of Report are irrelevant. sys = idtf([1 4],[1 20 5]); If you obtain the model using estimation commands, the fields of Report contain information on the estimation data, options, and results. load iddata2 z2; sys = tfest(z2,3); InitializeMethod: 'iv' InitializeOptions: [1×1 struct] InitialCondition: 'auto' Display: 'off' InputOffset: [] OutputOffset: [] EstimateCovariance: 1 Regularization: [1×1 struct] SearchMethod: 'auto' SearchOptions: [1×1 idoptions.search.identsolver] WeightingFilter: [] EnforceStability: 0 OutputWeight: [] Advanced: [1×1 struct] For more information on this property and how to use it, see the Output Arguments section of the corresponding estimation command reference page and Estimation Report. InputDelay — Input delay for each input channel 0 (default) | scalar | vector Input delay for each input channel, specified as a scalar value or numeric vector. For continuous-time systems, specify input delays in the time unit stored in the TimeUnit property. For discrete-time systems, specify input delays in integer multiples of the sample time Ts. For example, setting InputDelay to 3 specifies a delay of three sample times. For a system with N[u] inputs, set InputDelay to an N[u]-by-1 vector. Each entry of this vector is a numerical value that represents the input delay for the corresponding input channel. You can also set InputDelay to a scalar value to apply the same delay to all channels. Estimation treats InputDelay as a fixed constant of the model. Estimation uses the IODelay property for estimating time delays. To specify initial values and constrains for estimation of time delays, use sys.Structure.IODelay. OutputDelay — Output delay for each output channel 0 (default) For identified systems such as idtf, OutputDelay is fixed to zero. Ts — Sample Time 0 (default) | -1 | positive scalar Sample time, specified as one of the following. • Continuous-time model — 0 • Discrete-time model with a specified sampling time — Positive scalar representing the sampling period expressed in the unit specified by the TimeUnit property of the model • Discrete-time model with unspecified sample time — -1 Changing this property does not discretize or resample the model. Use c2d and d2c to convert between continuous- and discrete-time representations. Use d2d to change the sample time of a discrete-time system. TimeUnit — Units for time variable 'seconds' (default) | 'nanoseconds' | 'microseconds' | 'milliseconds' | 'minutes' | 'hours' | 'days' | 'weeks' | 'months' | 'years' Units for the time variable, the sample time Ts, and any time delays in the model, specified as a scalar. Changing this property does not resample or convert the data. Modifying the property changes only the interpretation of the existing data. Use chgTimeUnit to convert data to different time units. InputName — Input channel names '' (default) | character vector | cell array Input channel names, specified as a character vector or cell array. • Single-input model — Character vector, for example, 'controls' • Multi-input model — Cell array of character vectors Alternatively, use automatic vector expansion to assign input names for multi-input models. For example, if sys is a two-input model, enter: sys.InputName = 'controls'; The input names automatically expand to {'controls(1)';'controls(2)'}. When you estimate a model using an iddata object data, the software automatically sets InputName to data.InputName. You can use the shorthand notation u to refer to the InputName property. For example, sys.u is equivalent to sys.InputName. You can use input channel names in several ways, including: • To identify channels on model display and plots • To extract subsystems of MIMO systems • To specify connection points when interconnecting models InputUnit — Input channel units '' (default) | character vector | cell array Input channel units, specified as a character vector or cell array. • Single-input model — Character vector • Multi-input Model — Cell array of character vectors Use InputUnit to keep track of input signal units. InputUnit has no effect on system behavior. InputGroup — Input channel groups struct with no fields (default) | struct Input channel groups, specified as a structure. The InputGroup property lets you divide the input channels of MIMO systems into groups so that you can refer to each group by name. In the InputGroup structure, set field names to the group names, and field values to the input channels belonging to each group. For example, create input groups named controls and noise that include input channels 1, 2 and 3, 5, respectively. sys.InputGroup.controls = [1 2]; sys.InputGroup.noise = [3 5]; You can then extract the subsystem from the controls inputs to all outputs using the following syntax: OutputName — Output channel names '' (default) | character vector | cell array Output channel names, specified as a character vector or cell array. • Single-input model — Character vector, for example, 'measurements' • Multi-input model — Cell array of character vectors Alternatively, use automatic vector expansion to assign output names for multi-output models. For example, if sys is a two-output model, enter: sys.OutputName = 'measurements'; The output names automatically expand to {'measurements(1)';'measurements(2)'}. When you estimate a model using an iddata object data, the software automatically sets OutputName to data.OutputName. You can use the shorthand notation y to refer to the OutputName property. For example, sys.y is equivalent to sys.OutputName. You can use output channel names in several ways, including: • To identify channels on model display and plots • To extract subsystems of MIMO systems • To specify connection points when interconnecting models OutputUnit — Output channel units '' (default) | character vector | cell array Output channel units, specified as a character vector or cell array. • Single-input model — Character vector, for example, 'seconds' • Multi-input Model — Cell array of character vectors Use OutputUnit to keep track of output signal units. OutputUnit has no effect on system behavior. OutputGroup — Output channel groups struct with no fields (default) | struct Output channel groups, specified as a structure. The OutputGroup property lets you divide the output channels of MIMO systems into groups and refer to each group by name. In the OutputGroup structure, set field names to the group names, and field values to the output channels belonging to each group. For example, create output groups named temperature and measurement that include output channels 1 and 3, 5, respectively. sys.OutputGroup.temperature = [1]; sys.OutputGroup.measurement = [3 5]; You can then extract the subsystem from all inputs to the measurement outputs using the following syntax: Name — System Name '' (default) | character vector System name, specified as a character vector, for example, 'system_1'. Notes — Notes on system 0-by-1 string (default) | string | character vector Any text that you want to associate with the system, specified as a string or a cell array of character vectors. The property stores whichever data type you provide. For instance, if sys1 and sys2 are dynamic system models, you can set their Notes properties as follows. sys1.Notes = "sys1 has a string."; sys2.Notes = 'sys2 has a character vector.'; ans = "sys1 has a string." ans = 'sys2 has a character vector.' UserData — Data to associate with system [] (default) | any MATLAB^® data type Data to associate with the system, specified as any MATLAB data type. SamplingGrid — Sampling grid [] (default) | struct Sampling grid for model arrays, specified as a structure. For arrays of identified linear (IDLTI) models that you derive by sampling one or more independent variables, this property tracks the variable values associated with each model. This information appears when you display or plot the model array. Use this information to trace results back to the independent variables. Set the field names of the data structure to the names of the sampling variables. Set the field values to the sampled variable values associated with each model in the array. All sampling variables must be numeric and scalar valued, and all arrays of sampled values must match the dimensions of the model array. For example, suppose that you collect data at various operating points of a system. You can identify a model for each operating point separately and then stack the results together into a single system array. You can tag the individual models in the array with information regarding the operating point. nominal_engine_rpm = [1000 5000 10000]; sys.SamplingGrid = struct('rpm', nominal_engine_rpm) Here, sys is an array containing three identified models obtained at 1000, 5000, and 10000 rpm, respectively. For model arrays that you generate by linearizing a Simulink^® model at multiple parameter values or operating points, the software populates SamplingGrid automatically with the variable values that correspond to each entry in the array. Object Functions In general, any function applicable to Dynamic System Models is applicable to an idtf model object. These functions are of four general types. • Functions that operate and return idtf model objects enable you to transform and manipulate idtf models. For instance: □ Use merge to merge estimated idtf models. □ Use c2d to convert an idtf from continuous to discrete time. Use d2c to convert an idtf from discrete to continuous time. • Functions that perform analytical and simulation functions on idtf objects, such as bode and sim • Functions that retrieve or interpret model information, such as advice and getpar • Functions that convert idtf objects into a different model type, such as idpoly for time domain or idfrd for frequency domain The following lists contain a representative subset of the functions that you can use with idtf models. Transformation and Manipulation translatecov Translate parameter covariance across model transformation operations setpar Set attributes such as values and bounds of linear model parameters chgTimeUnit Change time units of dynamic system d2d Resample discrete-time model d2c Convert model from discrete to continuous time c2d Convert model from continuous to discrete time merge Merge estimated models Analysis and Simulation sim Simulate response of identified model predict Predict state and state estimation error covariance at next time step using extended or unscented Kalman filter, or particle filter compare Compare identified model output with measured output impulse Impulse response plot of dynamic system; impulse response data step Step response of dynamic system bode Bode frequency response of dynamic system Information Extraction and Interpretation tfdata Access transfer function data get Access model property values getpar Obtain attributes such as values and bounds of linear model parameters getcov Parameter covariance of identified model advice Analysis and recommendations for data or estimated linear models Conversion to Other Model Structures idpoly Polynomial model with identifiable parameters idss State-space model with identifiable parameters idfrd Frequency response data or model Create Continuous-Time Transfer Function Model Specify a continuous-time, single-input, single-output (SISO) transfer function with estimable parameters. The initial values of the transfer function are given by the following equation: num = [1 4]; den = [1 20 5]; G = idtf(num,den) G = s + 4 s^2 + 20 s + 5 Continuous-time identified transfer function. Number of poles: 2 Number of zeros: 1 Number of free coefficients: 4 Use "tfdata", "getpvec", "getcov" for parameters and their uncertainties. Created by direct construction or transformation. Not estimated. G is an idtf model. num and den specify the initial values of the numerator and denominator polynomial coefficients in descending powers of $s$. The numerator coefficients with initial values 1 and 4 are estimable parameters. The denominator coefficients with initial values 20 and 5 are also estimable parameters. The leading denominator coefficient is always fixed to 1. You can use G to specify an initial parameterization for estimation with tfest. Create Transfer Function with Known Input Delay and Specified Attributes Specify a continuous-time, SISO transfer function with known input delay. The transfer function initial values are given by the following equation: Label the input of the transfer function with the name 'Voltage' and specify the input units as volt. Use name-value pair arguments to specify the delay, input name, and input unit. num = 5; den = [1 5]; input_delay = 5.8; input_name = 'Voltage'; input_unit = 'volt'; G = idtf(num,den,'InputDelay',input_delay,... $G$ is an idtf model. You can use G to specify an initial parameterization for estimation with tfest. If you do so, model properties such as InputDelay, InputName, and InputUnit are applied to the estimated model. The estimation process treats InputDelay as a fixed value. If you want to estimate the delay and specify an initial value of 5.8 s, use the IODelay property instead. Create Discrete-Time Transfer Function Specify a discrete-time SISO transfer function with estimable parameters. The initial values of the transfer function are given by the following equation: Specify the sample time as 0.2 seconds. num = [1 -0.1]; den = [1 0.8]; Ts = 0.2; H = idtf(num,den,Ts); num and den are the initial values of the numerator and denominator polynomial coefficients. For discrete-time systems, specify the coefficients in ascending powers of ${z}^{-1}$. Ts specifies the sample time for the transfer function as 0.2 seconds. H is an idtf model. The numerator and denominator coefficients are estimable parameters (except for the leading denominator coefficient, which is fixed to 1). Create MIMO Discrete-Time Transfer Function Specify a discrete-time, two-input, two-output transfer function. The initial values of the MIMO transfer function are given by the following equation: $H\left(z\right)=\left[\begin{array}{cc}\frac{1}{z+0.2}& \frac{z}{z+0.7}\\ \frac{-z+2}{z-0.3}& \frac{3}{z+0.3}\end{array}\right]$. Specify the sample time as 0.2 seconds. nums = {1,[1,0];[-1,2],3}; dens = {[1,0.2],[1,0.7];[1,-0.3],[1,0.3]}; Ts = 0.2; H = idtf(nums,dens,Ts); nums and dens specify the initial values of the coefficients in cell arrays. Each entry in the cell array corresponds to the numerator or denominator of the transfer function of one input-output pair. For example, the first row of nums is {1,[1,0]}. This cell array specifies the numerators across the first row of transfer functions in H. Likewise, the first row of dens, {[1,0.2],[1,0.7]}, specifies the denominators across the first row of H. Ts specifies the sample time for the transfer function as 0.2 seconds. H is an idtf model. All of the polynomial coefficients are estimable parameters, except for the leading coefficient of each denominator polynomial. These coefficients are always fixed to 1. Specify Transfer Function Display Variable Specify the following discrete-time transfer function in terms of q^-1: Specify the sample time as 0.1 seconds. num = [1 0.4]; den = [1 0.1 -0.3]; Ts = 0.1; convention_variable = 'q^-1'; H = idtf(num,den,Ts,'Variable',convention_variable); Use a name-value pair argument to specify the variable q^-1. num and den are the numerator and denominator polynomial coefficients in ascending powers of ${q}^{-1}$. Ts specifies the sample time for the transfer function as 0.1 seconds. H is an idtf model. Gain Matrix Transfer Function Specify a transfer function with estimable coefficients whose initial value is given by the following static gain matrix: $H\left(s\right)=\left[\begin{array}{ccc}1& 0& 1\\ 1& 1& 0\\ 3& 0& 2\end{array}\right]$ M = [1 0 1; 1 1 0; 3 0 2]; H = idtf(M); H is an idtf model that describes a three input (Nu = 3), three output (Ny = 3) transfer function. Each input-output channel is an estimable static gain. The initial values of the gains are given by the values in the matrix M. Convert Identifiable State-Space Model to Identifiable Transfer Function Convert a state-space model with identifiable parameters to a transfer function with identifiable parameters. Convert the following identifiable state-space model to an identifiable transfer function. $\begin{array}{l}\underset{}{\overset{\sim }{x}}\left(t\right)=\left[\begin{array}{cc}-0.2& 0\\ 0& -0.3\end{array}\right]x\left(t\right)+\left[\begin{array}{c}-2\\ 4\end{array}\right]u\left(t\right)+ \left[\begin{array}{c}0.1\\ 0.2\end{array}\right]e\left(t\right)\\ y\left(t\right)=\left[\begin{array}{cc}1& 1\end{array}\right]x\left(t\right)\end{array}$ A = [-0.2, 0; 0, -0.3]; B = [2;4]; C = [1, 1]; D = 0; K = [0.1; 0.2]; sys0 = idss(A,B,C,D,K,'NoiseVariance',0.1); sys = idtf(sys0); A, B, C, D, and K are matrices that specify sys0, an identifiable state-space model with a noise variance of 0.1. sys = idtf(sys0) creates an idtf model sys. Estimate Transfer Function Model by Specifying Number of Poles Load the time-domain system-response data in timetable tt1. Set the number of poles np to 2 and estimate a transfer function. np = 2; sys = tfest(tt1,np); sys is an idtf model containing the estimated two-pole transfer function. View the numerator and denominator coefficients of the resulting estimated model sys. ans = 1×2 2.4554 176.9856 ans = 1×3 1.0000 3.1625 23.1631 To view the uncertainty in the estimates of the numerator and denominator and other information, use tfdata. Create Array of Transfer Function Models Create an array of transfer function models with identifiable coefficients. Each transfer function in the array is of the form: The initial value of the coefficient $a$ varies across the array, from 0.1 to 1.0, in increments of 0.1. H = idtf(zeros(1,1,10)); for k = 1:10 num = k/10; den = [1 k/10]; H(:,:,k) = idtf(num,den); The first command preallocates a one-dimensional, 10-element array, H, and fills it with empty idtf models. The first two dimensions of a model array are the output and input dimensions. The remaining dimensions are the array dimensions. H(:,:,k) represents the ${k}^{th}$ model in the array. Thus, the for loop replaces the ${k}^{th}$ entry in the array with a transfer function whose coefficients are initialized with $a=k/10$. Version History Introduced in R2012a
{"url":"https://es.mathworks.com/help/ident/ref/idtf.html","timestamp":"2024-11-03T22:27:29Z","content_type":"text/html","content_length":"213352","record_id":"<urn:uuid:8dd0658d-afd5-4bed-837a-b4e820543b73>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00477.warc.gz"}
Confessions of a Multiverse Skeptic Okay, the title of my post is a little misleading. A more accurate, but less catchy, title for my post would be, “Confessions of a Skeptic of the Multiverse Objection to the Fine-Tuning Argument.” Whew! Just trying saying that five times fast! On a serious note, I’ve mentioned before that I am not convinced by appeals to the multiverse hypothesis to probabilistic versions of the fine-tuning argument (FTA). In this post, I will try to explain why. Informal Critique of the Multiverse Objection According to the multiverse objection (M), a ‘fine-tuned’ universe is just as probable on naturalism as on theism since, for all we know, there could be multiple (or even infinite) universes. Since the physical laws in each of these universes are random, there is bound to be at least one, if not many, life-permitting universes; we just happen to live in a life-permitting universe. The problem with M is that, in the absence of any independent evidence for a multiverse, the multiverse hypothesis is ad hoc. On the assumption that naturalism is true, we have little or no antecedent reason to expect a multiverse to exist. Therefore, unless or until physicists or cosmologists discover evidence that a multiverse actually exists, the multiverse is a weak objection to probabilistic versions of the FTA. Formal Critique of the Multiverse Objection Consider the following formulation of FTA. >!: much greater than F: the universe is fine-tuned for life T: theism N: naturalism Argument Formulated (1) F is known to be true. (2) Pr(F | T) >! Pr(F | N). (3) N is not intrinsically much more probable than T. (4) Other evidence held equal, Pr(T) > Pr(N). The Multiverse Objection According to the multiverse objection (M), (2) is false because, for all we know, there could be multiple (or even infinite) universes. Since the physical laws in each of these universes are random, there is bound to be at least one, if not many, life-permitting universe. We just happen to live in a life-permitting universe. At first glance, M seems irrelevant to the above formulation of FTA, since (2) compares the antecedent probability of F on theism to the antecedent probability of F on naturalism, not naturalism conjoined with an auxiliary hypothesis about the multiverse. So how could M be relevant to (2)? Those of you who have read my other recent postings can probably predict what I’m going to write next. Using the probability calculus, we can measure the effect that an auxiliary hypothesis like M has on Pr(F/N). In order to assess the evidential significance of an auxiliary hypothesis like M, we would simply need to consider a weighted average, as follows: Pr(F/N) = Pr(M/N) x Pr(F/M&N;) + Pr(~M/N) x Pr(F/~M&N;) This formula is an average because Pr(M/N) + Pr(~M/N) = 1. It is not a simple straight average, however, since those two values may not equal 1/2. The weighted average formula above gives us some insight into what would need to be the case in order for M to be a good defeater for the FTA. I assume we all agree that the second half of the right-hand side of that equation, Pr(~M/N) x Pr(F/~M&N;), is not going to be useful for deriving a high value for Pr(F/N). (Otherwise, there would be no need to introduce M in the first place!) So we’re stuck with the first half of the right-hand side: Pr(M/N) x Pr(F/M&N;). In order for M to be a good defeater of the FTA, then, Pr(M/N) x Pr(F/M&N;) needs to be high, the higher the better. The problem, however, is that we have little or no reason to believe that Pr(M/N) is high, i.e., we have little or no reason on naturalism (alone) to expect multiple universes. If Pr(M/N) is not high, then there is no reason to believe that Pr(F/N), as a weighted average of Pr(M/N) and Pr(~M/N), is high. So, unless there is independent evidence for M–i.e., evidence that is independent of the evidence for F–it appears that using M as a defeater against FTA fails and fails miserably.
{"url":"https://secularfrontier.infidels.org/2012/06/confessions-of-a-multiverse-skeptic/","timestamp":"2024-11-14T07:17:25Z","content_type":"text/html","content_length":"41471","record_id":"<urn:uuid:6c1c2fff-1137-47d4-9b19-37bc0649664b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00870.warc.gz"}
Shading and coloring your graph You can shade and color your graphs using the polygon() command. To use the polygon() command, you must specify the horizontal and vertical axis limits, but you must also include the x and y variables as the middle arguments. Let's create a quadratic curve and shade under it with a light green selected from the Hexadecimal Color Chart: x <- 1:100 y <- 3*x^2 + 2*x + 7 plot(x, y) lines(x, y) polygon(cbind(c(min(x), x, max(x)), c(min(y), y, min(y))), col="#00CC66") Here is the graph: Using this approach, the polygon() command shades under the curve, between the minimum and maximum values of the x variable and below the y variable. The syntax involving cbind() is an elegant way of including the relevant limits. The following example is more complex. It uses the rnorm() command to simulate values from a normal distribution, with a given mean and standard deviation. By default, random values with a mean of 0 and a standard deviation of 1 are produced. For...
{"url":"https://subscription.packtpub.com/book/data/9781783554553/2/ch02lvl1sec28/summary","timestamp":"2024-11-02T14:54:39Z","content_type":"text/html","content_length":"174894","record_id":"<urn:uuid:d07ca98a-23bf-402b-bb54-8aecc0f30b6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00518.warc.gz"}
a. A toy car is accelerated from rest to 0.620 m / s in 20.0 ms.... a. A toy car is accelerated from rest to 0.620 m / s in 20.0 ms. The mass of the cart is 560 g. Determine the average power applied to the cart. b. An elevator with 8.00 x 102 kg of mass hangs from a cable. The elevator accelerates downward at a rate of 1.80 m / s2. Determine the cable tension.
{"url":"https://justaaa.com/physics/213399-a-a-toy-car-is-accelerated-from-rest-to-0620-m-s","timestamp":"2024-11-03T04:26:50Z","content_type":"text/html","content_length":"38574","record_id":"<urn:uuid:c48b961a-099f-4687-9bc7-e8f1084cba14>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00696.warc.gz"}
Finding the Value of Tangent Given a Point on a Circle - P.PDFKUL.COM Finding the Value of Tangent Given a Point on a Circle Find the Value of Tangent for Each Circle. 1. Finding the Value of Tangent Given a Point on a Circle Find the Value of Tangent for Each Circle. 1.
{"url":"https://p.pdfkul.com/finding-the-value-of-tangent-given-a-point-on-a-circle_5b50946b097c47e82a8b4568.html","timestamp":"2024-11-08T01:13:53Z","content_type":"text/html","content_length":"51781","record_id":"<urn:uuid:52ecba00-8bf2-4b55-bb2f-77d3c229d2c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00465.warc.gz"}
Density Plot Maker Easily create a density plot and reveal underlying patterns Density plots are easy to create in Displayr. Simply enter in your data, and Displayr will generate the visualization for you. Then you’ll be able to see the distribution of your data and identify important characteristics like the mean, median, mode and skewness. Customizable, flexible and visually appealing Displayr’s density plot maker lets you customize the appearance of your density plot including different colors, sizes, labels and font options. There are many options to choose from, so you can create a density plot that is as visually appealing as it is informative. Easily publish, embed, or export to PowerPoint With Displayr, you can quickly publish or export your graph or entire presentation to PowerPoint with just a few clicks. Additionally, duplicating your density plot and replacing the data is a simple and efficient process, allowing you to create new density plots with minimal effort. Make your density plot in 3 easy steps Step 2 Follow the prompts to connect, type, or paste your data and create your density plot. Next, add your other visualizations and text annotations. Step 3 Add some filters to make your report interactive and publish or export to PowerPoint or as a PDF with one click. Density plots are a type of data visualization, used to show the distribution of a set of continuous data points. They are best used when you are trying to understand the shape of the distribution of some data over a continuous interval or time period. Instead of plotting data directly, density plots use an algorithm to estimate the shape of the distribution first, before drawing it. The chart is a variation of the histogram, but it uses an algorithm to smooth out the noise and therefore showing a smoother shape to the distribution. The peaks of a density plot help audiences see where values are concentrated over the interval. It can therefore show you important features of the data distribution including, the mean, median, mode, skewness, and any outliers. For this reason, density plots are often used in fields such as statistics, data analysis, and research. When creating a density plot it is best to start with raw data – values that show observations of a particular quantity. This could include, number of hours spent online, the average price of tomatoes each day for a year, the number of visitors to a museum each day over a year, etc. For a density plot to be effective, you should have enough observations in your dataset for it to be a meaningful estimate of the distribution. Avoid using figures that are aggregated or only form a small set. Don't limit yourself to just density plots Ready to create more stunning visualizations? In addition to using our density plot maker, we've got a variety of other awesome ways to visualize your data. Whether it’s histograms, line graphs, or radar charts, Displayr can help you transform your data into whatever story you want to tell! And just like density plots, you can customize colors, fonts, and sizes and have a play with Displayr’s cool features. Even better, combine different graphs to create a truly impressive infographic or presentation. What are you waiting for? Instantly visualize your data Instantly visualize what you are learning. Displayr is a robust, collaborative analysis and reporting tool built for humans, not robots. SQL, R, and no-code work in harmony together so you can analyze, visualize, and build your report simultaneously in the same app. What is a density plot? A density plot shows the distribution of data for a variable over a time period or continuous interval. The density plot is a smoothed variation of a histogram and uses kernel smoothing to smooth out noise in the data. The peak of the density plot shows where values are the most concentrated over time. Density plots are also known as a Kernel density plot or a density trace graph. What is a density plot vs histogram? Density plots are smoothed variations of histograms. A histogram shows values from a selected column as binned distribution in the form of bars. A density plot takes this and uses kernel smoothing to smooth out the noise. This helps form a smooth curve across bins which helps create a more defined distribution shape. Density plots can be preferred to histograms because they are better when you want to determine the distribution shape. This is because they are not affected by the number of bins (bars on the histogram). For example, it would be difficult to form a distribution shape from a histogram with a few bins, but possible with a density plot. How do you analyse a density plot? Looking at the density curve on a density plot should give you an idea of the shape of distribution. The peak of the density curve is where the values are most concentrated. However, a density plot may also have more than one peak of frequently occurring values. If a density curve has only one peak, the distribution is described as unimodal. If it has two peaks, it is called a bimodal Looking at whether the curve skews to the left or right also indicates the location of the mean and median. If a density curve has no skew, the mean is equal to the median. If it skews left, the mean is less than the median. If it skews right, the mean is greater than the median. How do you create a density plot in Displayr? To use Displayr’s density plot creator, you need to sign up first, confirm your email by clicking on the confirmation link that you’ll receive, and then follow the prompts to create your first density plot. Learn more about density plots
{"url":"https://www.displayr.com/graph-makers/density-plot/","timestamp":"2024-11-06T14:50:11Z","content_type":"text/html","content_length":"312540","record_id":"<urn:uuid:942a9365-9830-4204-8523-9ed3271fceb4>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00197.warc.gz"}
Miata Gearing Calculator Online Home » Simplify your calculations with ease. » Mechanical Calculators » Miata Gearing Calculator Online Immersing oneself in the world of vehicles and mechanics often requires a deep understanding of complex concepts. One such tool vital for automotive enthusiasts, particularly those fond of the Mazda Miata, is the Miata Gearing Calculator. This intuitive tool offers indispensable help in calculating and optimizing the gear ratio. The Calculator, as the name suggests, is a specialized computational tool that helps evaluate the gear ratio of a Miata, which is a popular model of Mazda. The gear ratio is the relationship between the rotational speeds of various mechanical components within the vehicle’s drive system, influencing overall performance and efficiency. Detailed Explanation of the Calculator’s Working The Miata Gearing Calculator operates using four crucial input parameters: ring gear teeth, pinion gear teeth, output shaft RPM, and input shaft RPM. It uses these variables to compute the gear ratio, offering invaluable insights into the vehicle’s operational characteristics. The calculated gear ratio allows for optimized vehicle performance and enhanced driving experiences. The Mathematical Formula and Its Components The formula used by the Calculator is: Gear Ratio = (Ring Gear Teeth / Pinion Gear Teeth) * (Output Shaft RPM / Input Shaft RPM). Here, ‘Ring Gear Teeth’ represents the number of teeth on the ring gear of the differential, while ‘Pinion Gear Teeth’ corresponds to the number of teeth on the pinion gear. ‘Output Shaft RPM’ is the rotational speed of the output shaft, typically the driveshaft or axle shafts, and ‘Input Shaft RPM’ is the rotational speed of the input shaft, which is usually the engine crankshaft. Real-world Example Suppose your Miata has a ring gear with 41 teeth and a pinion gear with 10 teeth. If the input shaft RPM is 3000, and the output shaft RPM is 900, using the Miata Gearing Calculator, the gear ratio comes out to be 1.37. Vehicle Customization For automotive enthusiasts who enjoy modifying or customizing their vehicles, the Calculator can provide insights into optimizing gear ratios for desired performance outcomes. Professional Racing In professional car racing, understanding gear ratios can be the difference between winning and losing. The Calculator enables teams to make data-driven decisions for their vehicle’s setup. Education and Training Educational institutions and training centers use the Miata Gearing Calculator to provide hands-on learning experiences for students studying automotive engineering or related fields. Frequently Asked Questions (FAQs) Can the Miata Gearing Calculator be used for other car models? While the Miata Gearing Calculator is specifically designed for the Mazda Miata, it can provide approximate results for other vehicles as well, given the same input parameters are available. How accurate is the Miata Gearing Calculator? The accuracy of the Miata Gearing Calculator relies heavily on the precision of the input values. Provided accurate input data, the calculator can provide highly accurate and useful output for the gear ratio. The Calculator proves to be a highly efficient and user-friendly tool for automotive enthusiasts, racing professionals, and engineering students. Its utility in optimizing vehicle performance and facilitating learning makes it an indispensable resource in the realm of automotive mechanics. Leave a Comment
{"url":"https://calculatorshub.net/mechanical-calculators/miata-gearing-calculator/","timestamp":"2024-11-06T22:19:47Z","content_type":"text/html","content_length":"113659","record_id":"<urn:uuid:cd785fc3-fa8a-4bcb-9df2-caaf9239feef>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00328.warc.gz"}
The light deflection by a Kerr’s black hole, that is in an axially symmetric gravitational field generated by a non-electrically charged spinning black hole, is a phenomenon predicted by general relativity (Kerr’s metric). This metric generalizes the spherical symmetry of a stationary black hole (Schwarzschild’s metric). Refer to appendix for detailed calculations and additional information. Note: to avoid any confusion, the writing simplification \(c=G=1\) is not used in the present document and all equations are written explicitly. The elementary displacement of the photon is a like-light vector and its scalar product is zero,^1 (hence the name « null geodesics » for the photon trajectories). Assuming that the gravitational field is axially symmetrical and applying the Kerr’s metric (see its limits in the conclusion), the scalar product of the elementary displacement \(\overrightarrow{ds} \) \((cdt, dr, d\theta, d\varphi)\) in Boyer-Lindquist’s coordinates can be written: \(ds^2=-\left(1-\frac{2GM}{c^2\Sigma}\right)c^2dt^2-\frac{4GMar\sin^2{\theta}}{c^2\Sigma}cdtd\varphi+\frac{\Sigma}{\Delta}dr^2+\Sigma d\theta^2\) with \(G\) gravitational constant, \(c\) speed of light in a vacuum, \(M\) mass of the black hole, \(a=\frac{J}{cM}\) with \(J\) spin angular momentum of the black hole, \(\Delta=r^2-\frac{2GM}{c^2} r+a^2\) and \(Σ=r^2+a^2\cos^2{\theta}\). The coefficients of the metric are independent of \(t\) and \(\varphi\): the geometry of Kerr spacetime is therefore stationary and axially symmetrical. Note: in the asymptotic region \(r \gg \frac{2GM}{c^2}\), the coordinate \(r\) is interpreted as the physical distance between the photon and the center of the black hole. Parametric equations of motion The invariance of the energy \(\varepsilon\), the angular momentum component \(l_z\) on the spinning axis of the black hole and the Carter’s constant \(Q\) enables to get the four parametric equations of motion of the photon and to calculate the light deflection by Kerr’s black holes: varepsilon^2}{ c^2\Sigma^2}\) \(\left(\frac{d\theta}{d\lambda}\right)^2=\left(c^2\frac{Q}{\varepsilon^2}+\cos^2\theta\left(a^2-c^2\frac{l_z^2}{\varepsilon^2\sin^2\theta}\right)\right)\frac{\varepsilon^2}{ c^2\Sigma^2}\) \(\frac{dct}{d\lambda}=\left((r^2+a^2)^2-\Delta a^2\sin^2\theta-2mar\ c\frac{l_z}{\varepsilon}\right)\frac{\varepsilon}{c\Delta\Sigma}\) with \(r\) radial coordinate, \(\theta\) colatitude, \(\varphi\) longitude, \(t\) time measured by a static observer, \(\lambda\) an affine parameter and \(m=\frac{GM}{c^2}\) homogeneous to the Note that the coordinate system is undefined at the poles \(\theta=0\) and \(\theta=\pi\). In the following, the value \(R_s=2m\) and the dimensionless values \(\bar{r}=\frac{r}{m}\) and Kerr’s parameter \(\bar{a}=\frac{a}{m}\) will be used. Convention used for \(\bar{a}\ne 0\): the \(z\)-axis is the spinning axis of the black hole. When \(\bar{a}>0\), the spin of the black hole is trigonometric and when \(\bar{a}<0\), its spin is It is assumed that \(|\bar{a}|\) lies between \(0\) and \(1\), limits included, except for the over extreme Kerr’s spacetime described briefly before the conclusion. Photon trajectories In the general case, photon trajectories near a spinning black hole can be found by integration of each of the 4 parametric equations, according to the affine parameter \(\lambda\). The initial values to be taken into account are \(r_0\), \(\theta_0\), \(\varphi_0\), \(t_0\), and the signs of \(\frac{dr}{d\lambda}_0\) and\(\frac{d\theta}{d\lambda}_0\). The trajectory of the photon is fully determined by the constants \(M\), \(a\), \(\frac{l_z}{\varepsilon}\) and \(\frac{Q}{\varepsilon^2}\). Fig. A – Photon trajectory coming from \(\infty\) and deflected by an extreme Kerr’s black hole \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-6\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq 9.863\)© For a given value of \(\frac{l_z}{\varepsilon}\), there is a critical value \(\frac{Q_{crit}}{\varepsilon^2}\): – if \(\frac{Q}{\varepsilon^2}>\frac{Q_{crit}}{\varepsilon^2}\) the photon coming from \(\infty\) will be deflected by the black hole and continue towards \(\infty\), – if \(\frac{Q}{\varepsilon^2}<\frac{Q_{crit}}{\varepsilon^2}\) the photon coming from \(\infty\) will be absorbed by the black hole, – if \(\frac{Q}{\varepsilon^2}=\frac{Q_{crit}}{\varepsilon^2}\) the photon coming from \(\infty\) will be captured by the black hole on an orbit. Fig. B – Photon trajectory coming from \(\infty\) and virtually captured by an extreme Kerr’s black hole \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-6\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq\frac{c^2Q_{crit}}{m^2\varepsilon^2}\simeq 9.627\)© Photon orbits A constant radial coordinate \(r\) is given by annulling the potential of the first parametric equation and its derivative with respect to \(r\), which leads after calculation to \(\bar{r}\) being a root of the polynomial \(+2\bar{a}\bar{r}\cos i\sqrt{3\bar{r}^4+(1-3\sin^2i)\bar{a}^2\bar{r}^2-\bar{a}^4\sin^2i}\) \(i\) being the constant angle of inclination of the angular momentum \(\overrightarrow{l}\) with respect to the spinning axis of the black hole. For given \(m\), \(a\) and \(i\), there are at least one root \(\bar{r}\) between 0 and 4 giving an orbit for the photon. If \(i\in[0,\pi/2[\) the orbit is prograde (same direction of spin as the black hole), and if \(i\in ]\pi/2,\pi]\) the orbit is retrograde (opposite direction of spin to the black hole). The roots of the polynomial \(q(\bar{r})\) are difficult to calculate analytically except in the following cases: – equatorial prograde orbit (\(\cos i=1\)) \(\Rightarrow\bar{r}_{prograde}=2\left (1+\cos\left (\frac{2}{3}\arccos\left(-\bar{a}\right)\right)\right)\), Fig. C – Polar orbit around an extreme Kerr’s black hole \(\bar{a}=1\) \(\frac{c^2Q}{m^2\varepsilon^2}=11+8\sqrt{2}\)© More details – equatorial retrograde orbit (\(\cos i=-1\)) \(\Rightarrow\bar{r}_{retrograde}=2\left (1+\cos\left (\frac{2}{3}\arccos\left(\bar{a}\right)\right)\right)\), – polar orbit (\(\sin^2i=1\)) \(\Rightarrow\bar{r}_{polar}=1+2\sqrt{1-\frac{\bar{a}^2}{3}}\cos\left(\frac{1}{3}\arccos\left(\frac{1-\bar{a}^2}{\left(1-\frac{\bar{a}^2}{3}\right)^\frac{3}{2}}\right)\ The event horizon of a spinning black hole has the dimensionless radial coordinate \(\bar{r}_h=1+\sqrt{1-\bar{a}^2}\). Note: when \(\bar{a}=0\), the above formulas lead to the special case of the Schwarzschild’s metric \(\Rightarrow\bar{r}_{prograde}=\bar{r}_{retrograde}=\bar{r}_{polar}=3\) that is \(r=3m=\frac{3}{2} R_s\) and \(r_h=2m=R_s\) with \(R_s=\frac{2GM}{c^2}\). Fig. D – Polar orbit around an extreme Kerr’s black hole \(\bar{a}=1\) \(\frac{c^2Q}{m^2\varepsilon^2}=11+8\sqrt{2}\) (top view)© Each orbit is defined by its constant dimensionless radial coordinate value \(\bar{r}_c\) and by the constant inclination \(i\) of the angular momentum of the photon \(\overrightarrow{l}\), associated with this value. There are therefore an infinite number of photon « spheres » with constant dimensionnless radial coordinates \(\bar{r}\in[0,4]\), the bound 4 being reached for \(|\bar{a}|=1\). Fig. E – Orbit example around an extreme Kerr’s black hole \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-1\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq 25.856\)© Furthermore, the geometric shape of each orbit is not really a sphere, but an ellipsoid of radius \(\sqrt{r^2+a^2}\sin\theta\) (in Cartesian Boyer-Lindquist’s coordinates) and colatitude \(\theta\) between a value \(\theta_{lim}\) and a value \(\pi-\theta_{lim}\), a function of \(\bar{a},\bar{r}_c\) and \(\sin^2i\). Parametric equations With \(l_z=l\cos i\), \(Q=l^2\sin^2i\) and defining the impact parameter \(b_{crit}=c\frac{l}{\varepsilon}\), the equations of the trajectory are: with the 3 parametric equations which become: \(\left(\frac{d\theta}{d\lambda}\right)^2=\left(b_{crit}^2\left(1-\frac{\cos^2i}{\sin^2\theta}\right)+a^2\cos^2\theta\right)\frac{\varepsilon^2}{\Sigma^2 c^2}\) \(\frac{d\varphi}{d\lambda}=\left(2mar_c+\left(\Sigma-2mr_c\right)b_{crit}\frac{\cos ⁡i}{\sin^2⁡\theta}\right)\frac{\varepsilon}{\Delta\Sigma c}\) \(\frac{cdt}{d\lambda}=\left(\left(r_c^2+a^2\right)^2-\Delta a^2 \sin^2\theta-2mar_cb_{crit}\cos ⁡i\right)\frac{\varepsilon}{\Delta\Sigma c}\) The value of the critical impact parameter can be calculated using the formula: Animated trajectories Examples of photon trajectories with near-capture by an extreme Kerr’s black hole animation \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-6\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq 9.863\)© animation \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-6\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq 9.634\)© animation \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-6\) \(\frac{c^2Q}{m^2\varepsilon^2}\simeq 9.627\)© Examples of photon orbits with different \(\bar{a}\) and \(\frac{cl_z}{m\varepsilon}\) polar orbit animation \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=0\) \(b_{crit}\simeq 4.724\)© \(\bar{a}=1\) \(\frac{cl_z}{m\varepsilon}=-1\) \(b_{crit}\simeq 5.182\)© \(\bar{a}=0.5\) \(\frac{cl_z}{m\varepsilon}=2\) \(b_{crit}\simeq 4.658\)© A Kerr’s black hole mathematically has four centered regions, each included in the other and defined by mathematical hypersurfaces. From largest to smallest: – outer ergosphere \(r_{ergoext}=m (1+\sqrt{1-\bar{a}^2\cos^2⁡\theta})\) – event horizon \(r_h=m (1+\sqrt{1-\bar{a}^2})\) – Cauchy’s horizon \(r_{Cauchy}=m (1-\sqrt{1-\bar{a}^2})\) – inner ergosphere \(r_{ergoint}=m (1-\sqrt{1-\bar{a}^2\cos^2⁡\theta})\). \(r_{ergoint}\) and \(r_{ergoext}\) are the roots of the equation \(\Sigma-2mr=0\) and \(r_{Cauchy}\) and \(r_h\) are the roots of the equation \(\Delta=0\). For \(|\bar{a}|\in ]0,1[\), the four regions are distinct, and for \(|\bar{a}|=1\), the event horizon and Cauchy’s horizon are merged. \(\bar{a}=0\) corresponds to the Schwarzschild’s black hole, where the event horizon and outer ergosphere are merged, and there is no Cauchy’s horizon or inner ergosphere. The hypersurface that delimits the outer ergosphere is a stationarity limit, which means that any particle – material or photon – that crosses it cannot be motionless. Note: once it has crossed the event horizon, a particle can return to it, but can never cross it in the other direction. Presence of regions The central body is by definition a Kerr’s black hole, so the two regions defined by the outer ergosphere and the event horizon (merged with the Cauchy’s horizon for an extreme Kerr’s black hole) physically exist. The other regions (defined by the Cauchy’s horizon for a non-extreme Kerr’s black hole and by the inner ergosphere) can only exist if the physical body of the black hole is « inside » them. Fig. F – Kerr’s black hole \(\bar{a}=0.95\) (side view) Outer ergosphere, event horizon, Cauchy horizon and inner ergosphere© Fig. G – Kerr’s black hole \(\bar{a}=0.95\) (top view)© Fig. H – Kerr’s black hole \(\bar{a}=0.95\) (exploded view) with the singularity circle bordering the inner ergosphere© The parametric equations seen above show that a zero value of either \(\Delta\) or \(\Sigma\) does not give a definition of the motion of the photon. \(\Delta=r^2-\frac{2GM}{c^2}r+a^2= 0\) occurs when the photon crosses the event horizon or the Cauchy’s horizon: it is a simple singularity of the Boyer-Lindquist’s coordinates, which generalizes the singularity of the Schwarzschild’s coordinates in \(r=R_s\)^3. The singularity in \(r\) such as \(Σ=r^2+a^2\cos^2{\theta}=0\) is a true singularity, just as the singularity in \(r=0\) of the Schwarzschild’s metric^4. This is the circle of Cartesian radius \(|a|\) whose center is that of the black hole, located in its equatorial plane. This circle borders the inner ergosphere. Apparent image or shadow No photon with an impact parameter below \(b_{crit}\) can reach an outside observer, which results in a « shadow » without any star image. If this observer is located at a great distance from the black hole and in its equatorial plane, the apparent outline of a Kerr’s black hole can be determined by the 2 coordinates^5: \(\alpha=-c\frac{l_z}{\varepsilon}\) and \(\beta=\pm c\frac{\sqrt{Q}}{\varepsilon}\), that is: \(\frac{\alpha}{m}=\frac{\bar{r}_c^3-3\bar{r}_c^2+\bar{a}^2\bar{r}_c+\bar{a}^2}{\bar{a}(\bar{r}_c-1)}\) and \(\frac{\beta}{m}=\pm\sqrt{\frac{-\bar{r}_c^3(\bar{r}_c^3-6\bar{r}_c^2+9\bar{r}_c-4\bar{a}^ with \(\bar{r}_c\) dimensionless radial coordinates of photon orbits varying between a value \(\bar{r}_{c_{min}}\) and a value \(\bar{r}_{c_{max}}\). Fig. I – Kerr’s black hole \(\bar{a}=0.95\) (side view) with its regions and its shadow© Over extreme Kerr’s spacetime When \(|\bar{a}|>1\), the Kerr’s spacetime is said to be over extreme and \(\Delta\) has no root, so there is no event horizon or Cauchy’s horizon, implying that the massive object is not a black hole. It has a naked singularity (circle of Cartesian radius \(|a|\)) with adjacent outer and inner ergospheres with colatitude \(\in [\arccos{\frac{1}{|\bar{a}|}},\pi-\arccos{\frac{1}{|\bar{a}|}}]\) that form a kind of open torus. It is a mathematical object whose physical existence is currently unlikely. Fig. J – Over extreme Kerr object \(\bar{a}=1.5\) (exploded view) with its outer (medium grey) and inner (dark grey) ergospheres© As most celestial objects rotate on themselves, the axially symmetric Kerr’s metric provides an absolutely accurate representation of the countless black holes that populate the universe, the Schwarzschild’s metric being a special case, obtained with a zero Kerr’s parameter. The structure of a spinning black hole is extremely simple: just two real numbers, m and a, are needed to describe it fully. The light deflection by Kerr’s black holes and the trajectories or orbits of photons can be precisely calculated using the Kerr’s metric. Note that this metric does not apply to a spinning star: its metric cannot be described by just a few scalar parameters, even outside the star. It depends on the distribution of mass and momentum inside the star.
{"url":"https://site.nicolasfleury.ovh/light-deflection-by-kerr-black-holes/","timestamp":"2024-11-13T01:51:44Z","content_type":"text/html","content_length":"104634","record_id":"<urn:uuid:ad737d27-7817-4fa2-8e64-d74ecdcf90e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00281.warc.gz"}
Multiplication Table Worksheet 12 Mathematics, especially multiplication, creates the foundation of various scholastic self-controls and real-world applications. Yet, for several learners, understanding multiplication can position a challenge. To address this difficulty, teachers and moms and dads have actually welcomed a powerful device: Multiplication Table Worksheet 12. Intro to Multiplication Table Worksheet 12 Multiplication Table Worksheet 12 Multiplication Table Worksheet 12 - This page has printable multiplication tables Includes tables that are completely filled in partly filled in and blank and games on this page to help your students master basic facts with 12 as a factor Multi Digit Multiplication Multiplication 2 Digits Times 1 Digit On this page you have a large selection of 2 digit by 1 digit It is quite likely that there are students who have mastered all of the multiplication facts up to the 12 times tables In case they want need an extra challenge this sections includes multiplication facts worksheets above 12 with the expectation that students will use mental math or recall to calculate the answers Relevance of Multiplication Technique Understanding multiplication is critical, laying a strong structure for advanced mathematical concepts. Multiplication Table Worksheet 12 offer structured and targeted technique, cultivating a deeper understanding of this fundamental math operation. Evolution of Multiplication Table Worksheet 12 12 Times Tables Worksheets Pdf 12 multiplication table 12 Times Tables Worksheets Pdf 12 multiplication table Grade 5 multiplication worksheets Multiply by 10 100 or 1 000 with missing factors Multiplying in parts distributive property Multiply 1 digit by 3 digit numbers mentally Multiply in columns up to 2x4 digits and 3x3 digits Mixed 4 operations word problems For the 12 times table worksheet you can choose between three different sorts of exercise In the first exercise you have to draw a line from the sum to the correct answer In the second exercise you have to enter the missing number to complete the sum correctly In the third exercise you have to answer the sums which have been shuffled From typical pen-and-paper exercises to digitized interactive styles, Multiplication Table Worksheet 12 have actually evolved, dealing with varied understanding styles and preferences. Kinds Of Multiplication Table Worksheet 12 Basic Multiplication Sheets Basic workouts focusing on multiplication tables, helping students develop a strong arithmetic base. Word Trouble Worksheets Real-life circumstances incorporated into problems, boosting important thinking and application skills. Timed Multiplication Drills Tests developed to enhance rate and precision, aiding in fast psychological math. Benefits of Using Multiplication Table Worksheet 12 MULTIPLICATION TABLE WORKSHEET Learningenglish esl MULTIPLICATION TABLE WORKSHEET Learningenglish esl Here you will find all the times tables exercises on worksheets For instance there are tables worksheets for 3th grade that you can print here Multiplication table worksheets 1 times table worksheets 2 times table worksheets 11 and 12 times tables You can also use the worksheet generator to create your own multiplication facts Download Free 12 times table worksheets 12 times table worksheet PDF is an awesome tool that encourages kids to efficiently develop a perfect learning skill An excellent knowledge in multiplying by 12 activities enables kids to get along with any tricky multiplication task other than x12 As a result this worksheet offers well designed Enhanced Mathematical Skills Regular technique hones multiplication proficiency, improving total mathematics capacities. Enhanced Problem-Solving Talents Word troubles in worksheets create analytical thinking and strategy application. Self-Paced Knowing Advantages Worksheets fit specific learning speeds, promoting a comfortable and adaptable knowing atmosphere. Exactly How to Develop Engaging Multiplication Table Worksheet 12 Including Visuals and Colors Lively visuals and colors capture interest, making worksheets visually appealing and engaging. Consisting Of Real-Life Scenarios Associating multiplication to daily scenarios includes importance and practicality to workouts. Tailoring Worksheets to Various Ability Levels Personalizing worksheets based upon varying proficiency levels ensures comprehensive understanding. Interactive and Online Multiplication Resources Digital Multiplication Tools and Gamings Technology-based sources offer interactive learning experiences, making multiplication appealing and pleasurable. Interactive Internet Sites and Apps On-line systems provide varied and accessible multiplication practice, supplementing traditional worksheets. Customizing Worksheets for Numerous Understanding Styles Aesthetic Learners Visual help and representations aid comprehension for students inclined toward aesthetic learning. Auditory Learners Spoken multiplication troubles or mnemonics accommodate students who comprehend principles through auditory ways. Kinesthetic Students Hands-on activities and manipulatives sustain kinesthetic learners in understanding multiplication. Tips for Effective Application in Discovering Uniformity in Practice Regular method strengthens multiplication skills, advertising retention and fluency. Balancing Repetition and Selection A mix of repetitive workouts and varied trouble layouts preserves passion and comprehension. Giving Useful Responses Comments aids in recognizing locations of renovation, urging continued progress. Challenges in Multiplication Practice and Solutions Motivation and Interaction Hurdles Monotonous drills can bring about disinterest; innovative methods can reignite inspiration. Conquering Worry of Math Adverse understandings around mathematics can impede progression; producing a positive discovering setting is important. Effect of Multiplication Table Worksheet 12 on Academic Performance Studies and Study Findings Research study shows a positive correlation in between consistent worksheet use and boosted mathematics performance. Multiplication Table Worksheet 12 emerge as functional devices, cultivating mathematical efficiency in learners while fitting varied understanding designs. From basic drills to interactive online resources, these worksheets not only improve multiplication abilities yet also advertise important reasoning and analytic capacities. 1 12 Multiplication Worksheet Learning Printable 12 Multiplication Table Worksheet 12 Times Table Worksheets Check more of Multiplication Table Worksheet 12 below Multiplication Table Without Answers Free Printable Worksheet On Multiplication Table Of 12 Word Problems On 12 Times Table Free Printable Times Table Worksheets Multiplication Table Multiplication Chart 0 12 Pdf PrintableMultiplication We Can Multiplication Printable 12 PrintableMultiplication Printable Multiplication Tables No Answers Printable Multiplication Flash Cards Multiplication Facts Worksheets Math Drills It is quite likely that there are students who have mastered all of the multiplication facts up to the 12 times tables In case they want need an extra challenge this sections includes multiplication facts worksheets above 12 with the expectation that students will use mental math or recall to calculate the answers Free 12 times table worksheets at Timestables Multiplication Tables These free 12 times table worksheets provide you with an excellent tool to practice and memorise the tables The 12 times table is probably the hardest multiplication table to memorise However there are several tips to help you learn this table quicker Let s take a look at some of the sums 1 x 12 12 alternatively this is 1 x 10 1 x 2 It is quite likely that there are students who have mastered all of the multiplication facts up to the 12 times tables In case they want need an extra challenge this sections includes multiplication facts worksheets above 12 with the expectation that students will use mental math or recall to calculate the answers These free 12 times table worksheets provide you with an excellent tool to practice and memorise the tables The 12 times table is probably the hardest multiplication table to memorise However there are several tips to help you learn this table quicker Let s take a look at some of the sums 1 x 12 12 alternatively this is 1 x 10 1 x 2 Multiplication Table Multiplication Chart 0 12 Pdf PrintableMultiplication We Can Worksheet On Multiplication Table Of 12 Word Problems On 12 Times Table Multiplication Printable 12 PrintableMultiplication Printable Multiplication Tables No Answers Printable Multiplication Flash Cards Printable 12X12 Multiplication Table PrintableMultiplication Multiplication By 12 Worksheets Multiplication By 12 Worksheets The 12 Tables Reading 2019 01 22 Frequently Asked Questions (Frequently Asked Questions). Are Multiplication Table Worksheet 12 suitable for every age teams? Yes, worksheets can be tailored to different age and ability levels, making them versatile for various learners. Exactly how commonly should trainees exercise using Multiplication Table Worksheet 12? Constant method is essential. Regular sessions, ideally a few times a week, can produce substantial improvement. Can worksheets alone improve mathematics abilities? Worksheets are an important device however should be supplemented with different knowing methods for detailed skill growth. Are there on-line systems supplying totally free Multiplication Table Worksheet 12? Yes, lots of instructional internet sites use free access to a variety of Multiplication Table Worksheet 12. Exactly how can parents support their children's multiplication technique in the house? Urging consistent method, offering aid, and developing a positive knowing atmosphere are advantageous actions.
{"url":"https://crown-darts.com/en/multiplication-table-worksheet-12.html","timestamp":"2024-11-04T07:49:17Z","content_type":"text/html","content_length":"29113","record_id":"<urn:uuid:e56d1ac1-eb2b-468b-968c-7cc690d34904>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00863.warc.gz"}
Wave dynamics in locally periodic structures by multiscale analysis 2017 Theses Doctoral Wave dynamics in locally periodic structures by multiscale analysis We study the propagation of waves in spatially non-homogeneous media focusing on Schrodinger’s equation of quantum mechanics and Maxwell’s equations of electromagnetism. We assume that medium variation occurs over two distinct length scales: a short ‘fast’ scale with respect to which the variation is periodic, and a long ‘slow’ scale over which the variation is smooth. Let epsilon denote the ratio of these scales. We focus primarily on the time evolution of asymptotic solutions (as epsilon tends to zero) known as semiclassical wavepackets. Such solutions generalize exact time-dependent Gaussian solutions and ideas of Heller and Hagedorn to periodic media. Our results are as follows: 1) To leading order in epsilon and up to the ‘Ehrenfest’ time-scale t ~ log 1/epsilon, the center of mass and average (quasi-)momentum of the semiclassical wavepacket satisfy the equations of motion of the classical Hamiltonian given by the wavepacket’s Bloch band energy. Our first result is to derive all corrections to these dynamics proportional to epsilon. These corrections consist of terms proportional to the Bloch band’s Berry curvature and terms which describe coupling to the evolution of the wavepacket envelope. These results rely on the assumption that the wavepacket’s Bloch band energy is non-degenerate. 2) We then consider the case where, in one spatial dimension, a semiclassical wavepacket is incident on a Bloch band crossing, a point in phase space where the wavepacket’s Bloch band energy is degenerate. By a rigorous matched asymptotic analysis, we show that at the time the wavepacket meets the crossing point a second wavepacket, associated with the other Bloch band involved in the crossing, is excited. Our result can be seen as a rigorous justification of the Landau-Zener formula in this setting. 3) Our final result generalizes the recent work of Fefferman, Lee-Thorp, and Weinstein on one-dimensional ‘edge’ states. We characterize the bound states of a Schrodinger operator with a periodic potential perturbed by multiple well-separated domain wall ‘edge’ modulations, by proving a theorem on the near zero eigenstates of an emergent Dirac operator. • Watson_columbia_0054D_14094.pdf application/pdf 2.61 MB Download File More About This Work Academic Units Thesis Advisors Weinstein, Michael I. Ph.D., Columbia University Published Here July 29, 2017
{"url":"https://academiccommons.columbia.edu/doi/10.7916/D89W0SSM","timestamp":"2024-11-03T06:56:35Z","content_type":"text/html","content_length":"21545","record_id":"<urn:uuid:bde9b82a-056b-4d67-805f-bbf4c0c8e0d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00741.warc.gz"}
3.7: Practice SD Formula and Interpretation Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) You may or may not understand the importance of calculating and understanding the variation of your data. In some data sets, the data values are concentrated closely near the mean; in other data sets, the data values are more widely spread out from the mean. The most common measure of variation, or spread, is the standard deviation. The standard deviation is a number that measures how far data values are from their mean. The Standard Deviation • provides a numerical measure of the overall amount of variation in a data set, and • can be used to determine whether a particular data value is close to or far from the mean. Answering Questions There are a couple common kinds of questions that standard deviations can answer, in addition being foundational for later statistical analyses. First, a standard deviation helps understand the shape of a distribution. Second, a standard deviation can show if a score is extreme. Describing the Shape of a Distribution The standard deviation provides a measure of the overall variation in a data set. The standard deviation is always positive or zero. The standard deviation is small when the data are all concentrated close to the mean, exhibiting little variation or spread. Distributions with small standard deviations have a tall and narrow line graph. The standard deviation is larger when the data values are more spread out from the mean, exhibiting more variation. Distributions with large standard deviations may have a wide and flat lin graph, or they may be skewed (with the outlier(s) making the standard deviation bigger). Suppose that we are studying the amount of time customers wait in line at the checkout at supermarket A and supermarket B. the average wait time at both supermarkets is five minutes. At supermarket A , the standard deviation for the wait time is two minutes; at supermarket B the standard deviation for the wait time is four minutes. Because supermarket B has a higher standard deviation, we know that there is more variation in the wait times at supermarket B. Overall, wait times at supermarket B are more spread out from the average; wait times at supermarket A are more concentrated near the average. Identifying Extreme Scores The standard deviation can be used to determine whether a data value is close to or far from the mean. Suppose that Rosa and Binh both shop at supermarket A. Rosa waits at the checkout counter for seven minutes and Binh waits for one minute. At supermarket A, the mean waiting time is five minutes and the standard deviation is two minutes. The standard deviation can be used to determine whether a data value is close to or far from the mean. Rosa waits for seven minutes: • Seven is two minutes longer than the average of five; two minutes is equal to one standard deviation. • Rosa's wait time of seven minutes is two minutes longer than the average of five minutes. • Rosa's wait time of seven minutes is one standard deviation above the average of five minutes. Binh waits for one minute. • One is four minutes less than the average of five; four minutes is equal to two standard deviations. • Binh's wait time of one minute is four minutes less than the average of five minutes. • Binh's wait time of one minute is two standard deviations below the average of five minutes. • A data value that is two standard deviations from the average is just on the borderline for what many statisticians would consider to be far from the average. Considering data to be far from the mean if it is more than two standard deviations away is more of an approximate "rule of thumb" than a rigid rule. In general, the shape of the distribution of the data affects how much of the data is further away than two standard deviations. (You will learn more about this in later chapters.) The number line may help you understand standard deviation. If we were to put five and seven on a number line, seven is to the right of five. We say, then, that seven is one standard deviation to the right of five because \(5 + (1)(2) = 7\). If one were also part of the data set, then one is two standard deviations to the left of five because \(5 + (-2)(2) = 1\). Figure \(\PageIndex{1}\)- Scale from 0 to 7 (CC-BY by Barbara Illowsky & Susan Dean (De Anza College) from OpenStax) • In general, a value = mean + (#ofSTDEV)*(standard deviation) • where #ofSTDEVs = the number of standard deviations • #ofSTDEV does not need to be an integer • One is two standard deviations less than the mean of five because: \(1 = 5 + (-2)(2)\). (The numbers in parentheses that touch should be multiplied) The equation value = mean + (#ofSTDEVs)*(standard deviation) can be expressed for a sample and for a population. • sample: \(x = \bar{x} + \text{(#ofSTDEV) \times (s)} \) • Population: \(x = \mu + \text{(#ofSTDEV) \times (s)} \) The lower case letter s represents the sample standard deviation and the Greek letter \(\sigma\) (sigma, lower case) represents the population standard deviation. The symbol \(\bar{x}\) is the sample mean and the Greek symbol \(\mu\) is the population mean. Calculating the Standard Deviation If \(x\) is a number, then the difference "\(x\) – mean" is called its deviation. In a data set, there are as many deviations as there are items in the data set. The deviations are used to calculate the standard deviation. If the numbers belong to a population, in symbols a deviation is \(x - \mu\). For sample data, in symbols a deviation is \(x - \bar{x}\). The procedure to calculate the standard deviation depends on whether the numbers are the entire population or are data from a sample. The calculations are similar, but not identical. Therefore the symbol used to represent the standard deviation depends on whether it is calculated from a population or a sample. The lower case letter s represents the sample standard deviation and the Greek letter \(\sigma\) (sigma, lower case) represents the population standard deviation. If the sample has the same characteristics as the population, then s should be a good estimate of \(\sigma\). To calculate the standard deviation, we need to calculate the variance first. The variance is the average of the squares of the deviations (the \(x - \bar{x}\) values for a sample, or the \(x - \mu\) values for a population). The symbol \(\sigma^{2}\) represents the population variance; the population standard deviation \(\sigma\) is the square root of the population variance. The symbol \(s^{2} \) represents the sample variance; the sample standard deviation s is the square root of the sample variance. You can think of the standard deviation as a special average of the deviations. If the numbers come from a census of the entire population and not a sample, when we calculate the average of the squared deviations to find the variance, we divide by \(N\), the number of items in the population. If the data are from a sample rather than a population, when we calculate the average of the squared deviations, we divide by n – 1, one less than the number of items in the sample. Formulas for the Sample Standard Deviation \[s = \sqrt{\dfrac{\sum(x-\bar{x})^{2}}{n-1}} \nonumber \] For the sample standard deviation, the denominator is \(n - 1\), that is the sample size MINUS 1. Example \(\PageIndex{1}\) In a fifth grade class at a private school, the teacher was interested in the average age and the sample standard deviation of the ages of her students. The following data are the ages for a sample of n = 20 fifth grade students. The ages are rounded to the nearest half year in Table \(\PageIndex{1}\), but first let's talk about the context. 1. Who was the sample? Who could this sample represent (population)? The sample is the 20 fifth graders from a private school. The population could be all fifth graders from private schools? 2. What was measured? Age, in years, was measured. This is the DV, the outcome variable. Table \(\PageIndex{1}\)- Ages of a sample of 20 fifth Ages of Sample Fifth Graders 3. What is the mean? \[\bar{x} = \dfrac{(9+9.5+9.5+10+10+10+10+10.5+10.5+10.5+10.5+11+11+11+11+11+11+11.5+11.5+11.5)}{20} = 10.525 = 10.53 \nonumber\] The average age is 10.53 years, rounded to two places. 4. What is the standard deviation? The variance may be calculated by using a table. Then the standard deviation is calculated by taking the square root of the variance. We will explain the parts of the table after calculating s. Table \(\PageIndex{1}\)- Ages of One Fifth Grade Class Data Deviations Deviations^2 x (x – \(\bar{x}\)) (x – \(\bar{x}\))2 9 9 – 10.525 = –1.525 (–1.525)^2 = 2.325625 9.5 9 – 10.525 = –1.525 (–1.025)^2 = (–1.025 * –1.025) = 1.050625 9.5 9.5 – 10.525 = –1.025 (–1.025)^2 = 1.050625 10 9.5 – 10.525 = –1.025 (–0.525)^2 = (–0.525 * –0.525)= 0.275625 10 9.5 – 10.525 = –1.025 (–0.525)^2 = (–0.525 * –0.525)= 0.275625 10 9.5 – 10.525 = –1.025 (–0.525)^2 = (–0.525 * –0.525)= 0.275625 10 10 – 10.525 = –0.525 (–0.525)^2 = (–0.525 * –0.525)= 0.275625 10.5 10.5 – 10.525 = –0.025 (–0.025)^2 = (–0.025 * –0.025)= 0.000625 10.5 10.5 – 10.525 = –0.025 (–0.025)^2 = (–0.025 * –0.025)= 0.000625 10.5 10.5 – 10.525 = –0.025 (–0.025)^2 = (–0.025 * –0.025)= 0.000625 10.5 10.5 – 10.525 = –0.025 (–0.025)^2 = (–0.025 * –0.025)= 0.000625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11 11 – 10.525 = 0.475 (0.475)^2 = (0.475 * 0.475)= 0.225625 11.5 11.5 – 10.525 = 0.975 (0.975)^2 = (0.975 * 0.975)= 0.950625 11.5 11.5 – 10.525 = 0.975 (0.975)^2 = (0.975 * 0.975)= 0.950625 11.5 11.5 – 10.525 = 0.975 (0.975)^2 = (0.975 * 0.975)= 0.950625 \(\displaystyle\sum X\) 0 (basically) 9.7375 The first column in Table \(\PageIndex{1}\) has the data, the second column has has deviations (each score minus the mean), the third column has deviations squared. The first row is the row's title, the second row is the symbols for that column, the rest of the rows are the scores until the bottom row, which is the sum of each of the rows. Take the sum of the last column (9.7375) divided by the total number of data values minus one (20 – 1): \[\dfrac{9.7375}{20-1} = 0.5125 \nonumber\] The sample standard deviation s is the square root of \[\dfrac{SS}{df} \nonumber\]: \[s = \sqrt{0.5125} = 0.715891 \nonumber\] and this is rounded to two decimal places, \(s = 0.72\). The standard deviation of the sample fo 20 fifth graders is 0.72 years. Typically, you do the calculation for the standard deviation on your calculator or computer. When calculations are completed on devices, the intermediate results are not rounded so the results are more accurate. It's also darned easier. So why are spending time learning this outdated formula? So that you can see what's happening. We are finding the difference between each score and the mean to see how varied the distribution of data is around the center, dividing it by the sample size minus one to make it like an average, then square rooting it to get the final answer back into the units that we started with ( age in years). • For the following problems, recall that value = mean + (#ofSTDEVs)(standard deviation). Verify the mean and standard deviation or a calculator or computer. • For a sample: \(x\) = \(\bar{x}\) + (#ofSTDEVs)(s) • For a population: \(x\) = \(\mu\) + (#ofSTDEVs)\(\sigma\) • For this example, use x = \(\bar{x}\) + (#ofSTDEVs)(s) because the data is from a sample 5. Verify the mean and standard deviation on your own. 6. Find the value that is one standard deviation above the mean. Find (\(\bar{x}\) + 1s). 7. Find the value that is two standard deviations below the mean. Find (\(\bar{x}\) – 2s). 8. Find the values that are 1.5 standard deviations from (below and above) the mean. 1. You should get something close to 0.72 years, but anything from 0.70 to 0.74 shows that you have the general idea. 2. (\(\bar{x} + 1s) = 10.53 + (1)(0.72) = 11.25\) 3. \((\bar{x} - 2s) = 10.53 – (2)(0.72) = 9.09\) □ \((\bar{x} - 1.5s) = 10.53 – (1.5)(0.72) = 9.45\) □ \((\bar{x} + 1.5s) = 10.53 + (1.5)(0.72) = 11.61\) Notice that instead of dividing by \(n = 20\), the calculation divided by \(n - 1 = 20 - 1 = 19\) because the data is a sample. For the sample, we divide by the sample size minus one (\(n - 1\)). The sample variance is an estimate of the population variance. After countless replications, it turns out that when the formula division by only N (the size of the sample) is used on a sample to infer the population’s variance, it always under-estimates the variance of the population. Which one has the bigger solution, the one with the smaller denominator or the larger denominator? • \(dfrac{10}{2}= \) • \(dfrac{10}{5}= \) Smaller denominators make the resulting product larger. To solve our problem of using the population’s variance formula on a sample under-estimating the variance, we make the denominator of our equation smaller when calculating variance for a sample. In other words, based on the mathematics that lies behind these calculations, dividing by (\(n - 1\)) gives a better estimate of the What does it mean? The deviations show how spread out the data are about the mean. From Table \(\PageIndex{1}\), The data value 11.5 is farther from the mean than is the data value 11 which is indicated by the deviations 0.97 and 0.47. A positive deviation occurs when the data value (age, in this case) is greater than the mean, whereas a negative deviation occurs when the data value is less than the mean (that particular student is younger than the average age of the class) . The deviation is –1.525 for the data value nine. If you add the deviations, the sum is always zero, so you cannot simply add the deviations to get the spread of the data. By squaring the deviations, you make them positive numbers, and the sum will also be positive. The variance, then, is the average squared deviation. But the variance is a squared measure and does not have the same units as the data. No one knows what 9.7375 years squared means. Taking the square root solves the problem! The standard deviation measures the spread in the same units as the data. The standard deviation, \(s\) or \(\sigma\), is either zero or larger than zero. When the standard deviation is zero, there is no spread; that is, all the data values are equal to each other. The standard deviation is small when the data are all concentrated close to the mean, and is larger when the data values show more variation from the mean. When the standard deviation is a lot larger than zero, the data values are very spread out about the mean; outliers can make \(s\) or \(\sigma\) very large. Exercise \(\PageIndex{1}\) Scenario: Using one baseball professional team as a sample for all professional baseball teams, the ages of each of the players are as follows: Table \(\PageIndex{2}\)- One Baseball Team's Ages Data Deviations Deviations^2 x (x – \(\bar{x}\)) (x – \(\bar{x}\))2 \(\displaystyle\sum X\) = 767 \(\displaystyle\sum X\) should be 0 (basically) \(\displaystyle\sum X\) = ? If you get stuck after the table, don't forget that: \(s=\sqrt{\dfrac{\sum(X-\overline {X})^{2}}{N-1}} \) All of your answers should be complete sentences, not just one word or one number. Behavioral statistics is about research, not math. 1. Who was the sample? Who could this sample represent (population)? 2. What was measured? 3. What is the mean? (Get in the practice of including the units of measurement when answering questions; a number is usually not a complete answer). 4. What is the standard deviation? \[s=\sqrt{\dfrac{\sum(X-\overline {X})^{2}}{N-1}}=\sqrt{\dfrac{S S}{d f}} \] 5. Find the value that is two standard deviations above the mean, and determine if there are any players that are more than two standard deviations above the mean. 1. The sample is 25 players from a professional baseball team. They were chosen to represent all professional baseball players (it says so in the scenario description!). 2. Age, in years, was measured. 3. The mean of the sample (\(\bar{X}\) was 30.68 years. 4. The standard deviation was 6.09 years ( \(s = 6.09\) ), although due to rounding differences you could get something from about 6.05 to 6.12. Don't worry too much if you don't get exactly 6.09; if you are close, then you did the formula correctly! 5. The age that is two standard deviations above the mean is 42.86 years, and none of the players are older than that. [\(\bar{x} + 2s = 30.68 + (2)(6.09) = 42.86 \number \]. What standard deviation show us can seem unclear at first. Especially when you are unfamiliar (and maybe nervous) about using the formula. By graphing your data, you can get a better "feel" for what a standard deviation can show you. You will find that in symmetrical distributions, the standard deviation can be very helpful. Because numbers can be confusing, always graph your data. The standard deviation can help you calculate the spread of data. • The Standard Deviation allows us to compare individual data or classes to the data set mean numerically. • \(s = \sqrt{\dfrac{\sum(x-\bar{x})^{2}}{n-1}}\) is the formula for calculating the standard deviation of a sample. ∑f(x−μ)2N−−−−−−−−−√ Contributors and Attributions • Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
{"url":"https://stats.libretexts.org/Workbench/PSYC_2200%3A_Elementary_Statistics_for_Behavioral_and_Social_Science_(Oja)_WITHOUT_UNITS/03%3A_Descriptive_Statistics/3.07%3A_Practice_SD_Formula_and_Interpretation","timestamp":"2024-11-02T17:38:45Z","content_type":"text/html","content_length":"175081","record_id":"<urn:uuid:0679457e-1eb4-4ed1-9cd3-43e4207ba4c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00188.warc.gz"}
Percentage Calculator - cryptocrape.com Percentage Calculator How to Use the Percentage Calculator? 1. Enter Numbers: □ Enter the first number (referred to as X) in the Enter Number 1 input field. □ Enter the second number (referred to as Y) in the Enter Number 2 input field. 2. Select an Operation: From the dropdown menu, choose one of the following operations to calculate: □ What is X% of Y? This option calculates what percentage of Y is represented by X. Example: What is 20% of 100? (Result = 20) □ X is what % of Y? This option calculates the percentage X represents of Y. Example: 50 is what % of 200? (Result = 25%) □ X is Y% of what? This option calculates the value for which Y% is equal to X. Example: 20 is 10% of what? (Result = 200) □ Percentage Change (from Y to X) This option calculates the percentage change from Y to X. It will provide whether the change is an increase or decrease. Example: If Y = 100 and X = 120, the result will show a 20% increase. □ Percentage Difference between X and Y This option calculates the percentage difference between X and Y, showing how much the two values differ in percentage terms. 3. Click “Calculate”: After entering the numbers and selecting an operation, click the Calculate button to display the result. 4. Clear the Fields: □ If you wish to reset the input fields and remove the result, click the Clear button. This calculator supports basic percentage calculations, changes, and comparisons, allowing you to easily perform percentage-based operations with a clean interface.
{"url":"https://cryptocrape.com/percentage-calculator/","timestamp":"2024-11-14T02:05:55Z","content_type":"text/html","content_length":"88218","record_id":"<urn:uuid:d8f970a4-bed3-49bd-ab37-dae4aab015af>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00876.warc.gz"}
[Solved] Make sure to answer all questions. Do not | SolutionInn Answered step by step Verified Expert Solution Make sure to answer all questions. Do not copy and paste It was recently reported that in the United States, 40.3 percent of all births Make sure to answer all questions. Do not copy and paste It was recently reported that in the United States, 40.3 percent of all births are to unmarried mothers. A county health administrator is investigating whether births to unmarried women is higher in her county than in the national average. If so, she will propose additional funding to counsel unmarried mothers. A random sample of 100 births in the county will be looked at. Let p represent the proportion of all women giving birth in the county who are unmarried Consider the following hypotheses. Ho : p = 0,403, H. : p > 0.403 (a) Describe a Type II error in context and a possible consequence. (b) What values of the sample proportion p would represent sufficient evidence to reject the null hypothesis at a significance level of a = 0.057 Suppose the actual proportion of all women giving birth in the county who are unmarried is 0.045. (c) Using the actual proportion of 0.045 and the answer from (b), find the probability that the null hypothesis will be rejected. Show your work. (d) What statistical term describes the probability calculated in (c)? (e) Suppose the size of the sample was greater than 100. How would that effect the probability of rejecting the null hypothesis calculated in (c)? Explain.Aggregated logit model. Consider a binary logit model for car and bus, where the following representative utility functions have been estimated with a sample of 750 individuals belonging to a particular O-D pair of an urban area: Vc = 3.5 - 0.25tc - 0.42ec - 0.1cc vb =-0.251b - 0.42es - 0.1cb where f is in-vehicle time (minutes), e is access time (minute) and c is travel cost (cents). The subscripts b denotes bus and c denotes car. Assume the following average data is known: variable Mode C Car 25 140 Bus 40 50 If this O-D pair has 8,600 person-trips/ day (a) Calculate the probabilities that a person will take bus and car. (b) Calculate the expected number of persons taking bus and car. (c) If the cost of travel by car increases by 10% (i.e., c. is now 154) while other attributes remain unchanged, calculate the expected number of trip using each mode,. Perform your calculations by means of the conventional logit model as well as the incremental logit model There are 3 Steps involved in it Step: 1 Get Instant Access to Expert-Tailored Solutions See step-by-step solutions with expert insights and AI powered tools for academic success Ace Your Homework with AI Get the answers you need in no time with our AI-driven, step-by-step assistance Get Started
{"url":"https://www.solutioninn.com/study-help/questions/make-sure-to-answer-all-questions-do-not-copy-and-5331146","timestamp":"2024-11-10T23:59:41Z","content_type":"text/html","content_length":"101251","record_id":"<urn:uuid:30fef598-cbfa-4d12-85d1-2cafbd84be10>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00034.warc.gz"}
David Schulenberg Treatise on Intervals With a Review of Fractions and Decimals and Recipes for Some Keyboard Temperaments (c) 2004 David Schulenberg Pitches are commonly expressed in terms of either (a) the frequency or (b) the length of the sounding object. Common units for (a) frequency are Herz (vibrations per second); for (b) length are feet. An interval is a relationship between two pitches. Intervals can be expressed as ratios, and ratios can be expressed either as fractions or as decimal numbers. [See below on fractions and decimals.] Just intonation is any system in which the intervals can be expressed as simple whole-number ratios such as 1/2, 3/4, and so on. Pythagorean intonation is a system of just intonation traditionally ascribed to the mythical ancient Greek mathemetician Pythagoras in which the most common intervals are defined as follows: Unison: 1/1 Octave: 1/2 Fifth: 2/3 Fourth: 3/4 Major third: 4/5 Minor third: 5/6 Major whole step: 8/9 Minor whole step: 9/10 We can express each of these ratios in decimal form [see below on converting fractions to decimal numbers]. This produces the following results. We shall find that decimal numbers are more easily compared than fractions, and they are useful for certain calculations as well. Unison: 1.000 Octave: .500 Fifth: .667 Fourth: .750 Major third: .800 Minor third: .833 Major whole step: .888 Minor whole step: .900 These ratios correspond with those theoretically produced by notes in the harmonic series of an acoustically perfect string or wind instrument as follows: C c g c‘ e‘ g‘ bb-‘ c” d” e” f ”+ Number in series: Ratio to previous pitch – 1/2 2/3 3/4 4/5 5/6 6/7 7/8 8/9 9/10 10/11 Ratio expressed as a decimal: – .5 .667 .75 .8 .833 .857 .875 .888 .9 .909 [Note: numbers 7 and 11 in the series are not used in Western music, even though they form just intervals. The notes produced by these intervals, shown above in brackets, are considered to be out of tune; for example, the seventh note in the series is too low to form a pure minor third to g‘ or a pure major third to d”.] The ratios in the above chart can be thought of as applying to the sounding lengths of pipes or strings that produce the successive notes. For example, an organ pipe sounding e‘ will, in Pythagorean intonation, be 4/5 as long as a pipe sounding c‘, all other things being equal. If, then, the pipe sounding c‘ is two feet long, the length of the pipe for e‘ will be 4/5 of that. (“4/5 of ” means 4/5 times the given length of two feet. To calculate this, you can convert 4/5 to a decimal number and then multiply that result by 2.) The result is 1.6 feet.[See below on converting this to inches.] What if we want to talk about pitch as a function of frequency instead of sounding length? Frequency and length are in inverse proportion; that is, as pitch rises, the sounding length decreases but the frequency increases (and vice versa). Applied to the above chart, this means that for each fraction we must substitute its reciprocal: C c g c‘ e‘ g‘ bb-‘ c” d” e” f ”+ Number in series: Ratio to previous pitch: – 2/1 3/2 4/3 5/4 6/5 7/6 8/7 9/8 10/9 11/10 Ratio expressed as decimal: – 2 1.5 1.33 1.25 1.2 1.167 1.143 1.125 1.111 1.1 These relationships hold for any notes, not just those shown in the above table. Thus, in any octave, the frequency of the higher note is twice that of the lower one: if the first a‘ on the flute has a frequency of 415, the note an octave below it has a frequency half that, or 415/2 = 207.5. If the lowest pipe on a particular organ stop is 8 feet long, then the pipe an octave above that is 4 feet long, and the pipe sounding a fifth above that is 2/3 as long, or 2/3 x 4 = 8/3 or about 1.667 feet long. [To convert this to inches, see below. Suppose now that we want to know the relationship between g‘ and the note one major whole step above it, that is, a‘. The frequency of the latter will be 9/8 that of g‘. Of course, a‘ is today normally defined as the note whose frequency is 440. So what would be the frequency of g‘ in Pythagorean intonation? We know the ratio between the two frequencies; it is 8 to 9. The latter number represents the higher frequency. So the frequency of g‘ must be 8/9 that of a‘, or 8/9 x 440 = 391.1111…. (391-1/9). What would be the frequency of b‘? Since we consider the latter to be one whole step higher than a‘, then its frequency is 9/8 that of a‘, or 495. (This result is obtained by multiplying 440 by 9/8.) We now have ratios and frequencies for the two whole steps above g‘. What, then, is the relationship between g‘ and b‘? To find this we must add the two whole steps. We can think of this as adding one ratio, 9/8, to another, also 9/8. To add ratios, we multiply the fractions that represent them. Thus the interval from g‘ to b‘ is 9/8 x 9/8, or 81/64. [See below on multiplying and dividing fractions.] The frequency of this b‘is therefore 81/64 that of g‘, or 495. The ratio 81/64 is often referred to as that of the Pythagorean third. Technically, this is incorrect; the interval g‘ to b‘ as described above is actually a ditone, the sum of two whole tones, or more properly, two major whole tones, each formed by frequencies in the ratio 9/8. In just intonation, a major third is actually formed from two different-sized whole tones, as in the interval c” to e” in the tables shown above. The smaller whole tone, in the ratio 10/9, is a minor whole tone. Suppose that we decided to define b‘ not as the sum of two whole steps above g‘, but rather as a pure or just major third above g‘. In that case, the frequency of b‘ would be 5/4 that of g‘, or 5/4 x 391-1/9 = 488-8/9. This number is significantly lower than the one we obtained above (495). These considerations are important when we proceed to our next problem, the definition of the half step. Suppose we wish to express the interval from b‘ to c” as a ratio. We could think of this interval as the difference between the just fourth g‘/c” and the just third g‘/b‘. That is, we could take the difference between the ratios of the perfect fourth (4/3) and the major third (5/4). To find the difference between two ratios, we divide the larger by the smaller. Division by a fraction is the same as multiplying by its reciprocal. Thus, in this case the ratio representing the half step would be 4/3 ÷ 5/4 = 4/3 x 4/5 = 16/15. This is the value usually given for the diatonic or major half-step in just intonation. Expressed as a decimal, it is equivalent to about 1.067. We could find other half steps as well. Suppose, for example, we define bb’ as the note that is a minor third above g‘. In that case, the frequency of bb’ would be 6/5 that of g‘. What is the interval between this bb’ and the b‘ that forms a pure third to g‘? We must subtract a just major third (6/5) from a just minor third (5/4): 5/4 ÷ 6/5 = 4/5 x 5/6 = 25/24. This is the chromatic or minor half-step. Expressed in decimal terms, it is about 1.042, which is significantly smaller than the diatonic one. What if, in order to calculate our diatonic half step from b‘ to c”, we defined b‘ not as a pure third above g‘ but as a so-called Pythagorean third above that note? In that case we would be subtracting a ditone (81/64) from a perfect fourth (4/3). The result is 4/3 ÷ 81/64 = 4/3 x 64/81 = 256/243 or about 1.053. This is a significantly smaller number than 16/15, although larger than 25/ 24. In other words, the “Pythagorean third” results in a significantly higher b‘ than does a just third. The difference between these two intervals—the ditone and the just major third—is known as the syntonic comma. It is equal to 16/15 ÷ 256/243 = 16/15 x 243/256 = 3888/3840 = 81/80 = 1.0125. [See below on Quantz and the comma.] Now for some practical problems. Suppose that we have two flutes of slightly different lengths. We suspect that the difference in length represents a difference in the pitch standards at which they were built. How can we determine the interval between those two pitch standards? Let us imagine that the sounding length of one flute is exactly two feet; that of the other is an inch longer. To compare them, we might first convert both lengths into inches. In that case the length of the longer flute, in inches, 2 x 12 = 24; the other is 25. The sounding lengths of the two flutes are thus in the ratio 25/24, or a chromatic semitone. If the frequency of a‘ on the shorter flute is 440, then that on the longer one is 24/25 x 440 = 422.4 (remember that length and frequency are inversely proportional, so if the length of one flute is 25/24 that of the other, its frequency will be 24/25 that of the longer one). Now suppose instead that we know two frequencies and wish to determine the interval between them. If one flute sounds a‘ at 405 and another at 430, then the interval between them is 430/405. We can reduce that fraction to 86/81, but clearly this is not a small whole-number ratio, so it does not correspond exactly to any interval in Pythagorean intonation. Nevertheless, as a decimal, this fraction is equivalent to about 1.062. Comparing it to the decimal values for the common intervals, we see that it compares fairly closely to the diatonic half-step (1.067). One final problem: how can one divide an interval in half? For example, what note is at the exact midpoint of the octave a/a‘? We would probably call it eb’, but what is its frequency? If the frequency of a‘ is 440, that of a will be half that of 220. We might imagine then that the note halfway between them will have a frequency that is midway between 440 and 220. The difference between the two is also 220, and if we split this difference evenly might imagine the note halfway between them to have a frequency of 220 + 110 = 330. But 330 is equal to 2/3 of 220, and 2/3 is the ratio of the perfect fifth. The note a perfect fifth above a is e‘, which is obviously not the same as eb’. So one cannot simply split the difference of the frequencies between a and a‘ in order to divide the interval between them in half. In order to divide an interval in half, it is necessary to remember that intervals are expressed as ratios, and that to add ratios you must multiply the fractions that express them. To subtract ratios, you divide the fractions. Thus, when we speak of dividing an octave in half, we are imagining it to be the sum of two equal intervals. But if the ratio that represents the octave is the sum of two equal ratios, then to find their value we must find the square root of the fraction that represents the octave. Since the octave can be represented as the number two, half of the octave must be represented as the square root of 2. This is a so-called irrational number roughly equal to 1.414; looking at our second chart, we see that this corresponds to an interval somewhere between a perfect fifth and fourth. In fact, the interval in question is an equal-tempered tritone. But no interval represented by an irrational number can be expressed as a fraction, and therefore such intervals have no place in just intonation. Tritones in just intonation are always just a little different from half an octave; their ratios can be determined through the same sort of process we used to measure the halfstep. (Thus the diminished fifth is a fifth (3/2) minus a diatonic half-step (16/15), or 45/32 = 1.40625; the augmented fourth is a fourth (4/3) plus a chromatic half-step (25/24), or 100/72 = 25/18 ≈ 1.389.) The so-called Pythagorean third can be easily divided in two, since it is defined as the sum of two major whole steps. But what if we try to divide the pure major third? Earlier we found that if g‘ had a frequency of 391-1/9, then b‘ would have a frequency of 5/4 that, or 488-8/9. If we split the difference, we find that a‘ = 440, as we would expect. But, again, this frequency corresponds not to the midpoint between g‘ and b‘ but to a note slightly higher. This is because a‘ forms a major whole step to g‘ but only a minor whole step to b‘. Like the octave, the major third can be divided equally only when it is defined as the sum of two equal intervals, as in the case of the Pythagorean ditone. To divide a pure major third (5/4) in half, one would need to find the square root of 5, which is another irrational number. An equal division of the pure major third g‘/b‘ as defined above would yield an a‘ of frequency about 437, which is audibly lower than a‘ = 440. To put it another way, when adding or subtracting intervals, it is important to multiply or divide ratios properly. If modern pitch is defined as a‘ = 440, then a major whole step beneath that is 8/9 x 440 = 3520/9 or about 391.111. The halfstep above that must be defined as either g#’ or ab’, the former forming a diatonic halfstep to a‘ (frequency 15/16 x 440 = 412.5), the latter a chromatic one (24/25 x 440 = 422.4). Equal-tempered g#/ab’ lies between them at about 415. The latter, morever, is about 25 Hz below 440 and about 23 above 392—that is, not equidistant in terms of Hz. Some of the distinctions noted above are audible; others involve immeasurably tiny differences that are meaningful only on paper. But it is necessary to do the arithmetic in order to determine which of these distinctions might involve differences of practical significance. Fractions and decimals To convert a fraction to a decimal number, divide the top number (numerator) by the bottom number (demonimator). For example, to convert the fraction 9/8 to a decimal number, divide 9 by 8; the result is 1.125. To convert a decimal number to a fraction, understand the decimal portion of the number as a fraction over 10, 100, 1000, etc. Thus .65 = 65/100; .375 = 375/1000. If the top and bottom numbers have a common denominator, then you can “reduce” the fraction. For example, in 65/100, both 65 and 100 are divisible by 5 (that is, 5 “goes into” both 65 and 100). 65 divided by 5 is 13, and 100 divided by 5 is 20. Thus 65/100 = 13/20. In 375/1000, both numbers are divisible by 125, and the fraction is equal to 3/8. Some fractions cannot be expressed as exact decimal numbers. For example, if you divide 2 by 3 you get a never-ending series of digits: 2/3 = .66666666666666666666666666666666666666… Numbers of this type are called repeating decimals. Calculators and normal human beings usually round them to a simpler non-repeating decimal which is not exactly equal to the fraction but close enough for practical purposes. Thus 2/3 is approximately equal to .67, or to .667, or to .66666667. The degree of precision is arbitrary: you round the number to as many or as few decimal places as you need to, depending on the accuracy of your measuring device. Rarely, however, will it be necessary or desirable to express decimals more accurately than to the nearest thousandth (three decimal places). It is not easy to convert repeating decimals to fractions. The simplest method is simply to memorize the most common ones, such as: 1/3 = .333… 2/3 = .666… 1/6 = .166… 5/6 = .833… 1/9 = .111… 2/9 = Some decimal numbers cannot be expressed as fractions. For example, pi, the ratio between the circumferance of a circle and its diameter, is a non-repeating decimal number that begins 3.1416… and continues infinitely without forming any repeating patterns. Such a number is calledirrational. The square roots of prime numbers such as 2 and 5 are also irrational numbers. Most of the intervals of equal temperament can be expressed only as irrational numbers; many calculations with such numbers require the use of logarithms, which were not invented until around 1600 and not applied to music until around a century later. Converting feet to inches An English inch is 1/12 of a foot. In calculations involving both inches and feet, it is necessary to express all measurements in one or the other unit, choosing whichever unit yields the most useful results. It may be necessary to experiment. For example, if the pipe for c” is one foot long and we want to know the length of the pipe for d”, it doesn‘t help to know that the latter should be 8/9 of a foot in length; we want to know how many inches. Therefore start by converting one foot to 12 inches, then multiply the latter by 8/9 to get 10.666… or ten and 2/3 inches. Multiplying and dividing fractions To multiply two fractions, multiply the numerators and then the demoniminators separately. Thus, to multiply 3/4 by 5/6, first multiply 3 x 5 = 15, then 4 x 6 = 24. The result is 15/24. Because 15 and 24 are both divisible by 3, 15/24 can be reduced to 5/8. To divide one fraction by another, you multiply the first one by the reciprocal of the second. For example, to divide 3/4 by 5/6, multiply 3/4 x 6/5 = 18/20. This can be reduced to 9/10. Whole numbers are fractions with a denominator of 1. For example, 6 = 6/1. Thus, dividing by 6 is the same as multiplying by 1/6. Quantz and the comma According to Quantz and others, there are nine “commas” in a whole tone. Is he talking about the syntonic comma? We would have to add nine of these intervals, each 81/80, and compare the result with the value for a major whole tone, or 9/8. (81/80) x (81/80) x (81/80) x (81/80) x (81/80) x (81/80) x (81/80) x (81/80) x (81/80) = 150094635296999121/134217728000000000 Expressed as a decimal, the latter equals about 1.11829217744617999345064163208008, which compares fairly closely with the value of 9/8: 1.125 There is another comma as well: the so-called Pythagorean comma, which is the difference between six major whole tones and an octave. It is slightly larger than the syntonic comma (1.0125): (9/8) x (9/8) x (9/8) x (9/8) x (9/8) x (9/8) x ½ = 531441/524288 = 1.0136432647705078125 Nine Pythagorean commas would equal 1.12970812218183304558660833854287, which is somewhat closer to a major whole tone than are nine syntonic commas. Some Keyboard Temperaments The following are simple pragmatic schemes for tuning harpsichords and other keyboard instruments. Although they correspond more or less to historically documented methods, I cannot cite specific sources for any of them. Wherever the instructions indicate to tune an interval x–y, the second note y is the one that is being tuned to the pitch x. Intervals that have already been tuned are shown as x/y. 1. 1/4-comma meantone temperament (especially for music of the 16th and 17th centuries) 1. Starting on c‘, tune a pure major third to e‘. 2. Next tune the fourth c‘–g pure (beatless); then lower the g so that the interval beats about 6 times a second. 3. Tune the fifth g–d‘ pure; then lower the d‘ so that the interval beats about 4 times a second. 4. Tune the fourth d‘–a pure, then lower the a so that the interval beats at about the same rate as the fourth g–c‘. 5. Test the fifth a/e‘; it should beat at about the same rate as the fifth g/d‘. If it beats too fast, lower the a slightly and adjust the other intervals accordingly. If it beats too slowly, raise the a. 6. Tune the octave g–g‘; test the triads c‘/e‘/g‘ and a/c‘/e‘. 7. Now tune these major thirds pure: a–f, g–b‘, a–c#’, d‘–bb, d‘–f#’, g‘–eb’, e‘–g#’. 8. Also tune these octaves: f–f‘, f#’–f#, g#–g#’, a–a‘. Test all of the triads within the interval f/a‘. If it all sounds good, tune the rest of the keyboard by octaves. 2. Tempérament ordinaire (used for French music of the later 17th and 18th centuries) 1–6. Follow steps 1–6 as under no. 1 (1/4-comma meantone). 7. Tune these major thirds pure: a–f, g–b. Also tune the octaves f–f‘ and g–g‘ 8. Tune these perfect fifths and fourths pure: b–f#’, f#’–c#’, c#’–g#’. Also tune the octaves f#’–f# and g#’–g#. Test each of the triads that incorporate these notes. The triads should sound increasingly harsh as you move toward “sharp” keys, but all should be usable. If not, go back to step 7 and raise the note b slightly in relation to the g. 9. Tune these perfect fourths pure: f–bb, bb–eb’. Test the fifth ab/eb’. It should sound pretty bad. Lower the eb’ until the fifth becomes tolerable; also test the fourth bb/eb’, which should also be tolerable. If not, adjust the eb’ further. It may be necessary to lower the bb as well. When done, test the major triads on f#, g#, bb, and c#’. All should be bearable, although only the one on bb will be close to being pure. 10. Tune the rest of the keyboard by octaves. 3. A well-tempered tuning (close to those described by Werckmeister and especially Neidhardt) 1. Tune pure the perfect fifth c‘–f. Also tune the octave f–f‘ 2. Tune pure the major third f–a. Now raise the a so that the interval beats about twice a second. Do the same for the major third c–e‘, except that it should beat about three times a second. 3. Test the triad f/a/c‘; it should be very close to pure. 4. Tune the perfect fourth c‘–g and the perfect fifth g–d‘ as in 1/4-comma meantone (temperament no. 1, steps 3–4), but with both intervals beating slightly less quickly. Then check the fourth a/d‘ and the fifth a/e‘; these should beat like the fourth g/c‘ and the fifth g/d‘, respectively. If all is well, also tune the octaves f–f‘, g–g‘, and a–a‘, and test all the resultant triads, which should sound close to pure. 5. Tune pure these fourths and fifths: e‘–b, b–f#’, f#’–c#’; and f–bb, bb–eb’. Also tune the octave f#’–f#. Test the resultant triads. Some will be fairly strident, but because you will have tuned are no more than three perfects in a row, none of the thirds will be as wide (impure) as a so-called Pythagorean third (which is the product of four perfect fifths). 6. Next tune the fifth eb’–ab so that the ab is high and the interval beats at roughly the same rate as the fifth a/e‘. Then test the fourth ab(g#)–c#’, which should be wide and beating at about the same rate as the fourth a/d‘. Also tune the octave g#–g#’ and test the fifth c#’/g#’, the fourth eb’–ab’, and the thirds ab–c‘ and e‘–g#’. You may need to adjust the notes ab (g#) and ab’ (g#’) up or down a bit in order to get all of the intervals tolerably in tune. 7. Tune the rest of the keyboard by octaves. 4. Temperament no. 3 for use with a Quantz flute 1. Start by tuning the note d‘ pure to the flute’s d‘ and d” (if the latter are not the same, move the cork in the flute!). 2. Next follow all the steps under temperament no. 3, but transpose all notes up a step (this will make D and G the purest tonalities, with virtually pure thirds above both notes): 1. Tune pure the perfect fifth d‘–g. Also tune the octave g–g‘ 2. Tune pure the major third g–b. Now raise the b so that the interval beats about twice a second. Do the same for the major third d–f#’, except that it should beat about three times a second. 3. Test the triad g/b/d‘; it should be very close to pure. 4. Tune the perfect fourth d‘–a and the perfect fifth a–e‘ as in 1/4-comma meantone (temperament no. 1, steps 3–4), but with both intervals beating slightly less quickly. Then check the fourth b/e‘ and the fifth b/f#’; these should beat like the fourth a/d‘ and the fifth a/e‘, respectively. If all is well, also tune the octaves g–g‘, a–a‘, and b–b‘, and test all the resultant triads, which should sound close to pure. 5. Tune pure these fourths and fifths: f#’–c#, c#–g#’, g#’–d#’; and g–c‘, c–f‘. Also tune the octaves g#’–g# and a#’–a#. Test the resultant triads. 6. Next tune the fifth f‘–bb so that the bb is high and the interval beats at roughly the same rate as the fifth b–f#’. Test the fourth bb–eb’. You may need to adjust the bb a little upward to get the third bb/d‘ more in tune (especially if you will be playing in “flat” keys), downward to tune the third bb(a#)/f#. But any large adjustments force you also to adjustments of f/f‘ and/or eb’ (d#’ ), so avoid this if possible. 7. Tune the rest of the keyboard by octaves. 3. Optionally—especially if playing pieces in “flat” keys—check the major thirds d‘/bb and g‘/eb’. If these are too wide, raise the note g so that it forms a pure major third b–g. Then raise the notes c‘, f‘, bb, and eb’ by the same amount. Also retune the octaves g–g‘ and f‘–f. This has the effect of transforming the temperament into something close to tempérament ordinaire, but it should work. Check the resultant triads, including the ones on ab and b. 4. Tune the rest of the keyboard by octaves.
{"url":"https://schulenbergmusic.org/temperament/","timestamp":"2024-11-13T23:00:01Z","content_type":"text/html","content_length":"59766","record_id":"<urn:uuid:65e0a936-a049-47e6-819b-d817fc6c4478>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00001.warc.gz"}
Predicting volatility with neural networks | Macrosynergy Predicting realized volatility is critical for trading signals and position calibration. Econometric models, such as GARCH and HAR, forecast future volatility based on past returns in a fairly intuitive and transparent way. However, recurrent neural networks have become a serious competitor. Neural networks are adaptive machine learning methods that use interconnected layers of neurons. Activations in one layer determine the activations in the next layer. Neural networks learn by finding activation function weights and biases through training data. Recurrent neural networks are a class of neural networks designed for modeling sequences of data, such as time series. And specialized recurrent neural networks have been developed to retain longer memory, particularly LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit). The advantage of neural networks is their flexibility to include complex interactions of features, non-linear effects, and various types of non-price information. The below post is based on various papers and posts that are linked next to the quotes. Headings, cursive text, and text in brackets have been added. Also, a range of orthographic and grammatical errors have been corrected and mathematical expressions have been expressed in common language. This post ties in with this site’s summary of statistical methods. Some formal basics of volatility “Volatility changes over time. High volatility means high risk and sharp price fluctuations, while low volatility refers to smooth price changes…Simulations of [asset prices] are often modeled using stochastic differential equations…[that include a] drift coefficient or mean of returns over some time period, a diffusion coefficient or the standard deviation of the same returns, [and a stochastic process as] Wiener process or Brownian Motion…Usually…volatility changes stochastically overtime…The volatility’s randomness is often described by a different equation driven by a different Wiener process.[That] model is called a stochastic volatility model…Stochastic volatility models are expressed as a stochastic process, which means that the volatility value at time t is latent and unobservable.” [Antulov-Fantulin and Rodikov] “Daily realized volatility is defined as the square root of the sum of intra-day squared returns…Realized volatility (RV) is a consistent estimator of the squared root of the integrated variance (IV). There is even a more robust result stating that realized volatility is a consistent estimator of quadratic variation if the underlying process is a semimartingale.” [Antulov-Fantulin and Traditional volatility prediction models “Autoregressive Conditional Heteroskedasticity, or ARCH, is a method that explicitly models the change in variance over time in a time series. Specifically, an ARCH method models the variance at a time step as a function of the residual errors from a mean process (e.g. a zero mean)…Generalized Autoregressive Conditional Heteroskedasticity, or GARCH, is an extension of the ARCH model that incorporates a moving average component together with the autoregressive component. Specifically, the model includes lag variance terms (e.g. the observations if modeling the white noise residual errors of another process), together with lag residual errors from a mean process.” [Brownlee] “The generalized ARCH model [estimates] variance as future volatility [based on] long-run variance and recent variance. Thus, the clustering effect is a sharp increase of volatility…not followed by a sharp drop…Various extensions have been introduced [such as] exponential GARCH, GJR-GARCH, and threshold GARCH [motivated by] stylized facts about volatility.” [Antulov-Fantulin and Rodikov] “The HAR [heterogeneous autoregression] model essentially claims that the conditional variance of … returns is a linear function of the lagged squared return over the identical return horizon in combination with the squared returns over longer and/or shorter return horizons…Inspired by the success of HAR-type models, most work…has extended the HAR model in the direction of generalizing with jumps, leverage effects, and other nonlinear behaviors…The HAR model has an intuitive interpretation that agents with daily, weekly, and monthly trading frequencies perceive and respond to, altering the corresponding components of volatility.” [Qiu et al.] “The heterogeneous Autoregression Realized Volatility (HAR-RV) model…is based on the assumption that agents’…perception of volatility depends on their investment horizons and [can be] divided into short-term, medium-term and long-term…Different agents…have different investment periods and participate in trading on the exchange with different frequencies…and respond to different types of volatility…A short-term agent may react differently to fluctuations in volatility compared to a medium- or long-term investor. The HAR-RV model is an additive cascade of partial volatilities generated at different time horizons…[for example] daily, weekly, and monthly observed realized volatilities…that follows an autoregressive process…The HAR-RV approach is a more stable and accurate estimate for realized volatility.” [Antulov-Fantulin and Rodikov] Neural networks: basics and key types for financial markets The very basics “A neural network is an adaptive system that learns by using interconnected nodes or neurons in a layered structure that resembles a human brain. A neural network can learn from data—so it can be trained to recognize patterns, classify data, and forecast future events.” [MathWorks] “Neural Networks consist of artificial neurons that are similar to the biological model of neurons. It receives data input and then combines the input with its internal activation state as well as with an optional threshold activation function. Then by using an output function, it produces the output.” [hackr.io] Neural networks consist of layers, i.e. sets of nodes or neurons. There is typically an input layer, an output layer, and a number of hidden layers in between. A neuron is loosely a function that returns a number between 0 and 1. The number returned by the neuron is called its activation. For example, the neurons of an input layer could be the pixels of an image and the numbers could denote their brightness. Within a network, activations in one layer determine the activations in the next layer. The activation of a neuron is governed by a specific weighting function that takes as arguments all the activations of the previous layer. It is typically a function of a weighted sum that ensures that activations are always between 0 and 1, such as a sigmoid or rectified linear unit function. The function also uses a bias parameter, whose value determines a threshold that the weighted sum must exceed to activate meaningfully. Learning means with neural networks finding weights and biases that are appropriate for solving the problem at hand, using training data. The main method by which neural networks learn is gradient descent: parameters are set to minimize the average cost of errors, typically the squared differences between the estimated values in the output layer and the actual labels. The learning algorithm finds that minimum by starting with a random parameter set and then sequentially changing parameters in the direction that reduces their costs most. Types of neural networks for financial markets “Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series…Schematically, a RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.” [TensorFlow] RNNs are designed to model sequenced data. A sequence is an order of states. Examples are text, audio, or time series. RNNs fulfill this function through sequential memory, which makes it easy to recognize sequential patterns. It uses a looping mechanism (simple ‘for’ loop in code) that allows information to flow from one hidden state to the next. Only after the sequential information has all been passed to the hidden layer the hidden state is passed on and the output layer is activated. RNNs have a short-term memory issue. This means as steps are added to the loop the RNN struggles to retain the information of previous steps. This is caused by the “vanishing gradient” problem of backpropagation. Adjustments of parameters based on errors of the output layer decrease with each layer backward. Gradients shrink exponentially as the algorithm backpropagates down, for example moving backward through timestamps. Put simply, the earlier layers fail to do any learning and long-range dependencies are being neglected. Two specialized recurrent neural networks have been developed to mitigate short-term memory: LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit). They work like RNNs but are capable of learning long-term dependencies by using “gates”. The gates are tensor operations that learn what dependencies should be added to the hidden state. “An LSTM network is a type of recurrent neural network (RNN) that can learn long-term dependencies between time steps of sequence data.” [MathWorks] “An LSTM has a similar control flow as a recurrent neural network. It processes data passing on information as it propagates forward. The differences are the operations within the LSTM’s cells… The core concept of LSTMs is the cell state, and its various gates. The cell state act as a transport highway that transfers relative information all the way down the sequence chain. You can think of it as the ‘memory’ of the network. The cell state, in theory, can carry relevant information throughout the processing of the sequence. So even information from the earlier time steps can make its way to later time steps, reducing the effects of short-term memory. As the cell state goes on its journey, information gets added or removed to the cell state via gates. The gates are different neural networks that decide which information is allowed on the cell state. The gates can learn what information is relevant to keep or forget during training… We have three different gates that regulate information flow in an LSTM cell. A forget gate, input gate, and output gate…The forget gate…decides what information should be thrown away or kept…The input gate…updates the cell state… The output gate decides what the next hidden state should be.” [Michael Phi] “The Gated Recurrent Unit (GRU) is the younger sibling of the more popular Long Short-Term Memory (LSTM) network, and also a type of Recurrent Neural Network (RNN). Just like its sibling, GRUs are able to effectively retain long-term dependencies in sequential data.” [Loye] “To solve the vanishing gradient problem of a standard RNN, GRU uses, so-called, update gate and reset gate. Basically, these are two vectors which decide what information should be passed to the output. The special thing about them is that they can be trained to keep information from long ago, without washing it through time or remove information which is irrelevant to the prediction.”[ GRU is a lightweight version of LTSM where it combines long-term and short-term memory into its hidden state. Thus, while LSTM has cell states and hidden states, GRU only has hidden states. Thus, GRU only has two gates: an update gate (that decides how much of past memory to retain) and a reset gate (that decides how much of past memory to forget). Retaining and forgetting are different actions, i.e. different modes of manipulating past information. Application of neural networks for volatility forecasting “We study and analyze various non-parametric machine learning models for forecasting multi-asset intraday and daily volatilities by using high-frequency data from the U.S. equity market. We demonstrate that, by taking advantage of commonality in intraday volatility, the model’s forecasting performance can significantly be improved…A measure for evaluating the commonality in intraday volatility is proposed, that is the adjusted R-squared value from linear regressions of a given stock’s realized volatility against the market realized volatility…Commonality over the daily horizon is turbulent over time, although commonality in intraday realized volatilities is strong and stable…For most models, the incorporation of commonality leads to better out-of-sample performance through pooling data together and adding market volatility as additional features.” [Zhang et al] “Neural networks are in general, superior to other techniques [reflecting] the capability of neural networks for handling complex interactions among predictors… The high-dimensional nature of ML methods allows for better approximations to unknown and potentially complex data-generating processes, in contrast with traditional economic models…Furthermore, to alleviate the concerns of overfitting, we conduct a stringent out-of-sample test, using the existent trained models to forecast the volatility of completely new stocks that are not included in the training sample. Our results reveal that neural networks still outperform other approaches (including the OLS models trained for each new stock).” [Zhang et al] “We investigate whether a totally nonparametric model is able to outperform econometric methods in forecasting realized volatility. In particular, the analysis …compares the forecasting accuracy of time series models with several neural networks architectures…The data set employed in this study comprises…observations from February 1950 to December 2017 of the Standard & Poor’s (S&P) index…The latent volatility is estimated through the ex-post measurement of volatility based on high-frequency data, namely realized volatility…computed as the sum of squared daily returns…Recurrent neural networks are able to outperform all the traditional econometric methods. Additionally, capturing long-range dependence through LSTM seems to improve the forecasting accuracy also in a highly volatile period.” [Bucci] “We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors…This work shows the potential of deep learning financial time series in the presence of strong noise [and holds] strong promise for better predicting stock behavior via deep learning and neural network models.”[ Xiong, Nichols and Shen] “This study investigates the strengths and weaknesses of machine learning models for realised volatility forecasting of 23 NASDAQ stocks over the period from 2007 to 2016. Three types of daily data are used, variables used in the HAR-family of models, limit order book variables and news sentiment variables…Using a Long-Short-Term-Memory (LSTM) model combined with…four sets of variables each with 21 lags are trained with the loss function of minimising mean squared erors. These experiments provide strong evidence for the stronger forecasting power of machine learning models than all HAR-family of models.” [Rahimikia and Poon] “The volatility prediction task is of non-trivial complexity due to noise, market microstructure, heteroscedasticity, exogenous and asymmetric effect of news, and the presence of different time scales, among others…We studied and analyzed how neural networks can learn to capture the temporal structure of realized volatility. We…implement Long Short Term Memory (LTSM) and…Gated Recurrent Unit (GRU). Machine learning can approximate any linear and non-linear behavior and…learn data structure…We investigated the approach with LSTM and GRU types for realized volatility forecasting tasks and compared the predictive ability of neural networks with widely used EWMA, HAR, GARCH-family models… LSTM outperformed well-known models in this field, such as HAR-RV. Out-of-sample accuracy tests have shown that LSTM offers significant advantages in both types of markets.” [Antulov-Fantulin and Rodikov]
{"url":"https://macrosynergy.com/research/predicting-volatility-with-neural-networks/","timestamp":"2024-11-09T11:09:37Z","content_type":"text/html","content_length":"193446","record_id":"<urn:uuid:e687292b-84c5-434e-83aa-339855ae6288>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00656.warc.gz"}
Latest Changes • Discussion Type • discussion topicstructure • Category Latest Changes • Comments 15 • Last Active May 28th 2012 • Discussion Type • discussion topicalgebraic dynamics • Category Latest Changes • Comments 1 • Last Active May 29th 2012 • Changes to dynamical system. Stub for algebraic dynamics. • Discussion Type • discussion topicconstructible set • Category Latest Changes • Comments 4 • Last Active May 30th 2012 • Stub constructible set, not yet precise (e.g. the universe is not a Boolean algebra as it is a proper class), but gives idea what we could work on in the entry. • Discussion Type • discussion topicDixmier-Douady class • Category Latest Changes • Comments 2 • Last Active May 30th 2012 • created Dixmier-Douady class • Discussion Type • discussion topicfirst-order theory • Category Latest Changes • Comments 1 • Last Active May 31st 2012 • We already have the entry predicate calculus (or first-order logic); I have created separate stub first-order theory, almost empty now, and which could just have been redirect to first-order logic, though I think eventually it would be good to have them separate, as under first-order theories version one could list lots of standard examples of first-order theories, what does not really fit into predicate logic entry, where one should really deal more with predicate calculus. PLus one should do other views of first-order theories. • Discussion Type • discussion topicamalgamation • Category Latest Changes • Comments 1 • Last Active May 31st 2012 • Stub for amalgamation. • Discussion Type • discussion topicelementary and abstract elementary classes • Category Latest Changes • Comments 1 • Last Active May 31st 2012 • Stubs for elementary class of structures (with redirect elementary class) and a stub for far more general Saharon Shelah’s notion of an abstract elementary class. • Discussion Type • discussion topicpretopos completion • Category Latest Changes • Comments 3 • Last Active May 31st 2012 • Discussion Type • discussion topictype theory • Category Latest Changes • Comments 48 • Last Active Jun 2nd 2012 • Some reorganization and added material at type theory. In particular, I added some of the basic syntax of type theories, and also some comments about extensional vs. intensional type theories. • Discussion Type • discussion topicformal loop space • Category Latest Changes • Comments 1 • Last Active Jun 2nd 2012 • stub for formal loop space, just so as to record Kapranov’s article. Eventually it would be great if the $n$Lab had some paragraphs on the geometric link with the chiral de Rham complex, vertex operator algebras, etc. But I can’t do it now. • Discussion Type • discussion topicKac-Moody group • Category Latest Changes • Comments 2 • Last Active Jun 3rd 2012 • stub for Kac-Moody group, in order to record some references • Discussion Type • discussion topicBorn-Oppenheimer approximation • Category Latest Changes • Comments 2 • Last Active Jun 3rd 2012 • I happened to have need for a stub Born-Oppenheimer approximation • Discussion Type • discussion topicKervaire invariant • Category Latest Changes • Comments 3 • Last Active Jun 4th 2012 • I may have written something at Kervaire invariant, but it is at best a stub for the moment • Discussion Type • discussion topicdisplay of a p-divisible group • Category Latest Changes • Comments 1 • Last Active Jun 4th 2012 • I started display of a p-divisible group. • Discussion Type • discussion topicEnergy • Category Latest Changes • Comments 16 • Last Active Jun 5th 2012 • I have created energy ex nihilo. Take that, Hermann von Helmholtz! • Discussion Type • discussion topicfiber integration in ordinary differential cohomology • Category Latest Changes • Comments 4 • Last Active Jun 6th 2012 • I am starting fiber integration in ordinary differential cohomology, but just two references so far (are there more? the project by Baer seems not to have borne any article, it seems) • Discussion Type • discussion topicRelation between type theory and category theory • Category Latest Changes • Comments 4 • Last Active Jun 6th 2012 • Added the reference Maria Emilia Maietti, Modular correspondence between dependent type theories and categories including pretopoi and topoi, Math. Struct. in Comp. Science (2005), vol. 15, pp. 1089–1149 • Discussion Type • discussion topicOnline Resources, math institutions, math archives • Category Latest Changes • Comments 6 • Last Active Jun 9th 2012 • I made redirects to Online Resources, namely the math blogs, online resources. Before we were complaining to Online Resources for many reasons including that it is not of all resources but only of blogs and wikis in relevant areas. No list of main institutes and archives like arXiv, numdam, jstor etc. there. As the list is long, and hard to scroll, I suggest not to add those to the current page. I think we should rename the current page to math blogs eventually and keep Online Resources (especially because of John's reference in his AMS Notices paper) as a redirect and create new pages for other stuff as well as organize the whole system around a top page math resources which will link to math blogs, math archives, math institutions (and maybe more) as well as very comprehensive central AMS-kept list of math resources. I know it is not only about math here, but math is a short abbreviation for page name. Up to now I have realized a large part of an above program, see math archives, math institutions and the supposed top resource page math resources, except that I was cautious not to rename the page Online Resources as people may disagree even with keeping the old redirect and because it may be tricky with the cache bug, while the page is of central importance. I think it would be useful if the pages like math institutions and the top page math resources stay not much longer than they are now, to have quick links and nice readability/visibility. This is the most effective organization, I think. For smaller institutions societies and alternative small lists of resources, it is better to go via links at AMS, EMS and IMU which are already efefctively linked. We can not do better there than those societies do, apart from listing few extra main resources of our main interest. We can have a separate page just for categories or some other things. But the list of blogs is of different character, unlike going to AMS page or jstor, one does not need to go that quickly through list of less-organized stuff like blogs. So the blog list math blogs should grow indefinitely...I have chosen plural as before in these pages, without singular redirect at the moment. • Discussion Type • discussion topicover quasi-category • Category Latest Changes • Comments 3 • Last Active Jun 14th 2012 • and handful of further details at over quasi-category • Discussion Type • discussion topicinfinite-dimensional Chern-Simons theory • Category Latest Changes • Comments 2 • Last Active Jun 15th 2012 • just heard an interesting talk by Steven Rosenberg on CS invariants on infinite-dimensional manifolds. So I created an entry infinite-dimensional Chern-Simons theory in order to record some • Discussion Type • discussion topiccarrying • Category Latest Changes • Comments 2 • Last Active Jun 16th 2012 • created carrying • Discussion Type • discussion topicring extension • Category Latest Changes • Comments 2 • Last Active Jun 20th 2012 • started a stub for ring extension to go with the Café-discussion here. • Discussion Type • discussion topicminimal fibration • Category Latest Changes • Comments 8 • Last Active Jun 21st 2012 • I created minimal fibration which could be merged with minimal Kan fibration. The idea-section says that this notion is needed to give a well defined notion of n-category. However there are other applications which I didn’t mention. • Discussion Type • discussion topicsuppressed reference • Category Latest Changes • Comments 1 • Last Active Jun 22nd 2012 • Hi guys, I suppressed the reference to my course on global analytic geometry. These notes were not well written enough and i put them into the basket. Please, don't pull back the reference. • Discussion Type • discussion topicNekrasov functions and AGT conjecture • Category Latest Changes • Comments 1 • Last Active Jun 25th 2012 • Stubs (just to provide the links to the basic references so far) for Nekrasov functions, AGT conjecture, and, less related, spectral networks. • Discussion Type • discussion topicstructures in a cohesive oo-topos • Category Latest Changes • Comments 12 • Last Active Jun 28th 2012 • I am now going through the section Structures in a cohesive oo-topos and polish and expand the discussions there. First thing I went through is the subsection Geometric homotopy and Galois theory. It gives the definition of the fundamental $\infty$-groupoid functor, a proposition on its consistency (which we had mentioned elsewhere), the definition of locally constant $\infty$-stacks in the sense of $Disc Aut(F)$-principal $\infty$-bundles, and then the central theorem of Galois theory, proven by applying the $\infty$-Yoneda lemma iteratively. (This is material appearing in one form or other in other entries and at this point does not invoke the $\infty$-locality, but I want to have here all in one place a nice comprehensive discussion of the whole situation in a cohesive $\infty$-topos.) • Discussion Type • discussion topicassociated oo-bundle • Category Latest Changes • Comments 10 • Last Active Jun 29th 2012 • started an entry associated infinity-bundle in order to summarize the thesis by Matthias Wendt on associated $\infty$-bundles in arbitrary $(\infty,1)$-toposes, generalizing the classical old results by Stasheff and May from $\infty Grpd$. Also added some remarks on the relation to the discussion at principal infinity-bundle. Hopefully to be continued tomorrow. • Discussion Type • discussion topichomotopy-homology-cohomology • Category Latest Changes • Comments 7 • Last Active Jun 30th 2012 • I felt it was time for another table: homotopy-homology-cohomology The structure is just a first attempt, begun in a brief moment of leisure. I’ll try to think about how to improve on it. Let me know what you think. I have started to include this into relevant entries. • Discussion Type • discussion topichigher geometric quantization • Category Latest Changes • Comments 1 • Last Active Jun 30th 2012 • In case you see the activity in the logs and are wondering, I should say that I have been working on a new entry higher geometric quantization (that used to redirect to n-plectic I have started adding some survey-tables. But not done yet with the entry as a whole. • Discussion Type • discussion topicinfinite dimensional manifold • Category Latest Changes • Comments 2 • Last Active Jul 1st 2012 • I realized that infinite-dimensional manifold and all entries related to it are still very stubby. I am not attempting to change this now, but I thought a first step to make progress is to list what stubs we actually have. I came up with this list and added it to manifolds and cobordisms - contents and made sure that all these entries point to each other. Now somebody go and add more content to these entries! :-) • Discussion Type • discussion topicIHL manifold • Category Latest Changes • Comments 10 • Last Active Jul 2nd 2012 • stub for IHL manifold. • Discussion Type • discussion topicrepresentation theory - contents • Category Latest Changes • Comments 1 • Last Active Jul 2nd 2012 • I have started a table of contents representation theory - contents (based on the link list at representation theory) and started adding it as a floating table of contents to relevant entries. But I ran out of steam before being entirely satisfied with the result. • Discussion Type • discussion topicgeometric quantization extensions - table • Category Latest Changes • Comments 1 • Last Active Jul 2nd 2012 • I have created a table geometric quantization extensions - table. Mostly I have been editing aspects of the entries listed in this table here and there. Also included the table in the Properties-section of various of these entries. • Discussion Type • discussion topicgeometric quantization - contents • Category Latest Changes • Comments 14 • Last Active Jul 3rd 2012 • I have started a table geometric quantization - contents and added it as a floating TOC to the relevant entries. Parts of this remain a bit unfinished. The $n$Lab is pretty much unusable in the last hours. I’ll give up now, have wasted too much time with this already. Maybe later it has recovered. • Discussion Type • discussion topicsymplectic reduction • Category Latest Changes • Comments 2 • Last Active Jul 4th 2012 • I have decided to splitt off a stand-alone entry symplectic reduction from BRST-BV formalism (which used to be the redirect). Still just a stub. Lots of material and references still needs to be copied or moved from the latter to the former. • Discussion Type • discussion topicHamiltonian action • Category Latest Changes • Comments 12 • Last Active Jul 4th 2012 • I wrote Hamiltonian action. I tried to say precisely what the action is by. In the literature (but also in a previous version of our moment map entry) there is often (for instance on Wikipedia, but also in many other sources) an imprecise (not to say: wrong) statement, where an action by Hamiltonian vector fields is not distinguished from one by Hamiltonians. • Discussion Type • discussion topicsymplectic reduction - table • Category Latest Changes • Comments 1 • Last Active Jul 5th 2012 • I have created an entry symplectic reduction - table and included it into relevant entries. • Discussion Type • discussion topicKirchhoff's laws • Category Latest Changes • Comments 7 • Last Active Jul 10th 2012 • I created a stub for Kirchhoff’s laws to go with the $n$Café-discussion here. Maybe somebody feels like expanding it, I don’t really have the time for this right now. • Discussion Type • discussion topic(infinity,1)-pullback • Category Latest Changes • Comments 1 • Last Active Jul 10th 2012 • Danny Stevenson was so kind and completed spelling out the proof of the pasting law for $\infty$-pullbacks here at (infinity,1)-pullback. • Discussion Type • discussion topicmetalinear group • Category Latest Changes • Comments 1 • Last Active Jul 10th 2012 • started metalinear group • Discussion Type • discussion topicmetalinear structure • Category Latest Changes • Comments 1 • Last Active Jul 10th 2012 • created metalinear structure. Added it to square roots of line bundles - table . Linked to it from Theta characteristic and so forth. • Discussion Type • discussion topicreduction of structure groups • Category Latest Changes • Comments 3 • Last Active Jul 10th 2012 • added a few more Examples to reduction of structure groups. • Discussion Type • discussion topicmetaplectic structure • Category Latest Changes • Comments 8 • Last Active Jul 10th 2012 • stub for metaplectic structure • Discussion Type • discussion topicdiagram of LCTVS properties • Category Latest Changes • Comments 1 • Last Active Jul 10th 2012 • Maybe I am not searching correctly, but it seems to me that until 2 minutes ago the rather remarkable diagram of LCTVS properties was linked to from exactly none non-meta $n$Lab page. It was effectively invisble unless one explicitly searched for “SVG”. Let me know if there is a reason for it remaining invisible. Assuming that there isn’t, I have now added it to locally convex space and to functional analysis - contents (which I restructured slightly, moving the two such overview diagrams prominently to the top, where they can be recognized as what they are). • Discussion Type • discussion topicmetaplectic representation • Category Latest Changes • Comments 2 • Last Active Jul 11th 2012 • New stub metaplectic representation, for now containing only some references. • Discussion Type • discussion topicconjugacy class • Category Latest Changes • Comments 9 • Last Active Jul 11th 2012 • created conjugacy class • Discussion Type • discussion topiccalculus, types and trees • Category Latest Changes • Comments 10 • Last Active Jul 12th 2012 • I created types and calculus and seven trees in one. Both entries as yet contain just references. It would be nice to have more articles expanding on the reltion of calculus and (higher) category theory /type theory. • Discussion Type • discussion topicquantum affine algebra • Category Latest Changes • Comments 1 • Last Active Jul 13th 2012 • I have created a stub quantum affine algebra as a means to collect some references, alluded to here. If there is any expert on the matter around, he or she should please feel invited to add an illuminating Idea-section to the entry. • Discussion Type • discussion topicPure and mixed states. • Category Latest Changes • Comments 3 • Last Active Jul 14th 2012 • I moved some material from state to create pure state (redirect from mixed state). • Discussion Type • discussion topicRadon–Nikodym derivatives. • Category Latest Changes • Comments 2 • Last Active Jul 14th 2012 • New page: Radon–Nikodym derivative. • Discussion Type • discussion topicTowards QFT • Category Latest Changes • Comments 2 • Last Active Jul 15th 2012 • Hi guys, The situation with my habilitation has been resolved. I decided to postone it to more favourable times. You can refer to my book and link it. • Discussion Type • discussion topicNew page: [[structured set]] • Category Latest Changes • Comments 4 • Last Active Jul 16th 2012 • I've been meaning to write this for a while. Now I need to look at Bourbaki this weekend to explain their approach. • Discussion Type • discussion topiccategorification in representation theory • Category Latest Changes • Comments 1 • Last Active Jul 17th 2012 • added some references by Catahrina Stroppel at the end of categorification in representation theory (also added the words “representation theory” to the entry itself :-) • Discussion Type • discussion topicnotes from String-Math 2012 • Category Latest Changes • Comments 1 • Last Active Jul 21st 2012 • as some of you will have seen, I had spent part of the last week with attending talks at String-Math 2012 and posting some notes about these, to the $n$Café (here). For many of these notes I added material to existing $n$Lab entries (mostly just references) or created $n$Lab entries (mostly just stubs). But since at the same time I was also finalizing the writup of an article as well as doing yet some other things, the whole undertaking was a bit time-pressured. As a result, I decided it would be too much to announce every single $n$Lab edit that I did here on the $n$Forum. So I ask you for understaning that hereby I just collectively announce these edits here: those who care should please scan through the list of blue links here and see if they spot pointers to $n$ Lab entries where they would like to check out the recent edits. I think I can guarantee, though, that in all cases I did edits that should be entirely uncontroversial, their main defect being that in many cases they leave one wish for more exhaustive • Discussion Type • discussion topicoverview over group schemes • Category Latest Changes • Comments 1 • Last Active Jul 21st 2012 • I wrote an overview over some constructions on- and examples of group schemes. • Discussion Type • discussion topiccategory with star-morphisms • Category Latest Changes • Comments 16 • Last Active Jul 26th 2012 • I've added the following two new (related) pages to nLab wiki: category with star-morphismsabrupt category These concept aroused in my research of what I call cross-composition product (a stub in the wiki). my research , especially this manuscript for details and examples. • Discussion Type • discussion topicConcrete and abstract structures. • Category Latest Changes • Comments 4 • Last Active Jul 28th 2012 • Concrete, abstract: group actions, groups; concrete categories, categories; Cartesian spaces, vector spaces; von Neumann algebras, $W^*$-alebras; material sets, structural sets; etc. At concrete New entry structure but the $n$Lab is down so I save here the final version of editing, which is probably lost in $n$Lab: The concept of a structure is formulated as the basic object of mathematics in the work of Bourbaki. In model theory, a structure of a language $L$ is the same as model of $L$ with empty set of extra axioms. Given a first-order language $L$, which consists of symbols (variable symbols, constant symbols, function symbols and relation symbols including $\epsilon$) and quantifiers; a structure for $L$, or $L$-structure is a set $M$ with an interpretation for symbols: • if $R\in L$ is an $n$-ary relation symbol, then its interpretation $R^M\subset M^n$ • if $f\in L$ is an $n$-ary function symbol, then $f^M:M^n\to M$ is a function • if $c\in L$ is a constant symbol, then $c^M\in M$ Interpretation for an $L$-structure inductively defines an interpretation for well-formed formulas in $L$. We say that a sentence $\phi\in L$ is true in $M$ if $\phi^M$ is true. Given a theory $(L,T) $, which is a language $L$ together with a given set $T$ of sentences in $L$, the interpretation in a structure $M$ makes those sentences true or false; if all the sentences in $T$ are true in $M$ we say that $M$ is a model of $(L,T)$. Some special cases include algebraic structures, which is usually defined as a structure for a first order language with equality and $\epsilon$-relation both with the standard interpretation, no other relation symbols and whose function symbols are interpreted as operations of various arity. This is a bit more general than an algebraic theory as in the latter, one needs to have free algebras so for example fields do not form an algebraic theory but are the algebraic structures for the theory of fields. In category theory we may talk about functor forgetting structure (formalizing an intuitive, related and in a way more general sense), see • Discussion Type • discussion topicalgebraic group • Category Latest Changes • Comments 5 • Last Active May 28th 2012 • I added in the definition of algebraic group the requirement ”field” into ”algebraically closed field”. Alternatively one could omit ”field” in the definition at all since this is implicit in • Discussion Type • discussion topicn-angulated category • Category Latest Changes • Comments 1 • Last Active May 28th 2012 • Discussion Type • discussion topicCalculus is Topology • Category Latest Changes • Comments 8 • Last Active May 28th 2012 • I recently came across some interesting ideas at inperc.com/wiki/index.php?title=Calculus_is_topology which might be incorporable into the nLab wiki -- although I'm not sure exactly where.
{"url":"https://nforum.ncatlab.org/5/227/","timestamp":"2024-11-09T17:22:14Z","content_type":"application/xhtml+xml","content_length":"148445","record_id":"<urn:uuid:d4950b1f-bea3-4202-8cea-2340beb94b25>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00441.warc.gz"}
Integral Domains Integral Domains Recall from the Zero Divisors in Rings page that if we consider a ring $(R, +, *)$ where $0$ is the identity of $+$ then a zero divisor of $R$ is an element $a \in R \setminus \{ 0 \}$ such that there exists an element $b \in R \setminus \{ 0 \}$ for which either $a * b = 0$ or $b * a = 0$. We saw that the ring $(M_{22}, +, *)$ of $2 \times 2$ matrices with real coefficients under the operation of standard addition $+$ and standard multiplication $*$ has zero divisors. We also noted that the rings $(\mathbb{C}, +, *)$ and $(\mathbb{R}, +, *)$ of complex and real numbers respectively under standard addition $+$ and standard multiplication $*$ has no zero divisors. The rings of complex and real numbers described above are particularly handy in being commutative rings and not having an zero divisors. In fact, we give a special name to rings that are commutative and have no zero divisors. Definition: An Integral Domain is a commutative ring $(R, +, *)$ that has no zero divisors. That is, it satisfies the extra condition that $a * b = b * a$ and if $0$ is the identity for $+$ then for all $a, b \in R$ we have that $a * b = 0$ implies that $a = 0$, $b = 0$ or both. We can therefore call the rings $(\mathbb{C}, +, *)$ and $(\mathbb{R}, +, *)$ integral domains. One important property we should note of is that if $(R, +, *)$ is an integral domain, then any subring $(S, +, *)$ is also an integral domain as we prove in the following theorem. Theorem 1: If $(S, +, *)$ is a subring of the integral domain $(R, +, *)$ then $(S, +, *)$ is an integral domain. • Proof: Suppose that $(S, +, *)$ is a subring of the integral domain $(R, +, *)$ and assume that $(S, +, *)$ is not an integral domain. • If $(S, +, *)$ is not commutative then there exists a pair of elements $a, b \in S$ such that $a * b \neq b * a$. But since $S \subseteq R$ then $a, b \in R$ is such that $a * b \neq b * a$ so $ (R, +, *)$ is not commutative and hence is not an integral domain which is a contradiction. • Now if $0$ is the identity for $+$ and there exists a zero divisor $a \in S \setminus \{ 0 \}$ then there also exists an element $b \in S \setminus \{ 0 \}$ such that $a * b = 0$ or $b * a =0$. But since $S \subseteq R$ we have that then $S \setminus \{ 0 \} \subseteq R \setminus \{ 0 \}$ so $a, b \in R \setminus \{ 0 \}$. Thus $a \in R$ is a zero divisor of $R$ which implies that $(R, +, *)$ is not an integral domain which is a contradiction. • Therefore the assumption that $(S, +, *)$ is not an integral domain is false. Therefore any subring $(S, +, *)$ is an integral domain of $(R, +, *)$. $\blacksquare$
{"url":"http://mathonline.wikidot.com/integral-domains","timestamp":"2024-11-09T06:50:29Z","content_type":"application/xhtml+xml","content_length":"17583","record_id":"<urn:uuid:6fb8c8ec-2838-436e-abbd-9e92c0dc0be8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00371.warc.gz"}
How to do multi level allocationHome - Board Community How to do multi level allocation 1. Abstract A Multi-Level cost allocation is an important process of any business, it plays a huge role in the decision-making process, and that’s why we must design it in the best possible way. The Multi-Level cost allocation consists of applying a classic allocation but with the possibility that an element can be at the same time a target and a source of the allocation process. 2. Context In a cost allocation module, sometimes we face the need to perform allocations at multiple levels. Let’s see in this article what are the best practices in implementing such a process. 3. Content The Multi-Level allocation procedure consists of applying the allocation in several iterations. Let’s take the following use case: cost allocation between different departments (see schema here below). We notice that some departments are both targets and sources of the allocation process. For example : • Step 1: Management department allocates total or partial costs to the Finance and Production Planning departments. • Step 2: Finance will also allocate to Production Planning. • Step 3: Production Planning will do the same operation as Manufacturing. 3.1 Use case For this use case, the focus is only the logic of the allocation procedure, all that concerns the source perimeter to allocate, the allocation driver, reporting …etc. has already been treated in the article How to allocate with a dynamic driver. 3.1.1 Entities and Relationships 3.1.2 Cubes We have the “Costs” cube (before allocation) that is structured as follows : The “Allocated Costs” cube shares the same structure. The “Allocation driver” cube is structured as follows: 3.1.3 Source data 3.1.4 Allocation driver • By rows: Source Cost Center • By column: Target Cost Center (CC [R]) As you can see, each source CC can allocate its costs totally or partially to one or many target CCs, for example : • CC 1: keeps 50% of its costs and allocates 10% to CC 5 and 40% to CC 21 • CC 5: allocates 100% of (its original costs + costs coming from CC 1) to CC 10 • CC 10: allocates 100% of (its original costs + costs coming from CC 5) to CC 11 • CC 11: keeps 50% of its costs and allocates the remaining 50% to CC 12 • CC 12: keeps 50% of its costs and allocates the remaining 50% to CC 19, note that CC 12 is also receiving costs from CC 9 • CC 19 will also allocate to others and so on … Etc. So, we can see that a cost center can have multiple sources, and it also can allocate to many targets. 3.2 Complexity The complexity of this use case lies in defining the right sequence of allocations, which means that for a Cost Center “X” to allocate its costs to “Y” and “Z”, it must first receive all costs coming from other CCs towards “X”. Let’s take the previous example: To allocate from CC 12 to CC 19, CC 12 must first receive the costs coming from CC 11 and CC 9. 3.3 Solution The solution is to apply the allocation following multiple iterations. Iteration 1: apply the allocation on one shot for all the CCs, as if it is a single-level allocation. Iteration 2: Take only the CCs that were targets of iteration 1, and calculate their new costs as follows: Cost before Iteration 2 = Original cost + Results of iteration 1 Apply allocation on one shot for those CCs Iteration 3: Take only the CCs that were targets of iteration 2, and calculate their new costs as follows: Cost before Iteration 3 = Costs before Iteration 2 + Results of iteration 2 Apply allocation on one shot for those CCs Iteration 4: … Iteration n: the last iteration is when the result of applying the allocation is NULL, which means that the remaining CCs only receive costs, they do not reallocate. The final value allocated is calculated as follows: Suppose we have 5 iterations, for each CC we apply this formula: Final allocated value = if iteration 5 is NOT NULL then iteration 5, else, if iteration 4 is NOT NULL then Iteration 4, else, if iteration 3 is NOT NULL then Iteration 3 …etc. The result of the previous example is: 3.4 Procedure 3.4.1 Loop Before getting to the core of the procedure, we must first define the logic of the iterations. Since the number of iterations cannot be known and is not fixed, we must build a procedure that is dynamic and works on all configurations on the allocation driver. The solution is to use a LOOP, the logic of which is as follows: • Allocation step: which has the following elements {Step 1, Step 2, … Step n}. • Dummy: which has only one element. • LOOP Steps, structured by Allocation Step. This cube is filled by the step position as follows: • LOOP Next Step, structured by Allocation Step. • LOOP Count, structured by Dummy. The logic is: • Initialize cube LOOP Count with value 1. • Condition to enter the LOOP: • Treatments: Allocation • Incrementation 3.4.2 Calculation algorithm Now let’s focus on the core of the procedure. As mentioned earlier, for each iteration, we must define the correct value to be allocated; to get this value, we must include another dimension “Cost Center Origin”, identical to the Cost Center so that we can keep track of the origin of the allocation; let us explain this with the following example: • CC1 allocates 10% to CC5. • CC5 allocates 100% to CC10. • CC10 allocates 100% to CC11. • CC11 keeps 50% and allocates 50% to CC12. By drilling to Cost Center Origin on the CC11 • Iteration 1: CC11 is receiving 900k which is the original value of CC10. • Iteration 2: CC11 receives 1700k that is the sum of 900k (original value of CC10) + 800k which CC10 received from CC5 in iteration 1, knowing that 800k is the original vale of CC5. • Iteration 3: CC11 is receiving 1800k which is the sum of 900k + 900k, i.e.: □ 900k original value of CC10 □ 900k received by CC10 from CC5, which is: ☆ 800k original value of CC5 ☆ + 100k that CC5 received from CC1. At this point, the final value of CC11 to be used in iteration 4 is the sum of 650k which is its original value + 1800K which was received from all the others, which gives 2450k. Now, drilling down to CC12: We can see that in Iteration 4, CC12 received 1224k from CC11, which is 50% of the value of CC11, and CC11 retained the other 50% which became its final value. • The final value is NOT the original value + the results of the last iteration. • The final allocated value is the sum of the original value + the result of the last iteration for each source CC. Let us continue with the same example to understand. In this step, for Iteration 5, CC12 will keep 50% and allocates the other 50% to CC19. The final value of CC12 is calculated as follows: • 1200k original value. • + 1641k coming from CC9 (results of iteration 3). • + 1225k coming from CC11 (results of iteration 4). • The sum of the 3 above gives 4066k. • Taking 50% of 4066k gives 2033k. Please see the screenshot here below. Let us take CC21 as a last example for better understanding. CC21 retains 100% of its costs, as we see in the allocation driver, by drilling to the “Cost Center Origin” we get: The final value of 7,322,250 is calculated as follows: 2,800,000+400,000+273,500+738,800+980,250+524,100+738,800+294,250+572,550 = 7,322,250 Let us reiterate this once again: The final allocated value is the sum of the original value + the result of the last iteration for each source CC. 3.4.3 Procedure steps Before entering the loop, we need to set 2 things: • Add the Cost Center Origin dimension to the source cube: → C=a*b • Remove the value of the CC allocation driver, which is on itself: → C = if(b=0,a,0) Then, the procedure steps are as follows: 1. Select Iteration step 2. Calculate the allocation using the Allocation Driver → d=(a+b)*(c/100) 3. Eliminate the CC → b=a 4. Move the CC[R] in CC → b=a Since we are working with the replicated entity Cost Center [R], it is possible to simply copy cube a into cube b, Board will automatically do the mapping between the 2 entities, otherwise, if Cost Center [R] is not a replicated entity, you must use a mapping cube to pass from cube ‘a’ to cube ‘b’ as follows : • Create mapping cube structured by Cost Center and Cost Center [R]. • Create a temporary cube that has the same structure as cube “a” + Cost Center • Temp cube = cube * mapping 5. Copy the result of the Step in Allocated Costs (Step) → b=a 6. Condition to Exit the Loop 7. Calculate the last allocated value → C= if(a<>0,a,b) 8. Step 1 Calculate the new origin CC for each step → b=a 8. Step 12Calculate the new origin CC for each step → c=a*b 9. Final allocated value (with origin) → c=a+b After quitting the loop (when all iterations have been treated), the last thing to do is to populate the target cube “Allocated Costs” → c=a-(a*(b/100)) Related Content: • Thanks for sharing the article; it's quite interesting. • 2K Forums • 332 Resources • Academy • 341 Partner Hub • 94 Support
{"url":"https://community.board.com/discussion/17051/how-to-do-multi-level-allocation","timestamp":"2024-11-13T07:46:50Z","content_type":"text/html","content_length":"373912","record_id":"<urn:uuid:3d7fe1d7-a7d8-4eb5-b89f-77c6042fd17d>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00031.warc.gz"}
What is 195 Fahrenheit to Celsius? - ConvertTemperatureintoCelsius.info The conversion from Fahrenheit to Celsius is a common calculation that many people need to make, particularly when dealing with international travel, cooking, or science. In this article, we will explore the process for converting 195 degrees Fahrenheit to Celsius and provide some background on the Fahrenheit and Celsius temperature scales. To convert from Fahrenheit to Celsius, you can use the following formula: T(°C) = (T(°F) – 32) × 5/9. In this case, the temperature in Celsius is equal to the temperature in Fahrenheit minus 32, multiplied by 5/9. So, let’s apply this formula to the given temperature of 195 degrees Fahrenheit. T(°C) = (195 – 32) × 5/9 T(°C) = 163 × 5/9 T(°C) = 90.56 Therefore, 195 degrees Fahrenheit is equivalent to approximately 90.56 degrees Celsius. Now that we have the conversion, let’s take a closer look at the Fahrenheit and Celsius temperature scales. The Fahrenheit scale was proposed by Daniel Gabriel Fahrenheit in 1724. This scale divides the freezing and boiling points of water into 180 equal intervals, with 32 degrees Fahrenheit as the freezing point and 212 degrees Fahrenheit as the boiling point at standard atmospheric pressure. In contrast, the Celsius scale was developed by Anders Celsius in 1742. This scale also uses the freezing and boiling points of water as reference points, but divides the range into 100 equal intervals. In the Celsius scale, the freezing point of water is 0 degrees Celsius, and the boiling point is 100 degrees Celsius at standard atmospheric pressure. The Celsius scale is more commonly used in scientific and international contexts, as it is based on the properties of water, which is a universal substance. In contrast, the Fahrenheit scale remains in use primarily in the United States and a few other countries. When it comes to converting between Fahrenheit and Celsius, it’s helpful to know some key reference points. For example, 0 degrees Celsius is equivalent to 32 degrees Fahrenheit, and 100 degrees Celsius is equivalent to 212 degrees Fahrenheit. With this knowledge, it becomes easier to estimate conversions without needing to use the exact formulas every time. In conclusion, the conversion from 195 degrees Fahrenheit to Celsius is approximately 90.56 degrees Celsius. Understanding the relationship between these temperature scales can be useful in a variety of everyday situations, from cooking to travel to scientific research. Whether you prefer Fahrenheit or Celsius, having a basic understanding of both scales can help you navigate the temperature differences you encounter in your daily life.
{"url":"https://converttemperatureintocelsius.info/what-is-195-fahrenheit-in-celsius/","timestamp":"2024-11-05T22:37:19Z","content_type":"text/html","content_length":"73249","record_id":"<urn:uuid:f76ea31e-3f2d-436c-bfc5-8c6a8d748229>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00296.warc.gz"}
How is "is_irreducible" function implemented in SAGE How is "is_irreducible" function implemented in SAGE Dear community, Suppose I have a polynomial $g$ over a finite field $\mathbb{F}_q[x]$. Then we can check whether $g$ is irreducible by using the following command I did not know about this method until recently. Previously, I just factorized g and counted the number of factors. If this number is 1, then g is irreducible. I compared the running time of my method and the above built-in method. I learned that the latter method is much faster (which makes sense). That leads me to wonder whether which method is used to implement is_irreducible. For example, some search on the internet suggests that Rabin's algorithm is a popular one. Is it true that Rabin's algorithm is implemented on Sage? For the record, I am using the paid version of Cocalc. Thank you for your consideration! Best wishes, Tung To see the code: 2 Answers Sort by » oldest newest most voted SageMath contains multiple implementations of finite fields and polynomial rings over them, even if this is not always clear from the interface. So the answer to the question depends on which implementation is used. For finite fields there are the specialized classes PolynomialRing_dense_mod_p (with implementations 'FLINT', 'NTL', and 'GF2X') and PolynomialRing_dense_finite_field (with implementations 'NTL' and 'generic'), and there is the generic class PolynomialRing_field (with a sparse and dense implementation). You can find out which one you are dealing with (if you didn't specify one explicitly) by checking the types: sage: R.<x> = PolynomialRing(GF(2)) sage: g = R.random_element() sage: type(R) <class 'sage.rings.polynomial.polynomial_ring.PolynomialRing_dense_mod_p_with_category'> sage: type(g) <class 'sage.rings.polynomial.polynomial_gf2x.Polynomial_GF2X'> So here we have a PolynomialRing_dense_mod_p with the 'GF2X' implementation. We can check how is_irreducible is implemented by looking at its source code as follows: sage: g.is_irreducible?? We see that the function is defined in sage/rings/polynomial/polynomial_gf2x.pyx, and that it calls GF2X_IterIrredTest. To find out where GF2X_IterIrredTest comes from we have to look at the file. We can find the file e.g. on GitHub, starting from the src/sage folder. We see GF2X_IterIrredTest is only mentioned once, so the name must be imported somehow. Looking at the top of the file, we see include "sage/libs/ntl/decl.pxi". Chasing down this file we see that it imports more stuff, in particular GF2X.pxd. In here we find the line long GF2X_IterIrredTest "IterIrredTest" (GF2X_c f) which means that GF2X_IterIrredTest refers to the C function IterIrredTest defined somewhere in the NTL library. After finding the NTL source code and searching for the function we finally find its definition in GF2XFactoring.cpp. Indeed it looks like an algorithm, not sure which one (there are 0 comments). Fortunately the same repository contains documentation; we want doc/GF2XFactoring.txt. It reads: long IterIrredTest(const GF2X& f); // performs an iterative deterministic irreducibility test, based on // DDF. Fast on average (when f has a small factor). By doing a web search we find that DDF stands for distinct-degree factorization. With this information and some more effort, one could decipher the C code of IterIrredTest. Another example: sage: R.<x> = PolynomialRing(GF(3)) sage: g = R.random_element() sage: type(R) <class 'sage.rings.polynomial.polynomial_ring.PolynomialRing_dense_mod_p_with_category'> sage: type(g) <class 'sage.rings.polynomial.polynomial_zmod_flint.Polynomial_zmod_flint'> So we have a PolynomialRing_dense_mod_p with the 'FLINT' implementation. Checking g.is_irreducible?? we find that it is defined in the file sage/rings/polynomial/polynomial_zmod_flint.pyx and that it calls nmod_poly_is_irreducible. Checking the file we find the import statement from sage.libs.flint.nmod_poly cimport * which looks promising. Checking sage/libs/flint/nmod_poly.pxd we find the line cdef int nmod_poly_is_irreducible(nmod_poly_t f) which means that it refers to a C function defined in the FLINT library. Looking up the FLINT source code and searching for the function we find its definition in nmod_poly_factor/is_irreducible.c, where it calls nmod_poly_is_irreducible_ddf. Searching for that one, we find its definition in nmod_poly_factor/is_irreducible_ddf.c. Indeed, it looks like an algorithm, even with some comments. The respective documentation states: Uses fast distinct-degree factorisation. So again one can try to understand the C code with this hint. Another example: sage: R.<x> = PolynomialRing(GF(3^2)) sage: g = R.random_element() sage: type(R) <class 'sage.rings.polynomial.polynomial_ring.PolynomialRing_dense_finite_field_with_category'> sage: type(g) <class 'sage.rings.polynomial.polynomial_zz_pex.Polynomial_ZZ_pEX'> This is a PolynomialRing_dense_finite_field with the 'NTL' implementation. Checking g.is_irreducible?? we find that it is defined in the file sage/rings/polynomial/polynomial_zz_pex.pyx and that it calls ZZ_pEX_IterIrredTest by default (though it offers other options). We see the import statement include "sage/libs/ntl/ntl_ZZ_pEX_linkage.pxi" and in the respective file we find from sage.libs.ntl.ZZ_pEX cimport *; in there we find long ZZ_pEX_IterIrredTest "IterIrredTest"(ZZ_pEX_c x) so it refers to the C function IterIrredTest (note: now with a differently typed argument!) in the NTL library. After a search we find its definition in src/ZZ_pEXFactoring.cpp. In the documentation we find: // performs an iterative deterministic irreducibility test, based on // DDF. Fast on average (when f has a small factor). So it is similar to the first example. I encourage others to post answers about the other implementations. edit flag offensive delete link more Thank you very much for your detailed answer. Let me provide a more precise description of the problem that I am working on. I have a polynomial $f$ over $\mathbb{Z}$ and I want to see whether it is irreducible. My original method is to study the factorization of $f$ modulo a prime $q$ and then use this factorization to see whether $f$ is irreducible or not. As I mentioned earlier, this method is quite slow. For example, it took hours to return the result for $f$ of degree around $500$. Note that I need to verify other properties of $f$ so taking modulo various primes $q$ is a must. However, for irreducibility, I realized that I can just use the built-in function is_irreducible(). And since I already used mod q, I continued to change the ring to GF(q) and use this function is_irreducible over $\mathbb{F}_q[x]$. It is faster than the method that I used before. Following the suggestion from rburing, I checked that the function the reduction mod q has the type ='sage.rings.polynomial.polynomial_zmod_flint.Polynomial_zmod_flint'>. So, using rburing's answer, we can trace back and learn that this method uses the FLINT library. Specifically, it uses Uses fast distinct-degree factorisation. I realize that I can even test for irreducibility over $\mathbb{Z}$ directly. This method is much faster than the previous two methods. For example, it only took 4 minutes (wall time) to return the result for polynomials of degree around $1500$. I copied the method of rburing and learned that the type in this case is <class 'sage.rings.polynomial.polynomial_integer_dense_flint.Polynomial_integer_dense_flint'> The documentation about the irreducibility test says the following • If the base ring implements _is_irreducible_univariate_polynomial, then this method gets used instead of the generic algorithm which just factors the input. Unfortunately, reading through the codes, I cannot decide which method it uses in my case (generic algorithm or is_irreducible_univariate_polynomial). I would appreciate your expertise on this Thank you again for your wonderful help! edit flag offensive delete link more You're welcome! Generally I would advise to post a follow-up question as a separate question (linking to the previous question). For factorization over $\mathbb{Z}$ the generic algorithm is used (because ZZ does not have a method _is_irreducible_univariate_polynomial), so the input is just factored. From g.factor?? you can see that factorization uses NTL or PARI, depending on the degree. For more details, please post a separate question. rburing ( 2022-03-17 15:10:04 +0100 )edit
{"url":"https://ask.sagemath.org/question/61523/how-is-is_irreducible-function-implemented-in-sage/?answer=61525","timestamp":"2024-11-07T09:51:18Z","content_type":"application/xhtml+xml","content_length":"70733","record_id":"<urn:uuid:e962126a-60d2-49ee-8956-c4f314f44d82>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00407.warc.gz"}
Exponents Calculator • Enter the base and exponent values. • Check the "Calculate Square Root" box if you want to calculate the square root. • Click the "Calculate" button to perform the calculation. • The result will be displayed along with the detailed explanation and formula used. • Your calculation history will be listed below. • Click the "Clear" button to clear the input fields and result. • Click the "Copy" button to copy the result to the clipboard. Calculation Details: Calculation History: What is Exponents? Exponents, also known as powers or indices, are mathematical notation used to represent repeated multiplication of a number or a base by itself. In an exponent expression, the base is raised to a certain power, which is indicated by a smaller number called the exponent. The exponent tells you how many times the base should be multiplied by itself. All Formulae Related to Exponents 1. Exponentiation: □ The general form of exponentiation is: a^n □ Where “a” is the base, and “n” is the exponent. □ It represents multiplying the base “a” by itself “n” times. 2. Multiplication with Same Base: □ When you multiply numbers with the same base and different exponents, you can add the exponents: a^n * a^m = a^(n + m) 3. Division with Same Base: □ When you divide numbers with the same base and different exponents, you can subtract the exponents: a^n / a^m = a^(n – m) 4. Exponent of 1: □ Any number raised to the exponent 1 is equal to itself: a^1 = a 5. Exponent of 0: □ Any nonzero number raised to the exponent 0 is equal to 1: a^0 = 1 6. Negative Exponents: □ A number raised to a negative exponent is equal to the reciprocal of the same number raised to the positive exponent: a^(-n) = 1 / a^n 7. Product of Powers Rule: □ When you multiply numbers with the same base and different exponents, you can distribute the exponent to each factor: (a^n * b^n) = (a * b)^n 8. Quotient of Powers Rule: □ When you divide numbers with the same base and different exponents, you can distribute the exponent to each factor: (a / b)^n = a^n / b^n 9. Power of a Power Rule: □ When you raise an exponentiated number to another exponent, you can multiply the exponents: (a^n)^m = a^(n * m) 10. Negative Exponent Rule: □ To remove a negative exponent, you can move the base with the negative exponent to the denominator with a positive exponent: a^(-n) = 1 / a^n Applications of Exponents Calculator in Various Fields An exponents calculator, which facilitates exponentiation and simplifies calculations involving exponents, has applications in various fields and industries where exponential growth, decay, and mathematical operations are common. Here are some examples of its applications in different areas: 1. Finance and Investments: □ In finance, compound interest calculations involve exponential growth. An exponents calculator is used to determine the future value of investments or loans. 2. Science and Engineering: □ Scientists and engineers use exponential equations to model natural phenomena, such as radioactive decay, population growth, and chemical reactions. The calculator helps analyze and predict these processes. 3. Statistics and Data Analysis: □ In statistics, exponential functions are used to model data trends. Calculators assist in fitting exponential curves to data sets and making predictions based on exponential growth or decay. 4. Medicine and Pharmacology: □ In pharmacokinetics and epidemiology, exponents are used to model drug concentration decay and disease spread. Calculators aid in determining optimal drug dosages and predicting infection 5. Computer Science: □ In computer algorithms and data structures, exponents are used to analyze algorithm efficiency and complexity. Calculators help computer scientists assess computational performance. Benefits of Using the Exponents Calculator Using an exponents calculator offers several benefits across various fields and mathematical applications. Here are some of the key advantages of using an exponents calculator: 1. Accuracy: Exponents calculators provide precise and error-free results, reducing the risk of calculation mistakes that can occur with manual calculations. 2. Efficiency: Calculating large or complex exponent expressions can be time-consuming and prone to errors. An exponents calculator speeds up the process, saving time and effort. 3. Ease of Use: Exponents calculators have user-friendly interfaces that make it easy to input values and perform exponentiation calculations, even for those less familiar with mathematical 4. Consistency: When working with exponential functions, maintaining consistency in calculations is crucial. Calculators ensure uniformity and accuracy in mathematical operations. 5. Educational Tool: Exponents calculators serve as valuable educational tools, helping students learn and understand exponentiation concepts and practice calculations. 1. “Beyond Simple Multiplication: Exponents in Number Theory and Group Theory” by Journal of Algebra 2. “From Fractals to Chaos Theory: Exponents in Dynamical Systems and Iterative Processes” by Chaos, Solitons & Fractals Last Updated : 03 October, 2024 One request? I’ve put so much effort writing this blog post to provide value to you. It’ll be very helpful for me, if you consider sharing it on social media or with your friends/family. SHARING IS ♥️ Sandeep Bhandari holds a Bachelor of Engineering in Computers from Thapar University (2006). He has 20 years of experience in the technology field. He has a keen interest in various technical fields, including database systems, computer networks, and programming. You can read more about him on his bio page. 25 thoughts on “Exponents Calculator” 1. Pete40 The article effectively elucidates the rules and properties of exponents while also highlighting the real-world applications in diverse fields. The benefits of using an exponents calculator are convincingly explained, emphasizing its utility and efficiency. 2. Ross Jayden The detailed exploration of exponents, coupled with practical examples and the benefits of using an exponents calculator, makes this article an illuminating and informative resource. Engaging and 3. Francesca43 An insightful and well-structured article that elucidates the intricacies of exponentiation and its wide-ranging applications. The benefits of using an exponents calculator are convincingly explained, highlighting its efficiency and utility. 4. Jrobinson The article effectively breaks down the various rules and formulae related to exponents, making it accessible and clear for readers. The section on applications of exponents calculator in different fields provides a holistic view of its significance. 5. Harrison Wood The benefits outlined for using an exponents calculator in diverse fields highlight its versatility and efficiency. A well-researched and informative piece that enlightens readers about the relevance of exponentiation in real-world scenarios. 6. Cook Aaron The article provides a comprehensive and well-structured explanation of exponentiation and its applications, particularly in finance, science, statistics, and computer science. The benefits of using an exponents calculator underscore its value across diverse disciplines. 7. Courtney Russell This article provides a comprehensive overview of exponentiation and its formulae. The benefits of using an exponents calculator are particularly insightful, showcasing its real-world applications in various fields. The references add credibility to the information presented. 8. Jason44 The comprehensive coverage of exponents and the practical insights into its applications in different fields add depth and clarity to the article. The benefits of using an exponents calculator are particularly compelling. 9. Bevans The content is informative and comprehensive, covering the fundamental aspects of exponents and their applications. The inclusion of specific examples in finance, science, and statistics adds practical relevance to the article. 10. Freya78 An enlightening article that effectively articulates the significance of exponentiation and the applications of exponents calculator in diverse fields. The content is engaging, informative, and 11. Danielle40 The practical implications of exponents in finance, science, and statistics are well-delineated, providing a clear understanding of how exponentiation is utilized across different disciplines. 12. Amanda40 The practical implications of exponentiation in finance, science, and other fields are effectively communicated. The benefits of using an exponents calculator are particularly compelling and highlight its indispensable role in accurate and efficient calculations. 13. Leah92 The practical examples and real-world applications of exponentiation in different domains provide a holistic perspective on the significance of exponents. The clarity and depth of the content make it a valuable resource for readers. 14. Nick12 The practical insights into the applications of exponentiation in various fields, coupled with the benefits of using an exponents calculator, make this article an enlightening read. Informative and insightful. 15. Stevens Kevin The article delivers a profound understanding of exponentiation and its practical relevance in diverse fields. The benefits of using an exponents calculator are clearly expounded, emphasizing its significance in facilitating accurate and efficient calculations. 16. Dale84 The detailed breakdown of exponentiation, its rules, practical applications, and the benefits of using an exponents calculator make this article a compelling and informative read. Well-crafted and insightful. 17. Hill Florence I concur with the thoroughness of the article in covering both the theoretical and applied aspects of exponentiation. The examples and benefits presented offer a comprehensive understanding of why exponents are significant in various domains. 18. Kevin Cook I agree, the applications of exponents calculator are extensive and far-reaching. The examples provided in finance, science, and computer science illustrate the relevance and utility of exponentiation in different domains. 19. Ray Parker The educational value of exponents calculators is indeed noteworthy. It serves as a valuable aid for students and learners to grasp exponentiation concepts effectively. Well-articulated article with practical insights. 20. Jennifer Lee Absolutely, the clarity of explanations and practical examples make this article a valuable resource for those seeking to understand and apply the concepts of exponentiation effectively. 21. Imogen Griffiths The article eloquently explains the fundamental concepts of exponentiation and offers practical examples of its applications in different fields. The benefits of using an exponents calculator are persuasively presented, emphasizing its utility and efficiency. 22. James66 The comprehensive coverage of exponentiation from theoretical foundations to real-world applications, coupled with the advantages of using an exponents calculator, makes this article a valuable educational tool for learners and practitioners alike. 23. Imogen79 The clarity of explanations and the real-world applications of exponentiation presented in the article enhance its educational value. The benefits of using an exponents calculator underscore its indispensable role in different domains. 24. Brandon Phillips The practical relevance of exponentiation in finance, science, statistics, and other domains is effectively communicated. The benefits of using an exponents calculator are convincingly outlined, highlighting its instrumental role in various fields. 25. Poppy06 The clarity and coherence of the article make it a valuable resource for individuals seeking to enhance their understanding of exponentiation and its practical applications. Insightful and
{"url":"https://calculatoruniverse.com/exponents-calculator/","timestamp":"2024-11-01T19:25:55Z","content_type":"text/html","content_length":"268468","record_id":"<urn:uuid:841788a7-9285-4eb3-b609-fbf2f660d542>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00747.warc.gz"}
pickSizeTolerance(x, metric, tol = 1.5, maximize) pickSizeBest(x, metric, maximize) pickVars(y, size) a matrix or data frame with the performance metric of interest a character string with the name of the performance metric that should be used to choose the appropriate number of variables a logical; should the metric be maximized? a scalar to denote the acceptable difference in optimal performance (see Details below) a list of data frames with variables Overall and var an integer for the number of variables to retain
{"url":"https://www.rdocumentation.org/packages/caret/versions/4.42/topics/caretFuncs","timestamp":"2024-11-12T22:19:08Z","content_type":"text/html","content_length":"54997","record_id":"<urn:uuid:7078ae64-8456-4b32-8e60-77c8c96d3a73>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00848.warc.gz"}
Pythagoras in Music and Dance The purpose of the Introduction section is to provide an overview of the document and its relevance to the reader. It sets the stage for the main content by offering essential background information and context related to the topic. The Introduction outlines the main objectives of the document and introduces any important definitions or concepts that will be discussed further in the following Key points to address in the Introduction include providing the context of the topic, highlighting the main objectives of the document, and introducing any important definitions or concepts that will be discussed. It is crucial to grab the reader's attention and make them understand the significance of the document's content. In summary, the Introduction serves as a guide for the reader, providing necessary background information, context, and objectives of the document, as well as introducing important definitions and concepts that will be covered. - Brief overview of Pythagoras' contributions to music and dance Pythagoras, the ancient Greek philosopher and mathematician, is known for his significant contributions to various fields, including music and dance. His innovative ideas and theories have greatly influenced the development of these art forms. In the realm of music, Pythagoras is credited with discovering the mathematical relationships between musical intervals, known as the Pythagorean tuning system. This groundbreaking insight laid the foundation for the understanding of harmony and musical scales. Additionally, his work in the field of mathematics greatly impacted the study of rhythm and the organization of musical compositions. In the realm of dance, Pythagoras introduced the concept of the "harmony of the spheres," which posited that the movement of celestial bodies created a cosmic harmony that could be reflected in the art of dance. His ideas inspired a new perspective on the relationship between music, mathematics, and movement, shaping the way we understand and experience these art forms today. Overall, Pythagoras' contributions to music and dance have left a lasting legacy that continues to influence scholars, artists, and enthusiasts alike. Early Life and Education of Pythagoras Pythagoras, born around 570 BC in the island of Samos, Greece, was a mathematician and philosopher. He studied under the philosopher Thales and Anaximander before founding his own school in Croton, southern Italy. Pythagoras is often considered the first pure mathematician, combining his interests in mathematics, music, and philosophy. One of his most famous contributions to mathematics is the Pythagorean Theorem, which states that in a right-angled triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides. This theorem has had a lasting impact on the field of mathematics and is still widely studied and used today. Pythagoras also had a strong interest in music, believing that it could be described mathematically and that harmony in music was based on whole number ratios. He also had a belief in the healing properties of music and its ability to bring about a sense of well-being. Overall, Pythagoras made significant contributions to both mathematics and music, and his legacy continues to be influential in these fields. - Background information on Pythagoras' upbringing and education Pythagoras of Samos, a renowned ancient Greek mathematician, is best known for the Pythagorean theorem. Born around 570 B.C. on the island of Samos, Pythagoras was said to have had a rich and diverse upbringing, learning from both his father, a merchant, and possibly from scholars and philosophers at the island's renowned educational center. Seeking further education, he traveled to different places, studying mathematics, music, and philosophy. This diverse educational background would later influence his own teachings and theories on numbers, geometry, and the harmony of the cosmos. The Pythagorean Theory of Harmonic Ratios The Pythagorean Theory of Harmonic Ratios relates to music through the belief that musical intervals are based on whole number ratios. Pythagoras, a Greek mathematician and philosopher, believed that these harmonic ratios had healing properties and could be used to influence the emotions and behavior of individuals. This idea was foundational to the development of music theory and the understanding of how different musical intervals and scales affect human perception. The influence of Pythagoras and the Pythagoreans in understanding and controlling music was significant. They discovered the relationship between string length and pitch, laying the groundwork for the concept of the musical scale. Their understanding of harmonic ratios allowed for the creation of more complex and emotional music, shaping the development of Western music for centuries to come. In summary, the Pythagorean Theory of Harmonic Ratios has had a lasting impact on music, shaping both the belief that music is based on whole number ratios and the understanding of how music can have healing properties on the human mind and spirit. - Explanation of the Pythagorean theory of harmonic ratios The Pythagorean theory of harmonic ratios is based on the idea that mathematics is fundamental and at the heart of reality. Pythagoras, the ancient Greek philosopher and mathematician, believed that numbers and their relationships were the key to understanding the universe. His followers, known as the Pythagoreans, were proficient mathematicians who took mathematics very seriously, almost like a This belief in the fundamental nature of mathematics had a profound influence on Greek society. It led to a world of order and harmony, as the Greeks believed that the universe could be understood through mathematical principles. The Pythagoreans made significant contributions to music theory, as they believed that musical harmony could be explained through mathematical ratios. This idea of harmony and proportion in both mathematics and music influenced Greek architecture, art, and philosophy, creating a society that valued order and balance. In essence, the Pythagorean theory of harmonic ratios reflects the ancient Greek belief in the fundamental nature of mathematics and its ability to bring order and harmony to the world. - Understanding the mathematical relationships between musical intervals Understanding the mathematical relationships between musical intervals is essential to grasping the fundamental principles behind harmony and melody in music. By exploring the ratios and frequencies that underpin these relationships, we can gain a deeper insight into the ways in which different pitches and intervals interact with each other, and how they contribute to the overall sound and emotional impact of a piece of music. This understanding can also inform the creation and analysis of musical compositions, as well as inspire new ways of approaching musical expression and communication. By delving into the mathematical foundations of musical intervals, we can uncover the interconnectedness of music and mathematics, and appreciate the beauty and complexity of the relationship between the two disciplines. The Monochord Experiment The monochord experiment is a simple and effective way to demonstrate the relationship between the length of a vibrating string and the pitch of the sound it produces. The experiment consists of a wooden resonance box with a single string stretched across it, a movable bridge to change the length of the vibrating string, and a tuning peg to adjust the tension of the string. To set up the experiment, first, measure the length of the string using a ruler or measuring tape. Next, adjust the tension of the string by turning the tuning peg to make it taut. Once the string is properly tensioned, place the movable bridge at different points along the string to create different lengths. Plucking the string at each length will produce a different pitch, demonstrating the relationship between the length of the string and the pitch of the sound it produces. By following these steps, the monochord experiment effectively illustrates how the length of a vibrating string affects the pitch of the sound it produces, making it a valuable tool for demonstrating the physics of sound and music. - Description of the monochord experiment conducted by Pythagoras Pythagoras conducted the monochord experiment, where he used a single stringed instrument called a monochord to demonstrate the mathematical ratios of musical notes and the relationship between string length and pitch. By using the monochord, Pythagoras discovered that the length of the string determined the pitch produced when it was plucked. He found that when the string was divided in whole number ratios (such as 1:2, 2:3, 3:4), it produced harmonious sounds. This experiment led Pythagoras to the discovery of the mathematical basis of music and the development of the concept of intervals. He realized that the relationships between different notes and the string lengths could be expressed as simple numerical ratios. This laid the foundation for the understanding of music as a mathematical science. Pythagoras' understanding of music was centered around the importance of whole number ratios. He believed that these ratios were fundamental to the harmony of the universe and that they could be used to create beautiful and balanced musical compositions. His experiment with the monochord laid the groundwork for the mathematical principles of music and the concept of intervals, which continue to influence music theory and composition to this day. - How this experiment demonstrated the concept of harmonic ratios In this experiment, the concept of harmonic ratios was demonstrated through a series of simple but effective demonstrations. With the use of different harmonic ratios, the experiment aimed to illustrate how the vibration frequencies of different objects are related to each other. By exploring the concept of harmonic ratios, the experiment sought to show the relationships between the fundamental frequency and the overtones produced by vibrating objects. Through this exploration, the experiment highlighted the importance of harmonic ratios in understanding the patterns and relationships within the vibrations of various objects. The Doctrine of Ethos The Doctrine of Ethos in ancient Greek philosophy is a central concept that focuses on ethical principles and the development of moral character. This doctrine emphasizes the idea that one's character and moral conduct are essential for living a virtuous life. The significance of the Doctrine of Ethos lies in its emphasis on the importance of cultivating virtues such as honesty, integrity, and self-discipline. Prominent thinkers in ancient Greek philosophy, such as Pythagoras and Aristotle, placed great importance on the Doctrine of Ethos in their ethical teachings. Pythagoras believed that the key to living in harmony with oneself and others was through the development of one's character and moral virtues. Aristotle, on the other hand, emphasized the cultivation of virtues as essential for achieving eudaimonia, or human flourishing. The principles of the Doctrine of Ethos continue to influence ethical thought and character education today. It serves as a timeless reminder of the importance of developing a virtuous character and living a life guided by moral principles. - Exploration of the doctrine of ethos Ethos is a fundamental concept in ancient Greek rhetoric and has been influential in modern communication and argumentation. In ancient Greece, ethos referred to the character, credibility, and trustworthiness of the speaker. It was crucial in persuading an audience, as a speaker's ethical appeal could greatly influence the audience's perception of their message. This historical significance has carried over into modern communication, where the credibility and authenticity of the speaker play a vital role in persuasion. Ethos is particularly important in ethical persuasion, as it emphasizes the importance of honesty, integrity, and moral character in argumentation. Unlike pathos, which appeals to the audience's emotions, and logos, which relies on logic and reasoning, ethos focuses on the ethical standing of the speaker. By establishing trust and credibility, a speaker can effectively convey their message and rally support. In conclusion, ethos remains a significant and relevant concept in both ancient Greek rhetoric and modern communication. Its emphasis on the character and credibility of the speaker distinguishes it from pathos and logos and highlights the importance of ethical persuasion in effective argumentation. - Belief that specific musical modes could influence human emotions and character The belief that specific musical modes could influence human emotions and character has been a recurring theme throughout history. Dating back to ancient times, various cultures and societies have held the belief that certain musical modes or scales have the power to evoke particular emotions and influence the character of individuals. This concept has been a major influence in the development of music theory and composition, as well as in the practice of music therapy. The idea that music has the ability to elicit specific emotions and shape one's personality continues to be an area of interest in psychological and musical research, and has had a significant impact on the way we experience and understand music. Contributions to Musical Notation Pythagoras, the ancient Greek philosopher, made significant contributions to musical notation by quantifying the rules of music and laying a mathematical foundation for it. He is credited with discovering the mathematical ratios that underlie musical intervals, such as the perfect fourth and perfect fifth. Pythagoras also established the concept of harmonics and the mathematical relationships between musical tones. Pythagoras' contributions have had a profound influence on the system of music in the Western world. His mathematical understanding of music laid the groundwork for the development of musical notation and the establishment of musical scales and harmony. His insights into the mathematical basis of music have contributed to the intrinsic beauty of Western music, providing a framework for composers and musicians to create harmonious and aesthetically pleasing compositions. In conclusion, Pythagoras' quantification of the rules of music and his mathematical foundation have had a lasting impact on the Western musical tradition, shaping the way music is composed, performed, and understood. His contributions continue to be fundamental to the beauty and elegance of Western music. - Explanation of Pythagoras' system for representing musical notes through symbols Pythagoras' system for representing musical notes through symbols was based on the concept of whole number ratios and the relationship between string length and pitch. He discovered that the pitch of a musical note is directly related to the length of the vibrating string. When a string is divided into ratios of whole numbers, such as 1:2 or 2:3, the resulting pitches produce harmonious sounds when played together. Pythagoras' contributions to music theory were significant as he was one of the first to recognize the mathematical relationships that govern the production of musical tones. His work laid the foundation for our understanding of harmonics and the mathematical principles behind musical scales. This system of representing musical notes through symbols has been fundamental to the development of Western music theory and has influenced the structure and organization of musical compositions for centuries. Overall, Pythagoras' system for representing musical notes through symbols, based on whole number ratios and the relationship between string length and pitch, has had a profound impact on the understanding and practice of music theory and composition.
{"url":"https://pythagoras.au/articles/viewArticle/pythagoras-in-pop-culture-music-dance","timestamp":"2024-11-08T14:58:45Z","content_type":"text/html","content_length":"43713","record_id":"<urn:uuid:f1fd3655-984d-4e8e-bcd0-4c3ea7c3809e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00354.warc.gz"}
Calculator of sequence defined by recurrence - Solumaths The calculator of sequence makes it possible to calculate online the terms of the sequence, defined by recurrence and its first term, until the indicated index. The calculator is able to calculate online the terms of a sequence defined by recurrence between two of the indices of this sequence. It is also possible to calculate the elements of a numerical sequence when it is explicitly defined . Calculation of the terms of a sequence defined by recurrence The calculator is able to calculate the terms of a sequence defined by recurrence between two indices of this sequence. Thus, to obtain the elements of a sequence defined by `u_(n+1)=5*u_n` and `u_0=2`, between 1 and 4 , enter : recursive_sequence(`5x;2;4;x`) after calculation, the result is returned. Calculation of elements of an arithmetic sequence defined by recurrence The calculator is able to calculate the terms of an arithmetic sequence between two indices of this sequence , from the first term of the sequence and a recurrence relation. Thus, to obtain the terms of an arithmetic sequence defined by recurrence with the relation `u_(n+1)=5*u_n` et `u_0=3`, between 1 and 6 enter : recursive_sequence(`5*x;3;6;x`) after calculation, the result is returned. Calculation of the terms of a geometric sequence The calculator is able to calculate the terms of a geometric sequence between two indices of this sequence, from a relation of recurrence and the first term of the sequence. Thus, to obtain the terms of a geometric sequence defined by `u_(n+1)=3*u_n` and `u_0=2`, between 1 and 4 , enter : recursive_sequence(`3*x;1;4;x`) after calculation, the result is returned. Calculation of the sum of the terms of a sequence The calculator is able to calculate the sum of the terms of a sequence between two indices of this series, it can be used in particular to calculate the partial sums of some series. . Syntax : recursive_sequence(expression;first_term;upper bound;variable) Examples : This example shows how to calculate the first terms of a geometric sequence defined by recurrence. `u_(n+1)=4*u_n` and `u_0=-1` recursive_sequence(`4*x;-1;3;x`)
{"url":"https://www.solumaths.com/en/calculator/calculate/recursive_sequence","timestamp":"2024-11-02T09:17:38Z","content_type":"text/html","content_length":"58284","record_id":"<urn:uuid:bce0ac2a-8d70-43c3-9819-0d40cb3a8a83>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00215.warc.gz"}
Causal effects, Regression and Path Diagrams Regression and structural equation modeling are two techniques commonly employed in social scientific analyses. Both methods produce equations (often linear) relating a 'response' to a set of 'explanatory' variables. However, it is well-known that these methods often produce quite different equations. This immediately raises a number of questions, including: 1. How does the interpretation of a coefficient in a regression equation differ from the interpretation of a structural coefficient? Or equivalently, which substantive questions do these methods attempt to answer? 2. When will a given coefficient in a regression equation consistently estimate the coefficient in a postulated structural model? 3. When covariates are said to be &quot;controlled for&quot; by adding them to a regression equation, what is being asserted? 4. What assumptions or background knowledge are required for a coefficient in a regression equation to be interpretable as a 'structural' coefficient in a possibly unknown structural equation model? I will show that many of these questions may be answered by examining the path diagrams associated with structural equation models. The talk will describe some of my recent work on this topic, but will also draw heavily on the existing literature on causation which has developed in diverse fields over the last 50 years, including: Econometrics, Statistics, Computer Science, Epidemiology, Philosophy and Sociology.
{"url":"https://csss.uw.edu/seminars/causal-effects-regression-and-path-diagrams","timestamp":"2024-11-10T05:45:30Z","content_type":"text/html","content_length":"22312","record_id":"<urn:uuid:d276016e-45a7-4bef-99bc-884cf6c85e3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00693.warc.gz"}
[tex4ht] At tweak for align and alignat. Beuthe, Thomas beuthet at aecl.ca Tue Sep 7 17:05:17 CEST 2010 As mentioned earlier, when using htlatex with the align and alignat environment, by default TeX4ht will mix html and picmath in this environment. Putting the whole thing into picmath ends up including the equation number in to the picture, which is not desirable. Using the \left. \right. method mentioned in an earlier email will work to put the individual aligned sections into picmath, but the vertical alignment between columns may still be wrong, as illustrated by the following example: Under certain circumstances, the default behaviour of TeX4ht can produce somewhat less than desirable results. In the following ``mixed mode'' (pic + html) math, note the improper placement of = and the 1/4 superscript. Note also the improper placement of the equation numbers (too far to the right. A & = B + \left( 1 - C \right) \left( \frac{D}{E} \right)^{\frac{1}{4}} \\ F & = \frac{1 + 2 \left( \dfrac{G_g}{H_h} \right) }{1 - I} Adding a ``do nothing'' left. and right. pair as well as a blank notag line helps position of equation number and vertical placement of equations. It's better than before, but still not \left. A \right. & \left. = B + \left( 1 - C \right) \left( \frac{D}{E} \right)^{\frac{1}{4}} \right. \\ & \notag \\[-\baselineskip] \left. F \right. & \left. = \frac{1 + 2 \left( \dfrac{G_g}{H_h} \right) }{1 - I} \right. At least this method gives a little better control without introducing and special coding that will affect the result if the document is rendered in LaTeX. This e-mail, and any attachments, may contain information that is confidential, subject to copyright, or exempt from disclosure. Any unauthorized review, disclosure, retransmission, dissemination or other use of or reliance on this information may be unlawful and is strictly prohibited. Le présent courriel, et toute pièce jointe, peut contenir de l'information qui est confidentielle, régie par les droits d'auteur, ou interdite de divulgation. Tout examen, divulgation, retransmission, diffusion ou autres utilisations non autorisées de l'information ou dépendance non autorisée envers celle-ci peut être illégale et est strictement interdite. More information about the tex4ht mailing list
{"url":"https://tug.org/pipermail/tex4ht/2010q3/000185.html","timestamp":"2024-11-13T21:48:05Z","content_type":"text/html","content_length":"5067","record_id":"<urn:uuid:ac6a402c-4ce7-4dca-bd42-f9d064bcc94a>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00130.warc.gz"}
Wrong rounded amount in quote lines Hi Everyone I am stuck with a such a stupid little problem I am suprised I haven’t found a post on this matter… The idea is simple : I have products sold at 13.00 €, VAT included. When I create quotes for any quantity of this product, I just want my total, VAT included, to be multiple of 13. For instance, 6 quantities of this product should have a total amount of 78.00 €. I populate the product price with the amount without VAT 10.833 € to be exact. By default, SuiteCRM displays 2 digits for decimals, inside the product panel it displays 10.83. I then create a quote and add two lines: 1 quantity : 10.83 + 2.17 VAT = 13.00 - OK 12 quantities : 129.96 + 25.99 VAT = 155.95 - NOK -> it should be 156 ! I tried to change my profile settings to 3 digits curency: total is now 129,996 instead of 130 because it is not rounded. If I switch back to 2 digits, rounding is correct, but we can begin this thread again Honestly I don’t know where to look, anyone to point me in the good direction ? SuiteCRM v7.11.18 You forgot to say your SuiteCRM version. I remember seeing a couple of issues on Github that could be what you’re describing. Possibly already fixed. Sorry and thx pgr, it’s 7.11.18 I’ll update the main post Indeed in Github I found this very old issue : https://github.com/salesagility/SuiteCRM/issues/1776 But it doesn’t seems fixed, it links to a trello post that links back to a 404… There is this newer one which will point you back into the Forums here, with a suggested workaround. If that really works, somebody needs to create a PR on GitHub otherwise it will never be picked up by the developers… Wahou many thanks pgr ! I’ll test it right away and keep you posted in this thread. 2017 indeed… Fix doesn’t work in my case : • in 3 decimals mode, I’m off 0,002 € total (if this amount was rounded to 2 decimal it would have been OK) • in 2 decimals mode, i’m off 0,02 € Can you get a developer to work on this? The fix already points you to the correct spot in the code, it shouldn’t be too hard. Yep, I contacted someone to fix this, I will keep this post updated with the solution. If possible I will create a PR as well. 1 Like Please have a look on the below to understand why the solution didn’t worked for you. Scenario 1: Qty: 1 Price: 10.83 VAT: 20% Vat Amount Formula = (PriceVat)/100 i.e. (10.8320)/100 = 2.166, third digit is 6 so rounded to 2.17 Scenario 1: Qty: 12 Price: 129.96 VAT: 20% Vat Amount Formula = (PriceVat)/100 i.e. (129.9620)/100 = 25.992, third digit is 2 so rounding remains same to 25.99 I may be wrong but to me calculations are okay. Hello @nabeeluos and thanks for your answer. Actually you can think the other way around: unit price + VAT = 13€ => I want my invoice to reflect this total price, ie 6 quantities => total price = 6 x 13 = 78 € My problem is to adjust the unit price to reflect this, because of the rounding mechanism in SuiteCRM. Does it make more sense this way? Can you share Screenshot from you Line Items and Annotate on where it is wrong for You?
{"url":"https://community.suitecrm.com/t/wrong-rounded-amount-in-quote-lines/78150","timestamp":"2024-11-07T01:20:28Z","content_type":"text/html","content_length":"49625","record_id":"<urn:uuid:ae029a2a-74c2-473c-bcc6-06bab14f661c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00263.warc.gz"}
Matrix inverses – The Dan MacKinlay stable of variably-well-consider’d enterprises Matrix inverses August 5, 2014 — September 29, 2023 feature construction functional analysis high d linear algebra signal processing sparser than thou Assumed audience: People with undergrad linear algebra Matrix inverses. Sometimes we need them. Worse, sometimes we need generalizations of them, inverses of singular (a.k.a. non-invertible) matrices. This page is for notes on that theme. But first, please fill out this form. ✅ By reading further I acknowledge that a matrix inverse is actually what I want, even though the more common case would be that I would like the solution of a linear system without forming the OK, done? Let us continue. 1 All possible generalizations of matrix inverses Many (Rao and Mitra 1973; Ben-Israel and Greville 2003). Nick Higham’s introduction generalized inverses is an easier start. A beautiful, compact summary is in Searle (2014): …there are (with one exception) many matrices \(\mathrm{G}\) satisfying \[ A G A=\mathrm{A} \text {, } \] which is condition (i). Each matrix \(\mathrm{G}\) satisfying AGA = A is called a generalized inverse of A, and if it also satisfies \(\mathrm{GAG}=\mathrm{G}\) it is a reflexive generalized inverse. The exception is when \(\mathrm{A}\) is nonsingular: there is then only one \ (\mathrm{G}\); namely, \(\mathrm{G}=\mathrm{A}^{-1}\). 8.2 Arbitrariness That there are many matrices \(\mathrm{G}\) can be illustrated by showing ways in which from one \(\mathrm{G}\) others can be obtained. Thus, if A is partitioned as \[ \mathrm{A}=\left[\begin{array}{ll} \mathrm{A}_{11} & \mathrm{A}_{12} \\ \mathrm{A}_{21} & \mathrm{A}_{22} \end{array}\right] \] where \(\mathrm{A}_ {11}\) is nonsingular with the same rank as \(\mathrm{A}\), then \[ \mathrm{G}=\left[\begin{array}{lllll} \mathrm{A}_{11}^{-1}-\mathrm{U A}_{21} & \mathrm{A}_{11}^{-1}-\mathrm{A}_{11}^{-1} & \ mathrm{A}_{12} \mathrm{V}-\mathrm{A}_{11}^{-1} \mathrm{A}_{12} \mathrm{W A}_{21} \mathrm{A}_{11}^{-1} & \mathrm{U} \\ \mathrm{V} & & \mathrm{W} \end{array}\right] \] is a generalized inverse of \ (\mathrm{A}\) for any values of \(\mathrm{U}, \mathrm{V}\) and \(\mathrm{W}\). This can be used to show that a generalized inverse of a symmetric matrix is not necessarily symmetric; and that of a singular matrix is not necessarily singular. A simpler illustration of arbitrariness is that if \(\mathrm{G}\) is a generalized inverse of \(\mathrm{A}\) then so is \[ \mathrm{G}^*=\mathrm{GAG}+(\mathrm{I}-\mathrm{GA}) \mathrm{S}+\mathrm{T} (\mathrm{I}-\mathrm{AG}), \] for any values of \(\mathrm{S}\) and \(\mathrm{T}\). 2 Moore-Penrose pseudo-inverse A classic. The “default” generalized inverse. See Nick Higham’s What Is the Pseudoinverse of a Matrix? Let us do the conventional thing and mention which properties of the pseudo-inverse are shared by the inverse: The pseudo-inverse (or Moore-Penrose inverse) of a matrix \(\mathrm{A}\) is the matrix \(\mathrm{A}^{+}\) that fulfils 1. \(\mathrm{A A}^{+} \mathrm{A}=\mathrm{A}\) 2. \(\mathrm{A}^{+} \mathrm{A} \mathrm{A}^{+}=\mathrm{A}^{+}\) 3. \(\mathrm{A A}^{+}\) symmetric 4. \(\mathrm{A}^{+} \mathrm{A}\) symmetric Want more? As always, we copy-paste the omnibus results of Petersen and Pedersen (2012): Assume \(\mathrm{A}^{+}\) to be the pseudo-inverse of \(\mathrm{A}\), then \[ \begin{aligned} \left(\mathrm{A}^{+}\right)^{+} & =\mathrm{A} \\ \left(\mathrm{A}^\top\right)^{+} & =\left(\mathrm{A}^{+} \right)^\top \\ \left(\mathrm{A}^\dagger\right)^{+} & =\left(\mathrm{A}^{+}\right)^\dagger \\ \left(\mathrm{A}^{+} \mathrm{A}\right) \mathrm{A}^\dagger & =\mathrm{A}^\dagger \\ \left(\mathrm{A}^{+} \ mathrm{A}\right) \mathrm{A}^\top & \neq \mathrm{A}^\top \\ (c \mathrm{A})^{+} & =(1 / c) \mathrm{A}^{+} \\ \mathrm{A}^{+} & =\left(\mathrm{A}^\top \mathrm{A}\right)^{+} \mathrm{A}^\top \\ \mathrm{A}^ {+} & =\mathrm{A}^\top\left(\mathrm{A} \mathrm{A}^\top\right)^{+} \\ \left(\mathrm{A}^\top \mathrm{A}\right)^{+} & =\mathrm{A}^{+}\left(\mathrm{A}^\top\right)^{+} \\ \left(\mathrm{A} \mathrm{A}^\top\ right)^{+} & =\left(\mathrm{A}^\top\right)^{+} \mathrm{A}^{+} \\ \mathrm{A}^{+} & =\left(\mathrm{A}^\dagger \mathrm{A}\right)^{+} \mathrm{A}^\dagger \\ \mathrm{A}^{+} & =\mathrm{A}^\dagger\left(\ mathrm{A} \mathrm{A}^\dagger\right)^{+} \\ \left(\mathrm{A}^\dagger \mathrm{A}\right)^{+} & =\mathrm{A}^{+}\left(\mathrm{A}^\dagger\right)^{+} \\ \left(\mathrm{A} \mathrm{A}^\dagger\right)^{+} & =\ left(\mathrm{A}^\dagger\right)^{+} \mathrm{A}^{+} \\ (\mathrm{A B})^{+} & =\left(\mathrm{A}^{+} \mathrm{A B}\right)^{+}\left(\mathrm{A B B} \mathrm{B}^{+}\right)^{+} \\ f\left(\mathrm{A}^\dagger \ mathrm{A}\right)-f(0) \mathrm{I} & =\mathrm{A}^{+}\left[f\left(\mathrm{A} \mathrm{A}^\dagger\right)-f(0) \mathrm{I}\right] \mathrm{A} \\ f\left(\mathrm{A} \mathrm{A}^\dagger\right)-f(0) \mathrm{I} & =\mathrm{A}^\dagger\left[\left(\mathrm{A}^\dagger \mathrm{A}\right)-f(0) \mathrm{I}\right] \mathrm{A}^{+} \end{aligned} \] I find definition in terms of these properties totally confusing. Perhaps a better way of thinking about pseudo-inverses is via their action upon vectors. TBC Consider the Moore-Penrose of \(\mathrm{K}\) which we write \(\mathrm{K}^+\). The famous way of constructing it is by taking a SVD of \(\mathrm{K}=\mathrm{U}_{\mathrm{K}}\mathrm{S}_{\mathrm{K}}\ mathrm{V}_{\mathrm{K}}^{\dagger}\) where \(\mathrm{U}_{\mathrm{K}}\) and \(\mathrm{V}_{\mathrm{K}}\) are unitary and \(\mathrm{S}_{\mathrm{K}}\) is diagonal. Then we define \(\mathrm{S}_{\mathrm{K}}^ +\) to be the pseudo-inverse of the diagonal matrix of singular values, which we construct by setting all non-zero entries to their own reciprocal, but otherwise leave at 0. We have sneakily decided that pseudo-inverse of a diagonal matrix is easy; we just take the reciprocal of the non-zero entries. This turns out to do the right thing, if you check it, and it does not even sound crazy, but also it is not, to me at least, totally obvious. Next, the pseudo-inverse of the whole thing is \(\mathrm{K}^+=\mathrm{V}_{\mathrm{K}}\mathrm{S}_{\mathrm{K}}^+\mathrm{U}_{\mathrm{K}}^{\dagger}\), we claim. If we check the object we create by this procedure, we discover that it satisfies the above properties. (Homework problem). Meta note: In general, proving things about pseudo-inverses by the constructive solution given by the SVD is much more compact than via the algebraic properties, as well as more intuitive, at least for me. There is a cute special-case result for low rank matrices. 4 Drazin inverse The Drazin inverse is introduced in a mediocre Wikipedia article: Let \(\mathrm{A}\) be square matrix. The index of \(\mathrm{A}\) is the least nonnegative integer \(k\) such that \(\operatorname{rank}\left(\mathrm{A}^{k+1}\right)=\operatorname{rank}\left(\ mathrm{A}^k\right)\). The Drazin inverse of \(\mathrm{A}\) is the unique matrix \(\mathrm{A}^{\mathrm{D}}\) that satisfies \[ \mathrm{A}^{k+1} \mathrm{A}^{\mathrm{D}}=\mathrm{A}^k, \quad \mathrm {A}^{\mathrm{D}} \mathrm{A} \mathrm{A}^{\mathrm{D}}=\mathrm{A}^{\mathrm{D}}, \quad \mathrm{A} \mathrm{A}^{\mathrm{D}}=\mathrm{A}^{\mathrm{D}} \mathrm{A} . \] It’s not a generalized inverse in the classical sense, since \(\mathrm{A} \mathrm{A}^{\mathrm{D}} \mathrm{A} \neq \mathrm{A}\) in general. □ If \(\mathrm{A}\) is invertible with inverse \(\mathrm{A}^{-1}\), then \(\mathrm{A}^{\mathrm{D}}=\mathrm{A}^{-1}\). □ If \(\mathrm{A}\) is a block diagonal matrix \[ \mathrm{A}=\left[\begin{array}{cc} \mathrm{B} & 0 \\ 0 & \mathrm{N} \end{array}\right] \] where \(\mathrm{B}\) is invertible with inverse \(\mathrm{B}^{-1}\) and \(\mathrm{N}\) is a nilpotent matrix, then \[ \mathrm{A}^D=\left[\begin{array}{cc} \mathrm{B}^{-1} & 0 \\ 0 & 0 \end{array}\right] \] □ Drazin inversion is invariant under conjugation. If \(\mathrm{A}^{\mathrm{D}}\) is the Drazin inverse of \(\mathrm{A}\), then \(P \mathrm{A}^{\mathrm{D}} P^{-1}\) is the Drazin inverse of \(P \mathrm{A} P^{-1}\). □ The Drazin inverse of a matrix of index 0 or 1 is called the group inverse or \(\{1,2,5\}\)-inverse and denoted \(\mathrm{A}^{\#}\). The group inverse can be defined, equivalently, by the properties \(\mathrm{A} \mathrm{A}^{\#} \mathrm{A}=\mathrm{A}, \mathrm{A}^{\#} \mathrm{A} \mathrm{A}^{\#}=\mathrm{A}^{\#}\), and \(\mathrm{A} \mathrm{A}^{\#}=\mathrm{A}^{\#} \mathrm{A}\). □ A projection matrix \(P\), defined as a matrix such that \(P^2=P\), has index 1 (or 0) and has Drazin inverse \(P^{\mathrm{D}}=P\). □ If \(\mathrm{\mathrm{A}}\) is a nilpotent matrix (for example a shift matrix), then \(\mathrm{A}^{\mathrm{D}}=0\).
{"url":"https://danmackinlay.name/notebook/matrix_inverse","timestamp":"2024-11-10T21:32:24Z","content_type":"application/xhtml+xml","content_length":"51592","record_id":"<urn:uuid:88dcfb4f-0d0c-49e4-a006-50c93567de34>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00506.warc.gz"}
Samacheer Kalvi 12th Physics Solutions Chapter 1 Electrostatics Students can Download Physics Chapter 1 Electrostatics Questions and Answers, Notes Pdf, Samacheer Kalvi 12th Physics Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations. Tamilnadu Samacheer Kalvi 12th Physics Solutions Chapter 1 Electrostatics Samacheer Kalvi 12th Physics Electrostatics Textual Evaluation Solved Samacheer Kalvi 12th Physics Electrostatics Multiple Choice Questions Question 1. Two identical point charges of magnitude -q are fixed as shown in the figure below. A third charge +q is placed midway between the two charges at the point P. Suppose this charge +q is displaced a small distance from the point P in the directions indicated by the arrows, in which direction(s) will +q be stable with respect to the displacement? (a) A[1] and A[2] (b) B[1] and B[2] (c) both directions (d) No stable (b) B[1] and B[2] Question 2. Which charge configuration produces a uniform electric field? (a) point charge (b) the infinite uniform line charge (c) uniformly charged infinite plane (d) uniformly charged spherical shell (c) uniformly charged infinite plane Question 3. What is the ratio of the charges \(\left|\frac{q_{1}}{q_{2}}\right|\) for the following electric field line pattern? (a) \(\frac { 1 }{ 5 }\) (b) \(\frac { 25 }{ 11 }\) (c) 5 (d) \(\frac { 12 }{ 25}\) (d) \(\frac { 12 }{ 25}\) Question 4. An electric dipole is placed at an alignment angle of 30° with an electric field of 2 x 10^5 N C^-1. It experiences a torque equal to 8 N m. The charge on the dipole if the dipole length is 1 cm is- (a) 4 mC (b) 8 mC (c) 5 mC (d) 1 mC (b) 8 mC Question 5. Four Gaussian surfaces are given below with charges inside each Gaussian surface. Rank the electric flux through each Gaussian surface in increasing order- (a) D < C < B < A (b) A < B = C < D (c) C < A = B < D (d)D > C > B > A (a) D < C < B < A Question 6. The total electric flux for the following closed surface which is kept inside water- (a) \(\frac { 80q }{{ ε }_{0}}\) (b) \(\frac { q }{{ 40ε }_{0}}\) (c) \(\frac { q }{{ 80ε }_{0}}\) (d) \(\frac { q }{{ 40ε }_{0}}\) (b) \(\frac { q }{{ 40ε }_{0}}\) Question 7. Two identical conducting balls having positive charges q[1] and q[2] are separated by a center to center distance r. If they are made to touch each other and then separated to the same distance, the force between them will be- (NSEP 04-05) (a) less than before (b) same as before (c) more than before (d) zero (c) more than before Question 8. Rank the electrostatic potential energies for the given system of charges in increasing order (a) 1 = 4 < 2 < 3 (b) 2 = 4 < 3 < 1 (c) 2 = 3 < 1 < 4 (d) 3 < 1 < 2 < 4 (a) 1 = 4 < 2 < 3 Question 9. An electric field \(\vec { E } \) = 10x\(\hat{i} \) exists in a certain region of space. Then the potential difference V = V[0] – V[A], Where V[0] is the potential at the origin and V[A] is the potential at x = 2 m is- (a) 10 J (b) -20 J (c) + 20 J (d) – 10 J (a) 10 J Question 10. A thin conducting spherical shell of radius R has a charge Q which is uniformly distributed on its surface. The correct plot for electrostatic potential due to this spherical shell is- Question 11. Two points A and B are maintained at a potential of 7 V and -4 V respectively. The work done in moving 50 electrons from A to B is- (a) 8.80 x 10^-17 J (b) -8.80 x 10^-17 J (c) 4.40 x 10^-17 J (d) 5.80 x 10^-17 J (a) 8.80 x 10^-17 J Question 12. If voltage applied on a capacitor is increased from V to 2V, choose the correct conclusion. (a) Q remains the same, C is doubled (b) Q is doubled, C doubled (c) C remains same, Q doubled (d) Both Q and C remain same (c) C remains same, Q doubled Question 13. A parallel plate capacitor stores a charge Q at a voltage V. Suppose the area of the parallel plate capacitor and the distance between the plates are each doubled then which is the quantity that will (a) Capacitance (b) Charge (c) Voltage (d) Energy density (d) Energy density Question 14. Three capacitors are connected in a triangle as shown in the figure. The equivalent capacitance between points A and C is (a) 1 μF (b) 2 μF (c) 3 μF (d) \(\frac { 1 }{ 4 }\) μF (b) 2 μF Question 15. Two metallic spheres of radii 1 cm and 3 cm are given charges of -1 x 10^-2 C and 5 x 10^-2 C respectively. If these are connected by a conducting wire, the final charge on the bigger sphere is (AIIPMT 2012) (a) 3 x 10^-2 C (b) 4 x 10^-2 C (c) 1 x 10^-2 C (d) 2 x 10^-2 C (a) 3 x 10^-2 C Samacheer Kalvi 12th Physics Electrostatics Short Answer Questions Question 1. What is meant by quantisation of charges? The charge q on any object is equal to an integral multiple of the fundamental unit of charge ‘e’. q = ne Where ‘n’ is an integer e e = charge of an electron =1.6 × 10^-19 C. Question 2. Write down Coulomb’s law in vector form and mention what each term represents. The force on a charge q[1] exerted by a point charge q[1] is given by \(\vec { F } \)[12] = \(\frac { 1 }{{ 4πε }{0}}\) \(\frac {{ q }_{1}{ q }_{2}}{{ r }^{2}}\) \(\hat{r} \)[21] Here \(\hat{r} \)[21] is the unit vector from charge q[1] to q[1]. But \(\hat{r} \)[21] = –\(\hat{r} \)[12], Therefore, the electrostatic force obeys Newton’s third law. Question 3. What are the differences between the Coulomb force and the gravitational force? ┃Coulomb force │Gravitational force ┃ ┃1. It can be attractive or repulsive depends on the nature of the charge │1. It is always attractive ┃ ┃2. The value of Proportionality constant K = 9 x 10^9 Nm^2 C^-2 │2. The value of Gravitational constant G = 6.626 x 10^11 Nm^2 Kg^-2┃ ┃3. It depends on the medium which it exists. │3. It is independent of the medium which it exists. ┃ Question 4. Write a short note on the superposition principle. According to this superposition principle, the total force acting on a given charge is equal to the vector sum of forces exerted on it by all the other charges. \({ \vec { F } }_{ 1 }^{ tot }\) = \(\vec { F } \)[12] + \(\vec { F } \)[13] + \(\vec { F } \)[14] + \(\vec { F } \)[1n] Question 5. Define ‘Electric field’. It is defined as the force experience by a unit positive charge, kept at that point It is a Vector quantity. Unit: NC^-1 Question 6. What is mean by ‘Electric field lines’? Electric field vectors are visualized by the concept of electric field lines. They form a set of continuous lines which are the visual representation of the electric field in some region of space. Question 7. The electric field lines never intersect. Justify. If two lines cross at a point, then there will be two different electric field vectors at the same point which is not possible. hence, they do not intersect. Question 8. Define ‘Electric dipole’ Two equal and opposite charges separated by a small distance constitute an electric dipole. Question 9. What is the general definition of electric dipole moment? The electric dipole moment for a collection of ‘n point charges is given by \(\overrightarrow{\mathrm{P}}=\sum_{i=1}^{\mathrm{n}} \mathrm{q}_{\mathrm{i}} \mathrm{r}_{\mathrm{i}}\) where r̂[i] is the position ofvector of change q[i] from origin. Question 10. Define “electrostatic potential”. The electric potential at a point P is equal to the work done by an external force to bring a unit positive charge with constant velocity from infinity to the point P in the region of the external electric field \(\vec { E } \). Question 11. What is an equipotential surface? An equipotential surface is a surface on which all the points are at the same electric potential. Question 12. What are the properties of an equipotential surface? Properties of equipotential surfaces (i) The work is done to move a charge q between any two points A and B, W = q (V[B] – V[A]). If the points A and B lie on the same equipotential surface, work done is zero because of V[A] = V[B]. (ii) The electric field is normal to an equipotential surface. If it is not normal, then there is a component of the field parallel to the surface. Then work must be done to move a charge between two points on the same surface. This is a contradiction. Therefore the electric field must always be normal to the equipotential surface. Question 13. Give the relation between electric field and electric potential. The electric field is the negative gradient of the electric potential. \(\mathrm{E}=-\frac{\mathrm{d} \mathrm{v}}{\mathrm{d} \mathrm{x}}\) Question 14. Define electrostatic potential energy? The potential energy of a system of point charges may be defined as the amount of work done in assembling the charges at their locations by bringing them in from infinity. Question 15. Define ‘electric flux’. 1. The number of electric field lines crossing a given area kept normal to the electric field lines is called electric flux. 2. Scalar quantity. 3. Unit: Nm^2C^-1 Question 16. What is meant by electrostatic energy density? The energy stored per unit volume of space is defined as energy density u[E] = \(\frac { U }{ Volume }\) From equation u[E] = \(\frac { 1 }{ 2 }\) \(\frac{\left(\varepsilon_{0} A\right)}{d}\) (Ed)^2 = \(\frac { 1 }{ 2 }\) ε[0] (Ad) E^2 or u[E] = \(\frac { 1 }{ 2 }\) ε[0]E^2 Question 17. Write a short note on ‘electrostatic shielding’. 1. The process of isolating a certain region of space from the external field. It is based on the fact that the electric field inside a conductor is zero. 2. Whatever the charges at the surfaces and whatever the electrical disturbance outside, the electric field inside the cavity are zero. Question 18. What is Polarisation? Polarisation \(\vec { P } \) is defined as the total dipole moment per unit volume of the dielectric. \(\vec { P } \) = X[e] \(\vec { P } \)[ext] Question 19. What is dielectric strength? 1. The maximum electric field the dielectric can withstand before it breakdowns is called dielectric strength. 2. The dielectric strength of air 3 × 10^6 Vm^-1 Question 20. Define ‘capacitance’. Give its unit. The capacitance C of a capacitor is defined as the ratio of the magnitude of charge on either of the conductor plates to the potential difference existing between the conductors. C = \(\frac { q }{ V }\) or Q ∝ V. The SI unit of capacitance is coulomb per volt or farad (F). Question 21. What is corona discharge? The total charge of the conductor near the sharp edge gets reduces due to ionization of surrounding air. It is called corona discharge. Samacheer Kalvi 12th Physics Electrostatics Long Answer Questions Question 1. Discuss the basic properties of electric charges. The electric charge is an inherent property of particles. Conservation of electric charge: 1. Total electric charge in the universe is constant. 2. Charge can be neither created nor destroyed. 3. In any physical process, the net change in charge will always be zero. 4. The charge ‘q’ of any object is equal to an integral multiple of the fundamental unit of charge ‘e’. q = ne 5. n is any integer 6. e is charge of an electron = 1.6 × 10^-19C. Question 2. Explain in detail Coulomb’s law and its various aspects. Consider two point charges q[1] and q[2] at rest in vacuum, and separated by a distance of r. According to Coulomb, the force on the point charge q[2] exerted by another point charge q[1] is \(\vec { F } \) [21] = K\(\frac{q_{1} q_{2}}{r_{2}}\) \(\hat{r} \)[12], where [/latex] \(\hat{r} \)[12] is the unit vector directed from charge q[1] to charge q[2] and k is the proportionality constant. Important aspects of Coulomb’s law: (i) Coulomb’s law states that the electrostatic force is directly proportional to the product of the magnitude of the two point charges and is inversely proportional to the square of the distance between the two point charges. (ii) The force on the charge q[2]exerted by the charge q[1] always lies along the line joining the two charges. \(\hat{r} \)[21]is the unit vector pointing from charge q[1] to q[2] Likewise, the force on the charge q[1] exerted by q[2] is along – (i.e., in the direction opposite to \(\hat{r} \)[21]). (iii) In SI units, k = \(\frac { 1 }{{ 4πε }_{0}}\) and its value is 9 x 10^9 Nm^2C^-2. Here e0 is the permittivity of free space or vacuum and the value of ε[0] = \(\frac { 1 }{{ 4πε }_{0}}\) = 8.85 x 10^-12 C^2 N^-1 m^-2 (iv) The magnitude of the electrostatic force between two charges each of one coulomb and separated by a distance of 1 m is calculated as follows: [F] = \(\frac{9 \times 10^{9} \times 1 \times 1}{1^{2}}\) = 9 x 10^9N. This is a huge quantity, almost equivalent to the weight of one million ton. We never come across 1 coulomb of charge in practice. Most of the electrical phenomena in day-to-day life involve electrical charges of the order of pC (micro coulomb) or nC (nano coulomb). (v) In SI units, Coulomb’s law in vacuum takes the form \(\vec { F } \) [21] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{q_{1} q_{2}}{r_{2}}\) \(\hat{r} \)[12]. sin Since ε > ε[0], the force between two point charges in a medium other than vacuum is always less than that in vacuum. We define the relative permittivity for a given medium as ε = \(\frac { ε }{{ ε }_{0}}\) .For vacuum or air, ε[r] = 1 and for all other media ε[r] > 1 (vi) Coulomb’s law has same structure as Newton’s law of gravitation. Both are inversely proportional to the square of the distance between the particles. The electrostatic force is directly proportional to the product of the magnitude of two point charges and gravitational force is directly proportional to the product of two masses. (vii) The force on a charge q[1] exerted by a point charge q[2] is given by \(\vec { F } \)[12] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{q_{1} q_{2}}{r_{2}}\) \(\hat{r} \)[12] Here \(\hat{r} \)[21] is sthe unit vector from charge q[2] to q[1]. Therefore, the electrostatic force obeys Newton’s third law. (viii) The expression for Coulomb force is true only for point charges. But the point charge is an ideal concept. However we can apply Coulomb’s law for two charged objects whose sizes are very much smaller than the distance between them. In fact, Coulomb discovered his law by considering the charged spheres in the torsion balance as point charges. The distance between the two charged spheres is much greater than the radii of the spheres. Question 3. Define ‘Electric field’ and discuss its various aspects. The electric field at the point P at a distance r from the point charge q is the force experienced by a unit charge and is given by Here \(\hat{r} \) is the unit vector pointing from q to the point of interest P. The electric field is a vector quantity and its SI unit is Newton per Coulomb (NC^-1). Important aspects of the Electric field: (i) If the charge q is positive then the electric field points away from the source charge and if q is negative, the electric field points towards the source charge q. (ii) If the electric field at a point P is \(\vec { E } \), then the force experienced by the test charge qo placed at the point P is \(\vec { F } \) = q[0] \(\vec { E } \). This is Coulomb’s law in terms of the electric field. This is shown below Figure. (iii) The equation implies that the electric field is independent of the test charge q0 and it depends only on the source charge q. (iv) Since the electric field is a vector quantity, at every point in space, this field has a unique direction and magnitude as shown in Figures (a) and (b). From the equation, we can infer that as distance increases, the electric field decreases in magnitude. Note that in Figures (a) and (b) the length of the electric field vector is shown for three different points. The strength or magnitude of the electric field at point P is stronger than at the point Q and R because the point P is closer to the source charge. (v) In the definition of the electric field, it is assumed that the test charge q[0] is taken sufficiently small, so that bringing this test charge will not move the source charge. In other words, the test charge is made sufficiently small such that it will not modify the electric field of the source charge. (vi) The expression is valid only for point charges. For continuous and finite-size charge distributions, integration techniques must be used. However, this expression can be used as an approximation for a finite-sized charge if the test point is very far away from the finite-sized source charge. (vii) There are two kinds of electric field: uniform (constant) electric field and non-uniform electric field. A Uniform electric field will have the same direction and constant magnitude at all points in space. The non-uniform electric field will have different directions or different magnitudes or both at different points in space. The electric field created by a point charge is basically a non-uniform electric field. This non-uniformity arises, both in direction and magnitude, with the direction being radially outward (or inward), and the magnitude changes as distance increases. Question 4. How do we determine the electric field due to a continuous charge distribution? Explain. Electric field due to continuous charge distribution. The electric charge is quantized microscopically. The expressions of Coulomb’s Law, superposition principle force and electric field are applicable to only point charges. While dealing with the electric field due to a charged sphere or a charged wire etc., it is very difficult to look at individual charges in these charged bodies. Therefore, it is assumed that charge is distributed continuously on the charged bodies, and the discrete nature of charges is not considered here. The electric field due to such continuous charge distributions is found by invoking the method of Consider the following charged object of irregular shape. The entire charged object is divided into a large number of charge elements ∆q[1], ∆q[2], ∆q[3] ……..∆q[n],…… and each charge element Δq is taken as a point charge. The electric field at a point P due to a charged object is approximately given by the sum of the fields at P due to all such charge elements. Here ∆ qi is the i^th charge element, r[ip] is the distance of the point P frome the i^th charge element, r[ip] is the unit vector from ith charge element to the pont P. However the equation is only an approximation. To incorporate the continuous distribution of charge, we take the limit ∆q → 0(= dq). In this limit, the summation in the equation becomes an integration and takes the following form Here r is the distance of the point P from the infinitesimal charge dq and \(\hat{r} \) is the unit vector from dq to point P. Even though the electric field for a continuous charge distribution is difficult to evaluate, the force experienced by some test charge q in this electric field is still given by \(\vec { F } \) = q\(\vec { E } \). (a) Line charge distribution: If the charge Q is uniformly distributed along the wire of length L, then linear charge density (charge per unit length) is λ = \(\frac { Q }{ L }\). Its unit is colomb per meter (Cm^-1). The charge present in the infinitestimal length dl is dq = λdl. The electric field due to the line of total charge Q is given by (b) Surface charge distribution: If the charge Q is uniformly distributed on a surface of area A, then surface charge density (charge per unit area) is σ = \(\frac { Q }{ A }\). Its unit is coulomb per square meter (C m^-2). The charge present in the infinitesimal area dA is dq = σdA. The electric field due to a of total charge Q is given by (c) Volume charge distribution: If the charge Q is uniformly distributed in a volume V, then volume charge density (charge per unit volume) is given by ρ = \(\frac { Q }{ V }\). Its unit is coulomb per cubic meter (Cm^-3) The charge present in the infinitesimal volume element dV is dq = ρdV. The electric field due to a volume of total charge Q is given by Question 5. Calculate the electric field due to a dipole on its axial line and the equatorial plane. Case (I) : Electric field due to an electric dipole at points on the axial line. Consider an electric dipole placed on the x-ax is as shown in the figure. A point C is located at a distance of r from the midpoint O of the dipole along the axial line. Axial line The electric field at a point C due to +q is Since the electric dipole moment vector \(\vec { P } \) is from -q to +q and is directed along BC, the above equation is rewritten as where \(\hat{p} \) is the electric dipole moment unit vector from -q to +q. The electric field at a point C due to -q is Since +q is located closer to the point C than -q, \(\vec { E } \) _. \(\vec { E } \) + us stronger than \(\vec { E } \). Therefore, the length of the E + vector is drawn large than that of \(\vec { E } \) _vector. The total electric field at point C is calculated using the superposition principle of the electric field. Note that the total electric field is along \(\vec { E } \)[+], since +q is closer to C than -q. The direction of \(\vec { E } \)[tot] is shown in Figure If the point C is very far away from the dipole then (r >> a). Under this limit the term(r^2 – a^2)^2 ≈ r^4 Substituting this into equation, we get If point C is chosen on the left side of the dipole, the total electric field is still in the Case (II) : Electric field due to an electric dipole at a point on the equatorial plane Consider a point C at a distance r from the midpoint O of the dipole on the equatorial plane as shown in Figure. Since point C is equidistant from +q and -q, the magnitude of the electric fields of +q and -q are the same. The direction of E+ is along with BC and the direction of E is along with CA. E+ and E_ are resolved into two components; one component parallel to the dipole axis and the other perpendicular to it. The perpendicular components \(\left|\vec{E}_{+}\right|\) sin θ and \(\left|\vec{E}_{-}\right|\) sin θ are oppositely directed and cancel each other. The magnitude of the total electric field at point C is the sum of the paralle component of \(\vec { E } \)[+] and \(\vec { E } \)[–] and its direction is along \(\hat{-p} \). The magnitudes \(\vec { E } \)[+] and \(\vec { E } \)[–] are the same and are given by By substituting equation (1) into equation (2), we get At very large distances (r >> a), the equation becomes \(\vec { E } \)[tot] \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { p }{{ r }^{3}}\) (r >>) …… (4) Question 6. Derive an expression for the torque experienced by a dipole due to a uniform electric field. Torque experienced by an electric dipole in the uniform electric field: Consider an electric dipole of dipole moment \(\vec { p } \) placed in a uniform electric field E whose field lines are equally spaced and point in the same direction. The charge +q will experience a force q\(\vec { E } \) in the direction of the field and charge -q will experience a force -q\(\vec { E } \) in a direction opposite to the field. Since the external field \(\vec { E } \) is uniform, the total force acting on the dipole is zero. These two forces acting at different points will constitute a couple and the dipole experience a torque. This torque tends to rotate the dipole. (Note that electric field lines of a uniform field are equally spaced and point in the same direction). The total torque on the dipole about the point \(\vec { τ } \) = \(\overrightarrow{\mathrm{OA}}\) × (-q\(\vec { E } \)) + \(\overrightarrow{\mathrm{OB}}\) × q\(\vec { E } \) Using the right-hand corkscrew rule, it is found that total torque is perpendicular to the plane of the paper and is directed into it. The magnitude of the total torque \(\vec { τ } \) = \(|\overrightarrow{\mathrm{OA}}|\)(-q\(\vec { E } \)) sin θ + \(|\overrightarrow{\mathrm{OB}}|\) \(|q \overrightarrow{\mathrm{E}}|\) sin θ where θ is the angle made by \(\vec { P } \) with \(\vec { E } \). Since p = 2aq, the torque is written in terms of the vector product as \(\vec { τ } \) = \(\vec { p } \) x \(\vec { E } \) The magnitude of this torque is τ = pE sin θ and is maximum Torque on dipole when θ =90°. This torque tends to rotate the dipole and align it with the electric field \(\vec { E } \). Once \(\vec { E } \) is aligned with \(\vec { E } \), the total torque on the dipole becomes zero. Question 7. Derive an expression for electrostatic potential due to a point charge. Electric potential due to a point charge: Consider a positive charge q kept fixed at the origin. Let P be a point at distance r from the charge q. The electric potential at point P is Electric field due to positive point charge q is The infinitesimal displacement vector, d\(\vec { r } \) = dr\(\hat{r} \) and using \(\hat{r} \) . \(\hat{r} \) = 1, we have After the integration, Hence the electric potential due to a point charge q at a distance r is V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { q }{ r }\) …… (2) Important points (If asked in the exam) (i) If the source charge q is positive, V > 0. If q is negative, then V is negative and equal to V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { q }{ r }\) (ii) The description of the motion of objects using the concept of potential or potential energy is simpler than using the concept of field. (iii) From expression (2), it is clear that the potential due to positive charge decreases as the distance increases, but for a negative charge, the potential increases as the distance are increased. At infinity (r = ∞) electrostatic potential is zero (V = 0). (iv) The electric potential at a point P due to a collection of charges q[1],q[2],q[3]… q[n] is equal to the sum of the electric potentials due to individual charges. Where r[1], r[2],r[3],…..r[n] are the distances of q[1],q[2],q[3]… q[n] respectively from P Question 8. Derive an expression for electrostatic potential due to an electric dipole. Electrostatic potential at a point due to an electric dipole: Consider two equal and opposite charges separated by a small distance 2a. The point P is located at a distance r from the midpoint of the dipole. Let 0 be the angle between the line OP and dipole axis AB. Let r[1] be the distance of point P from +q and r[1] be the distance of point P from -q. Potential at P due to charge +q = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { q }{{ r }_{1}}\) Potential at P due to charge -q = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { q }{{ r }_{2}}\) Total Potential at the point P, V = \(\frac { 1 }{{ 4πε }_{0}}\)q \(\left(\frac{1}{r_{1}}-\frac{1}{r_{2}}\right)\) ….. (1) Suppose if the point P is far away from the dipole, such that r >> a, then equation can be expressed in terms of r. By the cosine law for triangle BOP, \(r_{1}^{2}\) = r^2 + a^2 – 2ra cos θ = r^2 \(\left(1+\frac{a^{2}}{r^{2}}-\frac{2 a}{r} \cos \theta\right)\) Since the point P is very far from dipole, then r >> a. As a result the term \(\frac {{ a }^{ 2 }}{{ r }^{ 2 }}\) is very small and can be neglected. Therefore since \(\frac { a }{ r }\) << 1, we can use binominal theorem and retain the terms up to first order \(\frac { 1 }{{ r_{1}} }\) = \(\left(1+\frac{a}{r} \cos \theta\right)\) ……. (2) Similarly applying the cosine law for triangle AOP, \(r_{2}^{2}\) = r^2 + a^2 – 2ra cos (180 – θ) Since cos (180 – θ) = cos θ we get \(r_{2}^{2}\) = r^2 + a^2 + 2ra cos θ Neglecting the term \(\frac {{ a }^{ 2 }}{{ r }^{ 2 }}\) (because r >> a) \(r_{2}^{2}\) = r^2 \(\left(1+\frac{2 a \cos \theta}{r}\right)\) (or) r[2] = r \(\left(1+\frac{2 a \cos \theta}{r}\right)^{\frac{1}{2}}\) Using Binomial theorem, we get \(\frac { 1 }{{ r_{2}} }\) = \(\frac { 1 }{ r }\) \(\left(1-a \frac{\cos \theta}{r}\right)\) Substituting equations (3) and (2) in equation (1) But the electric dipole moment p = 2qa and we get, V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\left(\frac{p \cos \theta}{r^{2}}\right)\) Now we can write p cos θ = \(\vec { P } \), \(\hat{r} \) where \(\hat{r} \) is the unit vector from the point O to point P. Hence the electric potential at a point P due to an electric dipole is given by V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{\vec{p} \cdot \hat{r}}{r^{2}}\) (r >> a) ….. (4) Equation (4) is valid for distances very large compared to the size of the dipole. But for a point dipole, the equation (4) is valid for any distance. Special cases: Case (I): If the point P lies on the axial line of the dipole on the side of +q, then θ = 0. Then the electric potential becomes V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { p }{{r}^{ 2 }}\) Case (II): If the point P lies on the axial line of the dipole on the side of -q, then θ = 180°, then V = – \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { p }{{r}^{ 2 }}\) Case (III): If the point P lies on the equatorial line of the dipole, then θ = 90°. Hence, V = 0. Question 9. Obtain an expression for potential energy due to a collection of three-point charges which are separated by finite distances. Electrostatic potential energy for a collection of point charges: The electric potential at a point at a distance r from point charge ql is given by V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ q }_{ 1 }}{r}\) …… (1) This potential V is the work done to bring a unit positive charge from infinity to the point. Now if the charge q[2] is brought from infinity to that point at a distance r from qp the work done is the product of q[2] and the electric potential at that point. Thus we have W = q[2]V …… (2) This work done is stored as the electrostatic potential energy U of a system of charges q[1] and q[2] separated by a distance r. Thus we have U = q[2] V = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{q_{1} q_{2}}{r}\) …… (3) The electrostatic potential energy depends only on the distance between the two point charges. In fact, the expression (3) is derived by assuming that q[1] is fixed and q[2] is brought from infinity. The equation (3) holds true when q[2] is fixed and q[1] is brought from infinity or both q[2]and q[2] are simultaneously brought from infinity to a distance r between them. Three charges are arranged in the following configuration as shown in Figure. To calculate the total electrostatic potential energy, we use the following procedure. We bring all the charges one by one and arrange them according to the configuration. (i) Bringing a charge q[1] from infinity to point A requires no work, because there are no other charges already present in the vicinity of charge q[1] (ii) To bring the second charge q[2] to point B, work must be done against the electric field created by the charge q[1] So the work done on the charge q[1] is W = q[2]V[1B]. Here V[1B] is the electrostatic potential due to the charge q[1] at point B. U = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{q_{1} q_{2}}{{r}_{12}}\) ….. (4) Note that the expression is the same when q[2] is brought first and then q[1] later. (iii) Similarly to bring the charge q3 to point C, work has to be done against the total electric field due to both charges q[1] and q[2]. So the work done to bring the charge q[3] is = q[3] (V[1C] + V[2C]). Here V[1C] is the electrostatic potential due to charge q[1] at point C and V[2C] is the electrostatic potential due to charge q[2] at point C. The electrostatic potential is (iv) Adding equations (4) and (5), the total electrostatic potential energy for the system of three charges q[1],q[2,] and q[3] is Note that this stored potential energy U is equal to the total external work done to assemble the three charges at the given locations. The expression (6) is the same if the charges are brought to their positions in any other order. Since the Coulomb force is a conservative force, the electrostatic potential energy is independent of the manner in which the configuration of charges is arrived Question 10. Derive an expression for the electrostatic potential energy of the dipole in a uniform electric field. The electrostatic potential energy of a dipole in a uniform electric field: Consider a dipole placed a torque when kept in an uniform electric field \(\vec { E } \). A dipole experiences a torque when kept in an uniform electric field \(\vec { E } \). This torque rotates the dipole to align it with the direction of the electric field. To rotate the dipole (at constant angular velocity) from its initial angle θ’ to another angle θ against the torque exerted by the electric field, an equal and opposite external torque must be applied on the dipole. The work done by the external torque to rotate the dipole from angle θ’ to θ at constant angular velocity is Since τ[ext] is equal and opposite to τ[E] = \(\vec { P } \) x \(\vec { E } \), we have \(\left|\overrightarrow{\mathrm{r}}_{\mathrm{ext}}\right|\) = \(\left|\overrightarrow{\mathrm{r}}_{\mathrm{E}}\right|\)= \(|\overrightarrow{\mathrm{P}} \times \overrightarrow{\mathrm{E}}|\) …. (2) Substituting equation (2) in equation (1) We get, This work done is equal to the potential energy difference between the angular positions θ and θ’. U(θ) – (Uθ’) = AU = -pE cos θ +PE cos θ’. If the initial angle is = θ’ = 90° and is taken as reference point, then U(θ’) + pE cos θ’ = θ. The potential energy stored in the system of dipole kept in the uniform electric field is given by El = -pE cos θ = –\(\vec { P } \) . \(\vec { E } \) ….. (3) In addition to p and E, the potential energy also depends on the orientation θ of the electric dipole with respect to the external electric field. The potential energy is maximum when the dipole is aligned anti-parallel (θ = π) to the external electric field and minimum when the dipole is aligned parallel (θ = 0) to the external electric field. Question 11. Obtain Gauss law from Coulomb’s law. Gauss law: Gauss’s law states that if a charge Q is enclosed by an arbitrary closed surface, then the total electric flux Φ[E] through the closed surface is Φ[E] = \(\oint { \vec { E } } \) .d \(\vec { A } \) = \(\frac{\mathrm{Q}_{\mathrm{end}}}{\varepsilon_{0}}\) A positive point charge Q is surrounded by an imaginary sphere of radius r as shown in the figure. We can calculate the total electric flux through the closed surface of the sphere using the Φ[E] = \(\oint { \vec { E } } \) .d \(\vec { A } \) = \(\oint { EdA } \) cos θ …… (1) The electric field of the point charge is directed radially outward at all points on the surface of the sphere. Therefore, the direction of the area element d \(\vec { A } \) is along the electric field \(\vec { E } \) and θ = 0°. Φ[E] = \(\oint { EdA } \) since cos 0° = 1 ….. (2) E is uniform on the surface of the sphere, Φ[E] = \(\oint { EdA } \) ….. (3) Substituting for \(\oint { dA } \) = 4π^2 and E = \(\frac { 1 }{{ 4πε }_{0}}\) Q in equation 3, we get Φ[E] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { q }{{ r }^{2}}\) × 4π^2 = 4π \(\frac { 1 }{{ 4πε }_{0}}\) = \(\frac { q }{{ ε }_{0}}\) ……. (4) The equation (4) is called as Gauss’s law. The remarkable point about this result is that the equation (4) is equally true for any arbitrary shaped surface which encloses the charge Q. Question 12. Obtain the expression for electric field due to an infinitely long charged wire. Electric field due to an infinitely long charged wire: Consider an infinitely long straight wire having uniform linear charge density λ. Let P be a point located at a perpendicular distance r from the wire. The electric field at the point P can be found using Gauss law. We choose two small charge elements A[1] and A[1] on the wire which are at equal distances from the point P. The resultant electric field due to these two charge elements points radially away from the charged wire and the magnitude of electric field is same at all points on the circle of radius r. From this property, we can infer that the charged wire possesses a cylindrical symmetry. Let us choose a cylindrical Gaussian surface of radius r and length L. The total electric flux in this closed surface is It is seen that for the curved surface, \(\vec { E } \) is parallel to \(\vec { A } \) and \(\vec { E } \).d \(\vec { A } \) = EdA. For the top and bottom surface, \(\vec { E } \) is perpendicular to \(\vec { A } \) and \(\vec { E } \).d\(\vec { A } \) = 0 Substituting these values in equation (2) and applying Gauss law Since the magnitude of the electric field for the entire curved surface is constant, E is taken out of the integration, and Q[encl] is given by Q[encl] = λL. dA = total area of the curved surface = 2πrL. Substituting this in equation (4), We get The electric field due to the infinite charged wire depends on \(\frac { 1 }{ r }\) rather than \(\frac { 1 }{{r}^{ 2 }}\) for a point charge. Equation (6) indicates that the electric field is always along the perpendicular direction (\(\hat{r} \) ) to wire. In fact, if λ > 0 then E points perpendicular outward (\(\hat{r} \) ) from the wire and if λ < 0, then E points perpendicular inward (- \(\hat{r} \) ). Question 13. Obtain the expression for the electric field due to a charged infinite plane sheet. Electric field due to the charged infinite plane sheet: Consider an infinite plane sheet of charges with uniform surface charge density o. Let P be a point at a distance of r from the sheet. Since the plane is infinitely large, the electric field should be the same at all points equidistant from the plane and radially directed at all points. A cylindrical shaped Gaussian surface of length 2r and area A of the flat surfaces is chosen such that the infinite plane sheet passes perpendicularly through the middle part of the Gaussian surface. Applying Gauss law for this cylindrical surface, The electric field is perpendicular to the are element at all points on the curved surface and is parallel to the surface areas at P and P’. Then, Since the magnitude of the electric field at these two equal surfaces is uniform, E is taken out of the integration, and Q[encl] is given by Q[encl] = σA, we get The total area of surface either at P or P’ Hence 2EA = \(\frac { σA }{{ ε }_{0}}\) or E = \(\frac { σ }{{ 2ε }_{0}}\) …… (3) In vector from, E = \(\frac { σ }{{ 2ε }_{0}}\) \(\hat{n} \) ….. (4) Hence \(\hat{n} \) is the outward unit vector normal to the plane. Note that the electric field due to an infinite plane sheet of the charge depends on the surface charge density and is independent of the distance r. The electric field will be the same at any point farther away from the charged plane. Equation (4) implies that if o > 0 the electric field at any point P is outward perpendicular n to the plane and if σ < 0 the electric field points inward perpendicularly (\(\hat{n} \) ) to the plane. For a finite charged plane sheet, equation (4) is approximately true only in the middle region of the plane and at points far away from both ends. Question 14. Obtain the expression for the electric field due to a uniformly charged spherical shell. Electric field due to a uniformly charged spherical shell: Consider a uniformly charged spherical shell of radius R and total charge Q. The electric field at points outside and inside the sphere is found using Gauss law. Case (a): At a point outside the shell (r > R): Let us choose a point P outside the shell at a distance r from the center as shown in figure (a). The charge is uniformly distributed on the surface of the sphere (spherical symmetry). Hence the electric field must point radially outward if Q > 0 and point radially inward if Q < 0. So we choose a spherical Gaussian surface of radius r and the total charge enclosed by this Gaussian surface is Q. Applying Gauss law, \(\oint { \vec { E } } .d\vec { A } \) = \(\frac { Q }{{ ε }_{0}}\) …….(1) The electric field \(\vec { E } \) and d\(\vec { A } \) point in the same direction (outward normal) at all the points on the Gaussian surface. The magnitude of \(\vec { E } \) is also the same at all points due to the spherical symmetry of the charge distribution. dA = total area of Gaussian surface = 4πr^2. Substituting this value in equation (2). The electric field is radially outward if Q > 0 and radially inward if Q < 0. From equation (3), we infer that the electric field at a point outside the shell will be same as if the entire charge Q is concentrated at the center of the spherical shell. (A similar result is observed in gravitation, for gravitational force due to a spherical shell with mass M) Case (b): At a point on the surface of the spherical shell (r = R): The electrical field at points on the spherical shell (r = R) is given by \(\vec { E } \) = \(\frac{\mathrm{Q}}{4 \pi \varepsilon_{0} \mathrm{R}^{2}}\) \(\hat{r} \) …… (4) Case (c): At a point inside the spherical shell (r < R): Consider a point P inside the shell at a distance r from the center. A Gaussian sphere of radius r is constructed as shown in the figure (b). Applying Gauss law Since Gaussian surface encloses no charge, So Q = 0. The equation (5) becomes E = 0 (r < R) …(6) The electric field due to the uniformly charged spherical shell is zero at all points inside the shell. Question 15. Discuss the various properties of conductors in electrostatic equilibrium. Properties of conductors in electrostatic equilibrium: (i) The electric field is zero everywhere inside the conductor. This is true regardless of whether the conductor is solid or hollow. This is an experimental fact. Suppose the electric field is not zero inside the metal, then there will be a force on the mobile charge carriers due to this electric field. As a result, there will be a net motion of the mobile charges, which contradicts the conductors being in electrostatic equilibrium. Thus the electric field is zero everywhere inside – the conductor. We can also understand this fact by applying an external uniform electric field on the conductor. Before applying the external electric field, the free electrons in the conductor are uniformly distributed in the conductor. When an electric field is applied, the free electrons accelerate to the left causing the left plate to be negatively charged and the right plate to be positively charged. Due to this realignment of free electrons, there will be an internal electric field created inside the conductor which increases until it nullifies the external electric field. Once the external electric field is nullified the conductor is said to be in electrostatic equilibrium. The time taken by a conductor to reach electrostatic equilibrium is in the order of 10^-6s, which can be taken as almost instantaneous. (ii) There is no net charge inside the conductors. The charges must reside only on the surface of the conductors. We can prove this property using Gauss law. Consider an arbitrarily shaped conductor. A Gaussian surface is drawn inside the conductor such that it is very close to the surface of the conductor. Since the electric field is zero everywhere inside the conductor, the net electric flux is also zero over this Gaussian surface. From Gauss’s law, this implies that there is no net charge inside the conductor. Even if some charge is introduced inside the conductor, it immediately reaches the surface of the conductor. (iii) The electric field outside the conductor is perpendicular to the surface of the conductor and has a magnitude of \(\frac { σ }{{ ε }{0}}\) where a is the surface charge density at that point. If the electric field has components parallel to the surface of the conductor, then free electrons on the surface of the conductor would experience acceleration. This means that the conductor is not in equilibrium. Therefore at electrostatic equilibrium, the electric field must be perpendicular to the surface of the conductor. We now prove that the electric field has magnitude \(\frac { σ }{{ ε }{0}}\) just outside the conductor’s surface. Consider a small cylindrical Gaussian surface. One half of this cylinder is embedded inside the conductor. Since electric field is normal to the surface of the conductor, the curved part of the cylinder has zero electric flux. Also inside the conductor, the electric field is zero. Hence the bottom flat part of the Gaussian surface has no electric flux. Therefore the top flat surface alone contributes to the electric flux. The electric field is parallel to the area vector and the total charge inside the surface is σA. By applying Gauss’s law, EA = \(\frac { σA }{{ ε }{0}}\) In vector from, \(\vec { E } \) = \(\frac { σ }{{ ε }{0}}\) \(\hat{n} \) Here n represents the unit vector outward normal to the surface of the conductor. Suppose σ < 0, then electric field points inward perpendicular to the surface. (iv) The electrostatic potential has the same value on the surface and inside of the conductor. We know that the conductor has no parallel electric component on the surface which means that charges can be moved on the surface without doing any work. This is possible only if the electrostatic potential is constant at all points on the surface and there is no potential difference between any two points on the surface. Since the electric field is zero inside the conductor, the potential is the same as the surface of the conductor. Thus at electrostatic equilibrium, the conductor is always at Question 16. Explain the process of electrostatic induction. Whenever a charged rod is touched by another conductor, charges start to flow from the charged rod to the conductor. This type of charging without actual contact is called electrostatic induction: (i) Consider an uncharged (neutral) conducting sphere at rest on an insulating stand. Suppose a negatively charged rod is brought near the conductor without touching it, as shown in figure (a). The negative charge of the rod repels the electrons in the conductor to the opposite side. Various steps in electrostatic induction As a result, positive charges are induced near the region of the charged rod while negative charges on the farther side. Before introducing the charged rod, the free electrons were distributed uniformly on the surface of the conductor and the net charge is zero. Once the charged rod is brought near the conductor, the distribution is no longer uniform with more electrons located on the farther side of the rod and positive charges are located closer to the rod. But the total charge is zero. (ii) Now the conducting sphere is connected to the ground through a conducting wire. This is called grounding. Since the ground can always receive any amount of electrons, grounding removes the electron from the conducting sphere. Note that positive charges will not flow to the ground because they are attracted by the negative charges of the rod (figure (b)). (iii) When the grounding wire is removed from the conductor, the positive charges remain near the charged rod (figure (c)). (iv) Now the charged rod is taken away from the conductor. As soon as the charged rod is removed, the positive charge gets distributed uniformly on the surface of the conductor (figure (d)). By this process, the neutral conducting sphere becomes positively charged. Question 17. Explain dielectrics in detail and how an electric field is induced inside a dielectric. Induced Electric field inside the dielectric: When an external electric field is applied to a conductor, the charges are aligned in such a way that an internal electric field is created which cancels the external electric field. But in the case of a dielectric, which has no free electrons, the external electric field only realigns the charges so that an internal electric field is produced. The magnitude of the internal electric field is smaller than that of the external electric field. Therefore the net electric field inside the dielectric is not zero but is parallel to an external electric field with a magnitude less than that of the external electric field. For example, let us consider a rectangular dielectric slab placed between two oppositely charged plates (capacitor) as shown in the figure. The uniform electric field between the plates Induced electric field lines inside the dielectric acts as an external electric field \(\vec { E } \)[ext] which polarizes the dielectric placed between plates. The positive charges are induced on one side surface and negative charges are induced on the other side of the surface But inside the dielectric, the net charge is zero even in a small volume. So the dielectric in the external field is equivalent to two oppositely charged sheets with the surface charge densities +σb and -σb. These charges are called bound charges. They are not free to move like free electrons in conductors. This is shown in the figure. (a) Balloon sticks to the wall (b) Polarisation of the wall due to the electric field created by the balloon For example, the charged balloon after rubbing sticks onto a wall. The reason is that the negatively charged balloon is brought near the wall, it polarizes opposite charges on the surface of the wall, which attracts the balloon. Question 18. Obtain the expression for capacitance for a parallel plate capacitor. The capacitance of a parallel plate capacitor: Consider a capacitor with two parallel plates each of cross-sectional area A and separated by a distance d. The electric field between two infinite parallel plates is uniform and is given by E = \(\ frac { σ }{{ ε }{0}}\) where σ is the surface charge density on the plates σ = \(\frac { Q }{ A }\) .If the separation distance d is very much smaller than the size of the plate (d^2 << A), then the above result is used even for finite-sized parallel plate capacitor. The capacitance of a parallel plate capacitor The electric field between the plates is E = \(\frac { Q }{{ Aε }{0}}\) ….. (1) Since the electric field is uniform, the electric potential between the plates having separation d is given by V = Ed = \(\frac { Qd }{{ Aε }{0}}\) ….. (2) Therefore the capacitance of the capacitor is given by C = \(\frac { Q }{ V }\) = \(\frac{\mathrm{Q}}{\left(\frac{\mathrm{Q} d}{\mathrm{A} \varepsilon_{0}}\right)}\) = \(\frac{\varepsilon_{0} \mathrm{A}}{d}\) ….. (3) From equation (3), it is evident that capacitance is directly proportional to the area of cross-section and is inversely proportional to the distance between the plates. This can be understood from the following. • If the area of cross-section of the capacitor plates is increased, more charges can be distributed for the same potential difference. As a result, the capacitance is increased. • If the distance d between the two plates is reduced, the potential difference between the plates (V = Ed) decreases with the E constant. Question 19. Obtain the expression for energy stored in the parallel plate capacitor. Energy stored in the capacitor: Capacitor not only stores the charge but also it stores energy. When a battery is connected to the capacitor, electrons of total charge -Q are transferred from one plate to the other plate. To transfer the charge, work is done by the battery. This work done is stored as electrostatic potential energy in the capacitor. To transfer an infinitesimal charge dQ for a potential difference V, the work done is given by dW = VdQ ….. (1) Where V = \(\frac { Q }{ C }\) The total work done to charge a capacitor is This work done is stored as electrostatic potential energy (U[E]) in the capacitor. U[E] = \(\frac {{ Q }^{2}}{ 2C }\) = \(\frac { 1 }{ 2 }\) CV^2 (∴ Q = CV) ….. (3) where Q = CV is used. This stored energy is thus directly proportional to the capacitance of the capacitor and the square of the voltage between the plates of the capacitor. But where is this energy stored in the capacitor? To understand this question, the equation (3) is rewritten as follows using the results C = \(\frac{\varepsilon_{0} \mathrm{A}}{d}\) and V = Ed U[E] = \(\frac { 1 }{ 2 }\) \(\left(\frac{\varepsilon_{0} \mathrm{A}}{d}\right)\) (Ed)^2 = \(\frac { 1 }{ 2 }\) ε[0](Ad)^2 …… (4) where Ad = volume of the space between the capacitor plates. The energy stored per unit volume of space is defined as energy density \(\overline { Volume } \). Frome equation (4) we get u[E] = \(\frac { 1 }{ 2 }\) ε[0]E^2 From equation (5), we infer that the energy is stored in the electric field existing between the plates of the capacitor. Once the capacitor is allowed to discharge, the energy is retrieved. Question 20. Explain in detail the effect of a dielectric placed in a parallel plate capacitor. (i) When the capacitor is disconnected from the battery: Consider a capacitor with two parallel plates each of cross-sectional area A and are separated by a distance d. The capacitor is charged by a battery of voltage V[0] and the charge stored is Q[0]. The capacitance of the capacitor without the dielectric is C[0] = \(\frac {{ Q }_{0}}{{ V }_{0}}\) ….. (1) The battery is then disconnected from the capacitor and the dielectric is inserted between the plates. The introduction of dielectric between the plates will decrease the electric field. Experimentally it is found that the modified electric field is given by (a) Capacitor is charged with a battery (b) Dielectric is inserted after the battery is disconnected E = \(\frac {{ E }_{0}}{{ ε }_{r}}\) …… (2) Here E[0] is the electric field inside the capacitors when there is no dielectric and ε[r] is the relative permeability of the dielectric or simply known as the dielectric constant. Since ε[r] > 1, the electric field E < E[0]. As a result, the electrostatic potential difference between the plates (V = Ed) is also reduced. But at the same time, the charge Q[0] will remain constant once the battery is disconnected. Hence the new potential difference is V = Ed = \(\frac {{ E }_{0}}{{ ε }_{r}}\)d = \(\frac {{ V }_{0}}{{ ε }_{r}}\) ….. (3) We know that capacitance is inversely proportional to the potential difference. Therefore as V decreases, C increases. Thus new capacitance in the presence of a dielectric is C = \(\frac {{ Q }_{0}}{ V }\) = ε[r] \(\frac {{ Q }_{0}}{{ V }_{0}}\) = ε[r] C[0] …… (4) Since ε[r] > 1, we have C > C[0]. Thus insertion of the dielectric constant ε[r] increases the capacitance. Using equation, C = \(\frac { { \varepsilon }_{ 0 }A }{ d } \) C = \(\frac{\varepsilon_{r} \varepsilon_{o} A}{d}\) = \(\frac { εA }{ d }\) …… (5) where ε = ε[r]ε[0] is the permittivity of the dielectric medium. The energy stored in the capacitor before the insertion of a dielectric is given by U[0] = \(\frac { 1 }{ 2 }\) \(\frac{\mathrm{Q}_{0} ^{2}}{\mathrm{C}_{0}}\) ….. (6) After the dielectric is inserted, the charge Q[0] remains constant but the capacitance is increased. As a result, the stored energy is decreased. Since ε[r]> 1 we get U < U[0]. There is a decrease in energy because, when the dielectric is inserted, the capacitor spends some energy in pulling the dielectric inside. (ii) When the battery remains connected to the capacitor: Let us now consider what happens when the battery of voltage V[0] remains connected to the capacitor when the dielectric is inserted into the The potential difference V[0] across the plates remains constant. But it is found experimentally (first shown by Faraday) that when the dielectric is inserted, the charge stored in the capacitor is increased by a factor ε[r]. (a) Capacitor is charged through a battery (b) Dielectric is inserted when the battery is connected. Q = ε[r]Q[0] ….. (1) Due to this increased charge, the capacitance is also increased. The new capacitance is C = \(\frac {{ Q }_{0}}{ V }\) = ε[r] \(\frac {{ Q }_{0}}{{ V }_{0}}\) = ε[r] C[0] …… (2) However, the reason for the increase in capacitance in this case when the battery remains connected is different from the case when the battery is disconnected before introducing the dielectric. Now, C[0] = \(\frac {{ ε }{_0}A}{ d }\) and, C = \(\frac { εA }{ d }\) …… (3) U[0] = \(\frac { 1 }{ 2 }\) C[0] \({ V }_{ 0 }^{ 2 }\) ….. (4) Note that here we have not used the expression U[0] = \(\frac { 1 }{ 2 }\)\({{ V }_{ 0 }^{ 2 }}{{C}_{0}}\) because here, both charge and capacitance are changed, whereas in equation 4, V[0] remains constant. After the dielectric is inserted, the capacitance is increased; hence the stored energy is also U = \(\frac { 1 }{ 2 }\) \({ CV }_{ 0 }^{ 2 }\) = \(\frac { 1 }{ 2 }\) ε[r] \({ CV }_{ 0 }^{ 2 }\) = ε[r] U[0] Since e[r] > 1 we have U > U[0] It may be noted here that since voltage between the capacitor V[0] is constant, the electric field between the plates also remains constant. Question 21. Derive the expression for resultant capacitance, when capacitors are connected in series and in parallel. capacitors in series and parallel: (i) Capacitors in series: Consider three capacitors of capacitance C[1], C[2] and C[3] connected in series with a battery of voltage V as shown in figure (a). As soon as the battery is connected to the capacitors in series, the electrons of charge -Q are transferred from the negative terminal to the right plate of C[3]which pushes the electrons of the same amount -Q from left plate of C[3] to the right plate of C[2] due to electrostatic induction. Similarly, the left plate of C[2] pushes the charges of Q to the right plate of which induces the positive charge +Q on the left plate of C[1] At the same time, electrons of charge -Q are transferred from the left plate of C[1] to the positive terminal of the battery. By these processes, each capacitor stores the same amount of charge Q. The capacitances of the capacitors are in general different so that the voltage across each capacitor is also different and are denoted as V[1], V[2] and V[3] respectively. The total voltage across each capacitor must be equal to the voltage of the battery. V = V[1] + V[2] + V[3] ….. (1) Since Q = CV, we have V = \(\frac { Q }{{ C }_{1}}\) + \(\frac { Q }{{ C }_{2}}\) + \(\frac { Q }{{ C }_{3}}\) Q = \(\left( \frac { 1 }{ { C }_{ 1 } } +\frac { 1 }{ { C }_{ 2 } } +\frac { 1 }{ { C }_{ 3 } } \right) \) ….. (2) If three capacitors in series are considered to form an equivalent single capacitor Cs shown in figure (b), then we have V = \(\frac { Q }{{ C }_{s}}\) Substituting this expression into equation (2) we get V = \(\frac { Q }{{ C }_{s}}\) = Q\(\left( \frac { 1 }{ { C }_{ 1 } } +\frac { 1 }{ { C }_{ 2 } } +\frac { 1 }{ { C }_{ 3 } } \right) \) \(\frac { 1 }{{ C }_{s}}\) = \(\frac { 1 }{{ C }_{1}}\) + \(\frac { 1 }{{ C }_{2}}\) + \(\frac { 1 }{{ C }_{3}}\) ….. (3) Thus, the inverse of the equivalent capacitance C[s] of three capacitors connected in series is equal to the sum of the inverses of each capacitance. This equivalent capacitance C[s] is always less than the smallest individual capacitance in the series. (ii) Capacitance in parallel: Consider three capacitors of capacitance C[1],C[2] and C[3] connected in parallel with a battery of voltage V as shown in figure (a). Since corresponding sides of the capacitors are connected to the same positive and negative terminals of the battery, the voltage across each capacitor is equal to the battery’s voltage. Since the capacitance of the capacitors is different, the charge stored in each capacitor is not the same. Let the charge stored in the three capacitors be Q[1],Q[2], and Q[2] respectively. According to the law of conservation of total charge, the sum of these three charges is equal to the charge Q transferred by the battery, Q = Q[1] + Q[2] + Q[3] ….. (1) Now, since Q = CV, we have Q = C[1]V + C[2] V + C[3] V ….. (2) If these three capacitors are considered to form a single capacitance C[P] which stores the total charge Q as shown in figure (b), then we can write Q = CPV. Substituting this in equation (2), we get C[p] V = C[1 ]V + C[2] V + C[3] V C[p] = C[1] + C[2] + C[3] Thus, the equivalent capacitance of capacitors connected in parallel is equal to the sum of the individual capacitance. The equivalent capacitance C[p] in a parallel connection is always greater than the largest individual capacitance. In a parallel connection, it is equivalent as an area of each capacitance adds to give a more effective area such that total capacitance increases. Question 22. Explain in detail how charges are distributed in a conductor, and the principle behind the lightning conductor. Distribution of charges in a conductor: Consider two conducting spheres A and B of radii r[1] and r[2] respectively connected to each other by a thin conducting wire as shown in the figure. The distance between the spheres is much greater than the radii of either sphere. If a charge Q is introduced into any one of the spheres, this charge Q is redistributed into both the spheres such that the electrostatic potential is same in both the spheres. They are now uniformly charged and attain electrostatic equilibrium. Let q[1] be the charge residing on the surface of sphere A and q[2] is the charge residing on the surface of sphere B such that Q = q[1] + q[2] The charges are distributed only on the surface and there is no net charge inside the conductor. The electrostatic potential at the surface of sphere A is given by V[A] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ q }_{ 2 }}{{ r }_{ 2 }}\) …. (1) The electrostatic potential at the surface of sphere B is given by V[B] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ q }_{ 2 }}{{ r }_{ 2 }}\) ….. 2) The surface of the conductor is an equipotential. Since the spheres are connected by the conducting wire, the surfaces of both the spheres together form an equipotential surface. This implies that V[A] = V[B] or \(\frac {{ q }_{ 1 }}{{ r }_{ 1 }}\) = \(\frac {{ q }_{ 2 }}{{ r }_{ 2 }}\) ….. (3) Let us take the charge density on the surface of sphere A is σ[1] and charge density on the surface of sphere B is σ[1]. This implies that q[1] = \({ 4\pi r }_{ 1 }^{ 2 }\)σ[1] and q[1] = \({ 4\pi r }_{ 1 }^{ 2 }\)σ[2]. Substituting these values into equation (3), we get σ[1] r[1] = σ[2]r[2] ….. (4) from which we conclude that σr = constant …. (5) Thus the surface charge density o is inversely proportional to the radius of the sphere. For a smaller radius, the charge density will be larger and vice versa. Lightning arrester or lightning conductor: This is a device used to protect tall buildings from lightning strikes. It works on the principle of action at points or corona discharge. The device consists of a long thick copper rod passing from top of the building to the ground. The upper end of the rod has a sharp spike or a sharp needle. The lower end of the rod is connected to the copper plate which is buried deep into the ground. When a negatively charged cloud is passing above the building, it induces a positive charge on the spike. Since the induced charge density on thin sharp spike is large, it results in a corona discharge. This positive charge ionizes the surrounding air which in turn neutralizes the negative charge in the cloud. The negative charge pushed to the spikes passes through the copper rod and is safely diverted to the Earth. The lightning arrester does not stop the lightning; rather it divers the lightning to the ground safely. Question 23. Explain in detail the construction and working of a Van de Graaff generator. Principle: Electrostatic induction and action at points. A large hollow spherical conductor is fixed on the insulating stand. A pulley B is mounted at the center of the hollow sphere and another pulley C is fixed at the bottom. A belt made up of insulating materials as silk or rubber runs over both pulleys. The pulley C is driven continuously by the electric motor. Two comb-shaped metallic conductors E and D are fixed near the pulleys. The comb D is maintained at a positive potential of 104 V by a power supply. The upper comb E is connected to the inner side of the hollow metal sphere. Due to the high electric field near comb D, the air between the belt and comb D gets ionized. The positive charges are pushed towards the belt and negative charges are attracted towards the comb D. The positive charges stick to the belt and move up. When the positive charges reach the comb E, a large amount of negative and positive charges are induced on either side of comb E due to electrostatic induction. As a result, the positive charges are pushed away from the comb E and they reach the outer surface of the sphere. Since the sphere is a conductor, the positive charges are distributed uniformly on the outer surface of the hollow sphere. At the same time, the negative charges nullify the positive charges in the belt due to corona discharge before it passes over the When the belt descends, it has almost no net charge. At the bottom, it again gains a large positive charge. The belt goes up and delivers positive charges to the outer surface of the sphere. This process continues until the outer surface produces the potential difference of the order of 10^7 which is the limiting value. We cannot store charges beyond this limit since the extra charge starts leaking to the surroundings due to the ionization of air. The leakage of charges can be reduced by enclosing the machine in a gas-filled steel chamber at very high pressure. Uses: The high voltage produced in this Van de Graaff generator is used to accelerate positive ions (protons and deuterons) for nuclear disintegrations and other applications. Samacheer Kalvi 12th Physics Electrostatics Numerical Problems Question 1. When two objects are rubbed with each other, approximately a charge of 50 nC can be produced in each object. Calculate the number of electrons that must be transferred to produce this charge. Charge produced in each object q = 50 nC q = 50 x 10^-9 C Charge of electron (e) = 1.6 x 10^-9 C Number of electron transferred, n = \(\frac { q }{ e }\) = \(\frac {{ 50 × 10 }^{-9}}{{ 1.6 × 10 }^{-19}}\) =31. 25 × 10^-9 × 10^19 n = 31.25 x 10^10 electrons Ans. n = 31.25 x 10^10 electrons Question 2. The total number of electrons in the human body is typically in the order of 10^28. Suppose, due to some reason, you and your friend lost 1% of this number of electrons. Calculate the electrostatic force between you and your friend separated at a distance of 1 m. Compare this with your weight. Assume the mass of each person is 60 kg and use point charge approximation. Number of electrons in the human body = 10^28 Number of electrons in me and my friend after lost of 1% = 10^28 x 1% = 10^28 x \(\frac { 1 }{ 100 }\) n = 10^26 electrons Separation distance d = 1 m Charge of each person q = 10^26 x 1.6 x 10^-19 q = 1.6 x 10^7 C Electrostatic force, F = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{q_{1} q_{2}}{r^{2}}\) = \(\frac{9 \times 10^{9} \times 1.6 \times 10^{7} \times 1.6 \times 10^{7}}{1^{2}}\) F = 2.304 x 10^24N Mass of the person, M = 60 kg Acceleration due to gravity, g = 9.8 ms^-2 Weight (W) = mg = 60 x 9.8 W = 588 N Comparison: Electrostatic force is equal to 3.92 x 10^21 times of weight of the person. Question 3. Five identical charges Q are placed equidistant on a semicircle as shown in the figure. Another point charge q is kept at the center of the circle of radius R. Calculate the electrostatic force experienced by the charge q. Force acting on q due to Q[1] and Q[5] are opposite direction, so cancel to each other. Force acting on q due to Q[3] is F[3] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ qQ }_{3}}{{ R }^{2}}\) Force acting on q due to Q[2] and Q[4] Resolving in two-component method: (i) Vertical Component: Q[2] Sin θ and Q[4] Sinθ are equal and opposite directions, so they cancel to each other. (ii) Horizontal Component: Q[2] Sin θ and Q[4] cos θ are equal and same direction, so they can get added. F[24] = F[2q] + F[4q] = F[2] cos 55° + F[4] cos 45° F[24] = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ qQ }_{2}}{{ R }^{2}}\) cos 45° + \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac { qQ 4}{{ R }^{2}}\) cos 45° Resultant net force F Question 4. Suppose a charge +q on Earth’s surface and another +q charge is placed on the surface of the Moon, (a) Calculate the value of q required to balance the gravitational attraction between Earth and Moon (b) Suppose the distance between the Moon and Earth is halved, would the charge q change? (Take m[E] = 5.9 x 10^24 kg, m[M] = 7.348 x 10^22 kg) Mass of the Earth, M[E] = 5.9 x 10^24 kg Mass of the Moon, M[M] = 7.348 x 10^22 kg Charge placed on the surface of Earth and Moon = q (a) Required charge to balance the F[G] between Earth and Moon F[C] = F[G] (or) \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac {{ q }^{2}}{{ r }^{2}}\) = \(\frac{\mathrm{G} \mathrm{M}_{\mathrm{E}} \times \mathrm{M}_{\mathrm{M}}}{r^{2}}\) q^2 = G × M[E] × M[M] × 4πε[0] = 320.97 × 10^25 q = \(\sqrt { 320.97\times { 10 }^{ 25 } } \) = 5.665 x 10^13 = 5.67 x 10^13 C (b) The distance between the Moon and Earth is so q = 5.67 x 10^13 C There is no change. Question 5. Draw the free body diagram for the following charges as shown in figure (a), (b) and (c). Question 6. Consider an electron travelling with a speed V[Φ] and entering into a uniform electric field \(\vec { E } \) which is perpendicular to \(\overrightarrow{\mathrm{V}_{0}}\) as shown in the Figure. Ignoring gravity, obtain the electron’s acceleration, velocity and position as functions of time. Speed of an electron = V[0] Uniform electric field = \(\vec { E } \) (а) Electron’s acceleration: Force on electron due to uniform electric field, F = Ee Downward acceleration of electron due to electric field, a = \(\frac { F }{ m }\) = – \(\frac { eE }{ M }\) Vector from, \(\vec { a } \) = – \(\frac { eE }{ M }\) \(\hat{j} \) (b) Electron’s velocity: Speed of electron in horizontal direction, u = V[0] From the equation of motion, V = u + at V = V[0] \(\frac { eE }{ M }\) t Vector from \(\vec { V } \) = V[0] \(\hat{j} \) – \(\frac { eE }{ M }\) t \(\hat{j} \) (c) Electron’s position: Position of electron, s = r From equation of motion, r = V[0] t + \(\frac { 1 }{ 2 }\) \(\left(-\frac{e \mathrm{E}}{\mathrm{M}}\right)\) t^2 r = V[0] t + \(\frac { 1 }{ 2 }\) \(\frac { eE }{ M }\) t^2 \(\hat{j} \) Vector from, \(\vec { r } \) = V[0] t \(\hat{j} \) \(\frac { 1 }{ 2 }\) \(\frac { eE }{ M }\) t^2 \(\hat{j} \) Question 7. A closed triangular box is kept in an electric field of magnitude E = 2 × 10^3 N C^-1 as shown in the figure. Calculate the electric flux through the (a) vertical rectangular surface (b) slanted surface and (c) entire surface. Electric field of magnitude E = 2 × 10^3 NC^-1 (a) Vertical rectangular surface: Rectangular area A= 5 × 10^-2 × 15 × 10^-2 A = 75 × 10^-24 m^2 θ = 180° ⇒ cos 180° = -1 Electric flux, Φ[v.s] = EA cos θ = 2 × 10^3 × 75 × 10^-4 × cos 180° = -150 × 10^-1 Φ[v.s] = -15 Nm^2 C^-1 (b) Slanted surface: cos θ = cos 60° = 0.5 sin θ = sin 30° = \(\frac { Opposite }{ hyp }\) hyp = \(\frac {{ 5 × 10 }^{2}}{ 0.5 }\) hyp = 0.1m Area of slanted surface A[2] = (0.1 × 15 × 10^-2) A[2] = 0.015 M^2 Electric flux, Φ[v.s] = EA = cos θ = 2 × 10^3 × 0.015 × cos 60° = 2 × 10^3 × 0.015 × 10^3 = 0.015 × 10^3 Φ[v.s] = 15 Nm^2 C^-1 Horizontal surface θ = 90° ; cos 90° = 0 Electric flux, Φ[H.S] = E. A[3] Cos 90° = 0 (c) Entire surface: Φ[Total] = Φ[V.S] + Φ[S.S] + Φ[H.S] = -15 + 15 + 0 Φ[Total] = 0 Question 8. The electrostatic potential is given as a function of x in figure (a) and (b). Calculate the corresponding electric fields in regions A, B, C and D. Plot the electric field as a function of x for the figure (b). The relation between electric field and potential E = – \(\frac { dv }{ dx }\) (a) Region A : dv = -3V ; dx = 0.2 m Electric field, E[A] = \(\frac { (-3) }{ 0.2 }\) = 15 V m^-1 Region B: dv = 0V ; dx = 0.2 m Electric field, E[B] = \(\frac { 0 }{ 0.2 }\) = 0 Region C: dv = 2V ; dx = 0.2 m Electric field, E[C] = \(\frac { -2 }{ 0.2 }\) = 10 V m^-1 Region D: dv = -6V ; dx = 0.2 m Electric field, E[D] = \(-\left(\frac{-6}{0.2}\right)\) = 10 V m^-1 = 30 V m^-1 Electric field, E[A] = 15 V m^-1 Electric field, E[B] = 0 Electric field, E[C] = \(\frac { (-3) }{ 0.2 }\) = 10 V m^-1 Electric field, E[D] = 30 V m^-1 Question 9. A spark plug in a bike or a car is used to ignite the air-fuel mixture in the engine. It consists of two electrodes separated by a gap of around 0.6 mm gap as shown in the figure. To create the spark, an electric field of magnitude 3 x 10^6Vm^-1 is required, (a) What potential difference must be applied to produce the spark? (b) If the gap is increased, does the potential difference increase, decrease or remains the same? (c) find the potential difference if the gap is 1 mm. Separation gap between two electrodes, d = 0.6 mm d = 0.6 × 10^-3 m Magnetude of electric field Electric field = E = 3 × 10^6 V m^-1 Electric field E = \(\frac { V }{ d }\) (a) Applied potential difference, V = E . d = 3 × 10^6 × 0.6 10^-13 = 1.8 × 10^3 V = 1800 V (b) From equation, V = E . d If the gap (distance) between the electrodes increased, the potential difference also increases. (c) Gap between the electrodes, d = 1mm = 1 x 10^-3 m Potential difference, V = E.d = 3 × 10^6 × 1 × 10^-3 = 3 × 103 V = 3000 V Question 10. A point charge of +10 μC is placed at a distance of 20 cm from another identical point charge of +10 μC. A point charge of -2 μC is moved from point a to b as shown in the figure. Calculate the change in potential energy of the system? Interpret your result. q[1] = 10μC = 10 x 10^-6 C q[2] = 2μC = -2 x 10^-6 C distance, r = 5cm = 5 x 10^-2 m Change in potential energy, = -36 × 1 × 10^9 × 10^-12 × 10^2 = -36 × 10^-1 ∆ U = -3.6 J Negative sign implies that to move the charge -2pC no external work is required. System spends its stored energy to move the charge from point a to point b. ∆ U = -3.6 J, negative sign implies that to move the charge -2μC no external work is required. System spends its stored energy to move the charge from point a to point b. Question 11. Calculate the resultant capacitances for each of the following combinations of capacitors. Parallel combination of capacitor 1 and 2 C[p] = C[0] + C[0] = 2C[0] Series combination of capacitor C[p] and 3 \(\frac { 1 }{{ C }_{S}}\) = \(\frac { 1 }{{ C }_{p}}\) + \(\frac { 1 }{{ C }_{3}}\) = \(\frac { 1 }{{ 2C }_{0}}\) + \(\frac { 1 }{{ C }_{0}}\) = (or) \(\frac { 1 }{{ C }_{S}}\) = \(\frac { 3 }{ 2 } \) C[0 ](or)C[S ]= \(\frac { 2 }{ 3 }\) C[0 ] \(\frac { 1 }{ { C }_{ { S }_{ 1 } } } \) = \(\frac { 1 }{{ C }_{1}}\) + \(\frac { 1 }{{ C }_{2}}\) = \(\frac { 1 }{{ C }_{0}}\) + \(\frac { 1 }{{ C }_{0}}\) = \(\frac { 1 }{{ C }_{0}}\) (or) \(\frac { 1 }{ { C }_{ { S }_{ 1 } } } \) = \(\frac { 2 }{{ C }_{0}}\) (or) \({ C }_{ { S }_{ 1 } }\) = \(\frac {{ C }_{0}}{ 2 }\) Similarly 3 and 4 are series combination \(\frac { 1 }{ { C }_{ { S }_{ 2 } } } \) = \(\frac { 1 }{{ C }_{3}}\) + \(\frac { 1 }{{ C }_{4}}\) = \(\frac { 1 }{{ C }_{0}}\) + \(\frac { 1 }{{ C }_{0}}\) = \(\frac { 2 }{{ C }_{0}}\) (or) \({ C } _{ { S }_{ 2 } }\) = \(\frac {{ C }{0}}{ 2 }\) \({ C }_{ { S }_{ 1 } }\) and \({ C }_{ { S }_{ 2 } }\) are in parallel combination C[p] = \({ C }_{ { S }_{ 1 } }\) + \({ C }_{ { S }_{ 2 } }\) = \(\frac {{ C }_{0}}{ 2 }\) + \(\frac {{ C }_{0}}{ 2 }\) (or) C[p] = \(\frac {{ 2C }_{0}}{ 2 }\) C[p] = C[0] (c) Capacitor 1, 2 and 3 are in parallel combination C[p] = C[0] + C[0] + C[0] = 3C[0] C[p] = 3C[0] (d) Capacitar C[1] and C[2] are in combination Similarly C[3] and C[4] are in series combination \({ C }_{ { S }_{ 1 } }\) and \({ C }_{ { S }_{ 2 } }\) are in parallel combination across RS: (e) Capacitor 1 and 2 are series combination Similarly 3 and 4 are series combination \(\frac { 1 }{ { C }_{ { S }_{ 2 } } } \) = \(\frac { 2 }{{ C }_{0}}\) (or) \({ C }_{ { S }_{ 2 } }\) = \(\frac {{ C }_{0}}{ 2 }\) Three capacitors are in parallel combination Question 12. An electron and a proton are allowed to fall through the separation between the plates of a parallel plate capacitor of voltage 5 V and separation distance h = 1 mm as shown in the figure. (a) Calculate the time of flight for both electron and proton (b) Suppose if a neutron is allowed to fall, what is the time of flight? (c) Among the three, which one will reach the bottom first? (Take m[p] = 1.6 x 10^-27 kg, m[e]= 9.1 x 10^-31 kg and g = 10 m s^-2) Potential difference between the parallel plates V = 5 V Separation distance, h = 1 mm =1 x 10^-3 m Mass of proton, mp = 1.6 x 10^-27 kg Mass of proton, m =9.1 x 10^-31 kg Charge of an a proton (or) electron, e— 1.6 x 10^-19 C [u = 0; s = h] From equation of motion, S = ut + \(\frac { 1 }{ 2 }\) at^2 From equation of motion, h = \(\frac { 1 }{ 2 }\) at^2 t = \(\sqrt { \frac { 2h }{ a } } \) Acceleration of an electron due to electric field, a = \(\frac { F }{ m }\) = \(\frac { eE }{ m }\) [E = \(\frac { V }{ d }\)] (a) Time of flight for both electron and proton, t[p] = 63 ns……. (2) (b) time of flight of neutron t[n] = \(\sqrt { \frac { 2h }{ g } } \) = \(\sqrt{\frac{2 \times 1 \times 10^{3}}{10}}\) = \(\sqrt{0.2 \times 10^{-3}}\) t[n] = 0.0141 s = 14.1 x 10^-3 s t[n] = 14.1 x 10^-3 ms ……. (3) (c) Compairision of values 1,2 and 3. The electron will reach the bottom first. Question 13. During a thunderstorm, the movement of water molecules within the clouds creates friction, partially causing the bottom part of the clouds to become negatively charged. This implies that the bottom of the cloud and the ground act as a parallel plate capacitor. If the electric field between the cloud and ground exceeds the dielectric breakdown of the air (3 x 10^6 Vm^-1 ), lightning will occur. (a) If the bottom part of the cloud is 1000 m above the ground, determine the electric potential difference that exists between the cloud and ground. (b) In a typical lightning phenomenon, around 25C of electrons are transferred from cloud to ground. How much electrostatic potential energy is transferred to the ground? (a) Electric field between the cloud and ground, V = E.d V= 3 x 10^6 x 1000 = 3 x 10^9V (a) Electrons transfered from cloud to ground, q = 25 C Electron static potential energy, U = \(\frac { 1 }{ 2 }\) CV^2 [C = \(\frac { q }{ V }\)] = \(\frac { 1 }{ 2 }\) qV = \(\frac { 1 }{ 2 }\) x 25 x 3 x 10^9 U = 37.5 x 10^9 J Question 14. For the given capacitor configuration (a) Find the charges on each capacitor (b) potential difference across them (c) energy stored in each capacitor. Capacitor b and c in parallel combination C[p] = C[b] + C[c] = (6 + 2) μF = 8 μF Capacitor a, c[p] and d are in series combination, so the resulatant copacitance \(\frac { 1 }{{ C }_{s}}\) = \(\frac { 1 }{{ C }_{a}}\) + \(\frac { 1 }{{ C }_{cp}}\) + \(\frac { 1 }{{ C }_{d}}\) = \(\frac { 1 }{ 8 }\) + \(\frac { 1 }{ 8 }\) + \(\frac { 1 }{ 8 }\) = \(\frac { 3 } { 8 }\) C[s] = \(\frac { 8 }{ 3 }\) μF (a) Charge on each capacitor, Charge on capacitor a, Q[a] = C[s] V = \(\frac { 8 }{ 3 }\) x 9 Q[a] = 24 μC Charge on capacitor, d, Q[d] = C[s] V = \(\frac { 8 }{ 3 }\) x 9 Q[d] = 24 μC Capacitor b and c in parallel Charge on capacitor, b, Q[b] = \(\frac { 6 }{ 3 }\) x 9 = 18 Q[b] = 18 μC Charge on capacitor, c, Q[c] = \(\frac { 2 }{ 3 }\) x 9 = 6 Q[c] = 6 μC (b) Potential difference across each capacitor, V = \(\frac { q }{ C }\) Capacitor C[a], V[a] = \(\frac{ { q }_{a}}{{ C }_{a}}\) = \(\frac {{ 24 × 10 }^{6}}{{ 8 × 10 }^{6}}\) = 3 V Capacitor C[b], V[b] = \(\frac{ { q }_{b}}{{ C }_{b}}\) = \(\frac {{ 18 × 10 }^{6}}{{ 6 × 10 }^{6}}\) = 3 V Capacitor C[c], V[c] = \(\frac{ { q }_{c}}{{ C }_{c}}\) = \(\frac {{ 6 × 10 }^{6}}{{ 2 × 10 }^{6}}\) = 3 V Capacitor C[d], V[d] = \(\frac{ { q }_{d}}{{ C }_{d}}\) = \(\frac {{ 24 × 10 }^{6}}{{ 8 × 10 }^{6}}\) = 3 V (c) Energy stores in a capacitor, U = \(\frac { 1 }{ 2 }\) CV^2 Energy in capacitor C[a], U[a] = \(\frac { 1 }{ 2 }\) C[a] \({ V }_{ a }^{ 2 }\) = \(\frac { 1 }{ 2 }\) x 8 x 10^-6 x (3)^2 U[a] = 36 μJ Capacitor C[b], U[b] = \(\frac { 1 }{ 2 }\) C[b] \({ V }_{ b }^{ 2 }\) = \(\frac { 1 }{ 2 }\) x 6 x 10^-6 x (3)^2 U[a] = 27 μJ C[c], U[c] = \(\frac { 1 }{ 2 }\) C[c] \({ V }_{ c }^{ 2 }\) = \(\frac { 1 }{ 2 }\) x 2 x 10^-6 x (3)^2 U[a] = 9 μJ C[d], U[d] = \(\frac { 1 }{ 2 }\) C[d] \({ V }_{ d }^{ 2 }\) = \(\frac { 1 }{ 2 }\) x 8 x 10^-6 x (3)^2 U[a] = 36 μJ Question 15. Capacitors P and Q have identical cross-sectional areas A and separation d. The space between the capacitors is filled with a dielectric of dielectric constant as shown in the figure. Calculate the capacitance of capacitors P and Q. Cross-sectional area of parallel plate capacitor = A Each area of different medium between parallel plate capacitor = \(\frac { A }{ 2 }\) Separation distance = d Capacitance of parallel plate capacitor, C = \(\frac { εA }{ d }\) Air medium of dielectric constant, ε[r] = 1 dielectric medium of dielectric constant = ε[r] Case 1: Capacitance of air filled capacitor Capacitance of dielectric-filled capacitor Capacitance of parallel plate capacitor Case 2: Each distance of different medium between the parallel plate capacitor = \(\frac { d }{ 2 }\) Capacitance of dielectric-filled capacitor Capacitance of air filled capacitor, Capacitance of parallel plate capacitor, Samacheer Kalvi 12th Physics Electrostatics Additional Questions Solved I. Multiple Choice Questions Question 1. When a solid body is negatively charged by friction, it means that the body has (a) acquired excess of electrons (b) lost some, problems (c) acquired some electrons and lost a lesser number of protons (d) lost some positive ions (a) acquired excess of electrons Question 2. A force of 0.01 N is exerted on a charge of 1.2 x 10^-5 G at a certain point. The electric field at that point is (a) 5.3 x 10^4 NC^-1 (b) 8.3 x 10^-4 NC^-1 (c) 5.3 x 10^2 NC^-1 (d) 8.3 x 10^4 NC^-1 (d) 8.3 x 10^4 NC^-1 E = \(\frac { F }{ q }\) = \(\frac { 0.01 }{{ 1.2 × 10 }^{-5}}\) = 8.3 x 10^2 NC^-1 Question 3. The electric field intensity at a point 20 cm away from a charge of 2 x 10 5 C is (a) 4.5 x 10^6 NC^-1 (b) 3.5 x 10^5 NC^-1 (c) 3.5 x 10^6 NC^-1 (d) 4.5 x 10^5 NC^-1 (a) 4.5 x 10^6 NC^-1 E = \(\frac{q}{4 \pi \varepsilon_{0} r^{2}}\) = \(\frac{9 \times 10^{9} \times 2 \times 10^{-5}}{(0.2)^{2}}\) = 4.5 x 10^6 NC^-1 Question 4. How many electrons will have a charge of one coulomb? (a) 6.25 x 10^18 (b) 6.25 x 10^19 (c) 1.6 x 10^18 (d) 1.6 x 10^19 (a) 6.25 x 10^18 Number of electron, n = \(\frac { q }{ e }\) = \(\frac { 1 }{{ 1.6 × 10 }^{-19}}\) = 6.25 × 10^18 Question 5. The ratio of the force between two charges in air and that in a medium of dielectric constant K is (a) K : 1 (b) 1 : K (c) K^2 : 1 (d) 1 : K^2 (a) K : 1 Question 6. The work done in moving a positive charge on an equipotential surface is (a) finite and positive (b) infinite (c) finite and negative (d) zero (d) zero Question 7. If a charge is moved against the Coulomb force of an electric field. (a) work is done by the electric field (b) energy is used from some outside source (c) the strength of the field is decreased (d) the energy of the system is decreased (b) energy is used from some outside source Question 8. No current flows between two charged bodies when connected (a) if they have the same capacitance (b) if they have the same quantity of charge (c) if they have the same potential (d) if they have the same charge density (c) if they have the same potential Question 9. Electric field lines about a negative point charge are (a) circular, anticlockwise (b) circular, clockwise (c) radial, inwards (d) radial, outwards (c) radial, inwards Question 10. Two plates are 1 cm apart and the potential difference between them is 10 V. The electric field between the plates is (a) 10 NC^-1 (b) 250 NC^-1 (c) 500 N^-1 (d) 1000 NC^-1 (d) 1000 NC^-1 E = \(\frac { V }{ d }\) = \(\frac { 10 }{{ 1 × 10 }^{-2}}\) = 8.3 x 10^2 NC^-1 Question 11. At a large distance (r), the electric field due to a dipole varies as (a) \(\frac { 1 }{ r }\) (b) \(\frac { 1 }{{ r }^{2}}\) (c) \(\frac { 1 }{{ r }^{3}}\) (d) \(\frac { 1 }{{ r }^{4}}\) (c) \(\frac { 1 }{{ r }^{3}}\) Question 12. Two thin infinite parallel plates have uniform charge densities +c and -σ. The electric field in the space between then is (a) \(\frac { σ }{{ 2ε }_{0}}\) (b) \(\frac { σ }{{ ε }_{0}}\) (c) \(\frac { 2σ }{{ 2ε }_{0}}\) (d) Zero (b) \(\frac { σ }{{ ε }_{0}}\) Question 13. Two isolated, charged conducting spheres of radii R[1], and R[2] produce the same electric field near their surfaces. The ratio of electric potentials on their surfaces is- (a) \(\frac {{ R }_{1}}{{ R }_{2}}\) (b) \(\frac {{ R }_{2}}{{ R }_{1}}\) (c) \(\frac { { R }_{ 1 }^{ 2 } }{ { R }_{ 2 }^{ 2 } } \) (d) \(\frac { { R }_{ 2 }^{ 2 } }{ { R }_{ 1 }^{ 2 } } \) (b) \(\frac {{ R }_{2}}{{ R }_{1}}\) Question 14. A 100 μF capacitor is to have an energy content of 50 J in order to operator a flash lamp. The voltage required to charge the capacitor is (a) 500 V (b) 1000 V (c) 1500 V (d) 2000 V (b) 1000 V Question 15. A 1 μF capacitor is placed in parallel with a 2 μF capacitor across a 100 V supply. The total charge on the system is (a) \(\frac { 100 }{ 3 }\) μC (b) 100 μC (c) 150 μC (d) 300 μC (d) 300 μC Equivalent capacitor = 1 + 2 = 3 μF Total charge, q = CV = 3 x 100 = 300 μF Question 16. A parallel plate capacitor of capacitance 100 μF is charged to 500 V. The plate separation is then reduced to half its original value. Then the potential on the capacitor becomes (a) 250 V (b) 500 V (c) 1000V (d) 2000 V (a) 250 V Here, C’ = 2C, since the charge remains the same. q = C’V’ = CV ⇒ V = \(\frac { CV }{ 2C }\) = \(\frac { 500 }{ 2 }\) = 250 V Question 17. A point charge q is placed at the midpoint of a cube of side L. The electric flux emerging from the cube is ‘ (a) \(\frac { q }{{ ε }_{0}}\) (b) \(\frac { q }{{ 6Lε }_{0}}\) (c) \(\frac { 6Lq }{{ ε }_{0}}\) (d) zero (a) \(\frac { q }{{ ε }_{0}}\) Question 18. The capacitor C of a spherical conductor of radius R is proportional to (a) R^2 (b) R (c) R^-1 (d) R^0 (b) R Question 19. Energy of a capacitor of capacitance C, when subjected to a potential V, is given by (a) \(\frac { 1 }{ 2 }\) CV^2 (b) \(\frac { 1 }{ 2 }\) C^2V (c) \(\frac { 1 }{ 2 }\) CV (d) \(\frac { 1 }{ 2 }\) \(\frac { C }{ V }\) (a) \(\frac { 1 }{ 2 }\) CV^2 Question 20. The electric field due to a dipole at a distance r from its centre is proportional to (a) \(\frac { 1 }{{ r }^{3/2}}\) (b) \(\frac { 1 }{{ r }^{3}}\) (c) \(\frac { 1 }{ r }\) (d) \(\frac { 1 }{{ r }^{3}}\) (b) \(\frac { 1 }{{ r }^{3}}\) Question 21. A point charge q is rotating around a charge Q in a circle of radius r. The work done on it by the Coulomb force is (a) 2πrq (b) 2πQq (c) \(\frac { Q }{{ 2ε }^{0}r}\) (d) zero (d) zero Question 22. The workdone in rotating an electric dipole of moment P in an electric field E through an angle 0 from the direction of the field is (a) pE (1 – cos θ) (b) 2pE (c) zero (d) -pE cos θ (a) pE (1 – cos θ) W = pE(cos θ[0] – cos θ) [θ[0] = cos 0, cos 0 = 1] W = pE(1 – cos θ) Question 23. The capacitance of a parallel plate capacitor can be increased by (a) increasing the distance between the plates (b) increasing the thickness of the plates (c) decreasing the thickness of the plates (d) decreasing the distance between the plates (d) decreasing the distance between the plates Question 24. Two charges are placed in vacuum at a distance d apart. The force between them is F. If a medium of dielectric constant 2 is introduced between them, the force will now be (a) 4F (b) 2F (c) F/2 (d) F/4 (d) F/4 Question 25. An electric charge is placed at the centre of a cube of side a. The electric flux through one of its faces will be (a) \(\frac { q }{{ 6ε }^{0}}\) (b) \(\frac { q }{ { ε }_{ 0 }{ a }^{ 2 } } \) (c) \(\frac { q }{ { 4πε }_{ 0 }{ a }^{ 2 } } \) (a) \(\frac { q }{{ ε }^{0}}\) (a) \(\frac { q }{{ 6ε }^{0}}\) According to Gauss’s law, the electric flux through the cube is \(\frac { q }{{ ε }^{0}}\). Since there are six faces, the flux through one face is \(\frac { q }{{ 6ε }^{0}}\). Question 26. The electric field in the region between two concentric charged spherical shells- (a) is zero (b) increases with distance from centre (c) is constant (d) decreases with distance from centre (d) decreases with distance from centre Question 27. A hollow metal sphere of radius 10 cm is charged such that the potential on its surface is 80V. The potential at the centre of the sphere is- (a) 800 V (b) zero (c) 8 V (d) 80 V (d) 80 V Question 28. A 4 μF capacitor is charged to 400 V and then its plates are joined through a resistance of 1 K Ω. The heat produced in the resistance is- (a) 0.16 J (b) 0.32 J (c) 0.64 J (d) 1.28 J (b) 0.32 J The energy stored in capacitor is converted into heat U = H = \(\frac { 1 }{ 2 }\) CV^2 = \(\frac { 1 }{ 2 }\) x 4 x 10^-6 x (400)^2 = 0.32 J Question 29. The work done in carrying a charge Q, once round a circle of radius R with a charge Q[2] at the centre is- (a) \(\frac{\mathrm{Q}_{1} \mathrm{Q}_{2}}{4 \pi \varepsilon_{0} \mathrm{R}^{2}}\) (b) zero (c) \(\frac{\mathrm{Q}_{1} \mathrm{Q}_{2}}{4 \pi \varepsilon_{0} \mathrm{R}}\) (d) infinite (b) zero The electric field is conservative. Therefore, no work is done in moving a charge around a closed path in an electric field. Question 30. Two plates are 2 cm apart. If a potential difference of 10 V is applied between them. The electric field between the plates will be (a) 20 NC^-1 (b) 500 NC^-1 (c) 5 NC^-1 (d) 250 NC^-1 (b) 500 NC^-1 \(\frac { V }{ d }\) = \(\frac { 10 }{{ 2 ×10 }^{-2}}\) 500 NC^-1 Question 31. The capacitance of a parallel plate capacitor does not depend on (a) area of the plates (b) metal of the plates (c) medium between the plates (d) distance between the plates (b) metal of the plates Question 32. A capacitor of 50 μF is charged to 10 volts. Its energy in joules is (a) 2.5 x 10^-3 (b) 5 x 10^-3 (c) 10 x 10^-4 (d) 2.5 x 10^-4 (a) 2.5 x 10^-3 U = \(\frac { 1 }{ 2 }\) CV^2 = \(\frac { 1 }{ 2 }\) x 50 x 10^-6 x (10)^2 = 2.5 x 10^-3 J Question 33. A cube of side b has a charge q at each of its vertices. The electric field due to this charge distribution at the centre of the cube is (a) \(\frac { q }{{b}^{ 2 }}\) (b) \(\frac { q }{{2b}^{ 2 }}\) (c) \(\frac { 32q }{{b}^{ 2 }}\) (d) zero Answer:(d) zero is an equal charge at diagonally opposite comer. The fields due the these at the centre cancel out. Therefore, the net field at the centre is zero. Question 34. Total electric fulx coming out of a unit positive charge put in air is (a) ε[0] (b) \({ \varepsilon }_{ 0 }^{ -1 }\) (c) (4πε[0])^-1 (d) 4πε[0] (b) \({ \varepsilon }_{ 0 }^{ -1 }\) Question 35. Electron volt (eV) is a unit of (a) energy (b) potential (c) current (d) charge (a) energy Question 36. A point Q lies on the perpendicular bisector of an electric dipole of dipole moment P. If the distance of Q from the dipole is r, then the electric field at Q is proportional to- (a) p^-1 and r^-2 (b) p and r^-2 (c) p and r^-3 (d) p^2 and r^-3 (c) p and r^-3 Question 37. A hollow insulated conducting sphere is given a positive charge of 10 μC. What will be the electric field at the centre of the sphere is its radius is 2 metres? (a) zero (b) 8 μCm^-2 (c) 20 μCm^-2 (d) 5 μCm^-2 (d) zero Question 38. A particle of charge q is placed at rest in a uniform electric field E and then released. The kinetic energy attained by the particle after moving a distance y is- (a) qE^2y (b) q^2Ey (c) qEy^2 (d) qEy (d) qEy Force on the particle = qE KE = Work done by the force = F.y = qEy Question 39. Dielectric constant of metals is- (a) 1 (b) greater then 1 (c) zero (d) infinite (d) infinite Question 40. When a positively charged conductor is earth connected (a) protons flow from the conductor to the earth (b) electrons flow from the earth to the conductor (c) electrons flow from the conductor to the earth (d) no charge flow occurs (b) electrons flow from the earth to the conductor Question 41. The SI unit of electric flux is (a) volt metre^2 (b) newton per coulomb (c) volt metre (d) joule per coulomb (c) volt metre Question 42. Twenty seven water drops of the same size are charged to the same potential. If they are combined to form a big drop, the ratio of the potential of the big drop to that of a small drop is- (a) 3 (b) 6 (c) 9 (d) 27 (c) 9 V’ = n^2/3 V ⇒ \(\frac { V’ }{ V }\) = (27)^2/3 = 9 Question 43. A point charge +q is placed at the midpoint of a cube of side l. The electric flux emerging ’ from the cube is- (a) \(\frac { q }{{ ε }^{0}}\) (b) \(\frac {{ 6ql }^{2}}{{ ε }^{0}}\) (c) \(\frac { q }{ { 6l }^{ 2 }{ { ε }^{ 0 } } } \) (d) \(\frac { { C }^{ 2 }{ V }^{ 2 } }{ 2 } \) (a) \(\frac { q }{{ ε }^{0}}\) Question 44. The energy stored in a capacitor of capacitance C, having a potential difference V between the plates, is- Question 45. The electric potential at the centre of a charged conductor is- (a) zero (b) twice that on the surface (c) half that on the surface (d) same as that on the surface (d) same as that on the surface Question 46. The energy stored in a capacitor is given by (a) qV (b) \(\frac { 1 }{ 2 }\)qV (c) \(\frac { 1 }{ 2 }\) CV (d) \(\frac { q }{ 2C }\) (b) \(\frac { 1 }{ 2 }\)qV Question 47. The unit of permitivity of free space so is (a) coulomb/newton-metre (b) newton-metre^2/coulomb^2 (c) coulomb^2/newton-metre^2 (d) coulomb/(newton-metre)^2 (c) coulomb^2/newton-metre^2 Question 48. An electric dipole has the magnitude of its charge as q and its dipole moment is p. It is placed in a uniform electric field E. If its dipole moment is along the direction of the field, the force on it and its potential energy are, respectively. (a) 2qE and minimum (b) qE and pE (c) zero and minimum (d) qE and maximum (c) zero and minimum Potential energy, U = -pE cos θ For q = 0°; U = -pE, which is minimum. Question 49. An electric dipole of moment \(\vec { P } \) is lying along a uniform electric field \(\vec { E } \) . The workdone in rotating the dipole by 90° is (a) \(\frac { pE }{ 2 }\) (b) 2pE (c) pE (d) √2pE (c) pE Question 50. A parallel plate air capacitor is charged to a potential difference of V volts. After disconnecting the charging battery the distance between the plates of the capacitor is increased using an insulating handle. As a result the potential difference between the plates (a) does not charge (b) becomes zero (c) increases (d) decreases (c) increases Question 51. When air is replaced by a dielectric medium of constant K, the maximum force of attraction between two charges separated by a distance (a) increases K times (b) increases K^-1 times (c) decreases K times (d) remains constant (c) decreases K times Question 52. A comb runs through one’s dry hair attracts small bits of paper. This is due to the fact that (a) comb is a good conductor (b) paper is a good conductor (c) the atoms in the paper get polarised by the charged comb (d) the comb posseses magnetic properties (c) the atoms in the paper get polarised by the charged comb Question 53. Which of the following is not a property of equipotential surfaces? (a) they do not cross each other (b) they are concentric spheres for uniform electric field (c) the rate of change of potential with distance on them is zero (d) they can be imaginary spheres. (b) they are concentric spheres for uniform electric field Question 54. A charge Q is enclosed by a Gaussian spherical surface of radius R. If the radius is doubled, then the outward electric flux will be (a) reduced to half (b) doubled (c) becomes 4 times (d) remains the same (d) remains the same Question 55. If the electric field in a region is given by \(\vec { E } \) = 5\(\hat{j} \) + 4\(\hat{j} \) + 9\(\hat{k} \) , then the electric flux through a surface of area 20 units lying in the y-z plane will (a) 20 units (b) 80 units (c) 100 units (d) 180 units (c) 100 units The area vector \(\vec { A } \) = 20\(\hat{j} \); \(\vec { E } \) = (5\(\hat{j} \) + 4\(\hat{j} \) + 9\(\hat{k} \)) Flux (Φ) = \(\vec { E } \) – \(\vec { A } \) = 5 x 20 =100 units Question 56. A, B and C are three points in a uniform electric field. The electric potential is- (a) maximum at A (b) maximum at B (c) maximum at B (d) same at all the three points A, B, and C (b) maximum at B The potential decreases in the direction of the field. Therefore V[B] > V[C]>C[A]. Question 57. A conducting sphere of radius R is give a charge Q. The electric potential and the electric field at the centre of the sphere are, respectively- (a) zero, \(\frac { Q }{ { 4\pi ε }_{ 0 }{ R }^{ 2 } } \) (b) \(\frac { Q }{ { 4\pi ε }_{ 0 }{ R } } \) (c) \(\frac { Q }{ { 4\pi ε }_{ 0 }{ R } } \), zero (d) zero,zero (c) \(\frac { Q }{ { 4\pi ε }_{ 0 }{ R } } \), zero. II. Fill in the blanks Question 1. A dipole is placed in a uniform electric field with its axis parallel to the field. It experiences ………………… neither a net force nor a torque Question 2. The unit of permittivity is………………… Question 3. The branch of physics which deals with static electric charges or charges at rest is ………………… Question 4. The charges in an electrostatic field are analogous to ………………… in a gravitational field. Question 5. The substances which acquire charges on rubbing are said to be ………………… Question 6. Electron means ………………… Question 7. A glass rod rubbed with a silk cloth. Glass rod and silk cloth acquire………………… positive and negative charge respectively Question 8. When ebonite rod is rubbed with fur, ebonite rod and fur acquires ………………… negative and positive charge respectively Question 9. ………………… termed the classification of positive and negative charges. Question 10. Applications such as electrostatic point spraying and powder coating, are based on the property of ………………… between charged bodies. attraction and repulsion Question 11. Bodies which allow the charge to pass through them are called ………………… Question 12. Bodies which do not allow the charge to pass through them are called ………………… Question 13. The unit of electric charge is ………………… Question 14. Total charge in an isolated system ………………… remains a constant Question 15. The force between two charged bodies was studied by ………………… Question 16. The unit of permittivity in free space (s0) is ………………… Question 17. The value of s, for air or vacuum is ………………… Question 18. Charges can neither be created nor be destroyed is the statement of the law of conservation of ………………… Question 19. The space around the test charge, in which it experiences a force is known as field ………………… Question 20. Electric field at a point is measured in terms of ………………… electric field intensity Question 21. The unit of electric field intensity is ………………… Question 22. The lines of force are far apart, when electric field E is ………………… Question 23. The lines of force are close together when electric field E is ………………… Question 24. Electric dipole moment ………………… P = 2qd Question 25. Torque experienced by electric dipole is ………………… x = PE sin θ Question 26. An electric dipole placed in a non-uniform electric field at an angle θ experiences ………………… both torque and force Question 27. When thee dipole is aligned parallel to the field, its electric potential energy is ………………… u = -PE Question 28. Change of potential with distance is known as ………………… potential distance Question 29. The number of electric lines of force crossing through the given area is ………………… electric flux Question 30. The process of isolating a certain region of space from the external field is called ………………… electrostatic shielding Question 31. A capacitor is a device to store ………………… Question 32. The charge density in maximum at ………………… Question 33. The principle made use of lightning arrestor is ………………… action of points Question 34. Van de Graaff generator producers large electrostatic potential difference of the order of ………………… 10^7 V III. Match the following Question 1. (i) → (d) (ii) → (a) (iii) → (b) (iv) → (c) Question 2. (i) → (c) (ii) → (d) (iii) → (a) (iv) → (b) Question 3. (i) → (b) (ii) → (d) (iii) → (a) (iv) → (c) Question 4. (i) → (b) (ii) → (d) (iii) → (a) (iv) → (c) IV. Assertion and reason type (a) If both assertion and reason are true and the reason is the correct explanation of the assertion. (b) If both assertion and reason are true but the reason is not correct explanation of the assertion. (c) If the assertion is true but the reason is false. (d) If the assertion and reason both are false. (e) If the assertion is false but the reason is true. Question 1. Assertion: Electric lines of force cross each other. Reason: Electric field at a point superimposed to give one resultant electric field. (e) Both assertion and reason are true but the reason is not the correct explanation of the assertion. Explanation: If electric lines of forces cross each other, then the electric field at the point of intersection will have two directions simultaneously which is not possible physically. Question 2. Assertion: Charge is quantized. Reason: Charge, which is less than 1 C is not possible. (c) If assertion is true but reason is false. Explanation: Q = ±ne and charge lesser than 1 C is possible. Question 3. A point charge is brought in an electric field. The field at a nearby point will increase, whatever be the nature of the charge. Reason: The electric field is independent of the nature of the charge. (d) If the assertion and reason both are false. Explanation: Electric field at the nearby-point will be resultant of the existing field and field due to the charge brought. It may increase or decrease if the charge is positive or negative depending on the position of the point with respect to the charge brought. Question 4. Assertion: The tyre’s of aircraft are slightly conducting. Reason: If a conductor is connected to the ground, the extra charge induced on the conductor will flow to the ground. (b) Both assertion and reason are true but the reason is not the correct explanation of the assertion. Explanation: During take-off and landing, the friction between treys and the runway may cause electrification of treys. Due to conducting to a ground and election sparking is avoided. Question 5. Assertion: The lightning conductor at the top of a high building has sharp ends. Reason: The surface density of charge at sharp points is very high, resulting in the setting up of electric wind. (a) Both assertion and reason are true and the reason is the correct explanation of the assertion. Samacheer Kalvi 12th Physics Electrostatics Short Answer Questions Question 1. What is meant by triboelectric charging? Charging the objects through rubbing is called triboelectric charging. Question 2. What is meant by the conservation of total charges? The total electric charge in the universe is constant and the charge can neither be created nor be destroyed. In any physical process, the net change in charge will always be zero. Question 3. State Gauss’s Law? Gauss’s law states that if a charge Q is enclosed by an arbitrary closed surface, then the total electric flux OE through the closed surface is Φ[E] = \(\oint { \vec { E } } \) .d\(\vec { A } \) = \(\frac {{ q }_{encl}}{{ ε }_{0}}\) Question 4. What is meant by electrostatic shielding? During lightning accompanied by a thunderstorm, it is always safer to sit inside a bus than in open ground or under a tree. The metal body of the bus provides electrostatic shielding, since the electric field inside is zero. During lightning, the charges flow through the body of the conductor to the ground with no effect on the person inside that bus. Question 5. What is meant by dielectric? A dielectric is a non-conducting material and has no free electrons. The electrons in a dielectric are bound within the atoms. Ebonite, glass and mica are some examples of dielectrics. Question 6. What are non-polar molecules? Give examples. A non-polar molecule is one in which centers of positive and negative charges coincide. As a result, it has no permanent dipole moment. Examples of non-polar molecules are hydrogen (H[2]), oxygen (O [2]), and carbon dioxide (CO[2]) etc. Question 7. What are polar molecules? Give examples. In polar molecules, the centers of the positive and negative charges are separated even in the absence of an external electric field. They have a permanent dipole moment. The net dipole moment is zero in the absence of an external electric field. Examples of polar molecules are H[2]O, N[2]O, HCl, NH[3]. Question 8. What is a capacitor? A capacitor is a device used to store electric charge and electrical energy. Capacitors are widely used in many electronic circuits and have applications in many areas of science and technology. Samacheer Kalvi 12th Physics Electrostatics Long Answer Questions Question 1. Derive an expression for the electric field due to the system of point charges? Electric field due to the system of point charges: Suppose a number of point charges are distributed in space. To find the electric field at some point P due to this collection of point charges, the superposition principle is used. The electric field at an arbitrary point due to a collection of point charges is simply equal to the vector sum of the electric fields created by the individual point charges. This is called the superposition of electric fields. Consider a collection of point charges q[1], q[2], q[3],…., q[n] located at various points in space. The ‘ total electric field at some point P due to all these n charges is given by Here r[1p], r[2p], r[3p],…., r[np], are the distance of the charges [1], q[2], q[3],…., q[n] from the point respectively. Also \(\hat{r} \)[1p] + \(\hat{r} \)[2p] + \(\hat{r} \)[3p],…., \(\hat{r} \) [np] are the corresponding unit vectors directed from q[1], q[2], q[3],…., q[n] tpo P. Equation (2) can be re-written as, For example in figure, the resultant electric field due to three point charges q[1], q[2], q[3] at point P is shown. Note that the relative lengths of the electric field vectors for the charges depend on relative distantes of the charges to the point P. Question 2. Derive an expression for the electric flux of rectangular area placed in a uniform electric field. (i) Electric flux for uniform Electric field: Consider a uniform electric field in a region of space. Let us choose an area A normal to the electric field lines as shown in figure (a). The electric flux for this case is Φ[E] = EA ….. (1) Suppose the same area A is kept parallel to the uniform electric field, then no electric field lines pierce through the area A, as shown in figure (b). The electric flux for this case is zero. Φ[E] = 0 ….. (2) If the area is inclined at an angle θ with the field, then the component of the electric field perpendicular to the area alone contributes to the electric flux. The electric field component parallel to the surface area will not contribute to the electric flux. This is shown in figure (c). For this case, the electric flux ΦE = (E cosθ) A …(3) Further, θ is also the angle between the electric field and the direction normal to the area. Hence in general, for uniform electric field, the electric flux is defined as Φ[E]= \(\vec { E } \).\(\vec { A } \) = EA cos θ …(4) Here, note that \(\vec { A } \) is the area vector \(\vec { A } \) = A\(\hat{n} \). Its magnitude is simply the area A and the direction is along the unit vector h perpendicular to the area. Using this definition for flux, Φ[E]= \(\vec { E } \).\(\vec { A } \), equations (2) and (3) can be obtained as special cases. In figure (a), θ = 0° so Φ[E]= \(\vec { E } \).\(\vec { A } \) = EA In figure (b), θ = 90° so Φ[E]= \(\vec { E } \).\(\vec { A } \) = 0 (ii) Electric flux in a non uniform electric field and an arbitrarily shaped area: Suppose the electric field’is not uniform and the area A is not flat, then the entire area is divided into n small area segments ∆\(\vec { A } \)[1] ∆\(\vec { A } \)[2], ∆\(\vec { A } \)[3],…..∆\(\vec { A } \)[n], such that each area element is almost flat and the electric field over each area element is considered to be uniform. The electric flux for the entire area A is approximately written as By taking the limit ∆\(\vec { A } \)[1] → 0 (for all i) the summation in equation (5) becomes integration. The total electric flux for the entire area is given by Φ[E] = ∫\(\vec { E } \).d\(\vec { A } \) ….. (6) From Equation (6), it is clear that the electric flux for a given surface depends on both the electric field pattern on the surface area and the orientation of the surface with respect to the electric field. (iii) Electric flux for closed surfaces: In the previous section, the electric flux for any arbitrary curved surface is discussed. Suppose a closed surface is present in the region of the non-uniform electric field as shown in figure (a). The total electric flux over this closed surface is written as Φ[E] = \(\oint { \vec { E } } \).d\(\vec { A } \) …… (7) Note the difference between equations (6) and (7). The integration in equation (7) is a closed surface integration and for each areal element, the outward normal is the direction of d\(\vec { A } \) as shown in figure (b). The total electric flux over a closed surface can be negative, positive or zero. In figure (b), it is shown that in one area element, the angle between d\(\vec { A } \) and \(\vec { E } \) is less than 90°, then the electric flux is positive and in another areal element, the angle between dA and E is greater than 90°, then the electric flux is negative. In general, the electric flux is negative if the electric field lines enter the closed surface and positive if the electric field lines leave the closed surface. Samacheer Kalvi 12th Physics Electrostatics Numerical Problems Question 1. Electrons are caused to fall through a potential difference of 1500 volts. If they were initially at rest. Then calculate their final speed. The electrical potential energy is converted into kinetic energy. If v is the final speed then Question 2. Small mercury drops of the same size are charged to the same potential V. If n such drops coalesce to form a single large drop, then calculate its potential. Let r be the radius of a small drop and R that of the large drop. Then, since the volume remains conserved, \(\frac { 1 }{ 2 }\) πR^2 = \(\frac { 4 }{ 3 }\) πR^3n ⇒ R^3 = r^3n R = r^3(n)^1/3 Further, since the total charge remains conserved, we have, using Q = CV C[large] V = n C[small] v Where V is the potential of the large drop. 4πε[0] RV = n (4πε[0]r)v V = \(\frac { nrv }{ R }\) = \(\frac { nrv }{{ r(n) }^{1/3}}\) V = vn^2/3 Question 3. Two particles having charges Q[1] and Q[2] when kept at a certain distance, exert a force F on each other. If the distance between the two particles is reduced to half and the charge on each particle is doubled. Find the force between the particles. F = \(\frac { 1 }{{ 4πε }_{0}}\) \(\frac{\mathrm{Q}_{1} \mathrm{Q}_{2}}{r^{2}}\) If the distance is educed by half and two particles of charges are doubled. Question 4. Two charged spheres, separated by a distance d, exert a force F on each other. If they are immersed in a liquid of dielectric constant 2, then what is the force. Force between the charges (vacuum) Force between the charges (medium) Question 5. Find the force of attraction between the plates of a parallel plate capacitor. Let d be the distance between the plates. Then the capacitor is C = \(\frac { { \varepsilon }_{ 0 }A }{ d } \) Energy stored in a capacitor, Energy magnitude of the force is, Leave a Comment
{"url":"https://samacheerkalvi.guru/samacheer-kalvi-12th-physics-solutions-chapter-1/","timestamp":"2024-11-13T01:25:21Z","content_type":"text/html","content_length":"356118","record_id":"<urn:uuid:41edd3dd-88d6-4926-972a-fc8fa2abde4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00431.warc.gz"}
7 Iron Distance | Loft | Length (The Complete Guide) - Golf Storage Ideas7 Iron Distance | Loft | Length (The Complete Guide) | Golf Storage Ideas 7 Iron Distance | Loft | Length (The Complete Guide) What is a 7 Iron Golf Club Used For? The 7 iron golf club is classified as a mid-iron golf club and has a loft angle that ranges between 29 degrees to 33 degrees. The 7 iron can be used for a variety of shots which include off the tees and reaching long par 3s. The 7 iron offers versatile shots to the golfers and therefore is suitable for both beginners and professional golfers. It is an easy club to learn and can offer trajectory and distance greatly to the users. It is easy to hit 100 yards once you get comfortable using it. In order to improve the shots made with 7 irons, there are several steps that you can follow in order to get more accurate shots. It is better to point your body at the target and transfer your weight at the front leg for better shots. Moreover, it is ideal for playing the ball with a 7 iron by placing it in the middle of your feet. For experienced users, it is even easier to get used to the 7-iron golf club. The faster speed of the clubhead helps in covering more distance on course than the beginners. To increase the consistency of the shots made with the 7 iron, it is better to keep more weight on the front leg when the ball is being set up for the shot. Your backswing should reduce by half to the current swing speed and put down your arms as if they are dropping down on the ground. What Degree of Loft is a 7 Iron? The loft angle of each iron club varies for each manufacturer; hence, there isn’t a fixed loft angle. However, an average loft angle of the irons can be considered. The manufacturers even have changed the loft angles over the course of time. For example, previously the 7 iron was known to have a loft angle of 38 degrees, but it is now being replaced with a 34 degrees loft angle in the 7 iron. The loft angle greatly affects the distance; hence, with an increased loft angle, the manufacturers claim that it covers more distance on the course. The typical angle for a 7 iron is approximately 34 degrees. The loft of the club is much stronger if the cavity is large; hence the club is more forgiving. The design of the golf club is based on the idea that stronger lofts help in achieving greater distance on the course. However, for someone who likes to focus more on the shots than the distance, a weaker loft does the job as it has a smaller cavity and the golf club is less forgiving in contrast to the stronger loft. The main issue with an iron is that for every bent degree of angle, a degree of bounce is lost by the club. What is the Average Swing Speed for a 7 Iron? The swing speed for a 7 iron can vary between 80 to 83 mph. If the golfer is a flipper, then the swing speed will be greater than a player who plays square to square. To cover a distance of about 160 yards with a 7 iron, a swing speed of 86 mph is needed. A standard device for measuring swing speed doesn’t measure the speed of the clubhead, but instead, it measures the speed of the ball. The speed of the clubhead is a fixed calculation since the actual speed of the club heads is greater than the measured speed. The device uses the number from the ball speed to generate a value for the speed of the clubhead. This is why the actual speed of the clubhead is greater. The mph can be about 3-4 greater in real if you’re playing accurate shots with your 7 iron. With a greater loft angle, the speed of the clubhead is also greater due to the smash factor potential. If you’re covering an average distance of about 165 yards to 170 yards, then there is a likely chance that your swing speed varies between 86 to 88 mph. Hitting a 7 Iron 150 Yards With an accurate shot made with the 7 iron, the golfer can easily cover a distance ranging between 145 yards to 150 yards. The golfer can find the distance covered on the course to be short if his shots are not being consistent and accurate. This can affect the game flow. With a 7 iron, one disadvantage can be losing control over the shots if the ball has a lower flight. If you change your swing speed from a hard swing to a smooth swing, then the distance of the shots might be compromised, but the accuracy is greatly increased. To increase the distance covered on course with the 7-iron golf club, several factors need to be put into consideration. There are a few variables like the wind, temperature, and the loft angle that decide the distance covered by the club. For covering approximately 170 yards on the course, the hole should be uphill along with strong wind and an open stance. This can increase the yardage much easily. The distance can also be increased with a higher trajectory. What is the Average 7 Iron Ball Speed? The loft angle of the golf club greatly affects the speed of the ball. This means that it’s possible to get a greater ball speed with a low loft angle. It is possible to get a ball speed of 110 mph with the 7-iron golf club if the golfer makes an accurate shot. Due to different loft angles for the 7 iron of each manufacturer, the ball speed also varies accordingly. If the golfer can’t get a land angle on the 7-iron golf club, then it’s not very effective. To get the right shot, the 7 iron should then be adjusted and fixed. For each manufacturer, the ball speed with the 7-iron golf club is different. For example, with the Mizuno MP15, the ball speed can be 110 mph with the 7 iron while for the TaylorMade M4 it can go up to 120 mph. What is a Good Smash Factor for a 7 Iron? The average smash factor for a7 iron can range between 1.4 and 1.5. The smash factor has a direct link with the distance covered on the course and the swing and ball speed of the golfer. A higher smash factor can also result in less control over the distance covered along with the distance gaps. The fractional smash factor can be lost to different variables like the sound, heat, backspin, and compression. In the opinion of some golfers, the smash factor doesn’t affect the distance and is simply a ratio of the ball speed and the club speed. Each shot made with the 7 iron is based upon what the golfer puts into it. The different factors can be the spin, launch speed, speed of the clubhead, face angle, etc. However, none of these factors have anything to do with the smash factor of the golf club. If the smash factor of the 7-iron club is high for a golfer that means that his ball speed is also high; however, the other metrics like the spin, face angle are hosed. This can cause an inconsistency in the shots made as the ball is moving at a greater rate. The smash factor can often affect the gapping with the irons. With the increasing loft angle of the club, the smash factor is also likely to increase. Often the devices used for measuring the smash factor don’t record the exact smash factor. Instead, the system only measures the ball speed and takes an estimate of the smash factor of the golf club. What is the ideal spin rate for a 7 Iron? The spin rate can be estimated by 1000 rpm into the number of the club. The ideal spin rate for a 7 iron is 7000 spin rates. This number is adequate if the launch of the ball is not too high. The launch and spin need to match in order to get an ideal spin rate. With a 7 iron, the golfer can have more carry than a rollout. A spin rate varying between 6000 and 7000 is good for a 7-iron golf club. Different factors should be put into consideration to judge the spin rate for the golf club. This includes the launch angle along with the ball speed, apex, distance gaps, etc. 1000 into the swing speed is not considered an accurate measure for calculating the spin rate as it just gives an average and doesn’t form a good baseline. With a well-placed shot, the 7 iron can have a spin on the lower side with a shallow attack angle. A steep angle on the other hand, will generate more spin. What is the ideal Launch Angle for a 7 Iron? The launch angle of a 7 iron club can range between 23.5 degrees to 25 degrees for an average golfer. Professional golfers can expect a launch angle of 16 degrees. Releasing the ball early can help in lowering your launch angle. The launch angle can be almost half of the loft angle on the golf club. Therefore, for a 7-iron golf club with a loft angle of 35 degrees, a launch angle of 17 to 18 degrees is suitable if the ball has a good impact with the golf club. The launch angle is easily attainable in this case. The launch angle can vary according to the different courses and the shot played. The backspin numbers can vary differently for each type of ball used. The numbers can go really high if a distance ball is used. What is the Standard Length of a 7 Iron? The standard length of a 7 iron is 37 inches on steel and 37.5 inches with the structure made of graphite. For women, the standard length of a graphite golf club is 36.5 inches, and for steel, it is 36 inches. The standard lengths are set from the shaft weights of the golf club. The length for the golf club should be chosen according to the grip that suits the golfer. For a golfer with average height, an iron golf club with a longer length should be preferred, so the golfer doesn’t have to bend over the club. The length for the shaft of the golf club can be trimmed to improve the grip. Epoxy should be placed in the tip of the shaft, and then the tip should be inserted in the hosel. It should be ensured that the tip is all the way in. If the golfer wants to keep the 7-iron length at 37.5 inches, then the shaft should be trimmed to approximately 37.35 inches. For golfers who have a tall height, the inches of the golf club can be increased to 1.5 inches longer than the standard length of the 7-iron golf club. How Far Can You Hit a 7 Iron? Since the 7 iron is the most used golf club by many golfers, a lot of them are concerned about how far a 7 iron can be hit. On an average of golfers ranging from beginners to experts, the average distance covered on course for men is 120 yards while for women it is 80 yards. Since the 7-iron golf club for each manufacturer varies, the distance covered also varies accordingly. For each different brand, the 7-iron golf club has a different loft angle; hence, the distance is also likely to change. If the golfer swings the ball out well along with hitting it high fades, the 7-iron golf club can cover a distance of 160 to 165 yards. The Sterling 7 iron golf club, for example, can cover a distance of 183 yards to 187 yards. With the swing on the golf club dialed in, the shots made with the 7 iron can be more consistent. If there is a shaft lean on impact, the ball flight can be penetrating. At a reasonable estimate, the 7 iron can also cover a distance of 160 yards with about 4 yards on the rollout. 7 Iron Distance The average distance of a 7 iron golf club is 172 yards. The distance covered with a 7-iron golf club varies with the different factors. With no wind and a high uphill shot, the golf club can be used to cover a distance of 172 yards. If the golfer doesn’t try to hit the ball too hard, the distance can be controlled easily on the golf course. There can be a variance of about 10 to 15 yards depending on how well the shot is played every time. The gap can increase with longer golf clubs. One specific yardage number can’t be pinned to any golf club since it is likely to vary every time the golfer plays. Average 7 Iron Distance on the PGA Tour Since PGA tours are done by professional and well-experienced golfers, the distance covered using each golf club also varies accordingly in contrast to an ordinary golfer. With a club speed of 90 mph, a ball speed of 120 mph along with a 1.33 smash factor, the distance covered can go up to 172 yards. The height covered by the ball can be around 32 yards. The distance covered is because the golfers swing the club much faster, so the average ball speed also increases in a similar way. If these numbers are applied to the swing speed of an average golfer, the golfer will struggle to get the ball in the air for the maximum distance. With an exceptional and comparable swing speed, even the ordinary golfers can chase the PGA style figures. The LPGA’s average swing speed for drivers which is around 94 mph is close to the swing speed of many golfers. How to Hit a 7 Iron Getting consistent shots with the 7-iron golf club is quite easy if the golfer practices well and knows how to play his shots. If the ball is hit straight every time, the golfer can easily get yardage of 160 yards to 170 yards. With the following steps, the 7 iron can be hit well. 1. The golfer should put the ball in the middle of the stance for the 7-iron golf club. 2. The idea of hitting the ball with the iron golf clubs is so that the contact is first made with the ball and then the ground. 3. The head level of the golfer should be high as he swings the ball. 4. The divot should be checked before to ensure that the ball is being contacted before the ground. The shape of the divot should also be checked to ensure the aim on the target is set perfectly. 5. The golfer should continue to practice until the contact with the ball, and the ground is made consistent, and the shape of the shots is well understood. How to Hit a 7 Iron Off the Tee For hitting a 7 iron off the tee, the golfer shouldn’t have a totally different approach: 1. The ball should not be teed absurdly high, and the golfer should give himself a perfect lie. The swing shouldn’t be changed due to teeing up the ball. 2. For accurate shots, the ball should be teed low so that the golfer can get the bottom of the ball touching the grass. 3. The tee shouldn’t be done above the grass, and it could be set up in front of an existing tee to use it as a guard against a fat shot. 4. With the iron golf clubs, the ball shouldn’t be hit up, but instead, the golfer should hit the ball down. 5. The golfer can get a perfect spin if the contact is nicely made between the golf club and the ball. How to Hit a 7 Iron Straight • To hit the 7 iron straight, the backswing should be made more stable so that the golfer is able to control the swing of the shot and hit the golf club straight. • The golfer often faces a lack of stability due to the back leg locking on the top, which should be controlled. • The position of the right should be kept similar throughout in order to maintain it when going back. • A head cover or towel can be placed under the outside of the right foot in order to give the golfer the feeling that he is swinging back against something. This would help in maintaining the knee reflex during the backswing. How to hit a 7 Iron 200 yards In order to hit a 7 iron 200 yards, the golfer should have a swing speed of 130 mph. Delofting the 7-iron club into a 4-iron club or 5 iron club can help in getting yardage of up to 200 yards with For professionals who carry interchangeable weights, they place lighter weights in their puts for slower green and prefer heavier weights for a quicker green. Delofting the iron golf club doesn’t mean that its angle should be decreased by 7 degrees as the golfer will not be able to hit the ball really high. Having a good swing speed and ball speed can help in easily covering 200 yards on the course. The ball should be hit with a descending blow while the hands are ahead of the ball while there is an impact. The professionals, however, prefer delofting when they are willing to control the trajectory of the golf club. 7 Iron Vs 9 Iron • The 7 hybrid is easier to hit than the 9-iron golf club. However, there are some golfers who prefer iron golf clubs over the hybrid as they are more comfortable with it although the hybrids have been designed as a replacement for the iron golf clubs. • If the club has a lower loft angle, it is more difficult to hit the ball high. However, by number, the hybrids and golf clubs have the same numbers, but the design of the hybrid club is what makes it different. • For an amateur golfer, it is difficult to hit the ball high with the 9 iron as it needs good swing skill to hit the ball with consistency. The hybrids were made to address this problem faced by the golfers. • Being thicker than the conventional iron golf clubs, the golfer finds it easier to hit it with the 9 iron than the 7 iron. • Often golfers seem to cover the same distance with the 7 iron as with the 9 iron as there can be a fitting issue with length or flex. 7 Iron Vs 3 Hybrid • The 3 hybrids should be preferred over the 7 iron in certain conditions. This is when the golfer is playing from an uneven surface or from the rough and sand. • If the golfer is playing downwind and wants to land the ball softly, then he should choose the 3 hybrids over the 7-iron golf club. • Since hybrids are most forgiving, they can be used for consistent and accurate shots if the golfer is having a bad day on the course. • The stance for the 7 iron and the 3 hybrids can be kept the same, but the ball should be positioned farther back with the hybrid to get a downward hit. 7 Iron Vs 5 Iron • If the 5 iron is lofted a bit, the loft angle of both the 7 iron and 5 irons is the same, which is 27 degrees. • Having the same loft angle means that both the iron golf clubs have the same length and also cover the same distance on the course. • The spin of the 7-iron golf club is quite less in contrast to the 5-iron golf club. • The shaft of the 5-iron golf club can is an inch shorter than the 7-iron golf club; hence, it has a tighter dispersion.
{"url":"https://www.golfstorageguide.com/7-iron-golf-club-distance/","timestamp":"2024-11-14T18:38:12Z","content_type":"text/html","content_length":"188161","record_id":"<urn:uuid:a830554e-7f31-4cf7-b058-640ab9203fd7>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00853.warc.gz"}
Define the Term Statistics. - Owlgen Define the Term Statistics. The term statistics has been defined in singular sense as well as plural sense. Statistics in Singular Sense: In singular sense, the term statistics has been defined as statistical methods. The best definition of statistics in singular sense has been given by Croxton and Cowden. According to Croxton and Cowden “Statistics may be defined as the science of collection, presentation., analysis and interpretation of numerical data.” The above definition reveals the following stages of statistical Collection of data: This is the first step in a statistical study and is the foundation of statistical analysis. Under this stage, data are collected from various sources Primary or Secondary. When data are collected originally by an investigator or agency, these are called primary data. When data are collected from published or unpublished data which have already been collected by some other agency, these are called secondary data. Organisation and presentation of data : Figures that are collected by an investigator need to be organized by editing, classifying and tabulating. Data collected and organised are presented in some systematic manner to make statistical analysis easier..The organized data may be presented with the help of tables, graphs and diagrams. Analysis of data : The next stage is the analysis of the presented data. There are large number of statistical measures which are used to analyse the data such as averages, dispersion, correlation, regression etc. Interpretation of data : Interpretation of data implies drawing of conclusions on the basis of data analyzed in the earlier stage. Interpretation of data requires high degree of skill, experience and common sense. Statistics in Plural Sense: In plural sense, the term statistics has been defined as statistical methods. The best definition of statistics in plural sense has been given by Horace Secrist Horace Secrist, has defined statistics as aggregates of facts affected to a marked extent by multiplicity of causes, numerically expressed, enumerated or estimated according to reasonable standard of accuracy, collected in. a system manner for a pre-determined purpose and placed in relation to each other. As per this definition ; following are the characteristics of statistics. 1. Statistics are aggregates of facts : Single and isolated figures do not constitute statistics because such figures are unrelated and cannot be compared. For ex. if X’s income is Rs. 80,000 per annum, it would not constitute statistics. 2. Statistics are affected to a marked extent by multiplicity of causes: For ex. statistics of production of rice are affected by the rainfall, quality of soil, seeds and manure, method of cultivation etc. It is very difficult to study separately the effect of each of these forces on the production of rice. 3. Statistics are numerically expressed : All statistics are expressed in numbers. Qualitative statements such as “The population of India is increasing rapidly” does not constitute statistics. 4. Statistics are enumerated or estimated according to reasonable standards of accuracy : For ex. number of students in a class would be obtained by counting whereas the number of people who witnessed the Republic Day parade would be an estimate. 5. Statistics are collected in a systematic manner according to a suitable plan. 6. Statistics are collected for a pre-determined purpose : The purpose should be specific and well defined. 7. Statistics should be placed in relation to each other: Statistical data are often compared period wise or region wise. For ex. Population of India for the year 2007 can be compared with the Population of China for the same year.
{"url":"https://www.owlgen.org/define-the-term-statistics/","timestamp":"2024-11-03T10:34:16Z","content_type":"text/html","content_length":"116667","record_id":"<urn:uuid:68fbdeff-bb75-42b5-a389-e2c22415f35f>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00671.warc.gz"}
TL Catenary 3D From Wikipedia: In physics and geometry, a catenary is a curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends. The catenary curve has a U-like shape, superficially similar in appearance to a parabolic arch, but it is not a parabola. Mathematically, the catenary curve is the graph of the hyperbolic cosine function. Catenaries and related curves are used in architecture and engineering, in the design of bridges and arches, so that forces do not result in bending moments. In the Catenary equation, the parameter a represents the length of a chain whose weight on Earth is equal in magnitude to the tension/compression at the Sag/Crest point. Please visit https://en.wikipedia.org/wiki/Catenary for more information This app will compute and draw the catenary curve as a polyline in Autodesk® AutoCAD® by numerically solving the transcendental equation using a variation of the dampened Newton-Raphson root-finding algorithm. The app supports both "hanging" (positive parameter along the direction of the gravity vector) and "standing" (negative parameter along the direction of the gravity vector) curves. All 2D Catenary commands assume a positive direction of gravity along the vector (0,-1) of the current/active UCS and operate within the current UCS XY plane All 3D Catenary commands assume a positive direction of gravity along the vector (0,0,-1) of the current/active UCS and operate within the current UCS 3D space The following different modes of defining the catenary are supported: 1. One point and parameter a 2. Two points and curve length 3. Two points and a slope of the curve tangent at a specified location 4. Two points and curve sag 5. Two points and curve bulge 6. Two points and curve vertical bulge 7. Two points and curve area 8. Three points 9. Best Fit Please refer to the Catenary diagram for definitions of all parameters Each method is implemented as a separate command. All modes/commands are fully interactive, and the user is able to dynamically preview the result on screen. Upon completion of the command a technical report is generated with all input parameters used as well as the final mathematical equation of the computed catenary curve and the computed values of all catenary elements. Note that values of the computed elements such as length, area, bulge etc may differ slightly from the linework generated in AutoCAD due to the numerical approximation of the final polyline geometry. In those cases the computed values should be considered more accurate. Increasing the number of vertices of the catenary polyline should also help minimize these deviations. The number of vertices of the generated polyline is adjustable within a 5-1000 points range. All numerical data is reported using the current AutoCAD setting for the linear precision as specified in the "units" dialog box. • Scriptable command line interface • Interactive commands with a dynamic on-screen preview of the result • All commands operate in the current/active AutoCAD UCS • Full support for both "hanging" and "standing" catenary curves • Multiple CPUs/Multi-core CPU support • Adjustable number of points for the final AutoCAD polyline • Technical report for each successfully computed catenary curve • Rigorous least-squares data-fitting analysis for an unlimited number of catenary data points • Old values of the various input parameters are retained and can be reused by accepting the defaults at the command prompt About This Version Version 2023.6.3.1, 7/25/2023 TLCatenaryBestFit & TLCatenaryBestFit3d commands updated to allow selection of multiple Autocad point objects Customer Reviews • Highly recommended This year the company I work for purchased a laser scanner for scanning power line wires. I was trying to find practical ways to reconstruct power lines wires in Civil3D from a point cloud. Each segment of wire typically has anywhere from 2000-5000 points per wire. I was pleased to determine this add-on successfully best fits 3d catenary curves of extremely large datasets almost instantaneously with no issues. This add-on was exactly what I was looking for. Also, the developer is very open to feedback. I asked him a question about the options and within a couple of days he made an update to the tool based on my question. It’s a small price to pay for such a powerful tool!!
{"url":"https://apps.autodesk.com/ACAD_E/en/Detail/Index?id=7878977395994937844&appLang=en&os=Win32_64","timestamp":"2024-11-04T11:18:46Z","content_type":"text/html","content_length":"82902","record_id":"<urn:uuid:3ee16077-a7e6-4ec5-8d61-007191e279cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00648.warc.gz"}
Implication - (Model Theory) - Vocab, Definition, Explanations | Fiveable from class: Model Theory Implication, in the context of first-order logic, refers to a logical connective that denotes a relationship between two statements where the truth of one statement guarantees the truth of another. It is commonly expressed in the form 'if A then B', symbolized as $$A \rightarrow B$$, indicating that whenever A is true, B must also be true. Understanding implication is crucial for constructing valid formulas and reasoning about the relationships between different propositions in logic. congrats on reading the definition of Implication. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The implication $$A \rightarrow B$$ can be rephrased as 'A is sufficient for B', meaning that if A holds, B must also hold. 2. In cases where A is false, the implication $$A \rightarrow B$$ is considered true regardless of the truth value of B. 3. Implication is often used in proofs and argumentation to establish logical consequences from given premises. 4. The contrapositive of an implication, expressed as $$\neg B \rightarrow \neg A$$, holds the same truth value as the original implication. 5. Understanding implication is fundamental in model theory as it helps determine when a model satisfies a given formula. Review Questions • How does implication relate to other logical connectives like conjunction and disjunction? □ Implication is distinct from conjunction and disjunction as it expresses a conditional relationship rather than simply combining statements. While conjunction ('and') requires both statements to be true and disjunction ('or') requires at least one to be true, implication emphasizes the dependence of one statement on another. This means that in an implication, the truth of the consequent relies on the antecedent, showcasing how these logical operations serve different purposes in formal reasoning. • What role does contrapositive play in understanding implications and how does it relate to proof strategies? □ The contrapositive of an implication provides a powerful tool in proofs because it maintains the same truth value as the original statement. When attempting to prove an implication $$A \ rightarrow B$$, proving its contrapositive $$\neg B \rightarrow \neg A$$ can sometimes be more straightforward. This connection between an implication and its contrapositive allows for different approaches to establishing logical conclusions in mathematical arguments. • Evaluate the importance of understanding implications in first-order logic and its impact on constructing valid arguments within model theory. □ Understanding implications in first-order logic is crucial as they form the backbone of logical reasoning and argument construction. By grasping how implications operate, one can effectively create valid arguments and analyze the relationships between different formulas. This comprehension not only aids in formal proofs but also enriches the understanding of how models satisfy specific conditions, ultimately influencing the development of more complex logical theories and frameworks within model theory. ยฉ 2024 Fiveable Inc. All rights reserved. APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/model-theory/implication","timestamp":"2024-11-12T17:02:45Z","content_type":"text/html","content_length":"155254","record_id":"<urn:uuid:8dcbcc2e-c449-46e6-a4b1-f3bba2285684>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00155.warc.gz"}
shape of a relativistic wheel Not open for further replies. Valued Senior Member Easily. Can you stop dodging and provide a mathematical proof? Once you do it, I'll show you why it's false. Don't worry, unlike you I will deliver the math. Easily. Can you stop dodging and provide a mathematical proof? Once you do it, I'll show you why it's false. Don't worry, unlike you I will deliver the math. Tach, can't you provide the math yourself? If so, then prove it! Valued Senior Member Can you stop dodging and provide a mathematical proof? I already gave one in [POST=2848877]post #93[/POST], which you never adequately addressed. I also gave an argument in [POST=2848712]post #59[/POST], which you basically dismissed out of hand (you never actually explained what was wrong with it). You, by contrast, have . Given that and the fact [POST=2643009]you were wrong last time you demanded people do calculations while refusing to do them yourself, for all your chest beating[/POST], I think it's about time we saw this "proof" of yours. Fully ionized Registered Senior Member Easily. Can you stop dodging and provide a mathematical proof? Once you do it, I'll show you why it's false. Don't worry, unlike you I will deliver the math. Tach, it's ridiculous for you to suggest przyk doesn't 'deliver the math', he's engaged in a great many lengthy threads involving the details of things like special relativity and electromagnetism. Perhaps the only person who consistently posts more actual algebra than przyk is Rpenner. On the other hand you're known for doing what you do at the moment, saying you have a proof and then not providing it and demanding other people provide theirs. More than one person has commented on this tactic of yours in just this thread. This might sound insulting but that's the tactic Chinglu uses and I don't think you want to be compared to him. I think enough people have asked you now that it's time you provided what you claim to have done, regardless of whether or not you deem przyk to have provided sufficient mathematics. You complained the mods were being slack with Chinglu so you can't complain when I'm now stepping up and asking you to lay your cards on the table. Why don't you do it first, since you claim to have already done it? Actually it proves quite a bit. If you choose a parameterisation of the perimeter of the wheel in the ground frame where the parameter measures distance in a homogenous manner (e.g. it's a distance in metres), the lower half of the wheel is 0 < , and the upper half is < 2 , then you can express the energy between the two halves as \Delta E \,=\, \int_{0}^{L} \mathrm{d}x \, \bigl( \rho(x + L) \, k[v(x+L)] \,-\, \rho(x) \, k[v(x)] \bigr) \,. It's an inescapable conclusion that $$\Delta E > 0$$ . The only way you could argue against this would either be to deny that 1. $$\rho \, k(v)$$ is larger in the top half than in the bottom half, Err, no. $$\rho \, k(v)$$ is NOT larger in the top half than in the bottom half . I'll give you a chance to remedy your proof, calculate the energy density $$k(v)$$. If you do it correctly, you will find out that it oscillates with time, sometimes it is larger for the upper half, sometimes it is higher for the lower half of the wheel. Like I told you, I did all these calculations when I claimed I did them, this is how I know your claim is false. Since you have no calculation per se, I suggest that you start by calculating k(v). If you do it correctly, you will be disproving your own claim. One more thing, before any of you makes any more claims for not providing the math, I have provided in this thread all the components for calculating $$k(v)$$. Actually, I have provided the math for the equation of the ellipse, of the spokes, of the tangential speed. Last edited: So, what is your point?That I have introduced you to this website about a year ago? No, I've seen it before. For example, it was mentioned in [post=2385907]DRZion's first Sciforums thread[/post] in 2009. My point is that you are now disagreeing with the same material you used to support your argument a year ago. Look at the rotating-in-place wheel (in green). Contrast with the rolling wheel (in blue). Do you see a contradiction? The spokes are curved downwards for the rotating one and upwards for the rolling one. How can that be? Their motion is different, so there are different length contraction and light delay effects. Where is the contradiction? At times you can see only 3 spokes above the midline for the green wheel while you are always seeing 6 for the blue wheel. Do you see the anomaly? When the wheels overlap their spokes point in OPPOSITE directions (one upward, the other one downward). So, you don't think this is absurd? No, I don't, and I'm sure you wouldn't either if you actually thought it through. Why don't you "Look inside their papers", like you suggested in [post=2778963]that thread[/post]? Or at least read the accompanying text? Try comparing the appearance of the green wheel in figures 12b and 12c. What makes you so sure that the website authors are correct? I can't say I'm certain they are correct, but I have read their papers and thought through the various effects and don't see any absurdities. We must first agree on the equations of the spokes, I don't think yours is correct. The equation of the spokes you posted early in the thread is correct: $$\frac{\gamma(x-vt)}{y-r}=tan(\omega \gamma (t-vx/c^2)+\phi_i)$$ where $$\phi_i=i \frac{ 2 \pi}{8} , Plugging in r = 2, v = 0.8c, t = 0, gives this: Last edited: No, I've seen it before. For example, it was mentioned in [post=2385907]DRZion's first Sciforums thread[/post] in 2009. My point is that you are now disagreeing with the same material you used to support your argument a year ago. I disagree with of the material. DrZion, before this thread was totally hijacked, brought up an interesting point in his OP: how come that more spokes are visible on the upper side and what would happen if one covered up the lower side with a piece of paper? I think that I know the answers to both questions. One: the equation of the spokes (for the rolling wheel) that I have derived, supports drawing (b), so that part is correct Two: the reason that a strip of paper would not preclude the spoke "migration" from showing is a rendering error made by the paper authors. This is not really a physics error, it is a computer graphics error. We can talk about it, if you are interested. Their motion is different, so there are different length contraction and light delay effects. Where is the contradiction? Because you can view the rotating-only wheel as a limit case of the rolling case. If you do that, you can see the contradiction immediately, the spokes that were upturned for the rolling wheel become downturned for the rotating-only case. No, I don't, and I'm sure you wouldn't either if you actually thought it through. Why don't you "Look inside their papers", like you suggested in [post=2778963]that thread[/post]? Or at least read the accompanying text? Try comparing the appearance of the green wheel in figures 12b and 12c. I did. There is NO supporting math in any of their papers, just a a lot of prose and pictures. This is why I ended up reconstructing the math for the equation of the ellipse , for the spokes, for the tangential speed, etc. The equation of the spokes you posted early in the thread is correct: $$\frac{\gamma(x-vt)}{y-r}=tan(\omega \gamma (t-vx/c^2)+\phi_i)$$ where $$\phi_i=i \frac{ 2 \pi}{8} , Plugging in r = 2, v = 0.8c, t = 0, gives this: OK, I cannot disagree with this Now , making $$v=0$$ gets us the equations of the spokes for the spinning wheel: $$\frac{x}{y-r}=tan(\omega t+\phi_i)$$ where $$\phi_i=i \frac{ 2 \pi}{8} , So, the spokes should be straight lines. Last edited: So, we're agreed that at a given instant in the moving frame there is more mass above y = R than below? So, we're agreed that at a given instant in the moving frame there is more mass above y = R than below? There isn't "more mass", The raytraced image of the spokes makes them look curved as in the picture. A simple test proves this: place a strip of paper between the wheel and the light source or between the wheel and the eye and you'll see only half the spokes. No spokes have "wandered" in the half of the wheel peeking over the fence. Now, the final error in the paper, which is quite serious, is that at the speeds in question, one can't really see anything because the whole image is totally blurred. I do not know how the authors managed to publish in two reputable computer graphics journals. Maybe because the referees were intimidated by the physics part. There is another very serious error: the images contradict the text. The authors claim (incorrectly) a Doppler shift in the frequency of the light reflected off the wheels while the pictures show (correctly) no such effect. Also, the spinning wheel (the one drawn in green) is downright wrong, the spokes should be straight (see simple proof at the end of previous posts) Last edited: You agree with this diagram for the actual shape of the spokes at t=0 in the ground frame, right? 5 spokes above y=R, 3 spokes below y=R, therefore more mass above y=R, right? The equation of the spokes you posted early in the thread is correct: $$\frac{\gamma(x-vt)}{y-r}=tan(\omega \gamma (t-vx/c^2)+\phi_i)$$ where $$\phi_i=i \frac{ 2 \pi}{8} , Plugging in r = 2, v = 0.8c, t = 0, gives this: Last edited: Because you can view the rotating-only wheel as a limit case of the rolling case. If you do that, you can see the contradiction immediately, the spokes that were upturned for the rolling wheel become downturned for the rotating-only case. I still don't see a contradiction. • The videos are rendering visual effects, they are showing what a camera would actually record (except for light intensity changes and doppler shift color changes). • In the camera rest frame the rolling wheel spokes are distorted while the spinning wheel spokes are straight.. • Remember Teller-Penrose - at a distance, the light-delay shape distortion of an approaching object is approximately opposite to the length-contraction shape distortion. • With the spinning wheel in the video, there is no length contraction distortion, only light-delay distortion. Did you notice that the spinning wheel distortion is reversed when looking at it from the other side? You agree with this diagram for the actual shape of the spokes at t=0 in the moving frame, right? 5 spokes above y=R, 3 spokes below y=R, therefore more mass above y=R, right? I still don't see a contradiction. □ The videos are rendering visual effects, they are showing what a camera would actually record (except for light intensity changes and doppler shift color changes). There is no Doppler shift. [*]In the camera rest frame the rolling wheel spokes are distorted while the spinning wheel spokes are straight. Nope, the spinning wheel spokes are bent DOWNWARDS. [*]Remember Teller-Penrose - at a distance, the light-delay shape distortion of an approaching object is approximately opposite to the length-contraction shape distortion. You mean Terrell right? Teller is the guy with the H bomb. Yes, I know the effect very well, the spinning wheel is stationary wrt the camera so there shouldn't be any Penrose effect. The differences in light transit time from the stationary wheel to the camera are negligible, so they cannot account for the spoke curvature. Did you notice that the spinning wheel distortion is reversed when looking at it from the other side? No, I missed that. There is no Doppler shift. Nope, the spinning wheel spokes are bent DOWNWARDS. No, their visual appearance is that they are bent downward, because the light from different f]parts of the wheel takes different times to reach the camera. You mean Terrell right? Teller is the guy with the H bomb. Yes, I know the effect very well, the spinning wheel is stationary wrt the camera so there shouldn't be any Penrose effect. The differences in light transit time from the stationary wheel to the camera are negligible, so they cannot account for the spoke curvature. No, not negligible at all. The rim of that wheel is moving at 0.93c. The whole point of that site is to demonstrate the effects of light transit times. The reason that the spinning wheel spokes appear curved upward from one side and downward from the other side is solely due to light transit differences. Last edited: Come on, Tach, this is really easy. You gave the equation for the shape of the spokes in the ground frame. Diagramming that shape at t=0 give the diagram shown. Clearly, more of the wheel material is higher than y=R then below y=R. Therefore, there is more mass above y=R than below. Why the sigh? I can prove this quite easily, there is no Doppler shift of a moving mirror. No, their visual appearance is that they are bent downward, because the light from different f]parts of the wheel takes different times to reach the camera. The differences in distance are insufficient to warrant a significant effect. Think about it, if the wheel has a radius of 1m, and the wheel is at 10m from the camera, the difference in distances is If the camera is at 100m, things get even dicier. Light travels at 300,000,000m/s, translate that in a difference in arrival time. No camera can tell the difference. No, not negligible at all. The whole point of that site is to demonstrate the effects of light transit times. Do the exercise above. Calculate the shutter speed. The reason that the spinning wheel spokes appear curved upward from one side and downward from the other side is solely due to light transit differences. I would be interested in a mathematical proof. Especially in the context of the transit time distances being what they are. Last edited: Come on, Tach, this is really easy. You gave the equation for the shape of the spokes in the ground frame. Diagramming that shape at t=0 give the diagram shown. Clearly, more of the wheel material is higher than y=R then below y=R. Therefore, there is more mass above y=R than below. Why the sigh? I am not going to touch this one with a ten foot pole. I can prove this quite easily, there is no Doppler shift of a moving mirror. I hesitate to start another sidetrack, but feel free to post your proof. The differences in distance are insufficient to warrant a significant effect. Think about it, if the wheel has a radius of 1m, and the wheel is at 10m from the camera, the difference in distances is $$\sqrt{101}-10$$. If the camera is at 100m, things get even dicier. Light travels at 300,000,000m/s, translate that in a difference in arrival time. No camera can tell the difference. Tach, the rim of the green wheel is moving at 0.93c relative to the camera. Figure out the scale. Hint - this is a simulation, not a practical camera recording, and it's not real-time (unless those wheels are light-seconds wide) Try it with a wheel 1 light second wide, and a camera 5 light-seconds away. I would be interested in a mathematical proof. Especially in the context of the transit time distances being what they are. It's not a difficult exercise. I am not going to touch this one with a ten foot pole. Really? You've invested all this time arguing about the mass distribution, and now you're suddenly just not interested? You agreed that this is an accurate diagram of the rolling wheel in the ground frame at t=0, right? Last edited: Not open for further replies.
{"url":"https://www.sciforums.com/threads/shape-of-a-relativistic-wheel.110763/page-9","timestamp":"2024-11-10T09:45:33Z","content_type":"text/html","content_length":"173908","record_id":"<urn:uuid:dcb1831f-c20d-4631-954d-b70280299df2>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00477.warc.gz"}
On the cover: Apollonian packing - Chalkdust What can you do with this space? So asks Andrew Stacey. ‘Fill it’ is the prompt reply, but fill it with what? Maybe like Andrew you want to use a single curve, but I want to use circles. If you do this in the way shown above in blue, the result is called an Apollonian packing, a variant of which can be seen on the cover of this issue. Here we shall explore the history of this entrancing object, which spans over 2000 years, and percolates into a surprising variety of mathematical disciplines. Starting in the familiar world of Euclidean geometry, Apollonian packings extend into fractal geometry and measure theory; Möbius transformations and the hyperbolic plane; and then on into the distant reaches of geometric group theory, number theory, orbital mechanics, and even ship navigation. At times we may wonder off into thickets of more obscure mathematics, so those readers who get lost should feel free to skip ahead to the next section. Apollonius of Perga Apollonius (c 230 BC) was a Hellenistic mathematician, considered one of the greatest after Euclid and Archimedes. Perhaps his most important work was his eight book treaties Κωνικα on conic sections—once lost to European civilisation, but fortuitously preserved by the more enlightened Middle Eastern scholars and later reintroduced by Edmund Halley in 1710. The same unfortunately cannot be said of Έπαφαι (De Tractionibus or Tangencies). Although now lost, we have accounts of the work from other ancient authors, particularly in the writings of Pappus of Alexandria. In it, Apollonius posed and solved the following problem. Problem: Given three geometric objects in the plane (points, lines, and/or circles), find all circles which meet all three simultaneously (ie which pass through any points, and are tangent to any lines or circles). So for example, given three points which don’t lie on the same line, there is exactly one circle which passes through all three. The case which interests us at present is when we are given three circles, each of which is tangent to the other two. In the very special case that all three are tangent at the same point there are infinitely many circles tangent to all three. Usually, however, the circles will be pairwise tangent at three distinct points, in which case there are exactly two other circles tangent to all three simultaneously. This is as far as Apollonius went; the next step would not be taken until 1643, when René Descartes discovered a formula for the size of the two tangent circles, which he wrote in a letter to Princess Elizabeth of the Palatinate. The same formula was later rediscovered by Frederick Soddy and published as a poem in Nature in 1936. The size of a circle is determined by its radius $r$. If $r$ is small, the circle will be small, but it will also be very curved. We can define the curvature of the circle to be $k=1/r$. Descartes showed that if three given circles are mutually tangent at three distinct points, and have curvatures $k_1$, $k_2$, and $k_3$, then a fourth circle which is tangent to all three has curvature $k_4$ $$\label{eq:descartes} (k_1+k_2+k_3+k_4)^2=2(k_1^2+k_2^2+k_3^2+k_4^2) \tag{1}$$ For technical algebraic reasons, sometimes this equation gives negative values for the curvature $k_4$, which we can interpret as corresponding to a circle with curvature $|k|$ which contains the other circles in its interior. Notice that this equation is quadratic in the variable $k_4$, so there are two solutions; these will correspond to the two possibilities for the fourth circle found by Apollonian packings So far we have constructed at most 5 mutually tangent circles. The step to infinity may seem obvious, but took another 63 years and some 1900 years after Apollonius. The earliest description seems to appear in a letter from Leibniz to des Bosses (11 March 1706): Imagine a circle; in it draw three other circles that are the same size and as large as possible, and in any new circle and in the space between circles again draw the three largest circles of the same size that are possible. Imagine proceeding to infinity in this way… What Leibniz is describing is in fact a nested Apollonian packing, since at each step he fills in every circle as well as the gaps between circles. This early description makes the nested Apollonian packing one of the first fractals, although it wasn’t studied properly until mathematicians like Cantor, Weierstrass, von Koch, and Sierpinski started discovering other fractals in the late nineteenth and early twentieth centuries. This may be because Leibniz was not interested in the mathematical construction, but rather was trying to draw an analogy to argue against the existence in infinitesimals in nature. Henceforth we shall only consider the un-nested Apollonian packing. As a fractal, it has a number of interesting properties: it is a set of measure 0, which means that if you tried to make it by starting with a disc of metal, and then drilled out infinitely many ever smaller holes (and if you ignore that metal is made out of atoms), then you would finish up with a single piece of metal (you haven’t removed everything), but nevertheless with exactly 0 mass. It has fractal dimension approximately 1.30568, which means that mathematically it lives somewhere between a 1D curve and a 2D area. Finally, if you look at just the portion of an Apollonian packing which lies in the triangular region between three tangent circles, this is homeomorphic to the Sierpinski triangle, which means that one can be bent and stretched to look like the other. There is a curious combinatorial consequence of Descartes’ formula for Apollonian packings. If we start with three mutually tangent circles with curvatures $k_1$, $k_2$, and $k_3$, we can solve \ eqref{eq:descartes} to find that the curvatures $k_4^+$ and $k_4^-$ of the other two circles are $$\label{eq:descartes2}\tag{2} k_4^\pm=k_1+k_2+k_3\pm2\sqrt{k_1k_2+k_2k_3+k_3k_1}$$ Now suppose we start constructing an Apollonian packing by drawing four mutually tangent circles whose curvatures $k_1$, $k_2$, $k_3$, and $k_4^+$ are all integers. From equation \eqref {eq:descartes2} it follows that $2\sqrt{k_1k_2+k_2k_3+k_3k_1}$ must be an integer since $k_4^+$ is an integer, and so $k_4^-$ is also an integer. Now we can build the packing by filling in a fifth circle wherever we see four mutually tangent circles. By the observation above, if the four circles have integer curvatures, the fifth circle will also have integer curvature. Inductively therefore we will end up with an Apollonian packing consisting of infinitely many tangent circles, all of which have integer curvatures. Hyperbolic geometry If you have some familiarity with non-Euclidean geometry, Apollonian packings may remind you of the Poincaré model of the hyperbolic plane. The hyperbolic plane $\mathbb{H}^2$ is a 2D surface on which we can do geometry just like we can on the flat Euclidean plane. Whereas a sphere has constant positive curvature (it curves the same way in all directions), and the Euclidean plane has constant zero curvature (it’s flat), $\mathbb{H}^2$ is an infinite surface which has constant negative curvature, which means that at every point it curves in the same way as a Pringle. This negative curvature makes the surface crinkle up on itself more and more as you move out towards infinity, which is inconvenient when we try to work with it. Usually then we represent it on a flat surface so we can draw pictures of it in magazines and the like. One way to do this is with the Poincaré model. This views the hyperbolic plane as a disc. In order to fit the whole infinity of $\mathbb{H}^2$ into a finite disc, we have to shrink distances as we move out towards the edge of the disc. Using this skewed way of measuring distances, the circular edge of the disc is infinitely far away from its centre. We can think of an Apollonian packing as living in the Poincaré disc, with the outermost circle of the packing as the boundary circle of $\mathbb{H}^2$. Then the circles in the packing which are not tangent to this boundary are also circles in the strange hyperbolic way of measuring distance, that is, all points are equidistant from some other point in the plane—the circle’s hyperbolic centre. Circles in the packing which are tangent to the boundary are called horocycles (in Greek this literally means border circle), which are circles with infinite radius in the hyperbolic metric. Horocycles have no analogue in the Euclidean plane. Something interesting happens when we see what an Apollonian packing looks like in the upper half-plane (UHP) model for $\mathbb{H}^2$. This model is similar to the Poincaré model, but instead of using a disc, we use the half-plane above the $x$-axis $\{(x,y)\in \mathbb{R}^2: y> 0\}$, where the $x$-axis behaves like the boundary circle and should be thought of as at infinity. There is a problem, in that in the Poincaré disc, the boundary of $\mathbb{H}^2$ was a circle, and so it closed up on itself. In the UHP, the boundary is a line which doesn’t close up on itself, but these are supposed to be models for the same thing. To fix this, we imagine there is a point at infinity $\infty$ which joins up the two ends of the boundary to form an infinite diameter circle. If we start with any Apollonian packing living in the Poincaré disc, there is a map from the disc to the UHP preserving hyperbolic distances, under which the outer circle of the packing becomes the $x$-axis (together with the point at infinity), and exactly one of the horocycles (one of the circles tangent to the outer circle in the packing) becomes the horizontal line $y=1$. All other circles and horocycles in the packing are sent to circles which are tangent to each other as before, but are now sandwiched between the lines $y=0$ and $y=1$. If we focus on just those circles which meet the $x$-axis we get what are called Ford circles. Remarkably each of these circles is tangent to the $x$-axis at a rational number $p/q$, and has radius $1/2q^2$. Moreover every rational number is the point of tangency of one of the circles (see below). Now some magic happens: suppose the Ford circles at $a/b$ and $c/d$ are tangent to each other, then there is a unique circle sandwiched between these two circles and the $x$-axis. The rational point at which this circle meets the $x$-axis is given by the Farey sum of $a/b$ and $c/d$ \[ \frac{a}{b}\oplus\frac{c}{d}=\frac{a+b}{c+d} \] Note that for this to be well-defined, $a/b$ and $c/d$ must be written in their simplest form. This Farey sum, and the associated Farey sequences $F_n$ you get by looking at all rational numbers between 0 and 1 which can be written as a fraction with denominator at most $n$, turn up in several places across number theory. These include rational approximation of irrational numbers and the Riemann Hypothesis. Möbius transformations If you haven’t seen hyperbolic geometry before, you may wonder how we can map the Poincaré disc model to the UHP model, and in such a way that the strange distance measure in the two models is preserved—for a start one is a finite region while the other is an infinite half-plane. The answer is to view both models as living inside the complex plane $\mathbb{C}$ (or more accurately the extended complex plane $\widehat{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$): the Poincaré disc is the unit disc $\{z\in\mathbb{C}: |z|<1\}$, and the UHP is the region above the real axis $\{z\in\mathbb {C}:$ Im$(z)>0\}$. Then a function like $$\label{eq:discToUHP}\tag{3} z\mapsto-\mathrm{i}\frac{z+1}{z-1}=\frac{-\mathrm{i} z-\mathrm{i}}{z-1}$$ will do the trick. This function is an example of a Möbius transformation, which in general is a complex function of the form \[ z\mapsto \frac{az+b}{bz+d} \] were we require $ad-bc\ne0$ so that this function is invertible. The function \eqref{eq:discToUHP} sends the unit disc to the UHP, but it is not the only Möbius transformation which does this. In fact there are infinitely many such functions, all of which preserve the hyperbolic metric. In the previous section I claimed that starting with any Apollonian packing, we could choose one of these Möbius transformations such that the image had a very specific form, sandwiched between the lines Im$(z)=0$ and Im$(z)=1$. An exercise: If you have seen Möbius transformations before, you may wish to try and prove that the purported mapping exists yourself. (Hint: remember that Möbius transformations send circles and lines to circles and lines, and are completely determined by their image on 3 distinct points.) The upshot of this is that all Apollonian packings are the same in the hyperbolic plane, because they can all be mapped to the same packing by (invertible) functions which preserve hyperbolic distance. Once we have started thinking about the Apollonian packing living in the complex plane, the whole world of complex functions is open to us, and we can start to do crazy things. If we don’t restrict ourselves to just Möbius transformations, but see what happens when we apply holomorphic or anti-holomorphic functions to the packing (these are complex functions with a good notion of derivative in the sense of calculus, which in particular have the property that they preserve angles between intersecting curves), we can get some very pretty designs. We need not even require (anti-)holomorphicity. The patterns featured on the front and back covers were drawn in this way. Beyond the packing Let us return to Apollonius of Perga. Remember that his treaties Έπαφαι, where he stated and solved the problem of finding tangent circles, is lost to history—how then do we know what he proved and how? The answer is that we don’t. The only record we have appears in the writings of Pappus of Alexandria, who lived some 400 years after Apollonius, but who references many of Apollonius’ works, including six which are no longer extant. All he says of Tangencies is the general problem which Apollonius was interested in, and that he solved it by solving many simple special cases and working up from there. The first person to reprove Apollonius’ results in ‘modern’ times was Adriaan van Roomen in 1596. His solution, however, does not use ruler and compass constructions, so cannot have been the one Apollonius used. The result was later proved using methods available to Apollonius, and in the way described by Pappus, by van Roomen’s friend François Viéte. The method of Viéte was later reworked and simplified by several mathematicians, including Isaac Newton in his Principia. Newton related the position of the centre of the fourth circle to its distance from the centres of the three circles to which it is supposed to be tangent. This is called hyperbolic positioning or trilateration. Newton used this viewpoint to describe the orbits of planets in the solar system, but it can also be used to help navigate ships, and to locate the source of a signal based on the different times the signal is received at three different locations. In the first world war this was used to locate artillery based on when shots were heard. This is also how modern GPS works (not by triangulation as is commonly believed). So this 2000-year-old problem in abstract geometry turned out to have extremely useful applications in the real world. The Apollonian packing also shows up in lots of different areas of mathematics. For example, Ford circles inspired the Hardy–Littlewood circle method, an important tool in analytic number theory which was used to solve Waring’s Problem: for an integer $k$, can every integer be written as a sum of at most $n$ $k$^th powers for some value of $n$? This is true: for example, every integer is the sum of 4 squares, 9 cubes, 19 fourth powers, and so on. In 2013, Harald Helfgott used the circle method to prove the weak Goldbach conjecture: every odd number greater than 5 is the sum of 3 primes. To infinity As a final application, I am a geometric group theorist, and I cannot help but talk about one place the Apollonian packing shows up in my field. Be warned: there is definitely some advanced maths coming up, but if you don’t mind skipping over some of the details, there are some very pretty pictures to make it worthwhile. It turns out that the extended complex plane $\widehat{\mathbb{C}}$ can be thought of as the boundary of 3 dimensional hyperbolic space $\mathbb{H}^3$. If we model $\mathbb{H}^3$ as the upper half- space $\{(x,y,z)\in\mathbb{R}^3\mid z\ge 0\}\cup\{\infty\}$ then $\widehat{\mathbb{C}}$ is identified with the plane $\{(x,y,z)\in\mathbb{R}^3\mid z=0\}\cup\{\infty\}$. When Möbius transformations act on $\widehat{\mathbb{C}}$, they also act on the whole of $\mathbb{H}^3$, and preserve hyperbolic distance. If we start by choosing just a few Möbius transformations, these generate a group which acts on $\mathbb{H}^3$. In doing so, the group creates a pattern on the complex plane called its limit set. This is a picture of how the group acts ‘at infinity’. Choosing the Möbius transformations carefully gives a group whose limit set is precisely the Apollonian packing. Let’s be a bit more precise; pick a point $p\in \widehat{\mathbb{C}}$ and choose $g$ pairs of circles $(C_i^+,C_i^-)_{i=1}^g$, each of which doesn’t intersect $p$. Each circle cuts $\widehat{\mathbb {C}}$ into two regions, call the region containing $p$ the exterior of that circle, and the complementary region the circle’s interior. We also want to arrange things so that no two circles have overlapping interiors (although two circles are allowed to be tangent). Next, for each pair of circles $(C_i^+,C_i^-)$ choose a Möbius transformation $m_i$ which maps $C_i^+$ to $C_i^-$ and which sends the interior of $C_i^+$ to the exterior of $C_i^-$. The group $G=\langle m_1,\dots,m_g\rangle$ generated by these transformations is called a (classical) Schottky group and it acts as a subgroup of the group of isometries of $\mathbb{H}^3$. Since we chose the circles to have non-overlapping interiors, we can use the delightfully named ‘Ping-Pong Lemma’ to prove that $G$ is abstractly isomorphic to the free group on $g$ generators. So how do we get a Schottky group whose limit set is the Apollonian packing? We can cheat slightly by working backwards; starting off with the picture we want to create, then we will choose the pairs of circles in the right way. Remember that one way we thought about constructing the Apollonian packing was to start off with four mutually tangent circles and then inductively draw the fifth circle wherever we can. Our strategy will be to choose Möbius transformations which do the same thing. We are helped by the following curious fact which you may want to try and prove yourself (again using Möbius transformations): given any three mutually tangent circles, there is a unique circle (possibly through $\infty$) which passes through all three circles at right angles. Given the four initial circles, there are ${4\choose 3}=4$ triples of mutually tangent circles, so we let $C_1^\pm$ and $C_2^\pm$ be the four circles orthogonal to each of these triples, as shown on the left. The corresponding Möbius transformations are : \begin{align*} m_1: z \mapsto \frac{z}{-2\mathrm{i}z+1} && m_2: z \mapsto \frac{(1-\mathrm{i})z+1}{z+(1+\mathrm{i})} \end{align*} The limit set of $G=\langle m_1, m_2\rangle$ is indeed the Apollonian packing we started with. If we perturb the starting Möbius transformations just slightly by varying the matrix entries (while being careful to ensure that the resulting group acts nicely on $\mathbb{H}^3$), we get a group whose limit set is a twisted Apollonian packing. Even though some of these perturbed limit sets look like they are still made up more or less of circles, they are in fact made up of a single continuous closed curve which is fractal, and does not intersect itself anywhere. They are examples of Jordan curves and illustrate why the Jordan Curve Theorem is so difficult to prove despite being ‘obvious’. Playing around more with different choices of Möbius generators we can produce even more beautiful examples of fractal limit sets; below are just a few to finish off. If you want to learn more about Schottky groups, their limit sets, and how to draw these pictures, I highly recommend the book Indra’s pearls: the vision of Felix Klein. It is the basis of this final section of this article, and gives details on exactly how you can draw these and many other pictures yourself.
{"url":"https://chalkdustmagazine.com/regulars/on-the-cover/on-the-cover-apollonian-packing/","timestamp":"2024-11-03T19:20:05Z","content_type":"text/html","content_length":"120247","record_id":"<urn:uuid:0ec56217-ec66-4aa9-a6c8-46d80d0a2e29>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00841.warc.gz"}
CDNPREP procedure • Genstat Knowledge Base 2024 Constructs a multi-location partially-replicated design using CycDesigN (R.W. Payne). PRINT = strings Controls printed output (design, report, factors, blocknumbers); default * i.e. none LEVELS = scalar Numbers of levels of the treatment factor; if unset, takes the numbers of levels declared for the factor specified by the TREATMENTS option NLOCATIONS = scalar Number of locations NBLOCKS = scalar Number of blocks at each location NUNITSPERLOCATION = Number of units at each location NREPLICATEDPERBLOCK = Number of treatments in each block that are replicated at the location containing the block TREATMENTS = factor Treatment factor LOCATIONS = factor Locations factor BLOCKS = factor Block factor UNITS = factor Unit-within-block factor SEED = scalar or Scalar or variate with two values specifying seeds for the random numbers used by CycDesigN to search for the best design and to randomize it – if a scalar is specified the same variate seed is used for both purposes; default 0 i.e. set automatically SPREADSHEET = string Whether to put the design factors into a spreadsheet (design); default * TIMELIMIT = scalar Time in minutes to search; default 1 No parameters CycDesigN is a package for the computer generation of experimental designs which constructs optimal or near-optimal block and row-column designs; see the book Cyclic and Computer Generated Designs by John & Williams (1995). CycDesigN can also operate as a batch program, that can be called from within Genstat. This program is distributed with Genstat, and there are procedures to call the program, read its output back into Genstat, and form the relevant design factors. There are also Genstat add-in and resource files to define user menus, which can be downloaded from the VSNi website. However, before CycDesigN can be used, a license must be obtained; see vsni.co.uk/software/cycdesign for details. This procedure, CDNPREP, uses the CycDesigN algorithms to form a partially-replicated block design. The assumption in CycDesigN is that the experiment will contain incomplete-block designs conducted at several locations and that, at each location, some treatments will occur twice, others may occur only once, and others may not occur at all. However, the treatments are all replicated the same number of times over the whole design. So there is the constraint that the total number of units, or plots, in the design must be a multiple of the number of treatments. Also, the number of units at each location must be greater than the number of treatments, and less than twice the number of treatments. The LEVELS option can be set to a scalar to define the number of treatments, and the TREATMENTS option can save a factor containing the generated values. LEVELS can be omitted if the TREATMENTS factor has already been declared with the right numbers of levels. Alternatively, if you only want to print the design and do not want to save the values, you can specify the number of levels using LEVELS, and leave TREATMENTS unset. Similarly, the NLOCATIONS option can define the number of locations, and the LOCATIONS option can supply a factor to save the values generated for the locations factor. You can omit NLOCATIONS if LOCATIONS is set to a factor that has already been defined with the correct number of levels. The number of units, or plots, at each location must be specified by the NUNITSPERLOCATION option, and must satisfy the constraints mentioned above. CycDesigN also needs to know the number of blocks at each location, and the number of treatments in each block that will be amongst those that are replicated (i.e. occur twice) at each location. These can be specified by the NBLOCKS and NREPLICATEDPERBLOCK options, respectively. However, designs are available for only limited combinations of values, and CDNPREP will give a fault diagnostic if you specify values that are not included in the feasible combinations. You can set option PRINT=blocknumbers to print the possibilities, and CDNPREP will then stop unless NBLOCKS and NREPLICATEDPERBLOCK are both set. Alternatively, if you are running Genstat interactively, CDNPREP will use the QUESTION procedure to prompt you to choose values from those that are feasible. Finally, if you are running Genstat in batch, CDNPREP will take the median number of feasible blocks and the corresponding median number of replicated treatments per block. Smaller values for NREPLICATEDPERBLOCK allow more of the treatments to be represented at each location, while larger values provide more residual degrees of freedom. The BLOCKS option can supply a factor to save the values generated for the block factor, and the UNITS option can supply a factor to save the values generated for the unit-within-block factor (which identifies the units within each block). Printed output is controlled by the PRINT option, with settings: design to print the design, report to print a report by CycDesigN on the design, factors to print the factor values, and blocksizes to print the feasible block sizes, and corresponding minimum and maximum numbers of replicated treatments in each block. The SEED option lets you supply seeds for the random numbers to be used within CycDesigN to search for the best design and to randomize it. You can specify a variate with two values to supply a different seed for each purpose, or a scalar to use the same one for both. If a zero value is specified, the corresponding seed is set automatically. The default is the scalar zero. You can set option SPREADSHEET=design to put the design factors into a Genstat spreadsheet. The TIMELIMIT defines the time in minutes to search. The default is 1. Options: PRINT, LEVELS, NLOCATIONS, NBLOCKS, NUNITSPERLOCATION, NREPLICATEDPERBLOCK, TREATMENTS, LOCATIONS, BLOCKS, UNITS, SEED, SPREADSHEET, TIMELIMIT. Parameters: none. The batch program CycDesRun is called using the SUSPEND directive. The underlying algorithm is described by Williams, John & Whitaker (2014). John, J.A. & Williams, E.R. (1995). Cyclic and Computer Generated Designs. London: Chapman and Hall. Williams, E.R., John, J.A. & Whitaker, D. (2014). Construction of more flexible and efficient p-rep designs. Australian & New Zealand Journal of Statistics, 56, 89-96. See also Procedures: AFPREP, CDNAUGMENTEDDESIGN, CDNBLOCKDESIGN, CDNROWCOLUMNDESIGN. Commands for: Design of experiments. CAPTION 'CDNPREP example',\ !t('Design for 120 treatments at 3 sites,',\ 'each with 8 blocks of 20 plots.'); STYLE=meta,plain " look at feasible numbers of blocks and treatments replicated in each block " CDNPREP [PRINT=blocknumbers; LEVELS=120; NLOCATIONS=3; NUNITSPERLOCATION=160] " form design with 8 blocks, each containing 15 replicated treatments " CDNPREP [PRINT=design; LEVELS=120; NLOCATIONS=3; NUNITSPERLOCATION=160;\ NBLOCKS=8; NREPLICATEDPERBLOCK=15; SEED=!(397010,822399); TIME=0.2]
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/cdnprep/","timestamp":"2024-11-02T16:58:10Z","content_type":"text/html","content_length":"45513","record_id":"<urn:uuid:64fe2acc-c6b5-4c0c-acc3-ddb5213af41a>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00552.warc.gz"}
Economic Value Added - Breaking Down Finance Economic Value Added Economic Value Added (EVA) analysis measures the value added for shareholders by management over a given year. It is not the same as residual income. Whereas residual income starts from net income after interest expense, EVA uses the net operating profit after tax (NOPAT). Still, both measures are used by analysts to measure economic income. Another related measure is market value added. EVA is related but not completely the same. Thus, we also discuss the Market Value Added (MVA) so we can distinguish between the two. On this page, we discuss the formula, the advantages of Economic Value Added, the typical adjustments analysts make when using EVA, as well as a economic value added formula Excel implementation. The spreadsheet can be downloaded at the bottom the page. Economic value added formula How to calculate economic value added? There are two formulas we can use to calculate EVA. The first economic value added formula equals where NOPAT is the net operating profit after tax, WACC is the after-tax weighted average cost of capital in decimal terms, and total capital is the net working capital plus net fixed assets. Alternatively, total capital can also be calculated as the book value of long-term debt plus the book value of equity. A second way to calculate EVA is using Earnings Before Interest and Tax (EBIT) where t is the marginal tax rate and $WACC is the dollar cost of capital. The advantage of EVA is that it is a measure of economic income rather than accounting income. Thus, it better reflects the economic performance of the company. Market value added (MVA) is similar to EVA. MVA measures the management’s added value since the company’s inception. It is calculated as Accounting adjustments Before calculating NOPAT and total capital, the analyst may make any of the following adjustments: • Treat operating leases as capital leases • Capitalize and amortize R&D • Add the LIFO reserve to invested capital and add change in LIFO reserve to NOPAT • Add back charges on strategic investments • Eliminate deferred taxes Economic value added example Let’s finish with an economic value added calculation example. The table below applies the formula using data on all the variables discussed above. The spreadsheet is available for download below. We discussed the EVA, a method that is commonly used to evaluate management’s added value in a given year. Want to have an implementation in Excel? Download the Excel file: EVA calculator
{"url":"https://breakingdownfinance.com/finance-topics/equity-valuation/economic-value-added/","timestamp":"2024-11-07T10:44:31Z","content_type":"text/html","content_length":"241493","record_id":"<urn:uuid:fec5f3a9-5609-46b7-b06f-d59353c0ff3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00642.warc.gz"}
Understanding Mathematical Functions: Which Of The Following Functions When it comes to mathematical functions, one important concept that often comes up is continuity. Understanding which functions are continuous is crucial in various mathematical applications, from calculus to real-world problem-solving. In this blog post, we will explore the concept of continuity and discuss which of the following functions are continuous. Key Takeaways • Understanding continuity is crucial in various mathematical applications. • Mathematical functions play a significant role in representing relationships between variables. • Continuity in functions is defined by the concept of limit. • Examples of continuous functions include linear, polynomial, exponential, and trigonometric functions. • Graphical analysis can be used to determine the continuity of a function. Understanding Mathematical Functions Mathematical functions are a fundamental concept in mathematics, serving as a key tool for representing relationships between variables. They play a crucial role in various fields, including calculus, algebra, and statistics. In this chapter, we will delve into the definition of a mathematical function and explore its significance in understanding continuous functions. A. What is a mathematical function? A mathematical function is a rule or correspondence that assigns a unique output to each input in a specified set. In simpler terms, it is a relationship between two sets of numbers, where each input has exactly one output. Functions are commonly denoted by symbols such as f(x), g(x), or h(x), with "x" representing the input variable. Define a mathematical function in the context of mathematics In mathematics, a function can be defined as a relation between a set of inputs, called the domain, and a set of outputs, known as the range. The domain and range are essential components of a function, as they determine the set of possible inputs and outputs. Explain the role of functions in representing relationships between variables Functions serve as a powerful tool for representing and analyzing relationships between variables. They enable mathematicians to model real-world phenomena, make predictions, and solve complex problems. By understanding functions, professionals in various fields can gain insights into patterns, trends, and dependencies within data sets. B. Which functions are continuous? Continuity is a fundamental property of functions, representing the absence of any abrupt changes or breaks in their graphs. A continuous function can be drawn without lifting the pen from the paper, resulting in a smooth, unbroken curve. In the context of mathematical functions, it is essential to identify which types of functions exhibit continuity. Discuss the concept of continuity in mathematical functions In mathematics, a function is considered continuous if, for every point in its domain, the limit of the function as the input approaches that point exists and is equal to the value of the function at that point. This property ensures that the function's graph has no abrupt jumps, holes, or gaps. • Explain the role of limits in determining continuity • Address the significance of continuity in calculus and real analysis Understanding the concept of continuity is crucial for analyzing functions and their behavior. By identifying which functions are continuous, mathematicians can make accurate predictions and calculations, leading to practical applications in various scientific and engineering fields. Understanding Continuity in Functions Mathematical functions play a crucial role in various fields, from engineering to economics. Understanding the concept of continuity in functions is essential for analyzing their behavior and properties. In this chapter, we will delve into the definition of continuity and its connection to the concept of a limit. A. Define Continuity in the Context of Mathematical Functions The concept of continuity in mathematical functions refers to the absence of any abrupt jumps, breaks, or holes in the graph of the function. A function is considered continuous if its graph can be drawn without lifting the pencil from the paper. In other words, there are no gaps, breaks, or sharp turns in the graph. 1. Definition of Continuity • A function f(x) is continuous at a point c if the following three conditions are met: • - The function is defined at c • - The limit of f(x) as x approaches c exists • - The limit of f(x) as x approaches c is equal to f(c) 2. Types of Discontinuities • - Point discontinuity: A function has a point discontinuity at a specific point when the function is defined at that point, but the limit as x approaches that point does not equal the function • - Jump discontinuity: A function has a jump discontinuity when there is an abrupt change in the function value at a specific point. • - Infinite discontinuity: A function has an infinite discontinuity at a point when the limit as x approaches that point is infinite. B. Discuss the Concept of Limit and Its Connection to Continuity The concept of a limit is closely related to the idea of continuity in mathematical functions. The limit of a function at a particular point gives us insight into the behavior of the function as it approaches that point, and it is a fundamental concept in calculus. 1. Definition of Limit • The limit of a function f(x) as x approaches a specific value c is the value that f(x) approaches as x gets closer and closer to c. • - Mathematically, the limit of f(x) as x approaches c is denoted as lim(x → c) f(x). 2. Connection to Continuity • - A function is continuous at a point c if the limit of the function as x approaches c exists and is equal to the function value at c. • - If a function is not continuous at a point, there is a discontinuity present, which can manifest as a jump, hole, or other irregular behavior in the graph of the function. Examples of Continuous Functions When it comes to understanding mathematical functions, one important aspect to consider is continuity. Continuous functions are those that do not have any breaks, jumps, or gaps in their graph. In other words, the function can be drawn without lifting the pen from the paper. Here are some examples of elementary continuous functions: A. Elementary Continuous Functions 1. Linear Functions Linear functions take the form of f(x) = mx + b, where m and b are constants. These functions are continuous because they form straight lines with no breaks or holes. As you trace the graph, you will notice that it can be drawn without lifting the pen, making it a continuous function. 2. Polynomial Functions Polynomial functions are made up of terms involving x raised to a non-negative integer power. For example, f(x) = 3x^2 - 2x + 5 is a polynomial function. These functions are continuous for all real numbers x, meaning there are no disruptions in the graph and it can be drawn without lifting the pen. 3. Exponential Functions Exponential functions take the form of f(x) = a^x, where a is a positive constant not equal to 1. These functions exhibit continuous growth or decay, and their graphs do not have any breaks or jumps. 4. Trigonometric Functions Trigonometric functions such as sine, cosine, and tangent are also continuous. These functions have smooth and continuous wave-like graphs with no interruptions. B. Explanation of Continuity So, why are these functions considered continuous? The key factor is that they do not have any sudden changes, jumps, or breaks in their graph. This means that as you move along the x-axis, the corresponding y-values change smoothly without any disruptions. This property makes these functions suitable for various mathematical and real-world applications where continuity is crucial. Examples of non-continuous functions When it comes to mathematical functions, not all of them are continuous. There are certain types of functions that exhibit non-continuous behavior, and it's important to understand these examples in order to grasp the concept of continuity in mathematics. A. Provide examples of functions that are not continuous One common example of a non-continuous function is the step function. This type of function has a constant value within specific intervals and undergoes an abrupt change at the boundaries of these intervals. Another example is the piecewise function, which is defined by different rules or formulas for different intervals of the independent variable. Additionally, functions with removable discontinuities are considered non-continuous, as they have a hole or gap at a certain point that can be filled to make the function continuous. B. Discuss the characteristics that make these functions non-continuous Non-continuous functions exhibit certain characteristics that differentiate them from continuous functions. One common characteristic is the presence of discontinuities, which are points where the function is not defined or undergoes a sudden change in value. In the case of step functions, the abrupt transitions between constant values result in discontinuities. Piecewise functions also have discontinuities at the boundaries of the different intervals where the rules or formulas change. Functions with removable discontinuities have gaps or holes at specific points, causing a break in the continuity of the function. Understanding Mathematical Functions: Determining Continuity Using Graphical Analysis When it comes to understanding the continuity of mathematical functions, graphical analysis is a powerful tool that can help us determine whether a function is continuous or not. By visually examining the graph of a function, we can identify any breaks, jumps, or other disruptions in the function's behavior that would indicate a lack of continuity. A. Discuss how to determine continuity of a function graphically Continuity of a function can be determined graphically by looking for three main characteristics: 1. No breaks or jumps A continuous function will have a graph that does not contain any breaks or jumps. This means that there are no sudden changes in the value of the function as it moves from one point to another. If there are any sharp corners or discontinuities in the graph, then the function is not continuous. 2. No asymptotes Another characteristic of continuity is the absence of asymptotes in the graph. An asymptote is a line that the graph approaches but never touches. If a function has an asymptote, it means that there is a point where the function is not defined, and therefore it is not continuous at that point. 3. No holes A continuous function will not have any holes in its graph. If there are any missing points or gaps in the graph, then the function is not continuous at those points. B. Provide examples of graphical analysis to determine continuity of functions Let's look at a few examples of graphical analysis to determine the continuity of functions: • Example 1: The function f(x) = x^2 is continuous for all real numbers. Its graph is a smooth parabola that does not contain any breaks, jumps, asymptotes, or holes, indicating that it is • Example 2: The function g(x) = 1/x is not continuous at x = 0. Its graph has an asymptote at x = 0, indicating that the function is not defined at that point and therefore not continuous. • Example 3: The function h(x) = |x| has a sharp corner at x = 0. This indicates a lack of continuity at that point, as the function's graph changes direction abruptly. In summary, we have discussed several mathematical functions and whether they are continuous or not. We learned that linear functions, quadratic functions, cubic functions, and sine and cosine functions are all examples of continuous functions, while piecewise functions, step functions, and absolute value functions are not continuous at every point. Understanding the concept of continuity in mathematical functions is crucial for further studies in mathematics. Importance of Understanding Continuity • Continuity is essential in mathematical analysis and calculus. • It helps in understanding the behavior of a function at different points. • Understanding continuity is fundamental in solving real-world problems using mathematical models. By grasping the concept of continuity, mathematicians and scientists can make accurate predictions and interpretations based on mathematical functions. ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-which-functions-are-continuous","timestamp":"2024-11-14T17:41:26Z","content_type":"text/html","content_length":"218708","record_id":"<urn:uuid:7f1671a0-5759-4370-8c3b-929fcca035de>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00581.warc.gz"}
Length Conversion in Metric (SI) & US Customary (USC) Units getcalc.com's Length Units Converter is an online tool to execute measurement units conversions in Metric (SI) & US Customary (USC) number system. The metric (SI) units of kilometers, meters, centimeters, millimeters & micrometers and the US Customary units of miles, yards, feet & inches used to measure the physical quantity of length or distance. By using this converter, any length measurement unit can be converted with in the units of US customary or metric system, or from metric to US customary units and vice versa.
{"url":"https://getcalc.com/length-converter.htm","timestamp":"2024-11-07T10:15:35Z","content_type":"text/html","content_length":"34618","record_id":"<urn:uuid:a35d438e-c3a4-44ef-a6a4-518deedcd19e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00669.warc.gz"}
How Do You Solve For Momentum - A Plus Topper What Is Momentum 1. Every moving object has momentum. However, the momentum of a timber lorry is very much bigger than that of a cyclist, despite them moving at the same speed. 2. The product of mass and velocity is called ‘momentum‘. \(\text{i}\text{.e}\text{. }p=m\overset{\to }{\mathop{v}}\) 3. Unit: SI unit of momentum is kg-m/s. 4. It is a vector quantity. Also Read About: Principle of Conservation of Momentum Activity 1 Aim: To study the effect of stopping two objects A. of the same mass moving at different speeds B. of different masses moving at the same speed Materials: Two identical glass marbles, soft plasticine, a steel marble of the same size as the glass marbles but with a bigger mass Apparatus: Meter ruler, vernier callipers A. Same Mass Moving at Different Speeds 1. Two identical pieces of soft plasticine are placed on a table. 2. A glass marble is dropped on the plasticine from a height of 20 cm. Another identical glass marble is dropped on the other piece of plasticine from a height of 60 cm as shown in Figure (a). 3. The diameter of the cavity on the surface of each plasticine caused by the falling marbles is measured using a pair of vernier callipers and compared. B. Different Masses Moving at the Same Speed 1. The method in section A is repeated by releasing a glass marble and a steel marble of the same size, both from a height of 60 cm. A. Same Mass Moving at Different Speeds The diameter of the cavity on the surface of the plasticine caused by the glass marble that is dropped from a height of 60 cm is larger than that caused by the glass marble that is dropped from a height of 20 cm. B. Different Masses Moving at the Same Speed The diameter of the cavity on the surface of the plasticine caused by the steel marble is larger than that caused by the glass marble. A larger cavity on the surface of the plasticine caused by a falling marble means a bigger effect on the plasticine in stopping the marble. The size of the cavity is a measure of the magnitude of the momentum of the moving object. A. Same Mass Moving at Different Speeds 1. The glass marble that is dropped from a higher height will reach the surface of the plasticine with a higher speed. 2. Both marbles are identical and hence, they have the same mass. Therefore, for two objects of equal mass, stopping the one with a higher speed requires greater effort than the one with a lower B. Different Masses Moving at the Same Speed 1. Both the glass marble and the steel marble are dropped from the same height. Therefore, both of them will reach the surface of the plasticine with the same velocity. 2. The steel marble has a greater mass than the glass marble. Therefore, for two objects of different masses but moving at the same speed, the effect of stopping the one with a bigger mass is greater than the one with a smaller mass. The linear momentum of an object depends on its mass and speed. For an object with constant mass, the higher its speed, the higher is its momentum. For an object with constant speed, the larger its mass, the higher is its momentum. Momentum Example Problems With Solutions Example 1. Calculate the force required to produce an acceleration of 5 m/s^2 in a body of mass 2.4 kg. Solution: We know that force = mass × acceleration = 2.4 kg × 5 m/s^2 = 12.0N Example 2. A body of mass 2.5 kg is moving with a velocity of 20 m/s. Calculate its momentum. Solution: Momentum, p = mass × velocity Here, mass m = 2.5 kg Velocity, v = 20 m/s ∴ Momentum, p = mv = 2.5 × 20 kg-m/s = 50 kg-m/s Example 3. The total mass of a lorry is 20000 kg and the total mass of a car is 2000 kg. If both the lorry and the car are travelling at a velocity of 25 m s^-1, calculate the momentum of the lorry and the car respectively. Momentum of the lorry = 20000 × 25 = 5 × 10^5 kg m s^-1 Momentum of the car = 2000 × 25 = 5 × 10 kg m s^-1
{"url":"https://www.aplustopper.com/momentum/","timestamp":"2024-11-14T15:36:39Z","content_type":"text/html","content_length":"45761","record_id":"<urn:uuid:df9e62b4-8912-450c-b6bf-6875b9383211>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00495.warc.gz"}
Yet Another Contest 3 P2 - Work Experience Submit solution Points: 10 (partial) Time limit: 2.0s Memory limit: 512M Josh, Nils, and Mike are being allocated to various work experience sites! The town consists of different buildings indexed from to , connected by bidirectional roads such that it is possible to get from any building to any other building using only the roads. The -th road connects buildings and . Josh, Nils, and Mike will each be allocated to one of the buildings to undergo work experience there. Note that more than one of them can be allocated to the same building. After a long day's work, they plan to meet up with each other. First, they will select one of the buildings. Then, each of them will walk from their current building to the selected building along the roads, taking the route with the fewest roads possible. Exhausted from all of the work, they will select the building such that the total number of roads traversed by all of them is minimised. If multiple buildings can be selected whilst minimising the total number of roads traversed, then out of those buildings, they will select the building with the lowest index. However, none of them have been allocated to a building yet. All buildings are understaffed, so any allocation of Josh, Nils, and Mike to the buildings is possible. For each building , they would like to know the number of different ways to allocate them to the buildings such that they will select and meet up at building . Two ways are considered different if any of Josh, Nils, or Mike are allocated to different buildings amongst the two ways. It is guaranteed that any building is reachable from any other building by traversing the roads. Subtask 1 [10%] The graph is linear. More specifically, for , and . Subtask 2 [20%] Subtask 3 [30%] Note that Subtask 2 must be passed for this subtask to be evaluated. Subtask 4 [40%] No additional constraints. Note that all previous subtasks must be passed for this subtask to be evaluated. Input Specification The first line of input contains a single integer, . The -th of the following lines of input contain two space-separated integers and , representing that the -th road connects buildings and . Output Specification Print a single line containing space-separated integers. The -th of these integers should be the number of different ways to allocate Josh, Nils and Mike to the buildings such that they will select and meet up at building . Sample Input Sample Output Let's consider the scenario where Josh is allocated to building , Nils is allocated to building , and Mike is allocated to building . If they were to meet up at building , they would traverse a total of roads: • Josh would not need to move at all, so he would traverse roads. • Nils would take the second road to building , and then take the first road to building , traversing roads. • Mike would take the -th road, arriving at building after traversing road. In total, they would traverse roads. It can be shown that choosing any other building to meet up at would require a greater total number of traversed roads, so in this scenario, they would select and meet up at building . Considering all scenarios, it can be shown that: • In scenarios, building is selected. • In scenarios, building is selected. • In scenarios, building is selected. • In scenarios, building is selected. • In scenarios, building is selected. There are no comments at the moment.
{"url":"https://dmoj.ca/problem/yac3p2","timestamp":"2024-11-14T14:49:40Z","content_type":"text/html","content_length":"34807","record_id":"<urn:uuid:9b0a664d-02f4-4fe4-9a87-18692b8ff46a>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00385.warc.gz"}
Generalised friezes and a modified Caldero-Chapoton map depending on a rigid object, II It is an important aspect of cluster theory that cluster categories are "categorifications" of cluster algebras. This is expressed formally by the (original) Caldero-Chapoton map X which sends certain objects of cluster categories to elements of cluster algebras.Let τ c→ b→ c be an Auslander-Reiten triangle. The map X has the salient property that X(τ c) X( c) - X( b) = 1. This is part of the definition of a so-called frieze, see [1].The construction of X depends on a cluster tilting object. In a previous paper [14], we introduced a modified Caldero-Chapoton map ρ depending on a rigid object; these are more general than cluster tilting objects. The map ρ sends objects of sufficiently nice triangulated categories to integers and has the key property that ρ(τ c)ρ( c) - ρ( b) is 0 or 1. This is part of the definition of what we call a generalised frieze.Here we develop the theory further by constructing a modified Caldero-Chapoton map, still depending on a rigid object, which sends objects of sufficiently nice triangulated categories to elements of a commutative ring A. We derive conditions under which the map is a generalised frieze, and show how the conditions can be satisfied if A is a Laurent polynomial ring over the integers.The new map is a proper generalisation of the maps X and ρ. Auslander-Reiten triangle, Categorification, Cluster algebra, Cluster category, Cluster tilting object, Rigid object ASJC Scopus subject areas Cite this • Standard • Harvard • Apa • Vancouver • Author • BibTeX • RIS title = "Generalised friezes and a modified Caldero-Chapoton map depending on a rigid object, II", abstract = "It is an important aspect of cluster theory that cluster categories are {"}categorifications{"} of cluster algebras. This is expressed formally by the (original) Caldero-Chapoton map X which sends certain objects of cluster categories to elements of cluster algebras.Let τ c→ b→ c be an Auslander-Reiten triangle. The map X has the salient property that X(τ c) X( c) - X( b) = 1. This is part of the definition of a so-called frieze, see [1].The construction of X depends on a cluster tilting object. In a previous paper [14], we introduced a modified Caldero-Chapoton map ρ depending on a rigid object; these are more general than cluster tilting objects. The map ρ sends objects of sufficiently nice triangulated categories to integers and has the key property that ρ(τ c)ρ( c) - ρ( b) is 0 or 1. This is part of the definition of what we call a generalised frieze.Here we develop the theory further by constructing a modified Caldero-Chapoton map, still depending on a rigid object, which sends objects of sufficiently nice triangulated categories to elements of a commutative ring A. We derive conditions under which the map is a generalised frieze, and show how the conditions can be satisfied if A is a Laurent polynomial ring over the integers.The new map is a proper generalisation of the maps X and ρ.", keywords = "Auslander-Reiten triangle, Categorification, Cluster algebra, Cluster category, Cluster tilting object, Rigid object", author = "Thorsten Holm and Peter J{\o}rgensen", note = "Funding information: Part of this work was done while Peter J{\o}rgensen was visiting the Leibniz Universit{\"a}t Hannover. He thanks Christine Bessenrodt, Thorsten Holm, and the Institut f{\ "u}r Algebra, Zahlentheorie und Diskrete Mathematik for their hospitality. He gratefully acknowledges support from Thorsten Holm's grant HO 1880/5-1 , which falls under the research priority programme SPP 1388 Darstellungstheorie of the Deutsche Forschungsgemeinschaft (DFG).", year = "2016", month = may, day = "1", doi = "10.1016/j.bulsci.2015.05.001", language = "English", volume = "140", pages = "112--131", journal = "Bulletin des Sciences Mathematiques", issn = "0007-4497", publisher = "Elsevier Masson SAS", number = "4", TY - JOUR T1 - Generalised friezes and a modified Caldero-Chapoton map depending on a rigid object, II AU - Holm, Thorsten AU - Jørgensen, Peter N1 - Funding information: Part of this work was done while Peter Jørgensen was visiting the Leibniz Universität Hannover. He thanks Christine Bessenrodt, Thorsten Holm, and the Institut für Algebra, Zahlentheorie und Diskrete Mathematik for their hospitality. He gratefully acknowledges support from Thorsten Holm's grant HO 1880/5-1 , which falls under the research priority programme SPP 1388 Darstellungstheorie of the Deutsche Forschungsgemeinschaft (DFG). PY - 2016/5/1 Y1 - 2016/5/1 N2 - It is an important aspect of cluster theory that cluster categories are "categorifications" of cluster algebras. This is expressed formally by the (original) Caldero-Chapoton map X which sends certain objects of cluster categories to elements of cluster algebras.Let τ c→ b→ c be an Auslander-Reiten triangle. The map X has the salient property that X(τ c) X( c) - X( b) = 1. This is part of the definition of a so-called frieze, see [1].The construction of X depends on a cluster tilting object. In a previous paper [14], we introduced a modified Caldero-Chapoton map ρ depending on a rigid object; these are more general than cluster tilting objects. The map ρ sends objects of sufficiently nice triangulated categories to integers and has the key property that ρ(τ c)ρ( c) - ρ( b) is 0 or 1. This is part of the definition of what we call a generalised frieze.Here we develop the theory further by constructing a modified Caldero-Chapoton map, still depending on a rigid object, which sends objects of sufficiently nice triangulated categories to elements of a commutative ring A. We derive conditions under which the map is a generalised frieze, and show how the conditions can be satisfied if A is a Laurent polynomial ring over the integers.The new map is a proper generalisation of the maps X and ρ. AB - It is an important aspect of cluster theory that cluster categories are "categorifications" of cluster algebras. This is expressed formally by the (original) Caldero-Chapoton map X which sends certain objects of cluster categories to elements of cluster algebras.Let τ c→ b→ c be an Auslander-Reiten triangle. The map X has the salient property that X(τ c) X( c) - X( b) = 1. This is part of the definition of a so-called frieze, see [1].The construction of X depends on a cluster tilting object. In a previous paper [14], we introduced a modified Caldero-Chapoton map ρ depending on a rigid object; these are more general than cluster tilting objects. The map ρ sends objects of sufficiently nice triangulated categories to integers and has the key property that ρ(τ c)ρ( c) - ρ( b) is 0 or 1. This is part of the definition of what we call a generalised frieze.Here we develop the theory further by constructing a modified Caldero-Chapoton map, still depending on a rigid object, which sends objects of sufficiently nice triangulated categories to elements of a commutative ring A. We derive conditions under which the map is a generalised frieze, and show how the conditions can be satisfied if A is a Laurent polynomial ring over the integers.The new map is a proper generalisation of the maps X and ρ. KW - Auslander-Reiten triangle KW - Categorification KW - Cluster algebra KW - Cluster category KW - Cluster tilting object KW - Rigid object UR - http://www.scopus.com/inward/record.url?scp=84929591094&partnerID=8YFLogxK U2 - 10.1016/j.bulsci.2015.05.001 DO - 10.1016/j.bulsci.2015.05.001 M3 - Article AN - SCOPUS:84929591094 VL - 140 SP - 112 EP - 131 JO - Bulletin des Sciences Mathematiques JF - Bulletin des Sciences Mathematiques SN - 0007-4497 IS - 4 ER -
{"url":"https://www.fis.uni-hannover.de/portal/en/publications/generalised-friezes-and-a-modified-calderochapoton-map-depending-on-a-rigid-object-ii(890cc94d-8514-495d-8c78-57a2d0d55d44).html","timestamp":"2024-11-04T20:04:00Z","content_type":"text/html","content_length":"54415","record_id":"<urn:uuid:73c69711-8f81-41e3-b43f-d8cfb32579cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00603.warc.gz"}
How do you find the slope for y=18? | HIX Tutor How do you find the slope for y=18? Answer 1 #y=18# allocates a constant value of 18 to y. So no mater what value #x# is #y# will always be 18 The result is a straight line that is parallel to the x-axis and the line passes through the point on the y-axis #(x,y)->(0,18)# ' ~ ~~~~~~~ The slope is #("change in y")/("change in x")# Let any point #P_1->(x_1,y_1)=(x_1,18)# Let another point #P_2->(x_2,y_2)->(x_2,18)# where #P_1!=P_2# Then slope #->m->("change in y")/("change in x") = (y_2-y_1)/(x_2-x_1) = (18-18)/(x_2-x_1) =0# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 For the equation (y = 18), since it represents a horizontal line, the slope is 0. Horizontal lines have a slope of 0 because they are parallel to the x-axis and do not rise or fall vertically. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-find-the-slope-for-y-18-8f9af91b74","timestamp":"2024-11-03T14:05:01Z","content_type":"text/html","content_length":"568994","record_id":"<urn:uuid:ab8b2b54-79b9-4556-b954-f19b27899e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00577.warc.gz"}
Magnet Pull Force Calculator - Calculator Doc Magnet Pull Force Calculator Magnet pull force refers to the strength of the attraction or repulsion between two charged objects. This force can be calculated based on the charges of the objects, the distance between them, and the magnetic permeability of the medium. The formula to calculate the magnet pull force (F) is: F = μ * q1 * q2 / (4 * π * r) • μ is the magnetic permeability. • q1 and q2 are the charges of the two objects. • r is the distance between the two charges. How to Use 1. Input the magnetic permeability (μ) of the medium. 2. Enter the value for charge 1 (q1) and charge 2 (q2). 3. Specify the distance (r) between the two charges. 4. Press “Calculate” to find the pull force (F). If the magnetic permeability (μ) is 4π × 10^-7 H/m, the charges (q1 and q2) are 3 and 5 coulombs, and the distance (r) is 2 meters, the pull force would be calculated as: F = (4π × 10^-7) * 3 * 5 / (4 * π * 2) = 3.75 × 10^-7 N 1. What is magnetic pull force? Magnetic pull force is the force between two charged objects, which can either attract or repel based on the charges and distance. 2. What is magnetic permeability (μ)? Magnetic permeability (μ) is a measure of how a material responds to the presence of a magnetic field. 3. How does distance affect magnetic pull force? The force decreases as the distance between the charges increases, according to the inverse square law. 4. What units are used for charge (q1 and q2)? Charge is measured in coulombs (C). 5. How do you calculate magnetic pull force between two magnets? You can use the formula F = μ * q1 * q2 / (4 * π * r) to calculate the force between two magnets. 6. Is the magnetic pull force always attractive? No, the force can be attractive or repulsive depending on whether the charges are like or opposite. 7. Does the medium affect magnetic pull force? Yes, the magnetic permeability of the medium plays a role in determining the force between two charges. 8. Can I calculate magnetic pull force for real-world magnets? Yes, but real-world scenarios often involve complex factors like shape, alignment, and non-ideal conditions. 9. What is the value of π in the formula? π (Pi) is approximately 3.14159, a constant used in the formula. 10. How accurate is this magnet pull force formula? The formula provides an idealized calculation. For more complex shapes and interactions, a more detailed model may be needed. 11. What happens if the charges are very far apart? As the distance increases, the magnetic pull force diminishes and can approach zero. 12. Can this formula be applied to both attraction and repulsion? Yes, the formula works for both attractive and repulsive forces, depending on the charge signs. 13. What are some typical values for magnetic permeability? In a vacuum, the permeability of free space is 4π × 10^-7 H/m. Different materials will have varying permeability values. 14. How do we measure magnetic pull force in practice? Magnetic pull force can be measured using force meters or specialized equipment in laboratory conditions. 15. What factors can alter magnetic pull force? Factors like temperature, medium, and charge magnitudes can influence the force. 16. Why is the force divided by 4π in the formula? The factor 4π accounts for the geometric distribution of the magnetic field in space, based on spherical symmetry. 17. Can we apply this formula for moving charges? No, this formula applies to static charges. For moving charges, more complex equations involving electromagnetic fields are required. 18. How do the charges’ signs affect the force? If the charges have the same sign, the force will be repulsive. If they have opposite signs, the force will be attractive. 19. What is the relationship between magnet strength and charge? Stronger magnets or charges will result in a higher pull force when all other factors are constant. 20. Is magnetic pull force significant in everyday life? While noticeable in certain cases like magnets, the force between typical objects with low charge is usually negligible. Understanding magnet pull force is essential in both theoretical physics and practical applications. By using this calculator, you can easily determine the force between two charges, helping you in various fields like engineering, physics, and material science.
{"url":"https://calculatordoc.com/magnet-pull-force-calculator/","timestamp":"2024-11-02T11:08:16Z","content_type":"text/html","content_length":"86904","record_id":"<urn:uuid:39975c9d-2176-462c-9b0e-45bd6fb55ebf>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00636.warc.gz"}
Narayan Venkatasubramanyan (This is the fourth in the PuneTech series of articles on optimization by Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management. The first one was an ‘overview’ case study of optimization. The second was architecture of a decision support system. The third was optimization and organizational readiness for For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here. For the full series of articles, click here.) this is a follow-up to optimization: a case study. frequent references in this article to details in that article would make this one difficult to read for someone who hasn’t at least skimmed through the problem of choice the wikipedia article on optimization provides a great overview of the field. it does a thorough job by providing a brief history of the field of mathematical optimization, breaking down the field into its various sub-fields, and even making a passing reference to commercially available packages that help in the rapid development of optimization-based solutions. the rich set of links in this page lead to detailed discussions of each of the topics touched on in the overview. i’m tempted to stop here and say that my job is done but there is one slight problem: there is a complete absence of any reference to helicopter scheduling in an offshore oil-field. not a trace! this brings me to the biggest problem facing a young practitioner in the field: what to do when faced with a practical problem? of course, the first instinct is to run with the technique one is most familiar with. being among the few in our mba program that had chosen the elective titled “selected topics in operations research” (a title that i’m now convinced was designed to bore and/or scare off prospective students who weren’t self-selected card-carrying nerds), we came to the problem of helicopter scheduling armed with a wealth of text-book knowledge. the lines represent the constraints. the blue region is the set of all “permissible values”. the objective function is used to choose one (“the most optimal”) out of the blue points. image via having recently studied linear and integer programming, we first tried to write down a mathematical formulation of the problem. we knew we could describe each sortie in terms of variables (known as decision variables). we then had to write down constraints that ensured the following: • any set of values of those decision variables that satisfied all the constrains would correspond to a sortie • any sortie could be described by a set of permissible set of values of those decision variables this approach is one of the cornerstones of mathematical programming: given a practical situation to optimize, first write down a set of equations whose solutions have a one-to-one correspondence to the set of possible decisions. typically, these equations have many solutions. click here for an animated presentation that shows how the solutions to a system of inequalities can be viewed graphically. the other cornerstone is what is called an objective function, i.e., a mathematical function in those same variables that were used to describe the set of all feasible solutions. the solver is directed to pick the “best” solution, i.e., one that maximizes (or minimizes) the objective function. the set of constraints and the objective function together constitute a mathematical programming problem. the solution that maximizes (or minimizes) the objective function is called an optimal linear programming – an example googling for “linear programming examples” leads to millions of hits, so let me borrow an example at random from here: “A farmer has 10 acres to plant in wheat and rye. He has to plant at least 7 acres. However, he has only $1200 to spend and each acre of wheat costs $200 to plant and each acre of rye costs $100 to plant. Moreover, the farmer has to get the planting done in 12 hours and it takes an hour to plant an acre of wheat and 2 hours to plant an acre of rye. If the profit is $500 per acre of wheat and $300 per acre of rye how many acres of each should be planted to maximize the decisions the farmer needs to make are: how many acres of wheat to plant? how many acres of rye to plant? let us call these x and y respectively. so what values can x and y take? • since we know that he has only 10 acres, it is clear that x+y must be less than 10. • the problem says that he has to plant at least 7 acres. we have two choices: we can be good students and write down the constraint “x+y >= 7” or we can be good practitioners and demand to know more about the origins of this constraint (i’m sure every OR professional of long standing has scars to show from the times when they failed to ask that question.) • the budget constraint implies that 200x + 100y <= 1200. again, should we not be asking why this farmer cannot borrow money if doing so will increase his returns? • finally, the time constraint translates into x + 2y <= 12. can he not employ farm-hands to increase his options? • the non-negativity constraints (x, y >= 0) are often forgotten. in the absence of these constraints, the farmer could plant a negative amount of rye because doing so would seem to get him more land, more money, and more time. clearly, this is practically impossible. as you will see if you were to scroll down that page, these inequalities define a triangular region in the x,y plane. all points on that triangle and its interior represents feasible solutions: i.e., if you were to pick a point, say (5,2), it means that the the farmer plants 5 acres each of wheat and 2 acres of rye. it is easy to confirm that this represents no more than 10 acres, no less than 7 acres, no more than $1200 and no more than 12 hours. but is this the best solution? or is there another point within that triangle? this is where the objective function helps. the objective is to maximize the profit earner, i.e., maximize 500x + 300y. from among all the points (x,y) in that triangle, which one has the highest value for 500x + 300y? this is the essence of linear programming. LPs are a subset of problems that are called mathematical programs. real life isn’t always lp in practice, not all mathematical programs are equally hard. as we saw above, if all the constraints and the objective function are linear in the decision variables and if the decision variables can take on any real value, we have a linear program. this is the easiest class of mathematical programs. linear programming models can be used to describe, sometimes approximately,a large number of commercially interesting problems like supply chain planning. commercial packages like OPL, GAMS, AMPL, etc can be used to model such problems without having to know much programming. packages like CPLEX can solve problems with millions of decision variables and constraints and produce an optimal solution in reasonable time. lately, there have been many open source solvers (e.g., GLPK) that have been growing in their capability and competing with commercial packages. integer programming problems constrain the solution to specific discrete values. while the blue lines represent the “feasible region”, the solution is only allowed to take on values represented by the red dots. this makes the problem significantly more difficult. image via wikipedia in many interesting commercial problems, the decision variables is required to take on discrete values. for example, a sortie that carries 1/3 of a passenger from point a to point b and transports the other 2/3 on a second flight from point a to point b would not work in practice. a helicopter that lands 0.3 in point c and 0.7 in point d is equally impractical. these variables have to be restricted to integer values. such problems are called integer programming problems. (there is a special class of problems in which the decision variables are required to be 0 or 1; such problems are called 0-1 programming problems.) integer programming problems are surprisingly hard to solve. such problems occur routinely in scheduling problems as well as in any problem that involves discrete decisions. commercial packages like CPLEX include a variety of sophisticated techniques to find good (although not always optimal) solutions to such problems. what makes these problems hard is the reality that the solution time for such problems grows exponentially with the growth in the size of the problem. another class of interesting commercial problems involves non-linear constraints and/or objective functions. such problems occur routinely in situations such refinery planning where the dynamics of the process cannot be described (even approximately) with linear functions. some non-linear problems are relatively easy because they are guaranteed to have unique minima (or maxima). such well-behaved problems are easy to solve because one can always move along an improving path and find the optimal solution. when the functions involved are non-convex, you could have local minima (or maxima) that are worse than the global minima (or maxima). such problems are relatively hard because short-sighted algorithms could find a local minimum and get stuck in it. fortunately for us, the helicopter scheduling problem had no non-linear effects (at least none that we accounted for in our model). unfortunately for us, the discrete constraints were themselves extremely hard to deal with. as we wrote down the formulation on paper, it became quickly apparent that the sheer size and complexity of the problem was beyond the capabilities of the IBM PC-XT that we had at our disposal. after kicking this idea around for a bit, we abandoned this approach. resorting to heuristics we decided to resort to a heuristic approach, i.e., an approach that used a set of rules to find good solutions to the problem. the approach we took involved the enumeration of all possible paths on a search tree and then an evaluation of those paths to find the most efficient one. for example, if the sortie was required to start at point A and drop off m1 men at point B and m2 men at point C, the helicopter could • leave point A with the m1 men and proceed to point B, or • leave point A with the m2 men and proceed to point C, or • leave point A with the m1 men and some of the m2 men and proceed to point B, or • leave point A with the m1 men and some of the m2 men and proceed to point C, or • . . . if we were to select the first possibility, it would drop off the m1 men and then consider all the options available to it (return to A for the m2 men? fly to point D to refuel?) we would then traverse this tree enumerating all the paths and evaluating them for their total cost. finally, we would pick the “best” path and publish it to the radio operator. at first, this may seem ridiculous. the explosion of possibilities meant that this tree was daunting. there were several ways around this problem. firstly, we never really explicitly enumerated all possible paths. we built out the possibilities as we went, keeping the best solution until we found one that was better. although the number of possible paths that a helicopter could fly in the course of a sortie was huge, there were simple rules that directed the search in promising directions so that the algorithm could quickly find a “good” sortie. once a complete sortie had been found, the algorithm could then use it to prune searches down branches that seemed to hold no promise for a better solution. the trick was to tune the search direction and prune the tree without eliminating any feasible possibilities. of course, aggressive pruning would speed up the search but could end up eliminating good solutions. similarly, good rules to direct the search could help find good solutions quickly but could defer searches in non-obvious directions. since we were limited in time, so the search tree was never completely searched, so if the rules were poor, good solutions could be pushed out so late in the search that they were never found, at least not in time to be implemented. one of the nice benefits of this approach was that it allowed the radio operator to lock down the first few steps in the sortie and leave the computer to continue to search for a good solution for the remainder of the sortie. this allowed the optimizer to continue to run even after the sortie had begun. this bought the algorithm precious time. allowing the radio operator the ability to override also had the added benefit of putting the user in control in case what the system recommended was infeasible or undesirable. notice that this approach is quite far from mathematical programming. there is no guarantee of an optimal solution (unless one can guarantee that pruning was never too aggressive and that we exhaustively searched the tree, neither of which could be guaranteed in practical cases). nevertheless, this turned out to be quite an effective strategy because it found a good solution quickly and then tried to improve on the solution within the time it was allowed. traditional operations research vs. artificial intelligence this may be a good juncture for an aside: the field of optimization has traditionally been the domain of operations researchers (i.e., applied mathematicians and industrial engineers). even though the field of artificial intelligence in computer science has been the source of many techniques that effectively solve many of the same problems as operations research techniques do, OR-traditionalists have always tended to look askance at their lowly competitors due to the perceived lack of rigour in the AI techniques. this attitude is apparent in the wikipedia article too: after listing all the approaches that are born from mathematical optimization, it introduces “non-traditional” methods with a somewhat off-handed “Here are a few other popular methods:” i find this both amusing and a little disappointing. there have been a few honest attempts at bringing these two fields together but a lot more can be done (i believe). it would be interesting to see how someone steeped in the AI tradition would have approached this problem. perhaps many of the techniques for directing the search and pruning the tree are specific instances of general approaches studied in that discipline. if there is a moral to this angle of our off-shore adventures, it is this: when approaching an optimization problem, it is tempting to shoot for the stars by going down a rigorous path. often, reality intrudes. even when making technical choices, we need to account for the context in which the software will be used, how much time there is to solve the problem, what are the computing resources available, and how it will fit into the normal routine of work. other articles in this series this article is the fourth in the series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the the links leads to a piece that dwells on one particular optimization: a case study architecture of a decision-support system optimization and organizational readiness for change optimization: a technical overview (this article) Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chains problems in areas as diverse as computer assembly, semiconductor manufacturer, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison. He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on Linkedin at: http://www.linkedin.com/in/ Optimization and Organizational Readiness for Change (This is the third in the PuneTech series of articles on optimization by Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management. The first one was an ‘overview’ case study of optimization. The second was architecture of a decision support system. For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here. For the full series of articles, click here.) this is a follow-up to optimization: a case study. frequent references in this article to details in that article would make this one difficult to read for someone who hasn’t at least skimmed through organizational dynamics most discussions of optimization tend to focus on the technical details of problem formulation, algorithm design, the use of commercially available software, implementation details, etc. a fundamental point gets lost in that approach to this topic. in this piece, we will focus on that point: organizational readiness for change. the introduction of optimization in the decision-making process almost always requires change in that process. processes exist in the context of an organization. as such, when introducing change of this nature, organizations need to be treated much the same way a doctor would treat a recipient of an organ. careful steps need to be take to make sure that the organization is receptive to change. before the change is introduced, the affected elements in the organization need to be made aware of the need for change. also, the organization’s “immune system” needs to be neutralized while the change is introduced. the natural tendency of any organization to attack change and frustrate the change agent needs to be foreseen and planned for. the structure of the client’s project organization is critical. in my experience, every successful implementation of optimization has required support at 3 levels within the client organization: 1. a project needs “air cover” from the executive level. 2. at the project level, it needs a champion who will serve as the subject-matter expert, evangelist, manager, and cheerleader. 3. at the implementation level, it needs a group of people who are intimately familiar with the inner workings of the existing IT infrastructure. let me elaborate on that with specific emphasis on the first two: an executive sponsor is vital to ensuring that the team is given the time and resources it needs to succeed even as changes in circumstances cause high-level priorities to change. during the gestation period of a project — a typical project tends to take several months — the project team needs the assurance that their budget will be safe, the priorities that guide their work will remain largely unchanged, and the team as a whole will remain free of distractions. a project champion is the one person in the client organization whose professional success is completely aligned with the success of the project. he/she stands to get a huge bonus and/or a promotion upon the success of the project. such a person keeps the team focused on the deliverable, keeps the executive sponsor armed with all the information he/she needs to continue to make the case for the project, and keeps all affected parties informed of impending changes, in short, an internal change agent. in order to achieve this, the champion has to be from the business end of the organization, not from the IT department. unfortunately, most projects tend to focus on the third of the elements. strength in the implementation team alone will not save project that lacks a sponsor or a champion. let us examine the helicopter scheduling project in this light. it could be argued that executive sponsorship for this project came from the highest possible level. i heard once that our project had been blessed by the managing directors of the two companies. unfortunately, their involvement didn’t extend anywhere beyond that. neither managing director helped shape the project organization for success. who was our champion? there was one vitally important point that i mentioned in passing in the original narrative: the intended users of the system were radio operators. they reported to an on-shore manager in the electronics & telecommunication department. in reality, their work was intimately connected to the production department, i.e., the department that managed the operations in the field. as such, they were effectively reporting to the field production supervisor. the radio operators worked very much like the engineers in the field: they worked all day every day for 14 days at a time and then went home for the next 2 weeks. each position was manned by two radio operators — more about them later — who alternately occupied the radio room. as far as their helicopter-related role was concerned, they were expected to make sure that they did the best they could do to keep operations going as smoothly as possible. their manager, the person who initiated the project, had no direct control over the activities of the radio operator. meanwhile, the field production supervisor was in charge of maintaining the efficient flow of oil out of the field. the cost of helicopter operations was probably a miniscule fraction of the picture they viewed. because no one bore responsibility for the efficiency of helicopter usage, no one in the client organization really cared about the success of our project. unfortunately, we were neither tasked nor equipped to deal with this problem (although that may seem odd considering that there were two fresh MBAs on the team). in hindsight, it seems like this project was ill-structured right from the beginning. the project team soldiered on in the face of these odds, oblivious to the fact that we’d been dealt a losing hand. should the final outcome have ever been a surprise? other articles in this series this article is the third in a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the the links below leads to a piece that dwells on one particular aspect. optimization: a case study architecture of a decision-support system optimization and organizational readiness for change (this article) optimization: a technical overview About the author – Dr. Narayan Venkatasubramanyan Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chains problems in areas as diverse as computer assembly, semiconductor manufacturer, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison. He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on Linkedin at: http://www.linkedin.com/in/ Architecture of a decision-support system (PuneTech is honored to have Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management, as our contributor. I had the privilege of working closely with Narayan at i2 Technologies in Dallas for nearly 10 years. For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here. This is the second in a series of articles that we will publish once a week for a month. The first one was an ‘overview’ case study of optimization. Click here for the full series.) this is a follow-up to optimization: a case study. frequent references in this article to details in that article would make this one difficult to read for someone who hasn’t at least skimmed through a layered view of decision-support systems it is useful to think of a decision-support system as consisting of 4 distinct layers: 1. data layer 2. visibility layer 3. predictive/simulation layer 4. optimization layer the job of the data layer is to capture all the data that is relevant and material to the decision at hand and to ensure that this data is correct, up-to-date, and easily accessible. in our case, this would include master/static data such as the map of the field, the operating characteristics of the helicopter, etc as well as dynamic data such as the requirements for the sortie, ambient conditions (wind, temperature), etc. this may seem rather obvious at first sight but a quick reading of the case study shows that we had to revisit the data layer several times over the course of the development of the solution. as the name implies, the visibility layer provides visibility into the data in a form that allows a human user to exercise his/her judgment. very often, a decision-support system requires no more than just this layer built on a robust data layer. for example, we could have offered a rather weak form of decision support by automating the capture of dynamic data and presenting to the radio operator all the data (both static and dynamic), suitably filtered to incorporate only parts of the field that are relevant to that sortie. he/she would be left to chart the route of the helicopter on a piece of paper, possibly checking off requirements on the screen as they are satisfied. even though this may seem trivial, it is important to note that most decision-support systems in everyday use are rather lightweight pieces of software that present relevant data to a human user in a filtered, organized form. the human decision-maker takes it from there. the predictive/simulation layer offers an additional layer of help to the human decision-maker. it has the intelligence to assess the decisions made (tentatively) by the user but offers no active support. for instance, a helicopter scheduling system that offers this level of support would present the radio operator with a screen on which the map of the field and the sortie’s requirements are depicted graphically. through a series of mouse-clicks, the user can decide whom to pick up, where to fly to, whether to refuel, etc. the system supports the user by automatically keeping track of the weight of the payload (passenger+fuel) and warning the user of violations, using the wind direction to compute the rate of fuel burn, warning the user of low-fuel conditions, monitoring whether crews arrive at their workplace on time, etc. in short, the user makes decisions, the system checks constraints and warns of violations, and provides a measure of goodness of the solution. few people acknowledge that much of corporate decision-making is at this level of sophistication. the widespread use of microsoft excel is clear evidence of this. the optimization layer is the last of the layers. it wrests control from the user and actively recommends decisions. it is obvious that the effectiveness of optimization layer is vitally dependent on the data layer. what is often overlooked is that the acceptance of the optimization layer by the human decision-maker often hinges on their ability to tweak the recommendations in the predictive layer, even if only to reassure themselves that the solution is correct. often, the post-optimization adjustments are indispensable because the human decision-maker knows things that the system does the art (and science) of modeling the term “decision-support system” may seem a little archaic but i will use it here because my experience with applying optimization has been in the realm of systems that recommend decisions, not ones that execute them. there is always human intervention that takes the form of approval and overrides. generally speaking, this is a necessary step. the system is never all-knowing. as a result, its view of reality is limited, possibly flawed. these limitations and flaws are reflected in its recommendations. this invites the question: if there are known limitations and flaws in the model, why not fix them? this is an important question. the answer to this is not nearly as obvious as it may appear. before we actually construct a model of reality, we must consciously draw a box around that portion of reality that we intend to include in the model. if the box is drawn too broadly, the model will be too complex to be tractable. if the box is drawn too tightly, vital elements of the model are excluded. it is rare to find a decision problem in which we find a perfect compromise, i.e., we are able to draw a box that includes all aspects of the problem without the problem becoming computationally intractable. unfortunately, it is hard to teach the subtleties of modeling in a classroom. in an academic setting, it is hard to wrestle with the messy job of making seemingly arbitrary choices about what to leave in and what to exclude. therefore, most students of optimization enter the real world with the impression that the process of modeling is quick and easy. on the contrary, it is at this level that most battles are won or lost. note: the term modeling is going to be unavoidably overloaded in this context. when i speak of models, students of operations research may immediately think in terms of mathematical equations. those models are still a little way down the road. at this point, i’m simply talking about the set of abstract interrelationships that characterize the behaviour of the system. some of these relationships may be too complex to be captured in a mathematical model. as a result, the mathematical model is yet another level removed from reality. consider our stumbling-and-bumbling approach to modeling the helicopter scheduling problem. we realized that the problem we faced wasn’t quite a text-book case. our initial approach was clearly very narrow. once we drew that box, our idealized world was significantly simpler than the real world. our world was flat. our helicopter never ran out of fuel. the amount of fuel it had was never so much that it compromised its seating capacity. it didn’t care which way the wind was blowing. it didn’t care how hot it was. in short, our model was far removed from reality. we had to incorporate each of these effects, one by one, because their exclusion made the gap between reality and model so large that the decisions recommended by the model were grossly unrealistic. it could be argued that we were just a bunch of kids who knew nothing about helicopters, so trial-and-error was the only approach to determining the shape of the box we had to draw. not true! here’s how we could have done it differently: if you were to examine what we did in the light of the four-layer architecture described above, you’d notice that we really only built two of the four: the data layer and the optimization layer. this is a tremendously risky approach, an approach that has often led to failure in many other contexts. it must be acknowledged that optimization experts are rarely experts in the domain that they are modeling. nevertheless, by bypassing the visibility and predictive layers, we had sealed off our model from the eyes of people who could have told us about the flaws in it. each iteration of the solution saw us expanding the data layer on which the software was built. in addition to expanding that data layer, we had to enhance the optimization layer to incorporate the rules implicit in the new pieces of data. here are the steps we took: 1. we added the fuel capacity and consumption rate of each helicopter to the data layer. and modified the search algorithm to “remember” the fuel level and find its way to a fuel stop before the chopper plunged into the arabian sea. 2. we added the payload limit to the data layer. and further modified search algorithm to “remember” not to pick up too many passengers too soon after refueling or risk plunging into the sea with 12 people on board. 3. we captured the wind direction in the data layer and modified the computation of the distance matrix used in the optimization layer. 4. we captured the ambient temperature as well as the relationship between temperature and maximum payload in the data layer. and we further trimmed the options available to the search algorithm. we could have continued down this path ad infinitum. at each step, our users would have “discovered” yet another constraint for us to include. back in those days, ongc used to charter several different helicopter agencies. i remember one of the radio operator telling me that some companies were sticklers for the rules while others would push things to the limit. as such, a route was feasible or not depending on whether the canadian company showed up or the italian one did! should we have incorporated that too in our model? how is one to know? this question isn’t merely rhetorical. the incorporation of a predictive/simulation layer puts the human decision-maker in the driver’s seat. if we had had a simulation layer, we would have quickly learned the factors that were relevant and material to the decision-making process. if the system didn’t tell the radio operator which way the wind was blowing, he/she would have immediately complained because it played such a major role in their choice. if the system didn’t tell him/her whether it was the canadian or the italian company and he didn’t ask, we would know it didn’t matter. in the absence of that layer, we merrily rushed into what is technically the most challenging aspect of the solution. implementing an optimization algorithm is no mean task. it is hugely time-consuming, but that is really the least of the problems. optimization algorithms tend to be brittle in the following sense: a slight change in the model can require a complete rewrite of the algorithm. it is but human that once one builds a complex algorithm, one tends to want the model to remain unchanged. one becomes married to that view of the world. even in the face of mounting evidence that the model is wrong, one tends to hang on. in hindsight, i would say we made a serious mistake by not architecting the system to validate the correctness of the box we had drawn before we rushed ahead to building an optimization algorithm. in other words, if we had built the solution systematically, layer by layer, many of the surprises that caused us to swing wildly between jubilation and depression would have been avoided. other articles in this series this article is the second in a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the the links below leads to a piece that dwells on one particular aspect. articles in this series: optimization: a case study architecture of a decision-support system (this article) optimization and organizational readiness for change optimization: a technical overview About the author – Dr. Narayan Venkatasubramanyan Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chains problems in areas as diverse as computer assembly, semiconductor manufacturer, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison. He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on Linkedin at: http://www.linkedin.com/in/ Optimization: A case study (PuneTech is honored to have Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management, as our contributor. I had the privilege of working closely with Narayan at i2 Technologies in Dallas for nearly 10 years. PuneTech has published some introductory articles on Supply Chain Management (SCM) and the optimization & decision support challenges involved in various real world SCM problems. Who better to write about this area in further depth than Narayan! For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here. This is the first in a series of articles that we will publish once a week for a month. For the full series of articles, click here.) the following entry was prompted by a request for an article on the topic of “optimization” for publication in punetech.com, a website co-founded by amit paranjape, a friend and former colleague. for reasons that may have something to do with the fact that i’ve made a living for a couple of decades as a practitioner of that dark art known as optimization, he felt that i was best qualified to write about the subject for an audience that was technically savvy but not necessarily aware of the application of optimization. it took me a while to overcome my initial reluctance: is there really an audience for this after all, even my daughter feigns disgust every time i bring up the topic of what i do. after some thought, i accepted the challenge as long as i could take a slightly unusual approach to a “technical” topic: i decided to personalize it by rooting in a personal-professional experience. i could then branch off into a variety of different aspects of that experience, some technical, some not so much. read on … the year was 1985. i was fresh out of school, entering the “real” world for the first time. with a bachelors in engineering from IIT-Bombay and a graduate degree in business from IIM-Ahmedabad, and little else, i was primed for success. or disaster. and i was too naive to tell the difference. for those too young to remember those days, 1985 was early in rajiv gandhi‘s term as prime minister of india. he had come in with an obama-esque message of change. and change meant modernization (he was the first indian politician with a computer terminal situated quite prominently in his office). for a brief while, we believed that india had turned the corner, that the public sector companies in india would reclaim the “commanding heights” of the economy and exercise their power to make india a better place. CMC was a public sector company that had inherited much of the computer maintenance business in india after IBM was tossed out in 1977. quickly, they broadened well beyond computer maintenance into all things related to computers. that year, they recruited heavily in IIM-A. i was one of an unusually large number of graduates who saw CMC as a good bet. not too long into my tenure at at CMC, i was invited to meet with an mid-level manager in electronics & telecommunications department of the oil and natural gas commission of india (ONGC). the challenge he posed us was simple: save money by optimizing the utilization of helicopters in the bombay high oilfield. the problem the bombay high offshore oilfield, the setting of our story the bombay high oilfield is about 100 miles off the coast of bombay (see map). back then, it was a collection of about 50 oil platforms, divided roughly into two groups, bombay high north and bombay high south. (on a completely unrelated tangent: while writing this piece, i wandered off into searching for pictures of bombay high. i stumbled upon the work of captain nandu chitnis, ex-navy now ONGC, biker, amateur photographer … who i suspect is a pune native. click here for a few of his pictures that capture the outlandish beauty of an offshore oil field.) movement of personnel between platforms in each of these groups was managed by a radio operator who was centrally located. all but three of these platforms were unmanned. this meant that the people who worked on these platforms had to be flown out from the manned platforms every morning and brought back to their base platforms at the end of the day. at dawn every morning, two helicopters, flew out from the airbase in juhu, in northwestern bombay. meanwhile, the radio operator in each field would get a set of requirements of the form “move m men from platform x to platform y”. these requirements could be qualified by time windows (e.g., need to reach y by 9am, or not available for pick-up until 8:30am) or priority (e.g., as soon as possible). each chopper would arrive at one of the central platforms and gets its instructions for the morning sortie from the radio operator. after doing its rounds for the morning, it would return to the main platform. at lunchtime, it would fly lunchboxes to the crews working at unmanned platforms. for the final sortie of the day, the radio operator would send instructions that would ensure that all the crews are returned safely to their home platforms before the chopper was released to return to bombay for the night. the challenge for us was to build a computer system that would optimize the use of the helicopter. the requirements were ad hoc, i.e., there was no daily pattern to the movement of men within the field, so the problem was different every day. it was believed that the routes charted by the radio operator were inefficient. given the amount of fuel used in these operations, an improvement of 5% over what they did was sufficient to result in a payback period of 4-6 months for our project. Image by Captain Nandu Chitnis, Master Mariner via Flickr this was my first exposure to the real world of optimization. a colleague of mine — another IIM-A graduate and i — threw ourselves at this problem. later, we were joined yet another guy, an immensely bright guy who could make the lowly IBM PC-XT — remember, this was the state-of-the-art at that time — do unimaginable things. i couldn’t have asked to be a member of a team that was better suited to this job. the solution we collected all the static data that we thought we would need. we got the latitude and longitude of the on-shore base and of each platform (degrees, minutes, and seconds) and computed the distance between every pair of points on our map (i think we even briefly flirted with the idea of correcting for the curvature of the earth but decided against it, perhaps one of the few wise moves we made). we got the capacity (number of seats) and cruising speed of each of the helicopters. we collected a lot of sample data of actual requirements and the routes that were flown. we debated the mathematical formulation of the problem at length. we quickly realized that this was far harder than the classical “traveling salesman problem”. in that problem, you are given a set of points on a map and asked to find the shortest tour that starts at any city and touches every other city exactly once before returning to the starting point. in our problem, the “salesman” would pick and/or drop off passengers at each stop. the number he could pick up was constrained, so this meant that he could be forced to visit a city more than once. the TSP is known to be a “hard” problem, i.e., the time it takes to solve it grows very rapidly as you increase the number of cities in the problem. nevertheless, we forged ahead. i’m not sure if we actually completed the formulation of an integer programming problem but, even before we did, we came to the conclusion that this was too hard of a problem to be solved as an integer program on a first-generation desktop computer. instead, we designed and implemented a search algorithm that would apply some rules to quickly generate good routes and then proceed to search for better routes. we no longer had a guarantee of optimality but we figured we were smart enough to direct our search well and make it quick. we tested our algorithm against the test cases we’d selected and discovered that we were beating the radio operators quite handily. then came the moment we’d been waiting for: we finally met the radio operators. they looked at the routes our program was generating. and then came the first complaint. “your routes are not accounting for refueling!”, they said. no one had told us that the sorties were long enough that you could run out of fuel halfway, so we had not been monitoring that at all! so we went back to the drawing board. we now added a new dimension to the search algorithm: it had to keep track of fuel and, if it was running low on fuel during the sortie, direct the chopper to one of the few fuel bases. this meant that some of the routes that we had generated in the first attempt were no longer feasible. we weren’t beating the radio operators quite as easily as before. we went back to the users. they took another look at our routes. and then came their next complaint: “you’ve got more than 7 people on board after refueling!”, they said. “but it’s a 12-seater!”, we argued. it turns out they had a point: these choppers had a large fuel tank, so once they topped up the tank — as they always do when they stop to refuel — they were too heavy to take a full complement of passengers. this meant that the capacity of the chopper was two-dimensional: seats and weight. on a full tank, weight was the binding constraint. as the fuel burned off, the weight constraint eased; beyond a certain point, the number of seats became the binding constraint. we trooped back to the drawing board. “we can do this!”, we said to ourselves. and we did. remember, we were young and smart. and too stupid to see where all this was going. in our next iteration, the computer-generated routes were coming closer and closer to the user-generated ones. mind you, we were still beating them on an average but our payback period was slowly we went back to the users with our latest and greatest solution. they looked at it. and they asked: “which way is the wind blowing?” by then, we knew not to ask “why do you care?” it turns out that helicopters always land and take-off into the wind. for instance, if the chopper was flying from x to y and the wind was blowing from y to x, the setting was perfect. the chopper would take off from x in the direction of y and make a bee-line for y. on the other hand, if the wind was also blowing from x to y, it would take off in a direction away from y, do a 180-degree turn, fly toward and past y, do yet another 180-degree turn, and land. given that, it made sense to keep the chopper generally flying a long string of short hops into the wind. when it could go no further because they fuel was running low or it needed to go no further in that direction because there were no passengers on board headed that way, then and only then, did it make sense to turn around and make a long hop “bloody asymmetric distance matrix!”, we mumbled to ourselves. by then, we were beaten and bloodied but unbowed. we were determined to optimize these chopper routes, come hell or high water! so back we went to our desks. we modified the search algorithm yet another time. by now, the code had grown so long that our program broke the limits of the editor in turbo pascal. but we soldiered on. finally, we had all of our users’ requirements coded into the algorithm. or so we thought. we weren’t in the least bit surprised when, after looking at our latest output, they asked “was this in summer?”. we had now grown accustomed to this. they explained to us that the maximum payload of a chopper is a function of ambient temperature. on the hottest days of summer, choppers have to fly light. on a full tank, a 12-seater may now only accommodate 6 passengers. we were ready to give up. but not yet. back we went to our drawing board. and we went to the field one last time. in some cases, we found that the radio operators were doing better than the computer. in some cases, we beat them. i can’t say no creative accounting was involved but we did manage to eke out a few percentage point of improvement over the manually generated routes. you’d think we’d won this battle of attrition. we’d shown that we could accommodate all of their requirements. we’d proved that we could do better than the radio operators. we’d taken our machine to the radio operators cabin on the platform and installed it there. we didn’t realize that the final chapter hadn’t been written. a few weeks after we’d declared success, i got a call from ONGC. apparently, the system wasn’t working. no details were provided. i flew out to the platform. i sat with the radio operator as he grudgingly input the requirements into the computer. he read off the output from the screen and proceeded with this job. after the morning sortie was done, i retired to the lounge, glad that my work was done. a little before lunchtime, i got a call from the radio operator. “the system isn’t working!”, he said. i went back to his cabin. and discovered that he was right. it is not that our code had crashed. the system wouldn’t boot. when you turned on the machine, all you got was a lone blinking cursor on the top left corner of the screen. apparently, there was some kind of catastrophic hardware failure. in a moment of uncommon inspiration, i decided to open the box. i fiddled around with the cards and connectors, closed the box, and fired it up again. and it worked! it turned out that the radio operator’s cabin was sitting right atop the industrial-strength laundry room of the platform. every time they turned on the laundry, everything in the radio room would vibrate. there was a pretty good chance that our PC would regress to a comatose state every time they did the laundry. i then realized that this was a hopeless situation. can i really blame a user for rejecting a system that was prone to frequent and total failures? other articles in this series this blog entry is intended to set the stage for a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the the links below leads to a piece that dwells on one particular aspect. optimization: a case study (this article) architecture of a decision-support system optimization and organizational readiness for change optimization: a technical overview About the author – Dr. Narayan Venkatasubramanyan Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chains problems in areas as diverse as computer assembly, semiconductor manufacturer, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison. He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on Linkedin at: http://www.linkedin.com/in/
{"url":"https://punetech.com/tag/narayan-venkatasubramanyan/","timestamp":"2024-11-05T19:50:31Z","content_type":"text/html","content_length":"112077","record_id":"<urn:uuid:c41289e0-9d80-471d-bc59-60bda333085c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00310.warc.gz"}
The Stacks project Remark 50.9.3. Let $p : X \to S$ be a morphism of schemes. For $i > 0$ denote $\Omega ^ i_{X/S, log} \subset \Omega ^ i_{X/S}$ the abelian subsheaf generated by local sections of the form \[ \text{d}\log (u_1) \wedge \ldots \wedge \text{d}\log (u_ i) \] where $u_1, \ldots , u_ n$ are invertible local sections of $\mathcal{O}_ X$. For $i = 0$ the subsheaf $\Omega ^0_{X/S, log} \subset \mathcal{O}_ X$ is the image of $\mathbf{Z} \to \mathcal{O}_ X$. For every $i \geq 0$ we have a map of complexes \[ \Omega ^ i_{X/S, log}[-i] \longrightarrow \Omega ^\bullet _{X/S} \] because the derivative of a logarithmic form is zero. Moreover, wedging logarithmic forms gives another, hence we find bilinear maps \[ \wedge : \Omega ^ i_{X/S, log} \times \Omega ^ j_{X/S, log} \longrightarrow \Omega ^{i + j}_{X/S, log} \] compatible with (50.4.0.1) and the maps above. Let $\mathcal{L}$ be an invertible $\mathcal{O}_ X$-module. Using the map of abelian sheaves $\text{d}\log : \mathcal{O}_ X^* \to \Omega ^1_{X/S, log}$ and the identification $\mathop{\mathrm{Pic}}\nolimits (X) = H^1(X, \mathcal{O}_ X^*)$ we find a canonical cohomology class \[ \tilde\gamma _1(\mathcal{L}) \in H^1(X, \Omega ^1_{X/S, log}) \] These classes have the following properties 1. the image of $\tilde\gamma _1(\mathcal{L})$ under the canonical map $\Omega ^1_{X/S, log}[-1] \to \sigma _{\geq 1}\Omega ^\bullet _{X/S}$ sends $\tilde\gamma _1(\mathcal{L})$ to the class $\gamma _1(\mathcal{L}) \in H^2(X, \sigma _{\geq 1}\Omega ^\bullet _{X/S})$ of Remark 50.9.2, 2. the image of $\tilde\gamma _1(\mathcal{L})$ under the canonical map $\Omega ^1_{X/S, log}[-1] \to \Omega ^\bullet _{X/S}$ sends $\tilde\gamma _1(\mathcal{L})$ to $c_1^{dR}(\mathcal{L})$ in $H^2_ 3. the image of $\tilde\gamma _1(\mathcal{L})$ under the canonical map $\Omega ^1_{X/S, log} \to \Omega ^1_{X/S}$ sends $\tilde\gamma _1(\mathcal{L})$ to $c_1^{Hodge}(\mathcal{L})$ in $H^1(X, \Omega 4. the construction of these classes is compatible with pullbacks, 5. add more here. Comments (0) Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0FMF. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0FMF, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0FMF","timestamp":"2024-11-14T03:53:58Z","content_type":"text/html","content_length":"15747","record_id":"<urn:uuid:96479fe0-3f59-464c-99e5-ea55311a2b0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00450.warc.gz"}
What is the roman representation of number 633,185? Question What is the roman representation of number 633,185? Short answer The roman representation of number 633,185 is DCXXXMMMCLXXXV. Large numbers to roman numbers 3,999 is the largest number you can write in Roman numerals. There is a convencion that you can represent numbers larger than 3,999 in Roman numerals using an overline. Matematically speaking, this means means you are multiplying that Roman numeral by 1,000. For example if you would like to write 70,000 in Roman numerals you would use the Roman numeral LXX. This moves the limit to write roman numerals to 3,999,999. Number larger then 3,999,999 in roman numbers. Extend to 3,999,999,999 Pleae note that this is an unofficial way to represent roman numeral. Similar to the overline convention below, you can also use an underline to push the limit to write roman numbers all the way to 3,999,999,999. To do this we take all the roman numerals and we multiply by 1,000,000. This is just a propowsal and it is not largelly accepted. This way you can represent 3,999,999,999 in Roman numeral like this: 3,999,999,999 = M M M CM XC IX CM XC IX CMXCIX
{"url":"https://tellmeanumber.hostcoder.com/633185/number-to-roman-number/","timestamp":"2024-11-09T00:15:06Z","content_type":"text/html","content_length":"10067","record_id":"<urn:uuid:c01c4bce-3bf3-4da8-bdeb-1adeec19752c>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00037.warc.gz"}
Least Common Denominator Calculator LCD calculator finds the least common denominator of fractions, non-fraction, and mixed numbers quickly, with detailed steps. This LCD finder makes calculation easy by calculating the LCM of the denominators of all fractions. What is the Least Common Denominator (LCD)? The Least Common Denominator (LCD) is the smallest number that helps to make the common denominator of group fractions/nonfraction numbers. It is a positive integer that is divisible by each denominator of given fractions. The LCD is also known as the lowest common denominator or smallest common denominator of the group of fractions. It can be determined by finding the LCM of the denominators of the fractions. The LCD calculation is used for addition or subtraction of fractions and arranging the fractions from the least to greatest order. Methods to find LCD? There are four different manual methods used to find the Lowest common denominator (LCD) of a group of fraction or non-fraction numbers. The easiest way is to use our lcd calculator. The names of these methods are: • List of Multiples/Factors • Prime Factorization • Division Method • Greatest Common Divisor (GCD) Method Below we discuss the details of some methods to understand LCD calculation. Also, perform their examples with detailed explanations. How to Find LCD? As discussed above different methods to find the LCD of integers, mixed numbers, or mixed fractions but first convert all fractions into simple fraction numbers. Then calculate the LCD of all denominators by finding the LCM of the denominator using any method of LCD. To find LCM of any number use LCM Calculator. To evaluate the LCD of any set of numbers (fraction, mixed fraction, and integers) follow the below steps: • First, Convert each integer or mixed fraction number into a simple fraction. • Then find the LCD of the denominators of all fractions by following the LCD methods (Prime Factorization, List of Multiples, or Division method) or finding the least common multiple. • Finally, convert inputs to equivalent fractions by multiplying and dividing the LCD with each fraction. • After simplification make the denominators the same according to LCD and get the final fractions. For a better understanding, see the example below, which uses different methods. • Find LCD Using List of Multiples The smallest common denominator finding by a list of multiple is the easiest method. First, make a List of the few multiples of each denominator. Then find the smallest common multiple that appears in all lists of numbers. This smallest number is the required LCD of the given fractions. To make a list of multiple numbers, use Multiples If the fractions {4/3, 7/5} and denominators are 3 & 5, find its LCD with a list of multiple and the equivalent fraction with LCD. The given fractions are already in the fractions form then the first step can skip out. Step 1: Now, find the list of multiple for the denominator values. Multiples of 3: 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33... Multiples of 5: 5, 10, 15, 20, 25, 30, 35, 40, 45, 50... Step 2: Note the smallest number. In this case, the LCD of given numbers is “15”. The equivalent fraction according to the LCD, by multiplying or dividing the LCD value with each fraction becomes: 4/3 = 4/3 × 15/15 = 20/15 7/5 = 7/5 × 15/15 = 21/15 Alternatively, use our common denominator calculator to calculate the LCD effortlessly. • Find the LCD Using Prime Factorization The least common denominator calculation by the use of prime factorization is one important method from all methods of LCD. In this method, find the prime factor of the denominator of all fractions. Then note the common & uncommon factors from all prime factors and multiply all of them to find the lowest common denominator. But a common factor takes one time from all of them. To evaluate the prime factors of any number use prime factorization calculator. For more understanding see the below example, in which find LCD with prime factorization. Example: Find the LCD of {7/12, 4/15} by prime factorization. For the verification of results use lowest common denominator calculator. Step 1: First, find the prime factor of the denominator by factorization. 12 = 2 x 2 x 3 = 22 x 3 15 = 3 x 5 Step 2: Note the common & uncommon factors and multiply all of them. Common Factor= 3, Uncommon Factor = 22 x 5 LCD (12, 15) = 22 x 3 x 5 = 4 x 3 x 5 = 60 LCD (12, 15) = 60 Step 3: Now, convert the given fraction into equivalent fractions with LCD, by multiplying or dividing LCD value with each fraction. 7/12 = 7/12 × 60/60 = 35/60 4/15 = 4/15 × 60/60 = 16/60 Thus, the LCD is 60, and the equivalent fraction with LCD as a denominator is 35/60 & 16/60 by prime factorization. • Finding LCD Using GCD Method To find the LCD with GCD method is one new way that uses the LCD formula. In this method firstly find the GCF of all denominators. Then multiply the all denominator’s values with each other and divide the product by the GCD/GCF that is found in the starting. To find the GCF values use our GCF calculator. The mathematical formula, used in this method to find LCD is stated as: LCD Formula = Product of the numbers/GCD of the numbers Find the LCD of {7/15, 5/6} using the GCD method and convert it into the equivalent fractions with LCD. Step 1: First find the GCD of all denominators. To find GCD, find the factors of all denominators and note the highest common factors. Factors of 15 = 1, 3, 5, 15 Factors of 6 = 1, 2, 3, 6 Note that, the highest common factor is 3. So, GCD of 15 and 6 is 3. Step 2: Now, multiply all denominator’s values. Product of denominators = 15 × 6 = 90 Step 3: Put the values in LCD formula and simplify to evaluate the LCD. LCD Formula = Product of the numbers/GCD of the numbers = 90/3 = 30 Thus, the LCD is 30. Now, convert the given fraction into equivalent fractions with LCD. 7/15 = 7/15 × 30/30 = 14/30 5/6 = 5/6 × 30/30 = 25/30
{"url":"https://www.lcm-calculator.com/lcd-calculator","timestamp":"2024-11-10T14:33:22Z","content_type":"text/html","content_length":"46252","record_id":"<urn:uuid:ec56b33e-ec8c-4b18-8e29-211b1d9151cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00214.warc.gz"}
What is Newton's 1st 2nd and 3rd? In the first law, an object will not change its motion unless a force acts on it. In the second law, the force on an object is equal to its mass times its acceleration. In the third law, when two objects interact, they apply forces to each other of equal magnitude and opposite direction. What are the 3 types of Newton law? In the first law, we understand that an object will not change its motion unless a force acts on it. The second law states that the force on an object is equal to its mass times its acceleration. And finally, the third law states that there is an equal and opposite reaction for every action. What is force AP Physics? What are Newton’s 3 Laws of motion GCSE? According to Newton’s Third Law of motion, whenever two objects interact, they exert equal and opposite forces on each other. This is often worded as ‘every action has an equal and opposite reaction’. However, it is important to remember that the forces act on two different objects at the same time. What is Newton’s 2nd law called? The other name for Newton’s second law is the law of force and acceleration. What is Newton’s 3rd law? Newton’s third law simply states that for every action there is an equal and opposite reaction. So, if object A acts a force upon object B, then object B will exert an opposite yet equal force upon object A. What Newton’s laws explain? Answer: Newton’s first law of motion explains how inertia affects moving and nonmoving objects. Newton’s first law states that an object will remain at rest or move at a constant speed in a straight line unless it is acted on by an unbalanced force. What is Newton’s law formula? Newton’s second law, which states that the force F acting on a body is equal to the mass m of the body multiplied by the acceleration a of its centre of mass, F = ma, is the basic equation of motion in classical mechanics. What is Newton’s first law called? Newton’s First Law: Inertia Newton’s first law states that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external What are all the types of forces? • Applied Force. • Gravitational Force. • Normal Force. • Frictional Force. • Air Resistance Force. • Tension Force. • Spring Force. Is inertia a force? Inertia is the force that holds the universe together. Literally. Without it, matter would lack the electric forces necessary to form its current arrangement. Inertia is counteracted by the heat and kinetic energy produced by moving particles. What unit is force measured in? The SI unit of force is the newton, symbol N. What’s Newton’s 4th law? Any object has a tendency to stay in its current state. This tendency is called inertia. Any object has a tendency to stay in its current state. Is Karma Newtons third law? This is effectively what the Law of Karma means: It is the Law of Cause and Effect, very similar to Sir Isaac Newton’s Third Law of Motion, which states that for every action there is an equal or opposite reaction. The Law of Karma has a very significant contribution to an individual. What is Newton’s 1st law GCSE? According to Newton’s first law of motion, an object remains in the same state of motion unless a resultant force acts on it. If the resultant force on an object is zero, this means: a stationary object stays stationary. a moving object continues to move at the same velocity (at the same speed and in the same direction … What are the laws of inertia? law of inertia, also called Newton’s first law, postulate in physics that, if a body is at rest or moving at a constant speed in a straight line, it will remain at rest or keep moving in a straight line at constant speed unless it is acted upon by a force. What is the formula of law of inertia? Newton’s second law of motion describes this phenomenon and property with an inertia formula that states “Force = Mass * Acceleration”. The formula states that objects that have more mass require more force to change their acceleration. What are 5 examples of Newton’s third law? • Pulling an elastic band. • Swimming or rowing a boat. • Static friction while pushing an object. • Walking. • Standing on the ground or sitting on a chair. • The upward thrust of a rocket. • Resting against a wall or tree. • Slingshot. What is Newton’s 5th law? What is law of force and acceleration? For a constant mass, force equals mass times acceleration.” This is written in mathematical form as F = ma. F is force, m is mass and a is acceleration. The math behind this is quite simple. If you double the force, you double the acceleration, but if you double the mass, you cut the acceleration in half. What is Newton’s first law of gravitation? Newton’s law of gravitation, statement that any particle of matter in the universe attracts any other with a force varying directly as the product of the masses and inversely as the square of the distance between them. How many laws of physics are there? 34 Important Laws of Physics. What are the 4 formulas of motion? The equations are as follows: v=u+at,s=(u+v2)t,v2=u2+2as,s=ut+12at2,s=vtโ 12at2. What are the 3 equations of motion? • First Equation of Motion : v = u + a t. • Second Equation of Motion : s = u t + 1 2 a t 2. • Third Equation of Motion : v 2 = u 2 + 2 a s. How many formulas are in a newton? Newton’s three laws of motion can be briefly summarized as follows: Newton’s First law: The Law of Inertia. Newton’s Second Law: Law of Acceleration. Newton’s Third Law: Law of Opposing Forces.
{"url":"https://physics-network.org/what-is-newtons-1st-2nd-and-3rd/","timestamp":"2024-11-10T11:27:09Z","content_type":"text/html","content_length":"303220","record_id":"<urn:uuid:b5dc1de9-0f75-4884-8eab-2fe32dccfe7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00436.warc.gz"}
Uniform-in-time superconvergence of HDG methods for the heat equation We prove that the superconvergence properties of the hybridizable discontinuous Galerkin method for second-order elliptic problems do hold uniformly in time for the semidiscretization by the same method of the heat equation provided the solution is smooth enough. Thus, if the approximations are piecewise polynomials of degree k, the approximation to the gradient converges with the rate h ^k+1 for k ≥ 0 and the L ^2-projection of the error into a space of lower polynomial degree superconverges with the rate for k ≥ 1 uniformly in time. As a consequence, an elementby-element postprocessing converges with the rate for k ≥ 1 also uniformly in time. Similar results are proven for the Raviart-Thomas and the Brezzi-Douglas-Marini mixed methods. • Discontinuous galerkin methods • Hybridization • Parabolic problems • Superconvergence Dive into the research topics of 'Uniform-in-time superconvergence of HDG methods for the heat equation'. Together they form a unique fingerprint.
{"url":"https://experts.umn.edu/en/publications/uniform-in-time-superconvergence-of-hdg-methods-for-the-heat-equa","timestamp":"2024-11-06T15:10:17Z","content_type":"text/html","content_length":"50441","record_id":"<urn:uuid:b43762ec-98e9-4c9f-8926-0de33520d7c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00316.warc.gz"}
Geometric Methods in Rep Theory Seminar - Thomas Lam (Michigan) - Department of Mathematics Geometric Methods in Rep Theory Seminar – Thomas Lam (Michigan) November 3, 2023 @ 4:00 pm - 5:00 pm Title: Monotone links in the DAHA and EHA Abstract: Morton and Samuelson related certain skein algebras on the torus with the double affine Hecke algebra (DAHA) and the elliptic Hall algebra (EHA). We use this construction to study link homology of a class of “monotone links” on the torus, closely related to the Coxeter links of Oblomkov and Rozansky and to the positroid links in our earlier work. This is joint work with Pavel Related Events
{"url":"https://math.unc.edu/event/thomas-lam-michigan-geometric-methods-in-representation-theory-seminar/","timestamp":"2024-11-12T05:11:11Z","content_type":"text/html","content_length":"114015","record_id":"<urn:uuid:630bcbcd-250a-463b-ac23-3e0a08ec2854>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00584.warc.gz"}
Developing The Attitude And Creativity In Mathematics Education Dr. Marsigit, MA (2011) Developing The Attitude And Creativity In Mathematics Education. PROCEEDINGS International Seminar and the Fourth National Conference on Mathematics Education. ISSN U - 3.pdf Download (385kB) | Preview The structures in a traditionally-organized classroom of mathematics teaching can usually be linked readily with the routine classroom activities of teacher-exposition and teacher-supervised desk work, teacher’s initiation, teacher’s direction and strongly teacher’s expectations of the outcome of student learning. If the teacher wants to develop appropriate attitude and creativities in mathematics teaching learning it needs for him to develop innovation in mathematics teaching. The teacher may face challenge to develop various style of teaching i.e. various and flexible method of teaching, discussion method, problem-based method, various style of classroom interaction, contextual and or realistic mathematics approach. To develop mathematical attitude and creativity in mathematics teaching learning processes, the teacher may understand the nature and have the highly skill of implementing the aspects of the following: mathematics teaching materials, teacher’s preparation, student’s motivation and apperception, various interactions, small-group discussions, student’s works sheet development, students’ presentations, teacher’s facilitations, students’ conclusions, and the scheme of cognitive development.In the broader sense of developing attitude and creativity of mathematics learning, the teacher may needs to in-depth understanding of the nature of school mathematics, the nature of students learn mathematics and the nature of constructivism in learning mathematics. Key Word: mathematical attitude, creativity in mathematics, innovation of mathematics teaching,school mathematics, Actions (login required)
{"url":"http://eprints.uny.ac.id/117/","timestamp":"2024-11-03T03:45:09Z","content_type":"application/xhtml+xml","content_length":"23928","record_id":"<urn:uuid:8fae3f26-514a-416c-8c39-db790696744b>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00362.warc.gz"}
Bearings & Distances Revision Bearings &amp; Distances Revision 1. Draw the following bearings a. B is N56oE from A b. B is S60oW from A c. B is S47oE from A d. B is N33oE from A 2. For each of the bearings in Q1, find the bearing of A from B 3. Express the bearings in Q1 in thre figure form 4. For each of the following, find the bearing of A from B in cardinal form a. B is 147o from A b. B is 225o from A c. B is 303o from A 5. A boy starts at A and walks 3km east to B then he walks 4km north to C, find the distance of C from A. 6. An aeroplane flies 400km west then 100km north. Find it’s distance from its starting point.
{"url":"https://studylib.net/doc/27062275/bearings-and-distances-revision","timestamp":"2024-11-09T20:19:53Z","content_type":"text/html","content_length":"57871","record_id":"<urn:uuid:dd2f36ef-853e-4f12-aa51-71ea9a10e57e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00693.warc.gz"}
Scatter Plot Maker Our online scatter plot maker allows you to create graphs to visualize data. It helps to understand the relationships between two numerical variables. What is a Scatter Plot? “A graph in which the values of two variables are plotted along two axes and represent the values by using dots is known as a scatter plot” The scatter plot generator enables you to observe the relationships between variables. It represents the data point on the cartesian system. These independent variables are plotted on the x-axis and the dependent variables are plotted on the y-axis. Types of Scatter Plot Correlation: A mutual relation between two variables is known as correlation. If the given variables are correlated, the points of the scatter plot graph will fall along a line or curve. It shows how closely two variables are related to each other. Therefore, better the correlation then the points will touch closely to the line. Try our scatterplot maker and see how it creates scatter plots for visually appealing data visualizations. There are three types of scatter plot graphs that are as follows: 1. Scatter Plot for Positive Correlation 2. Scatter Plot for Negative Correlation 3. Scatter Plot for Null Correlation How to Use a Scatter Plot Maker? To visually represent the relationships between two variables utilize a scatter plot creator. It begins to transform the raw data into the scatter plot. So, to analyze the business metrics start by inputting the below values. What you Enter? • Insert the values for dependent and independent variables • Some are optional points that you can insert or not What you Get? • Data Insight in Table form: Gain valuable insights by observing the spatial arrangement of data points, enabling quick identification of relationships between variables • Scatter plot Graph: Witness an instant graphical representation of your data, unveiling patterns, trends, or correlations. How to Create a Scatter Plot? To get a sketch of the scatter plot, you need sets of data for the x-axis and the y-axis and these are typically indicated as the independent and dependent variables. After identifying data you can follow the below steps: • Along the x-axis draw a sketch of a scatter plot for independent variables • Along the y-axis draw a sketch of a scatter plot for dependent variables • Where the given values meet, indicate them by adding dots or symbols • If two values fall on the same point then you can show these side by side Practical Example: Suppose there is a game competition in which the game played and scores obtained in each instance are as follows. So draw a scatter plot for a given set of data. • X Data = 7, 11, 21, 14, 23 • Y Data = 49, 74, 71, 77, 76 Generate a scatterplot diagram with the given X and Y variables: X Y Scattered Plot Graph: Our scatter plot maker is designed to observe relationships between numeric variables. The given graph displays values across two variables (one on each axis) for a dataset. It is constructed based on the predicted and response variables. Frequently Asked Questions: How many Sets of Data does a Scatter Plot Show? There are two sets of data in a scatter plot. One of the data sets is on the x-axis and the other set of data is on the y-axis. Differentiate between Bar Graphs, Line Charts, and Scatter Plots? Scatter Plot Bar Graph Line Chart It gives an overview of relationships between data across multiple variables It Compare larger changes or differences in data among groups It displays smaller changes in a trend over time Showing the relationship between height and weight Comparing the sales of different products Showing the trend in global temperatures over time From the source Wikipedia: Scatter plot, Overview and example, Scatter plot matrices. From the source chartio: What is a scatter plot? When you should use a scatter plot, Example of data structure, Common issues when using scatter plots.
{"url":"https://www.calculatored.com/scatter-plot-maker","timestamp":"2024-11-13T11:46:50Z","content_type":"text/html","content_length":"60987","record_id":"<urn:uuid:96097068-9034-4d9b-b122-734cd19f0c7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00627.warc.gz"}
Minimum Thickness of Two Way Slab as per ACI 318-11 for DeflectionMinimum Thickness of Two Way Slab as per ACI 318-11 for Deflection Control Sorry, you do not have permission to ask a question, You must login to ask question. Become VIP Member Join for free or log in to continue reading... 🕑 Reading time: 1 minute Minimum thickness for two way slab construction designed as per methods provided by ACI 318-11 Code namely, direct design method and equivalent frame method. The minimum thickness of slab is specified by ACI 318-11, section 9.5.3. Deflection of two way slab could be calculated by using available methods to make sure that the deflection will not be problematic and will not exceed serviceability limitations. Minimum Thickness of Two Way Slab as per ACI 318-11 for Deflection Control The deflection of two way slab is based on a number of parameters for example the flexural stiffness of the slab which is the function of slab thickness. The flexural stiffness of the slab is increased with increasing the thickness of the slab; as a result the deflection of the two-way slab is declined. In order to prevent excessive deflections and avoid calculation of two way slab deflection which is a complicated procedure, ACI 318-11 restricts the minimum thickness of two way slab by applying three empirical limitations. It will not be necessary to carry out deflection computation if these limitations are met. Finally, slab thicknesses smaller than those obtained from empirical limitations can be employed if deflection calculation is within specified limits which are determined by Code. ACI Code limitations for minimum thickness of two way slab are discussed in the following sections. 1. For larger than 0.2 but not larger than 2, the slab thickness must not be less than the values of equation-1: Where: h : Minimum slab thickness : clear span measured in long direction face to face of column or face to face of beam for slabs with beams : Ratio of clear span in longer direction to clear span in shorter direction : Average value of ( ) for all beams on the sides of a panel : is the ratio of flexural stiffness of the beam section to the flexural stiffness of the slab bounded laterally by centerline of the panel on each side of the beam. Computation of is according to the following equation: Where: E[cb] : concrete modulus of elasticity of beam and slab respectively which is usually the same : Moment of inertia of beam and slab respectively. Figure-1 & Figure-2 illustrate how to find moment of inertia for edge beam, internal beam, internal and edge slabs respectively: Figure-1: Portion of slabs to be included for moment of inertia calculation, edge beam (left side) & internal beam (right side) Figure-2: Dimensions of internal and external slab for moment of inertia calculations 2. For ( ) larger than 2, the thickness of the slab must not be less than the value of equation-2: 3. For ( ) equal or smaller than 0.2, minimum two-way slab thickness is provided by Table-1. Table-1: Minimum thickness of slabs without interior beams Moreover, the minimum thickness of any two way slab without interior beams should not be less than the following: • For slabs without drop panel 125 mm. • For slabs with drop panel 100 mm. Furthermore, both ( ) and ( ) will be zero when beams are not employed as in the case of flat plates. Lastly, ACI Code equations applied to calculate slab thickness take the panel shape, the influence of span length, the flexural stiffness of beams, and the yield stress of steel reinforcement into consideration. Equation-3 will control the slab thickness if substantially stiff beam is used since equation-1 gives smaller thickness. If no beams are used for example in the case of flat plates and flat slabs, the minimum thickness of the slab could be taken from table-1. Read More: Two Way Slab Design by Direct Design Method as per ACI 318-11
{"url":"https://test.theconstructor.org/structural-engg/minimum-thickness-two-way-slab-aci-318-deflection/15043/","timestamp":"2024-11-10T08:30:48Z","content_type":"text/html","content_length":"179445","record_id":"<urn:uuid:f9c48c4b-3e2e-4681-b2aa-e444f2f59e90>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00135.warc.gz"}
Optimum Moisture Content in context of cement stabilization calculator 27 Aug 2024 Title: Optimization of Moisture Content for Effective Cement Stabilization: A Calculator-Based Approach Cement stabilization is a widely used technique to improve the engineering properties of soils. However, the optimal moisture content (OMC) required for effective cement stabilization remains a topic of ongoing research. This study presents a calculator-based approach to determine the OMC for various soil-cement mixtures. A comprehensive review of existing literature on the subject is followed by the development of a novel formula to calculate the OMC. The proposed formula takes into account the soil’s initial moisture content, cement content, and other relevant factors. The calculator is validated through laboratory experiments and compared with existing methods. Cement stabilization is a popular technique used to improve the engineering properties of soils, such as their strength, stiffness, and durability (Kumar et al., 2018). The process involves mixing soil with cement and water to create a stabilized mixture. However, the optimal moisture content required for effective cement stabilization remains a challenge. Excessive moisture can lead to poor workability, reduced strength, and increased shrinkage cracking (Huang et al., 2017), while inadequate moisture can result in poor cement hydration and reduced durability (Li et al., 2020). Literature Review: Several studies have investigated the OMC for various soil-cement mixtures. For example, Kumar et al. (2018) proposed a formula to calculate the OMC based on the soil’s initial moisture content, cement content, and water-to-cement ratio. However, this formula has limitations, as it does not account for other factors that can affect the OMC, such as the soil’s particle size distribution and cement type. Proposed Formula: This study proposes a novel formula to calculate the OMC (OMC) based on the following variables: 1. Soil’s initial moisture content (M0) 2. Cement content (C) 3. Water-to-cement ratio (w/c) 4. Soil’s particle size distribution (PSD) The proposed formula is as follows: OMC = M0 + (C × w/c) × (1 - PSD/100) where PSD is the percentage of particles with a diameter less than 0.075 mm. Calculator Development: A calculator was developed to determine the OMC based on the proposed formula. The calculator takes into account the user’s input for M0, C, w/c, and PSD. The calculator then calculates the OMC using the proposed formula and provides the result in ASCII format: OMC = [M0 + (C × w/c) × (1 - PSD/100)] The proposed formula was validated through laboratory experiments involving different soil-cement mixtures. The results showed a good correlation between the calculated OMC and the actual OMC determined through standard testing procedures. Comparison with Existing Methods: The proposed calculator was compared with existing methods for determining the OMC, such as the empirical formula proposed by Kumar et al. (2018). The results showed that the proposed calculator provided more accurate predictions of the OMC than the existing method. This study presents a novel approach to determine the optimal moisture content required for effective cement stabilization. The proposed formula takes into account various factors that can affect the OMC and provides a more accurate prediction of the optimal moisture content. The developed calculator is a useful tool for engineers and researchers working on soil-cement mixtures. Huang, X., Li, Z., & Wang, Y. (2017). Effects of moisture content on the mechanical properties of cement-stabilized soils. Journal of Geotechnical Engineering, 143(9), 04017024. Kumar, P., Singh, D., & Kumar, V. (2018). Optimization of moisture content for effective cement stabilization of soils. International Journal of Civil Engineering, 6(2), 1-10. Li, Z., Huang, X., & Wang, Y. (2020). Influence of moisture content on the durability of cement-stabilized soils. Construction and Building Materials, 267, 120844. Formula in BODMAS format: OMC = M0 + (C × w/c) × (1 - PSD/100) • OMC = Optimum Moisture Content • M0 = Soil’s initial moisture content • C = Cement content • w/c = Water-to-cement ratio • PSD = Soil’s particle size distribution Related articles for ‘cement stabilization calculator’ : • Reading: Optimum Moisture Content in context of cement stabilization calculator Calculators for ‘cement stabilization calculator’
{"url":"https://blog.truegeometry.com/tutorials/education/019286689da40957c88082f71d7bf874/JSON_TO_ARTCL_Optimum_Moisture_Content_in_context_of_cement_stabilization_calcul.html","timestamp":"2024-11-04T13:55:23Z","content_type":"text/html","content_length":"20681","record_id":"<urn:uuid:8fdee6e5-da79-4ffb-aa6b-660c14435ebc>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00323.warc.gz"}
matchingMarkets 1.0-4 Please note that only the most significant changes are reported here. A full ChangeLog is available on GitHub. This is a minor update matchingMarkets 1.0-1 This is a minor update • Added data generating function for exploded logit in xlogit.data. matchingMarkets 1.0-0 This is a major update • Finalised estimators in stabit2 function, as well as algorithms in hri and hri2 for two-sided matching markets. matchingMarkets 0.3-6 This is a minor update • Added top-trading-cycle functions ttc2 and ttcc; random serial dictatorship rsd; and a function to check the stability of a given matching stabchk. matchingMarkets 0.3-5 This is a minor update • Added R wrapper for Roth-Peranson Algorithm in function hri2. matchingMarkets 0.3-3 This is a minor update • Implemented multi-core parrallel processing for estimators in function stabit2, which can be specified using the nCores argument. • Updated immediate acceptance algorithm iaa and top-trading-cycles ttc functions. Thanks to Sándor Sóvágó at Tinbergen Institute and Kevin Breuer at University of Cologne for the reports. matchingMarkets 0.3-1 This is a major update • Replaced stable matching algorithms with constraint programming model implemented for hospital/residents problem hri and stable roommates sri with incomplete lists. • Added plot and summary methods for estimators. • Allowed for thinning in stabit2 function. matchingMarkets 0.2-2 This is a minor update • Allowed for two selection equations in stabit2 function for two-sided matching markets. matchingMarkets 0.2-1 This is a major update • Added stabit2 function for two-sided matching markets. matchingMarkets 0.1-6 This is a major update matchingMarkets 0.1-5 This is a minor update. • Fixed daa function for college admissions problems when number of students exceeds number of colleges. Thanks to Jan Tilly at University of Pennsylvania for the report. matchingMarkets 0.1-1 Initial commit.
{"url":"http://cran.stat.auckland.ac.nz/web/packages/matchingMarkets/news/news.html","timestamp":"2024-11-12T22:20:21Z","content_type":"application/xhtml+xml","content_length":"4225","record_id":"<urn:uuid:3238f054-e404-4433-9f75-4a05d9e417fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00857.warc.gz"}
How Much Statistics is Needed for Data Science? FREE Resources How Much Statistics is Needed for Data Science? FREE Resources Included Do you want to learn Statistics for Data Science but have a doubt about “How Much Statistics is Needed for Data Science?”… If yes, then this blog is for you. In this blog, I will share everything you need to learn in Statistics for Data Science. Along with that, I will share the resources I used during my statistics learning journey. I will try to share my learning experience with statistics with So, without any further ado, let’s get started- How Much Statistics is Needed for Data Science? First, let’s see why Statistics is required for Data Science- Why Statistics is Important for Data Science? As someone who has studied statistics and applied it in my data science projects, I have found statistics to be very important. Here’s why: 1. Data Collection and Sampling: In my projects, gathering data is often the first step. Proper sampling techniques, like random sampling, ensure that the data is representative and unbiased. This makes the data reliable for analysis. 2. Data Analysis: Descriptive statistics are essential for summarizing data. Measures like the mean, median, mode, variance, and standard deviation help me quickly understand the main characteristics of the data. These summaries provide a clear overview of the data. 3. Inferential Statistics: When I need to make predictions or generalize findings from a sample to a larger population, inferential statistics are crucial. Techniques such as hypothesis testing and regression analysis allow me to draw meaningful conclusions from the data. 4. Identifying Patterns and Trends: Statistics helps me uncover patterns and trends within the data. For instance, time series analysis reveals trends over time, while clustering techniques identify natural groupings in the data. These insights are key to solving complex problems. 5. Building Predictive Models: Many projects involve building models to predict future outcomes. Statistical methods like linear regression and logistic regression are essential for creating accurate predictive models. Understanding these methods helps me build reliable models. 6. Handling Uncertainty: Data often comes with uncertainty. Statistical tools, such as probability distributions and confidence intervals, help me quantify and manage this uncertainty. This is especially important in risk assessment projects. 7. Evaluating Models: After building models, it’s important to evaluate their performance. Statistical metrics like accuracy, precision, recall, and F1 score help me assess how well a model is performing and identify areas for improvement. 8. Data Visualization: Effective communication of data insights is crucial. Statistics guides me in creating clear and accurate visualizations. This ensures that the data is presented in a way that is easy to understand for stakeholders. 9. Decision Making: Statistics enables me to make data-driven decisions. By applying statistical analysis, I can support my recommendations with solid evidence. This approach leads to better and more reliable outcomes in projects. 10. Ethics and Bias Detection: Statistics also helps in identifying and correcting biases in data. Ensuring fairness and avoiding discrimination are essential, especially in projects that impact In conclusion, my experience with statistics in data science has shown that it is a fundamental part of the field. It provides the necessary tools to turn raw data into meaningful insights, enabling data scientists to make informed and effective decisions. Now, let’s come to your main doubt “How Much Statistics is Needed for Data Science?“ How Much Statistics is Needed for Data Science? In my opinion, you need to learn these topics in detail for Data Science- 1. Basic Descriptive Statistics: You need to understand basic descriptive statistics, including measures like mean, median, mode, variance, and standard deviation. These basics help you summarize and get a quick overview of the data. 2. Probability Theory: You should grasp probability well. Understanding concepts like probability distributions, conditional probability, and Bayes’ theorem is important for making predictions and dealing with uncertainty in data. 3. Inferential Statistics: You need to know how to draw conclusions about a population based on a sample. This involves learning about hypothesis testing, confidence intervals, and p-values. These techniques allow you to make inferences and decisions based on data samples. 4. Regression Analysis: You must know how to perform and interpret regression analysis, including both linear and logistic regression. Regression models help you understand relationships between variables and make predictions. 5. Multivariate Statistics: You should understand how to analyze data involving multiple variables. Techniques like principal component analysis (PCA) and cluster analysis help you deal with complex datasets and extract meaningful patterns. 6. Time Series Analysis: If you work with data that changes over time, you need to understand time series analysis. This includes methods for identifying trends, seasonality, and making forecasts. 7. Statistical Testing: You need to be familiar with various statistical tests, such as t-tests, chi-square tests, and ANOVA. These tests help you compare groups and determine if observed differences are statistically significant. 8. Experimental Design: You should know how to design experiments and analyze experimental data, especially for A/B testing or other types of controlled experiments. This includes understanding randomization, control groups, and blinding. 9. Bayesian Statistics: While not always required, understanding Bayesian statistics can be very useful. Bayesian methods provide a different approach to probability and statistical inference, often leading to more intuitive results. 10. Data Visualization: You need basic knowledge of statistical principles in data visualization. This includes understanding how to accurately represent data and avoid misleading visualizations. Now, let’s see the resources to learn Statistics- Resources to Learn Statistics Should I learn statistics before data science? Before learning data science, I made sure to learn statistics first, and I did it alongside my data science studies. This approach helped me understand how statistics is used in real-life data analysis. Learning statistics beforehand gave me a strong base. Concepts like probability, hypothesis testing, and regression analysis became familiar to me, and I could see how they are important in data science. Studying statistics alongside data science also helped me see how these concepts work in practice. For example, understanding probability distributions was important for using machine learning algorithms effectively. Also, learning about hypothesis testing helped me make smart decisions about model performance. Moreover, studying statistics alongside data science helped me become better at critical thinking. I learned to look at data carefully, spotting mistakes and biases. This skill has been really useful as I’ve worked with different datasets. In short, learning statistics before jumping into data science was a smart move. It gave me a strong foundation, practical skills, and a sharper eye for detail, all of which have been really helpful in my data science journey. Is statistics hard in data science? Is statistics hard in data science? Well, from my experience, it’s a bit of a mixed bag. Understanding statistics is important for data science, but some parts were trickier for me than others. Probability theory and hypothesis testing were tough cookies. Wrapping my head around abstract concepts like probability and the ins and outs of hypothesis testing took some extra effort. But with practice and breaking things down into smaller chunks, I eventually got the hang of it. However, some topics were smoother sailing. Descriptive statistics, for instance, felt more straightforward. Learning about stuff like mean, median, and mode made sense to me right off the bat. Getting hands-on with real-world data science projects also helped. Using regression analysis to predict things like housing prices or sales trends made the theory feel more concrete and easier to In the end, while some parts of statistics were tough, I found that persistence and practice paid off. Taking it step by step and not being afraid to ask for help when needed made all the difference. How long does it take to learn statistics for data science? How long does it take to learn statistics for data science? Well, it varies for everyone. From my experience, it’s not something you can rush through—it takes time to really understand the ins and For me, it took about 4 months of dedicated learning to properly grasp everything about statistics. I took it slow and steady, breaking down each concept into manageable pieces and ensuring I fully understood each one before moving on. Others might pick it up quicker, especially if they have a knack for numbers or prior experience with related subjects. It depends on factors like how much time you can dedicate to learning, your background knowledge, and how you prefer to learn—whether it’s through books, online courses, or hands-on practice. But here’s the thing: don’t rush it. Take the time you need to truly understand each concept. Break things down into smaller, more manageable pieces, and don’t be afraid to ask for help if something doesn’t click right away. In the end, it’s not about how fast you learn—it’s about how well you understand statistics and how you can apply it to data science. So take your time, stay patient, and keep pushing forward. You’ll get there eventually! So, I have shared everything related to my statistics learning journey with you. I hope it will help you and clear your doubts about “How Much Statistics is Needed for Data Science?“. If you have any doubts or queries, feel free to ask me in the comment section. I am here to help you. All the Best for your Career! Happy Learning! You May Also Be Interested In 10 Best Online Courses for Data Science with R Programming 8 Best Free Online Data Analytics Courses You Must Know in 2025 Data Analyst Online Certification to Become a Successful Data Analyst 8 Best Books on Data Science with Python You Must Read in 2025 14 Best+Free Data Science with Python Courses Online- [Bestseller 2025] 10 Best Online Courses for Data Science with R Programming in 2025 8 Best Data Engineering Courses Online- Complete List of Resources Thank YOU! To explore More about Data Science, Visit Here Though of the Day… ‘ It’s what you learn after you know it all that counts.’ – John Wooden Founder of MLTUT, Machine Learning Ph.D. scholar at Dayananda Sagar University. Research on social media depression detection. Create tutorials on ML and data science for diverse applications. Passionate about sharing knowledge through website and social media. Leave a Comment
{"url":"https://www.mltut.com/how-much-statistics-is-needed-for-data-science/","timestamp":"2024-11-07T15:50:20Z","content_type":"text/html","content_length":"145303","record_id":"<urn:uuid:48b63568-3977-4fc3-b986-797cdad96046>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00767.warc.gz"}
Calculating Expected Value Depends on Opponent by mhilger | Sep 5, 2024 | Advanced Poker Strategies Eric “Rizen” Lynch provides a hand analysis below. A few months ago, I wrote an article about how some players are misapplying expected value calculations. The column focused on a particular play of shoving a wide range of hands when playing heads up with 20 big blinds. If you do the math, you discover that pushing all in is perfect strategically: You could play with your hand faceup and it would still be profitable. That column discussed several different factors that you should consider, but one key factor is that just because a play is profitable, it doesn’t mean you should make it. You should be making the most profitable play. I was reviewing a book Winning Poker Tournaments One Hand at a Time, Volume II. The book has three authors — Jon “PearlJammer” Turner, Eric “Rizen” Lynch, and Jon “Apestyles” Van Fleet. Each author walks through the key hands of a tournament that he played, from the bubble through heads-up play. One of the hands from Rizen illustrates that the optimal play depends on the particular opponent you are facing, and it makes for a great follow-up to the column I wrote. Hand Analysis – Opponent Has 20 Big Blinds Here’s the hand, described by Rizen: Hand 16 Seat 2, small blind: 37,886 (Rizen) Seat 3, big blind: 48,159 1,200-2,400 blinds, 300 ante Setup: I have just been moved to a new table, and we are seven-handed. I have no significant reads on the table, except that the player to my immediate left is a very good player who plays a push/ fold game. He almost never takes flops, and loves to resteal all in with almost reckless abandon. This is the third hand at my new table, and I have a little more than 15 big blinds. Preflop (5,700) Q However, he will still call me with a very wide range. He is probably not going to fold any hand that is ahead of me, except for some weak A-X and K-X hands. If I were to make a standard raise, he would reraise all in anyway with all of the hands with which he would call an all-in push. Going all in is actually a lower-variance play than raise/calling. By pushing, I avoid some marginal all-in situations compared to the raise/call strategy, and I add more than 11 percent to my stack when my opponent folds. I push all in for 37,586, and the big blind folds. Calculating the Expected Value of Raising and Calling All-In Showing this mathematically, if I were to make a standard raise and call an all-in reraise based on my read that he was reraising with a very wide range (for example, 2-2+, A-2+ suited, K-2+ suited, Q-8+ suited, J-8+ suited, 10-8+ suited, 9-7+ suited, 8-6+ suited, 7-6 suited, 6-5 suited, A-2+ offsuit, K-5+ offsuit, Q-10+ offsuit, J-10+ offsuit, and 10-9 offsuit), it totals 39.1 percent of hands. So, 60.9 percent of the time, I would win the blinds and antes, and the other 39.1 percent of the time, I would have 44.5 percent equity in the pot: (.609) 5,700 + (.391) [(.445) (39,386) – (.555) (36,286)] 3,471 + (.391) (17,527 – 20,139) 3,471 + (.391) (-2,612) 3,471 – 1,021 EV [Expected Value] = 2,450 Note that I will also be out of the tournament 21.7 percent of the time this way (39.1 percent of 55.5 percent). Calculating the Expected Value of Pushing All-In The second option is pushing, assuming that my opponent will call an all-in bet with a tighter range than he would three-bet with: 4-4+, A-2+ suited, K-8+ suited, Q-10+ suited, J-10 suited, A-7+ offsuit, K-10+ offsuit, Q-10+ offsuit, and J-10 offsuit (22.8 percent of hands). By pushing, my expected value increases, as I will win the blinds and antes 77.2 percent of the time and have 40.7 percent equity when he calls: (.772) 5,700 + (.228) [(.407) (39,386) – (.593) (36,286)] 4,400 + (.228) (16,030 – 21,518) 4,400 + (.228) (-5,488) 4,400 – 1,251 EV [Expected Value] = 3,149 Pushing not only shows a higher profit, but also results in being knocked out of the tournament only 13.5 percent of the time (22.8 percent of 59.3 percent). Calculating the Expected Value of Raising and Folding There is a third option, raise/folding, which is similar to the first scenario, but I lose just my raise amount (approximately three times the big blind minus the posted small blind, so 2.5 times the big blind, or 6,000): (.609) 5,700 + (.391) (-6,000) 3,471 – 2,346 EV [Expected Value] = 1,125 Although this play has positive expectation, it is by far the lowest-expectation option against this player, but it does result in never being eliminated. Calculating the Expected Value Against a Tight-Passive Opponent By contrast, I’m going to show how the results would change if the big blind were a more tight-passive player who would reraise all in with the same range with which he would call (that is, never resteal). The range I’m using for this player is 7-7+, A-10+ suited, K-10+ suited, Q-J suited, A-10+ offsuit, K-J+ offsuit, and Q-J offsuit (12.4 percent of hands). Since his range is the same for both the first scenario (in which he goes all in over the raise and I call) and the second scenario (in which I push all in), I can use the same calculations for both. This works only because his range for pushing and for calling are identical. When all in, my equity is 33.2 percent against his range. (.876) 5,700 + (.124) [(.332) (39,386) – (.668) (36,286)] 4,993 + (.124) (13,076 – 24,239) 4,993 + (.124) (-11,163) 4,993 – 1,384 EV [Expected Value] = 3,609 Against this player, I would be out of the tournament 8.2 percent of the time (12.4 percent of 66.8 percent). I have good expected value here, but raising three times the big blind and then folding to this player has even better expected value: (.876) 5,700 + (.124) (-6,000) 4,993 – 744 EV [Expected Value] = 4,249 This option provides better expected value and also no chance of elimination. Optimal Play Based on Your Opponent There are several important conclusions to make from this sort of analysis: The first is that against hyper-aggressive players who are willing to push with a very wide range, it can often be more profitable, with lower variance, to just take the play away from them by pushing all in yourself, rather than give them the chance to push over your raise in a situation where you have less than 20 big blinds. Second, against tighter, uncreative players who are going to play more straightforwardly, raising and folding if they push all in can be better than just pushing all in. When choosing the optimal play, it’s very important to have an idea of how aggressive your opponent is and how he might react. Rizen’s hand analysis clearly demonstrates the mathematics behind this play. More importantly, he explains that pushing is sometimes the best play, while raising and folding can also be the best play. Avoid playing robotic poker, and make sure that you are exploiting the play of the specific opponent you are facing. Calculating Expected Value Depends on Opponent Average rating: 0 reviews
{"url":"https://americancardrooms.com/calculating-expected-value-based-on-opponent/","timestamp":"2024-11-06T11:16:09Z","content_type":"text/html","content_length":"69606","record_id":"<urn:uuid:12068100-6c66-4a40-9175-392c9b533b4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00433.warc.gz"}
Equational derivations of What is lambda? Lambda is a notation for expressing anonymous functions. For instance, in Python, we define the function f using def: def f(x): return x + 1 print (f(10)) # prints 11 Or, we can assign f to lambda x: x + 1: f = lambda x: x+1 print (f(10)) # prints 11 The lambda calculus Alonzo Church discovered the lambda calculus back in the 1920s, not as a programming language, but in an effort to find a functional foundation for logic. The lambda calculus is what’s left of mathematics if you take away everything except for variables, anonymous functions (lambdas) and function application. It’s also the world’s smallest universal programming language, and it lives inside many major programming languages these days, including Python, JavaScript, Ruby, PHP, Java and C++. (Of course, it also lives in the functional languages.) The following grammar carves out a subset of Python that encodes the lambda calculus: <exp> ::= <var> | <exp>(<exp>) | lambda <var>: <exp> | (<exp>) <var> is a legal variable name In the lambda calculus, all values are functions. A programming language out of lambda At first glance, the lambda calculus doesn’t appear to be a universal foundation for programming. To convert it into a universal foundation, we will steadily construct the features expected of a modern programming language: • void values; • multi-argument functions; • booleans and conditionals; • numbers and arithmetic; • pairs; • lists; and • recursive functions. To do this, we’ll employ lambda-based programming techniques such as Currying, Church encodings, self-application, eta-expansion and the fixed-point combinators. A void value Sometimes, we’ll need a “void” value that we can pass when we expect that argument to be ignored. We’ll the identity function to accomplish this; VOID = lambda void: void If we wanted ‘debugging’ support for this development, you might want to temporarily use an alternate definition of VOID: def VOID(_): raise Exception('Cannot invoke VOID!') In this case, if someone accidentially invokes VOID, it will blow up. Multi-argument functions The lambda calculus requires that every function have exactly one argument. It’s easy to see how to fake zero-argument functions: we can take every call with zero arguments: and convert it into a call that takes VOID: so that every lambda taking zero arguments: lambda : body becomes a lambda with an ignored argument: lambda _ : body If we want true multi-argument functions, we need to use Currying. In the case of a function over two arguments, the Curried function becomes a function that accepts the first argument, but then returns a function accepting the second argument and returning the For example, suppose we have a function sum: sum = lambda x, y: x + y print (sum(2,3)) # prints 5 we could Curry it to produce a function that completes the sum: sum = lambda x: lambda y: x + y print (sum(2)(3)) # prints 5 At this point, we can desugar multi-argument functions into strictly single-argument functions. Conditionals and Booleans To develop conditionals, we want to create a function that simulates the behavior of the classic McCarthy-style if <condition> then <exp> else <exp> construct. In C-like languages, this construct is the ternary operator: <condition> ? <exp> : <exp> and in Python proper, this is <exp> if <condition> else <exp>` To start, we’re going to constrain the function IF and the values TRUE and FALSE so that: IF (TRUE) (true_value) (false_value) == true_value IF (FALSE) (true_value) (false_value) == false_value There are several ways to do this, but we can take a step in the right direction by remembering that all values in the lambda calculus are functions – including whatever TRUE and FALSE must be. Then, we can constrain IF to be the function that applies the condition to its arguments: IF (cond) (true_value) (false_value) == cond(true_value)(false_value) which leads to a pure-lambda definition of IF: IF = (lambda cond: lambda true_value: lambda false_value: At this point, we can equationally solve for the definition of TRUE and FALSE: TRUE (true_value) (false_value) == IF (TRUE) (true_value) (false_value) == true_value And, as before, we can turn this into a valid Python definition by moving the arguments with lambdas: TRUE = lambda true_value: lambda false_value: true_value Following the same process for FALSE yields: FALSE = lambda true_value: lambda false_value: false_value Making it work with eager languages This encoding for TRUE and FALSE works when the computations for the true branch and the false branch are terminating and error-free. But, if we tried IF (TRUE) (value) (nonterminate()) where nonterminate() never completes, then the result will not be equal to value, because the entire expression will never terminate. To fix this, we must “thunk” the arguments to IF by wrapping them in lambdas to suspend their computation: IF (TRUE) (lambda: true_value) (lambda: false_value) == true_value IF (FALSE) (lambda: false_value) (lambda: false_value) == false_value The definition of IF stays the same, but resolving for TRUE and FALSE yields: TRUE = lambda true_value: lambda false_value: true_value() FALSE = lambda true_value: lambda false_value: false_value() Church numerals are a means for encoding the natural numbers as functions. In the spirit of encoding data as its use, Church numerals encode natural numbers as iterated application. For example, the Church numeral for 3 is a function that applies a composes its argument with itself 3 times. That means that ZERO, given any function, returns the identity function: ZERO (f) (z) == z so that: ZERO = lambda f: lambda z: z The Church numeral for ONE applies its argument once; TWO applies it twice; THREE applies it three times: ONE (f) (z) == f(z) TWO (f) (z) == f(f(z)) THREE (f) (z) == f(f(f(z))) so that: ONE = lambda f: lambda z: f(z) TWO = lambda f: lambda z: f(f(z)) THREE = lambda f: lambda z: f(f(f(z))) Alternatively, if we define function composition, compose: compose = lambda f,g: lambda x: f(g(x)) we can represent these numbers in a point-free style: ONE = lambda f: f TWO = lambda f: compose(f,f) THREE = lambda f: compose(f,compose(f,f)) We can write a function in Python that turns a Python number into a Church numeral: def numeral(n): return lambda f: lambda z: z if n == 0 else f(numeral(n-1)(f)(z)) print (numeral(0)(lambda x: x+1)(0)) # prints 0 print (numeral(7)(lambda x: x+1)(0)) # prints 7 print (numeral(3)(lambda x: 2*x)(1)) # prints 8 and a function that turns a Church numeral back into a number: natify = lambda c: c(lambda x: x+1)(0) print (natify (ONE)) # prints 1 print (natify (TWO)) # prints 2 Given the representation of a Church numeral, \(n\), its successor, \(n+1\) should apply f one more time: SUCC (n) (f) (z) == f(n(f)(z)) By pulling the arguments across with lambda, we get a definition for SUCC: SUCC = lambda n: lambda f: lambda z: f(n(f)(z)) And, it works: print (SUCC (ONE) (lambda x: x+1) (0)) # prints 2 To add \(n\) and \(m\) as Church numerals, we want to apply a function \(n\) times and then apply it \(m\) times. So, we can define addition using composition: SUM (n) (m) (f) (z) == compose(n(f),m(f))(z) so that: SUM (n) (m) (f) == compose(n(f),m(f)) and using lambdas to peel off the arguments: SUM = lambda n: lambda m: lambda f: compose(n(f),m(f)) And, after inlining the compose: SUM = lambda n: lambda m: lambda f: lambda z: n(f)(m(f)(z)) A few tests show that it works: FOUR = SUM (TWO) (TWO) FIVE = SUM (TWO) (THREE) print (natify (FOUR)) # prints 4 print (natify (FIVE)) # prints 5 To multiply \(n\) and \(m\) as Church numerals, we want to apply the encoding of \(n\) exactly \(m\) times: MUL (n) (m) (f) (z) == m(n(f))(z) so that peeling the arguments off with lambdas yields: MUL = lambda n: lambda m: lambda f: lambda z: m(n(f))(z) And, we can even derive a point-free version with compose: MUL = lambda n: lambda m: lambda f: lambda z: m(n(f))(z) = lambda n: lambda m: lambda f: m(n(f)) = lambda n: lambda m: lambda f: compose(m,n)(f) = lambda n: lambda m: compose(m,n) And, it works: FOUR = MUL (TWO) (TWO) SIX = MUL (TWO) (THREE) print (natify (FOUR)) # prints 4 print (natify (SIX)) # prints 6 To create pairs (and tuples), we want a constructor that can create a pair (PAIR) and accessors to pull components out (LEFT, RIGHT). Together, they need to obey the following properties: LEFT (PAIR (a) (b)) == a RIGHT (PAIR (a) (b)) == b Let’s assume that a pair is a function that applies its first argument to both the left and the right value: PAIR (a) (b) == lambda f: f(a)(b) so that: PAIR = lambda a: lambda b: lambda f: f(a)(b) Then we can solve for LEFT: == LEFT (PAIR (a) (b)) == LEFT (lambda f: f(a)(b)) Assuming LEFT (p) == p(g) for some unknown function g lets us solve for g: LEFT (lambda f: f(a)(b)) == (lambda f: f(a)(b))(g) == g(a)(b) All together, a == g(a)(b), so we can solve for g by pulling the arguments across with lambda: g = lambda a: lambda b: a which means that: LEFT = lambda p: p(lambda a: lambda b: a) Repeating the process for RIGHT yields: RIGHT = lambda p: p(lambda a: lambda b: b) To create lists, we need a NIL element, a list constructor CONS, a predicate for detecting empty lists NILP, a HEAD accessor and a TAIL accessor, so that all together, the following constraints hold: NILP (NIL) == TRUE NILP (CONS (hd) (tl)) == FALSE HEAD (CONS (hd) (tl)) == hd TAIL (CONS (hd) (tl)) == tl To pull this off, we will allow a list to take two functions: one to call when the list is empty and the other to call when the list is non-empty. Under this interpretation, NIL should call the empty function: NIL = lambda onnil: lambda onlist: onnil() And, the CONS constructor should call the non-empty function with both the head and the tail: CONS (hd) (tl) == lambda onnil: lambda onlist: onlist(hd)(tl) so that: CONS = (lambda hd: lambda tl: lambda onnil: lambda onlist: onlist(hd)(tl)) Under this convention, NILP can engineer the functions it passes in to return the appropriate value: NILP (list) == list (lambda: TRUE) (lambda hd: lambda tl: FALSE) so that: NILP = lambda list: list (lambda: TRUE) (lambda hd: lambda tl: FALSE) And, at the same time HEAD and TAIL can extract the appropriate value: HEAD = lambda list: list (VOID) (lambda hd: lambda tl: hd) TAIL = lambda list: list (VOID) (lambda hd: lambda tl: tl) Non-termination via the U Combinator The first hint that the lambda calculus may be capable of general purpose computation comes from the U Combinator. The U combinator is the function that applies its argument to itself: U = lambda f: f(f) When we apply the U Combinator to itself, something strange happens: U(U) # error: stack overflow! Because Python lacks tail-call optimization, it blows out the call stack. In languages with correct tail-call handling, it non-terminates. In the context of U(U), the expression f ends up being bound to lambda f: f(f). Through self-application, we achieved self-reference. Recursion via the U Combinator Once non-termination enters the picture, it’s natural to wonder if one might do more than just non-terminate. Can we achieve recursion through self-application? We can. Consider the recursive definition of factorial: fact = lambda n: 1 if n <= 0 else n*fact(n-1) We can try to pass a copy of “factorial” into itself: fact = ((lambda n: 1 if n <= 0 else n*fact(n-1)) (lambda n: 1 if n <= 0 else n*fact(n-1))) But, this doesn’t work: n ends up bound to (lambda n: 1 if n <= 0 else n*fact(n-1)), and the program breaks. But, what if we add an extra parameter to represent the function itself? We end up with: fact = ((lambda f: lambda n: 1 if n <= 0 else n*fact(n-1)) (lambda f: lambda n: 1 if n <= 0 else n*fact(n-1))) Now, it works, but we still have a recursive reference to fact. Fortunately, f(f) (or U(f)) will produce a new reference to fact. So, we can use: fact = ((lambda f: lambda n: 1 if n <= 0 else n*(U(f))(n-1)) (lambda f: lambda n: 1 if n <= 0 else n*(U(f))(n-1))) and clean the whole expression up with a call to the U combinator: fact = U(lambda f: lambda n: 1 if n <= 0 else n*(U(f))(n-1)) Sure enough, it works: U(lambda f: lambda n: 1 if n <= 0 else n*(U(f))(n-1))(5) # prints 120 Booleans. Numbers. Pairs. Lists. Recursion. None are fundamental. If we have anonymous functions, we have all of these. Expressing recursion via the Y Combinator The U Combinator may be sufficient to demonstrate the universality of the lambda calculus, but for those that wish to use the lambda calculus as an everyday programming language, it is cumbersome. The Y Combinator is an elegant way to express recursive functions. To express recursion via the Y Combinator requires two steps: 1. express a recursive function as the fixed point of a non-recursive function; and 2. create a function – the Y Combinator – to find the fixed point of another function, without using recursion. The fixed point of a function \(f\) is the value \(x\) such that \(x = f(x)\). A functional is a function that takes a function as its argument. The fixed point of a functional (if it exists), by definition, would have to be a function. To put this in more concrete terms, we can work on a specific example: What is the functional for which factorial is a fixed point? Let’s cheat at first, and assume we already have a recursive defintion of factorial available: fact = lambda n: 1 if n <= 0 else n*fact(n-1) A simple functional that returns factorial when given factorial is: lambda f:fact We can inline fact once to arrive at: lambda f: lambda n: 1 if n <= 0 else n*fact(n-1) Given factorial, the above will return factorial. Under the assumption that factorial will be passed in for f, we can confidently state that the fixed point of the following functional is factorial: lambda f: lambda n: 1 if n <= 0 else n*f(n-1) What we need now is a function Y that will find the fixed point of a functional for us, so that: fact = Y(lambda f: lambda n: 1 if n <= 0 else n*f(n-1)) Deriving the Y Combinator To derive the Y Combinator, let us first state the key property we seek; Y(F) should return the fixed point of F: Y(F) == f and f == F(f) Substituting F(f) for f gives: Y(F) == F(f) Because f == Y(F), we can substiute for f yet again to yield: Y(F) == F(Y(F)) We can pull the argument off with lambda to get an operational definition: Y = lambda F: F(Y(F)) Of course, this won’t work: Y(lambda f: lambda n: 1 if n <= 0 else n*f(n-1))(5) # stack overflow! The problem is that Y immediately invokes Y recursively. Fortunately, we can \(\eta\)-expand the call to Y, under the observation that for any expression in the lambda calculus, e: e == lambda x: e(x) except that if e is non-terminating, then the expanded expression will terminate. This expansion yields a new definition of Y: Y = lambda F: F(lambda x:Y(F)(x)) And, already, this works: Y(lambda f: lambda n: 1 if n <= 0 else n*f(n-1))(5) # prints 120 All that’s left now is to apply the U combinator to eliminate the explicit recursion in the definition of Y: Y = U(lambda h: lambda F: F(lambda x:U(h)(F)(x))) And, of course, we can also inline U to leave an expression in the pure lambda calculus: Y = ((lambda h: lambda F: F(lambda x:h(h)(F)(x))) (lambda h: lambda F: F(lambda x:h(h)(F)(x)))) Y(lambda f: lambda n: 1 if n <= 0 else n*f(n-1))(5) # prints 120 All together Putting together all of the Church encodings and the Y Combinator allows the expression of factorial in pure lambda calculus, in which case fact(5) becomes: (((lambda f: (((f)((lambda f: ((lambda z: (((f)(((f)(((f)(((f)(((f) (z)))))))))))))))))))((((((lambda y: ((lambda F: (((F)((lambda x: (((((((y)(y)))(F)))(x)))))))))))((lambda y: ((lambda F: (((F)((lambda x: (((((((y)(y)))(F)))(x)))))))))))))((lambda f: ((lambda n: (((((((((((( lambda n: (((((n)((lambda _: ((lambda t: ((lambda f: (((f)((lambda void: (void)))))))))))))((lambda t: ((lambda f: (((t)((lambda void: (void))))) ))))))))((((((lambda n: ((lambda m: (((((m)((lambda n: ((lambda f: ((lambda z: (((((((n) ((lambda g: ((lambda h: (((h)(((g)(f))))))))))) ((lambda u: (z)))))((lambda u: (u)))))))))))))(n))))))) (n)))((lambda f: ((lambda z: (z)))))))))((lambda _: ((((lambda n: (((((n) ((lambda _: (( lambda t: ((lambda f: (((f)((lambda void: (void))))))))))))) ((lambda t: ((lambda f: (((t)((lambda void: (void))))))))))))) ((((((lambda n: ((lambda m: (((((m)((lambda n: ((lambda f: ((lambda z: (((((((n) ((lambda g: ((lambda h: (((h)(((g)(f)))))))))))((lambda u: (z)))))((lambda u: (u)))))))))))))(n)))))))((lambda f: ((lambda z: (z)))))))(n))))))))) ((lambda _: ((lambda t: ((lambda f: (((f)((lambda void: (void))))))))))) ))((lambda _: ((lambda f: ((lambda z: (((f)(z)))))))))))((lambda _: ((( (((lambda n: ((lambda m: ((lambda f: ((lambda z: (((((m)(((n)(f)))))(z) ))))))))))(n)))(((f) ((((((lambda n: ((lambda m: (((((m)((lambda n: ((lambda f: ((lambda z: (((((((n) ((lambda g: ((lambda h: (((h)(((g)(f) ))))))))))((lambda u: (z)))))((lambda u: (u)))))))))))))(n)))))))(n))) ((lambda f: ((lambda z: (((f) (z))))))))))))))))))))))))(lambda x:x+1)(0) A condensed version of the code in this post is available. More reading 1. Write PRED to return the predecessor of a Church numeral.
{"url":"https://matt.might.net/articles/python-church-y-combinator/","timestamp":"2024-11-11T13:43:40Z","content_type":"text/html","content_length":"33980","record_id":"<urn:uuid:b793f192-17e2-4586-bad9-750b34600222>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00115.warc.gz"}
manual pages Theoretical maximum for degree centralization See centralize for a summary of graph centralization. graph = NULL, nodes = 0, mode = c("all", "out", "in", "total"), loops = FALSE graph The input graph. It can also be NULL, if nodes, mode and loops are all given. nodes The number of vertices. This is ignored if the graph is given. mode This is the same as the mode argument of degree. loops Logical scalar, whether to consider loops edges when calculating the degree. Real scalar, the theoratical maximum (unnormalized) graph degree centrality score for graphs with given order and other parameters. See Also Other centralization related: centr_betw_tmax(), centr_betw(), centr_clo_tmax(), centr_clo(), centr_degree(), centr_eigen_tmax(), centr_eigen(), centralize() # A BA graph is quite centralized g <- sample_pa(1000, m = 4) centr_degree(g, normalized = FALSE)$centralization %>% centr_degree(g, normalized = TRUE)$centralization version 1.2.5
{"url":"https://igraph.org/r/html/1.2.5/centr_degree_tmax.html","timestamp":"2024-11-14T14:26:01Z","content_type":"text/html","content_length":"10060","record_id":"<urn:uuid:3f3403d7-6603-403b-b029-32e6b6f5a8f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00821.warc.gz"}
Fermionic anti-commutation relations 6089 views For Pauli's exclusion principle to be followed by fermions, we need these anti-commutators $$[a_{\lambda},a_{\lambda}]_+=0 $$ and $$[a_{\lambda}^{\dagger},a_{\lambda}^{\dagger}]_+=0 $$ Then $$n_{\ lambda}.$$ which gives $ n_{\lambda}=0,1 $. Here we used the anti-commutator $$[a_{\lambda},a_{\lambda^{\prime}}^{\dagger}]_+= \delta_{\lambda,\lambda^{\prime}} $$ But we could have used even a commutator instead of the anti-commutator and still got the same result i.e. if we choose $[a_{\lambda},a_{\lambda^{\prime}}^{\dagger}]_{-}=\delta_{\lambda,\lambda^{\prime}} $ then $n_{\lambda}^{2}= a_{\lambda}^{\dagger}a_{\lambda}a_{\lambda}^{\dagger}a_{\lambda}=a_{\lambda}^{\dagger}\left(1+a_{\lambda}^{\dagger}a_{\lambda}\right)a_{\lambda}=a_{\lambda}^{\dagger}a_{\lambda}=n_{\lambda} $ which also gives $n_{\lambda}=0,1 $ What conditions make us impose the last anti-commutation relation $$[a_{\lambda},a_{\lambda^{\prime}}^{\dagger}]_+= \delta_{\lambda,\lambda^{\prime}} $$ instead of $[a_{\lambda},a_{\lambda^{\prime}}^ {\dagger}]_{-}=\delta_{\lambda,\lambda^{\prime}} $ ? I mean, we do not need all relations to be anti-commuting. I can take 2 of them to be anti-commuting but the third one i.e. relation between creation and annihilation operator to be commuting and still maintain the Pauli's exclusion This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user cleanplay Perhaps my eyes are deceiving me, but it seems that you actually used the anti-commutator you were wondering about in your second equality after "because." This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user joshphysics @joshphysics made the change. I mean, I could have used even a commutator instead the anti-commutator and still got the same result This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user cleanplay Related: physics.stackexchange.com/q/17893/2451 This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user Qmechanic @Qmechanic : thanks, though it addresses the question but it is a bit vague for me. This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user cleanplay We need it because we want the "occupied" state to have $n_\lambda=1$; I will omit the $\lambda$ argument everywhere. In other words, we need $$ n a^\dagger |0\rangle \equiv a^\dagger a a^\dagger |0\ rangle = 1\cdot a^\dagger $$ But the left hand side has the operator that is $$a^\dagger a a^\dagger |0\rangle = a^\dagger ([a,a^\dagger]_+ - a^\dagger a) = a^\dagger [a,a^\dagger]_+ $$ where the last term was dropped in the last term because $(a^\dagger)^2=0$. So we demand $$a^\dagger[a,a^\dagger]_+ |0\rangle = a^\dagger|0\rangle.$$ In combination with your other conditions, this is only possible if the anticommutator is one – we may "cancel" the $|0\rangle$ ket vector because a similar condition may be derived for $|1\rangle$ as the ket vector. This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user Luboš Motl @Motl I mean we could also use the commutator $ [a_{\lambda},a_{\lambda^{\prime}}^{\dagger}]_{-}=\delta_{\lambda,\lambda^{\prime}} $ and still get the same result. Why should I use only the anti-commutator ? I can use the anti-commutators $$[a_{\lambda},a_{\lambda}]_+=0 $$ and $$[a_{\lambda}^{\dagger},a_{\lambda}^{\dagger}]_+=0, $$ along with $$ [a_{\lambda},a_{\lambda^{\prime}}^{\ dagger}]_{-}=\delta_{\lambda,\lambda^{\prime}} $$ This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user cleanplay You are obviously wrong. When you have operators with anti-commutation relations, you have : $${a^+}_\lambda {a^+}_{\lambda'} + {a^+}_{\lambda'} {a^+}_{\lambda} = 0 \tag{1}$$ Taking $\lambda = \lambda'$, you get : $${a^+}_\lambda {a^+}_{\lambda} = 0 \tag{2}$$ If you have operators with commutation relations, you have : $${a^+}_\lambda {a^+}_{\lambda'} - {a^+}_{\lambda'} {a^+}_{\lambda} = 0 \tag{3}$$ Taking $\lambda = \lambda'$, you get : $${a^+}_\lambda {a^+}_{\lambda} - {a^+}_{\lambda} {a^+}_{\lambda} = 0 \tag{4}$$ which is a trivial equation ($x=x$) So, it is not true, with commutation relations, that you have : ${a^+}_\lambda {a^+}_{\lambda} = 0$, so your equation $n_\lambda ^2=n_\lambda$ is obviously false for operators with commutations This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user Trimok My question is only about the last anti-commutation relation which you did not use in your proof. I understand that you need the two anti-commutation relations that you have used, in order to prove the Pauli's exclusion principle. My question is on what basis we choose the third relation i.e the relation of the creation and annihilation operator to be of anti-commutator type and not commutator type. Read my comment in response to Lubos Motl's answer. This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user cleanplay @cleanplay : This is an other question... In short, this has to do with the spin-statistics theorem. For instance, in quantum field theory, if you write the hamiltonian for a fermionic field (for instance a Dirac field), you will find something like $H = \sum_k (b^+_kb_k - d_kd^+_k)$ ($b$ concerns particles and $d$ concerns anti-particles). But this hamiltonian has to be bounded below, and you have to choose anti-commutation relations, to have $H = \sum_k (b^+_kb_k + d^+_k d_k)$, up to a (infinite) constant. This post imported from StackExchange Physics at 2014-05-04 11:38 (UCT), posted by SE-user Trimok
{"url":"https://www.physicsoverflow.org/16907/fermionic-anti-commutation-relations","timestamp":"2024-11-13T06:09:10Z","content_type":"text/html","content_length":"177660","record_id":"<urn:uuid:5dbf4913-7600-4a3a-89d7-cf620316178c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00341.warc.gz"}
Margins and Margins Plot in Stata | The Data Hall Margins and Margins Plot in Stata This article details out the margin and marginal plots in Stata. The margins, in Stata, are referred to as a technique that is used to calculate the marginal effects of independent variables in models such as regression. Using margins, we can estimate the change in predicted values when one or more independent variable changes, keeping other variables constant. Margin plot on the other hand is the graphical representation of marginal effects calculated using the technique of marginal command. To demonstrate it further, let’s import a data set in stata, using the following command webuse lbw This data set contains information about the effect of smoker and non-smoker mothers on the weight of newborn babies. We first describe the data set to figure out what each variable is about. To describe the data set, use the following command This command generates the following results, that outlines the details of each variable in the data set. Once we have the details of dataset, we can find the effect of smoking on birth weight of babies. To find the effect of smoking habits of mothers, on weight of the children, we regress birth weight on smoke using following command regress bwt age i.smoke Note that, we used i.smoke variable in the regression. This is because smoke variable is categorical and to get margins for the smoke variable, we need to specify Stata about the categorical variable. The age variable is added additionally, which outlines the effect that how age of mother affects the birth weight of babies, depending on whether she smokes or not. Following regression results are generated from the above command Margins in Stata To get the marginal effects for the above regression, we use the following command This generates a table that predicts the expected birth weight of the babies, which is 2944.286 grams, based on the data set. We can get the graphical representation of above margins. To get the visual representation of these margins, we use the following command to generate margins plot in Stata The following marginsplot is generated in Stata, that shows the predictive margins of effect of smoking habits of women during pregnancy on the birth weight of their children with a 95% confidence Now to get the individual effect of smoker and non-smoker mothers on the weight of babies at the time of birth, we use the following command margins smoke This will categorically explain the effect of smoker and non-smoker mothers on the birth weight of babies, as shown below The above table shows that babies’ of non-smoker mothers are better off in terms of having more weight, compared to babies of smokers. As the saying goes, smoking is injurious to health. Well, that’s an information we have now verified from the data, too. Now the individual effect has been explained, we now create the marginal plot that graphically represent the effect. We use the following command for creating marginal plot in Stata. The following margins plot is generated in Stata As the margin plot is just the visual representation of margins table, it also verifies the table that babies of non-smoker are weighed more than the babies of smoking mothers. Although the graph shows the effect of smoking, it doesn’t visualize how much non-smoker mothers are better off with their babies’ health. To get this kind of visualization, we can get the bar graph by using the following command marginsplot, recast(bar) This command creates a type of bar chart as shown below. Now we can confirm that mothers who don’t smoke have babies with average weight of 3000 grams and are much better off, compared to smokers that have average weight of around 2700 grams. Now that we have created graphs for the smoker and non-smoker categories, we can create margins for the respective ages of mothers too, and their effect on the birth weight of children. To create margins for the age of mothers, use the following command margin, at(age= (18(3)30)) Let me walk you through this command first, in case you have a different data set with different values. The values (18(3)30) are the range of age, where 18 is the starting age and 30 is the ending age. The value 3 in parentheses is the increment of age by 3. So, in the above command we are taking age starting from 18 and incrementing it by 3 i.e. age 18, 21, 24 and so on , thus ending the age at 30. The above command generates the following results in Stata The birth weight of baby of mother aged 18 is around 2800 grams and the weight increases by the increase of age. To create a margin plot of the above table, we again use the following command that we used earlier The margin plot again verifies the results table. However, the table and margin plot generated above doesn’t take into account the effect of smoking on the birth weight of babies, so to incorporate that effect, we use the following command margin smoke, at(age= (18(3)30)) The above table generated explains the effect of age and smoking/nonsmoking combined on the birth weight in detail. Interpreting the above table, we can see that mothers at the age of 18 having smoking habits have babies with average weight of 2717 gram, compared to mothers with nonsmoking habits having babies with birth weight of 2994 grams. Similarly, the margins plot for this table will look like this, indicating that women with nonsmoking habits are clearly better off compared to their counterparts. To create a margins plot for the above table, use the following command We can also create margins for the specific age too, whatever the requirement of data is. For instance, if we want to create a margin for the mother aged 35, we can use the following command margin smoke, at(age=35) The following margin table is generated, which describes that average weight of babies of mothers aged 35 is 3075 grams. However, the above margin again doesn’t incorporate the smoking effect. To see the effect of smoking on weight of baby for mothers aged 35, we use the following command margin smoke, at(age=35) Now the margin table is generated that provides details about the weight of newborn babies, that changes depending on the smoking habits of their mothers. Again, the following margins plot is generated for the above margins., using the given command Formatting of Margins Plots in Stata Now if we want to visualize this data graphically, one way is to simply create a margins plot, that will generate a graph similar to plots generated above. However, we can change that plot too, by incorporating a few features. If we want to create the margin plot by using the line plot and don’t add any confidence intervals into it, we can use the following command marginsplot, recast(line) noci The above command will generate following margins plot in Stata. Similarly, if you want to have a better picture of data, the bar plot can be created in Stata. This bar plot can be generated using following graph, where y-axis is the marginal effects and x-axis is the categorical independent variable marginsplot, recast(bar) noci However, if you are a monochromatic person, where colorful graphs are not your thing, you can generate the monochrome graph using the following command. marginsplot, recast(bar) noci scheme(s2mono) The following monochromatic graph will be generated in Stata using the above command. We can also give titles to our margins plot in Stata. To give a margin plot title, the following command will be used marginsplot, recast(line) noci title("Difference of weights in babies of smokers and non-smoker mothers") Only margins for the linear regression or linear model was covered in this article, however, the concept of margins is applied for the non-linear models too. Those nonlinear models whose marginal effects one wants to study include logit model, probit model or other non-linear models depending on your data. 0 Comments Inline Feedbacks View all comments | Reply
{"url":"https://thedatahall.com/margins-and-margins-plot-in-stata/","timestamp":"2024-11-03T16:11:58Z","content_type":"text/html","content_length":"193301","record_id":"<urn:uuid:3f312cf1-e614-41d9-84a1-992b59267551>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00696.warc.gz"}
Geometric Proofs of Pi - Measuring Pi Squaring Phi Welcome to the Geometric Proofs of Pi section of our Measuring Pi Squaring Phi web site. News Flash Update! Nov 30, 2018… Scroll down past Proof 6 in this section and view the latest simplified Proof 7 (a) Pi Circumference Measurement and Proof 7 (b) simplified Math Proof for the true value of Pi = 4 / sqrt (Phi). This section is divided into two parts: (1) A Geometric picture of each Proof including an accompanying step by step summary of proof, and (2) A video of my stepping the reader through each step of each proof. The best way to view this section is to first do Part (1) View the Geometric Proof Photo, and then walk through Part (2), its video explanation. These geometric proofs are shown in chronological order of development as I learned more and more about how to discover the true value of Pi. Note 1: For Geometric Proof 1, how was the blue circle with a given circumference of 2-units created? The real question is: how can one transfer the given 2-unit circumference of the blue circle onto a straight line which can then be used to draw the given 2-unit sided Square YHWT, which is then used in drawing Kepler’s Triangle and the rest of the drawing to prove True Pi? I used an extra-sharp, carbon tool steel circular blade made by OLFA, Inc. and rolled it against a T-Square on a straight line on the drawing. For Proof 1, one revolution equals a given 2-units. My OLFA, Inc. rotary blade had an axle, so it was easy to roll against my T-Square after marking a point on the blade’s circumference to transfer one 2-unit revolution onto a straight line. You can draw the given blue circle as a circle, of course, by simply drawing a circle around the rotary blade (allowing for the width of the pencil line) and then transferring the blue circle to the rest of the drawing with your compass. The above is true for both Proof 1 Fig 1 and Proof 2 Fig 2 using the same rotary blade except a 2-unit circumference is declared in Proof 1 Fig 1 and a Pi diameter and, therefore, Pi ^ 2 circumference is declared in Proof 2 Fig 2. These are ordinal given values, not cardinal measurements. For Proofs 4 and 6, you do not need to use given blue circles to start your geometric proof for Pi. Just create Kepler’s Golden Ratio Right Triangle and go from there. Note 2: For those of you who are familiar with the proof of Archimedes’ Circles of Arbelos (“Arbelos” is Italian for the “shoemaker’s knife” due to its geometric shape), you can easily see in my Proofs 1 and 2 that the sum of the arcs of the top half of the 4 blue circles tangent across the diameter of the yellow circle equals the arc of the top half of the yellow circle. These arcs are the semi-circles of the blue and yellow circles. In fact, I have drawn multiple levels of Archimedes’ Arbelos Circles in Proof 1. In Proof 1 Fig. 1, the sum of the arcs of the top half of two blue circles equals the top half of Circle C2, and then the sum of the arcs of the top half of two Circle C2’s equal the top half of the yellow circle (which is the yellow circle’s semi-circle). As an aside, the area under each Arbelos blade equals the area above each Arbelos blade. You may want to review the Arblelos Circles on the Internet to understand why the diameters and circumferences of the 4 blue circles respectively equal the diameter and circumference of the yellow circle. With this information, we can easily calculate the true value of Pi as I have shown in each Proof. Geometric Proof 1 for True Value of Pi: Walk-through for Geometric Proof 1 for the True Value of Pi Note: A few readers have asked, “How do you know that the diameters of 4 blue circles fit exactly tangent across the diameter of the Big Yellow Circle in Proof 1 Fig 1?” Blue circle diameter = 2 / Pi, given where Circumf = 2, and diam = C / Pi, Proof 1 Fig 1, Big Yellow circle diameter = 2 sqrt Phi, from Kepler’s Triangle Proof 1 Fig 1. In Proof 1 Fig 1, we do not know yet what Pi is and so we ask, What is the value of Pi when the diameter of the Big Yellow Circle = 4 times the diameter of the Blue circle? The answer is: 2 sqrt Phi = 4 (2 / Pi), Solve for Pi Pi = 4 / sqrt Phi, Pi = 4 / 1.272019650… = 3.144605511… Therefore, the diameter of 4 Blue circles fit exactly tangent across the diameter of the Big Yellow Circle in Proof 1 Fig 1 when Pi = 3.144605511… . And since Pi is a universal constant, not a variable, there is no need to look for another value of Pi. Also, we can square the circumference of the Big Yellow Circle to the perimeter of Square YHWT, equate the two equations for C = P, and then solve for the value of Pi: P, Perimeter of Square YHWT = 8, given in Proof 1 Fig 1, C, Circumference of Big Yellow Circle = 2 Pi (sqrt Phi), from Kepler’s Triangle, Proof 1 Fig 1. When C = P, what is the value of Pi? C = 2 Pi (sqrt Phi) = P = 8, Solve for Pi Pi = 8 / (2 sqrt Phi), Pi = 4 / sqrt Phi = 4 / 1.272019650… = 3.144605511… Therefore, we have squared the circumference, C, to the perimeter, P, and equated C = P to solve for the only variable left, which is Pi = 4 / sqrt Phi. All of the above is clearly shown in Proof 1 Fig 1: (a) the Big Yellow Circle’s diameter is 4 times the diameter of the Blue circle, and (b) we have squared the Big Yellow Circle’s circumference to the perimeter of Square YHWT using the construction and sides of Kepler’s Golden Ratio Right Triangle. Thanks for your questions. Time for you to step through the video of Proof 1 Fig 1: Geometric Proof 2 for True Value of Pi: Walk-through for Geometric Proof 2 for the True Value of Pi Note: A few readers have asked, “How do you know that the diameters of 4 blue circles fit exactly tangent across the diameter of the Big Yellow Circle in Proof 2 Fig 2?” Blue circle diameter = Pi, given where Circumf = C = diam x Pi = Pi^2, Proof 2 Fig 2, Big Yellow Circle diameter = 2 sqrt Phi (Pi^2 / 2), from Kepler’s Triangle Proof 2 Fig 2. In Proof 2 Fig 2, we do not know yet what Pi is – except that we really do know from Proof 1 Fig 1, but let’s continue with Proof 2 Fig 2 as if we didn’t know — and so we ask, What is the value of Pi when the diameter of the Big Yellow Circle = 4 times the diameter of the Blue circle? The answer is: 2 sqrt Phi (Pi^2 / 2) = 4 Pi, Solve for Pi Sqrt Phi (Pi) = 4 , Pi = 4 / 1.272019650… = 3.144605511… Therefore, the diameter of 4 Blue circles fit exactly tangent across the diameter of the Big Yellow Circle in Proof 2 Fig 2 when Pi = 3.144605511… . And since Pi is a universal constant, not a variable, there is no need to look for another value of Pi. Also, we can square the circumference of the Big Yellow Circle to the perimeter of Square YHWT, equate the two equations for C = P, and then solve for the value of Pi: P, Perimeter of Square YHWT = 4 Pi^2, given in Proof 2 Fig 2, C, Circumference of Big Yellow Circle = 2 Pi (sqrt Phi) (Pi^2 / 2), from Kepler’s Triangle, Proof 2 Fig 2. When C = P, what is the value of Pi? C = 2 Pi (sqrt Phi) (Pi^2 / 2) = P = 4 Pi^2, Solve for Pi Pi (sqrt Phi) = 4, Pi = 4 / sqrt Phi = 4 / 1.272019650… = 3.144605511… Therefore, we have squared the circumference, C, to the perimeter, P, and equated C = P to solve for the only variable left, which is Pi = 4 / sqrt Phi. All of the above is clearly shown in Proof 2 Fig 2: (a) the Big Yellow Circle’s diameter is 4 times the diameter of the Blue circle, and (b) we have squared the Big Yellow Circle’s circumference to the perimeter of Square YHWT using the construction and sides of Kepler’s Golden Ratio Right Triangle. Proof 2 Addendum 1 Jan 04, 2018 Notice that one of the key attributes of Proof 2 is that I have defined BOTH the given perimeter of the Square YHWT and the Kepler-derived ordinal value circumference of the Yellow Circle A in terms of Pi: the side of YHWT is a given Pi^2, and the Kepler Triangle side of sqrt (Phi) x Pi^2 / 2 is the given radius of Yellow Circle A. Both equations — the perimeter, P, of Square YHWT and the circumference, C, of Yellow Circle A — are true. We merely want to find the true value of Pi that satisfies the equality of P = C. Perimeter, P, of Square YHWT = 4 x (Pi ^2), Circumference, C, of Yellow Circle A = radius x 2 Pi = sqrt (Phi) x (Pi^2 / 2) x 2 x Pi, or reducing, Circumference, C, = sqrt (Phi) x (Pi^3). Therefore, when “squaring” C to equal P, the value of Pi must be the same for both sides of the equation 4 x (Pi ^2) = sqrt (Phi) x (Pi^3), so that the Perimeter, P, equals (is “squared” to) the circumference, C. Solving for Pi, when P = C, 4 x (Pi^2) = sqrt (Phi) x (Pi^3), Perimeter = Circumference 4 = sqrt (Phi) x Pi, divide both sides by Pi^2 4 / sqrt (Phi) = Pi, divide both sides by sqrt (Phi) 4 / 1.272019650… = Pi, sqrt (Phi) = 1.272019650… 3.144605511… = Pi conclusion Substituting the value of Pi = 3.144605511… in both equations for P = C, we get: 4 (3.144605511)^2 = 39.55417528… for perimeter of Square YHWT, and 1.272019650 (3.144605511)^3 = 39.55417528… for circumference of Yellow Circle A. Everything balances on both sides of the equation P = C when Pi = 3.144605511… . If, however, we substitute Pi = 3.141592654… in both equations for P = C, we get: 4 (3.141592654)^2 = 39.4784176… for perimeter of Square YHWT, and 1.272019650 (3.141592654)^3 = 39.44059321… for circumference of Yellow Circle A. The same equations for P and C do not balance if we use Old Pi = 3.141592654… as Pi. P does not equal C, even though we are using the same true given equations for both P and C. That means Old Pi = 3.141592654… is WRONG! It also means that any other value for Pi other than Pi = 3.144605511… would also be wrong because there are not 2 values for Pi. Pi is a universal constant, not a variable. And Proof 2 shows why true Pi = 4 / sqrt (Phi) = 3.144605511… and no other value. QED. H. Lear Thanks for your questions. Time for you to step through the video of Proof 2 Fig 2: Geometric Proof 4 for True Value of Pi: Walk-through for Geometric Proof 4 for the True Value of Pi Geometric Proof 6 for True Value of Pi: Note: Why Geometric Proof 6 is NOT circular reasoning. The assumption that Proof 6 is a circular argument would be true if the Hypothesis only stated that Pi = 4 / sqrt Phi WITHOUT THE CONDITIONAL that the “Kepler Triangle-created circle and square remains squared.” Without this conditional of Circumference = Perimeter, one would simply be stating that Pi = 4 / sqrt Phi without any conditions. But Proof 6 explicitly states and shows that the “circumference of the circle must equal the perimeter of the square” – Pi x Phi = 4 x sqrt Phi – for the reduction of Pi = 4 / sqrt Phi = 3.144605511… to be true. That is why I show all the significant figures for Pi, Phi, and the sqrt Phi AND the same level (decimal places) of significant figures for circumference and perimeter. Note that I compare significant figures of circumference and perimeter in Proof 6 from 2 to 40 decimal places (30 on the web site) and they match up digit by digit, thus proving that true Pi = 4 / sqrt Phi. Somebody with a super computer should carry out Proof 6 using 1 million or more significant figures (decimal places) for all the parameters and, if they did, I’m sure they would discover that Pi = 4 / sqrt Phi out to a million decimal places as the circumference equaled the perimeter. NASA says they only use 40 significant figures for their Pi calculations (unfortunately, they’re using the wrong value of Pi = 3.141592654…) because this level of significant figures covers the diameter or size of our entire known universe. (NASA’s belief, not necessarily mine, as in who really knows the current size of our universe?) Therefore, Proof 6 is actually testing any given value of Pi with the condition that the “circle is squared.” Any value prescribed for Pi — such as Old Pi = 3.141592654… that fails this “squared” condition of circumference = perimeter is rejected in Proof 6. Both True Pi and Old Pi at only 2 significant figures – 3.14… — passes the conditional “squared” test but Old Pi fails at the 3^rd decimal place and thereafter. Note that this special “squaring” of the sides of Kepler’s Triangle is true for all 3 sides, i.e. Pi times the longest side = 4 times the next longest side, etc. with a special ratio condition for the third side Case. Step through the narrative and you will see. Pi has been hiding in the Kepler Triangle for the last 2,000 years with, I suppose, nobody noticing all 3 Cases of the relationships shown in Proof 6. Walk-through for Geometric Proof 6 for the True Value of Pi Physical and Geometric Proof 7 for True Value of Pi (Zoom in/out to view image) Proof 7 for Pi Walk-through for Proof 7(a) Physical Measurement of True Pi Walk-through for Proof 7(b) Math Proof for True Pi Proof 7 (a) and (b) Step by Step Physical and Math Proof for True Pi Proof 7a – Physical Measurement of Pi Step 1: Drawing table is flat and level. Calibrate beam compass and rotary circle cutter to Starrett Engineering Tape Measure in millimeters. Use 3/8 – 1/2 inch thick x 40 inch x 60 inch poster board to cut Pi Circumference circle with diameter = 1,000.0 mm and perform Proof 7 math proof. Step 2: Draw yellow Circle A with 500.0 mm radius using beam compass. Step 3: Draw Y-axis PN, mark 1,000.0 mm diameter with center at Point A, Circle A. Step 4: Bisect PN, draw X-axis KI, mark 1,000.0 mm diameter of Circle A. Step 5: Circumscribe (cut) groove around Circle A with NT CL-100P Rotary Cutter. Step 6: Insert tape measure into cut groove around Circle A and measure the Pi circumference of the 1,000.0 mm diameter, d, of Circle A. Wrap tape around Circle A with hash marks facing inward against Circle A, not outward. Otherwise you are adding the thickness of the tape measure to the circumference. Circumference, C, = 3,144.6 mm, diameter, d, = 1,000.0 mm. Therefore, Pi = C / d = 3,144.6 mm / 1,000.0 mm = 3.1446… . Proof 7b – Math Proof for Pi = 4 / sqrt Phi = 3.144605511… Step 7: Let circumference of yellow Circle A equal 8 units. Golden Ratio Phi = 1.618033989… , sqrt Golden Ratio = 1.272019650… Step 8: Draw chords PI = IN = NK = KP creating Square PINK inscribed within Circle A. Step 9: Bisect Chords PI, IN, NK, KP of Square PINK, creating inscribed Octagon PTIQNRKS with eight 1 unit Circle A Arcs PT = TI = IQ = QN = NR = RK = KS = SP. Step 10: Place metal flat blade tape measure in cut groove of Circle A circumference to transfer curved length of 1 unit Arc NQ to 1 unit straight line AD on x-axis. Step 11: Draw Squares FHDA and HBJD with sides = 1 unit, creating Rectangle FBJA. Step 12: Draw diagonal AB of Rectangle FBJA creating purple right Triangle ABJ with Sides 1, 2, sqrt 5. Step 13: At Diagonal Midpoint M, draw a 1 unit diameter Circle M to create Golden Ratio Line AC = AM + MC = ((sqrt 5) / 2) + (1 / 2) = 1.618033989… called Phi Step 14: Rotate AC to AE on Line FB to create orange Kepler Golden Ratio Right Triangle FEA with sides FA = 1, FE = sqrt Phi, and AE = Phi. Step 15: Use compass to transfer Kepler Triangle side FE = sqrt Phi to radius AI of Circle A. Diameter KI of Circle A = 2 x sqrt Phi = 2 x 1.272019650… Step 16: Since circumference, C, of Circle A = 8 units and diameter, d, of Circle A = 2 sqrt Phi, then Pi = C / d = 8 / 2 sqrt Phi = 4 / sqrt Phi = 4 / 1.272019650… = 3.144605511… Copyright © Nov 11, 2018 by Harry E. Lear, Jr.
{"url":"http://measuringpisquaringphi.com/geometric-proofs-of-pi/","timestamp":"2024-11-05T12:37:37Z","content_type":"text/html","content_length":"72290","record_id":"<urn:uuid:748e209d-22a3-4691-a8d9-8162f8fb3adb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00046.warc.gz"}
Stupid question of the day I’m just how to get a KJF first strategy to work, especially since my teammate here at college wants to try a successful one. But the two big problems that I see are actually cracking Japan, since it is harder to stop it capital, and how Russia can deal with the massive behemoth Germany baring down on it. Here’s hoping for some critique of ideas and some suggestions (also note, no bid to the axis) (my teammate and I will be both playing Russia, I’ll have Britain, and he’ll have the US) I’m also assuming Germany r1, specifically since that is what the Germany player usually does. Russia building 4 inf/3 art, I like having a little offensive punch thinking just attacking West Russia, using 2 inf, 1ftr Karelia, 3 inf, 1arm Archangel, 3 inf, 1 art, 2 arm moscow, and 1 at least 1 art caucasus. Moving 6 inf into Buryatia, 2 yakut, 2 sinkang, 2 persia. Germany will probably take Anglo, Karelia, and send 3-4 fighters and sub against the UK battleship. The UK will be placing a IC in India, trying to hit japanese sub, and probably hitting the sz 59 transport with the fighter. British bomber will be going to Siankang. The US will be building an IC in Sinkang however, we not sure on how much should the US put in Navy against Japan. I’m hoping to convince my teammate to go after the islands with his fleet when it is built up enough. My questions I have are 1) Do I write off Africa to Germans, and with that much more IPC to Germany, can Russia with less than half of Britain IPCs helping, hold off Germany until Japan is cracked? 2) What should I do with my indian fleet, try to help the americans? 3) Does America send anything against Germany? Thanks for the help. If you want KJF done right, you had better remember the secret herbs and spices. I’m gonna ask you a coupla dumbass questions. Then I’m gonna give a lot of smart-ass answers. Tee heez. No, srsly, these comments should help. Maybe. If you dare to think about what I’m Let us SAY that you are playing rock paper scissors. Now let us say that you have DECIDED for some wacky reason that players will NOT SHOW THEIR HANDS SIMULTANEOUSLY. One player shows, then the other. To make things REALLY wacky, the second player doesn’t even have to decide what he has to choose until AFTER he sees what the first player chose. Wow, this sounds like a hell of an easy game, doesn’t it? Now, HERES THE TRICKY PART. Your opponent wants YOU to decide WHO GOES FIRST. WHAT DO YOU DO? THINK HARD. I have just answered all your questions. Mostly kind of sort of. –1) Do I write off Africa to Germans, and with that much more IPC to Germany, can Russia with less than half of Britain IPCs helping, hold off Germany until Japan is cracked? Answer: If the Germans really want Africa, let 'em have it. If you eat up Europe while the Germans eat up Africa - you win! What should I do with my indian fleet, try to help the americans? Answer: As opposed to, say, helping the mighty Russian fleet? No, just kidding. Seriously, you should base your moves with the Indian fleet on the German move. If it looks like you’ll want to KJF, you might want to unify the UK fleets. If you want to KGF, you might want to suicide the UK fleet. Does America send anything against Germany? Answer: Really depends on what Germany bought. If Germany looks ripe for a KGF, go for it.
{"url":"https://www.axisandallies.org/forums/topic/4758/stupid-question-of-the-day/4","timestamp":"2024-11-09T04:41:55Z","content_type":"text/html","content_length":"163251","record_id":"<urn:uuid:dcdafdea-dd8b-4f70-a39d-99add8701d93>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00796.warc.gz"}
Recursive Sequence Calculator – Easy To Use Calculator (FREE) Are you looking for an easy to use recursive sequence calculator that’s free? If so, then you have come to the right place! Our recursive sequence calculator is a fast and powerful tool that allows you to quickly calculate any recursive function for any given sequence. You’ll be able to understand the relationships between the terms in a recursive sequence in no time! A recursive sequence is a pattern of numbers in which each successive term is calculated based on the preceding term(s). It’s an important concept that is used in many areas of mathematics, including calculus, algebra, and statistics. Our free calculator can be used to easily calculate the values of any recursive sequence. How the Recursive Sequence Calculator Works The recursive sequence calculator works by allowing you to input the initial term of your sequence and the function by which the next terms are calculated. It then calculates all the terms in the sequence so you don’t have to work them out by hand. It is an incredibly useful tool for anyone who needs to find the values of recursive sequences quickly and accurately. The calculator is also incredibly easy to use. You simply type in the initial term and the function and hit the “calculate” button. In seconds, you will have all the terms in your sequence. You can then copy and paste the results into a spreadsheet or other document for further analysis. Try Our Recursive Sequence Calculator for Free! The best part about our recursive sequence calculator is that it’s free to use. There are no hidden costs or subscription fees. All you have to do is visit our website and start calculating! The calculator is also easy to use and can help you quickly calculate the terms in any recursive sequence. So, don’t wait any longer. If you need to calculate recursive sequences quickly and accurately, our free recursive sequence calculator is the perfect solution. Try it out now and see how easy it is to use and how powerful it can be! Leave a Comment
{"url":"https://slickspring.com/computer-software/recursive-sequence-calculator-easy-to-use-calculator-free/","timestamp":"2024-11-04T19:52:40Z","content_type":"text/html","content_length":"142593","record_id":"<urn:uuid:19fffe50-8c02-45ef-904e-ba341f86eb29>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00640.warc.gz"}
Multi-Lagrangians, hereditary operators and Lax pairs for the Korteweg - De Vries positive and negative hierarchies We present an approach to the construction of action principles (the inverse problem of the calculus of variations), for first order (in time derivatives) differential equations, and generalize it to field theory in order to construct systematically, for integrable equations which are based on the existence of a Nijenhuis (or hereditary) operator, a (multi-Lagrangian) ladder of action principles which is complementary to the well-known multi-Hamiltonian formulation. We work out results for the Korteweg - de Vries (KdV) equation, which is a member of the positive hierarchy related to a hereditary operator. Three negative hierarchies of (negative) evolution equations are defined naturally from the hereditary operator as well, in a concise way, suitable for field theory. The Euler - Lagrange equations arising from the action principles are equivalent to deformations of the original evolution equation, and the deformations are obtained explicitly in terms of the positive and negative evolution vectors. We recognize, after appropriate coordinate transformations, the Liouville, Sinh - Gordon, Hunter - Zheng, and Camassa - Holm equations as negative evolution equations. The multi-Lagrangian ladder for KdV is directly mappable to a ladder for any of these negative equations and other positive evolution equations (e.g., the Harry - Dym and a special case of the Krichever - Novikov equations). For example, several nonequivalent, nonlocal time-reparametrization invariant action principles for KdV are constructed, and a new nonlocal action principle for the deformed system Sinh-Gordon+spatial translation vector is presented. Local and nonlocal Hamiltonian operators are obtained in factorized form as the inverses of all the nonequivalent symplectic two-forms in the ladder. Alternative Lax pairs for all negative evolution vectors are constructed, using the negative vectors and the hereditary operator as only input. This result leads us to conclude that, basically, all positive and negative evolution equations in the hierarchies share the same infinite-dimensional sets of local and nonlocal constants of the motion for KdV, which are explicitly obtained using symmetries and the local and nonlocal action principles for KdV. Profundice en los temas de investigación de 'Multi-Lagrangians, hereditary operators and Lax pairs for the Korteweg - De Vries positive and negative hierarchies'. En conjunto forman una huella única.
{"url":"https://pure.uai.cl/es/publications/multi-lagrangians-hereditary-operators-and-lax-pairs-for-the-kort","timestamp":"2024-11-09T13:20:06Z","content_type":"text/html","content_length":"57253","record_id":"<urn:uuid:f21e10d7-6b6e-47ad-bbca-cb501bcdc63a>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00444.warc.gz"}
Common Core Math Standard | Common Core Worksheets Common Core Math Standard Common Core Standards Math ELA Edulastic Common Core Math Standard Common Core Math Standard – Are you looking for Common Core Math Standard? These worksheets are excellent for assisting youngsters understand the Common Core criteria, including the Common Core Reading and also Writing standards. What are Common Core Worksheets? Common Core Worksheets are academic resources for K-8 students. They are created to aid trainees achieve a typical set of objectives. The very first guideline is that the worksheets may not be shared or posted to the net in any type of means. Common Core worksheets cover K-8 pupils and are created with the CCSS in mind. Making use of these sources will aid students find out the skills necessary to be successful in institution. They cover different ELA and mathematics subjects and featured response tricks, making them a fantastic resource for any kind of classroom. What is the Purpose of Common Core? The Common Core is an effort to bring uniformity to the way American youngsters learn. Developed by teachers from across the nation, the criteria concentrate on constructing an usual base of understanding and also skills for trainees to be successful in university and in life. Presently, 43 states have adopted the standards and also have begun to apply them in public schools. The Common Core requirements are not a government required; instead, they are an outcome of years of research and also evaluation by the Council of Chief State School Officers and the National Governors Association. While federal requireds are very important, states still have the last word in what their curriculum appears like. Numerous moms and dads are frustrated with Common Core criteria and are publishing screenshots of incomprehensible materials. The social media website Twitchy is a great place to locate examples of incomprehensible products. One such screenshot reveals Rubinstein attempting to figure out the function of a Common Core page. The page contains squares and also circles and also a selection version. The Common Core has actually labeled this a mathematics job, however Rubinstein couldn’t make sense of it. Common Core Math Standard Understanding Common Core Standards Math Smiling And Shining In Second Grade July 2013 Common Core Math Sheet 10 Free Word Excel PDF Documents Download Common Core Math Standard If you are looking for Common Core Math Standard, you’ve come to the right location! These math worksheets are categorized by quality degree and also are based on the Common Core math requirements. The initial collection of worksheets is focused on single-digit enhancement as well as will certainly check a youngster’s abilities in counting items. This worksheet will certainly need trainees to count items within a min, which is a terrific means to practice checking. The charming items that are included will make the mathematics problems much more understandable for the kid and also supply a graph of the response. Mathematics worksheets based upon the common core mathematics criteria are a terrific method for children to learn basic math skills and principles. These worksheets have different problems that range in difficulty. They will certainly additionally encourage analytical, which helps children use their knowing in real-life circumstances. Fractions are another discipline that is difficult, however not impossible for young learners to find out. Common Core Fractions Teaching Resources consist of sorting, ordering, and also modeling fractions. These complimentary worksheets are created to aid kids understand this topic. Related For Common Core Math Standard
{"url":"https://commoncore-worksheets.com/common-core-math-standard/","timestamp":"2024-11-13T04:30:37Z","content_type":"text/html","content_length":"41040","record_id":"<urn:uuid:5cad405d-b8e9-4ff5-abb8-55196932bb39>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00441.warc.gz"}
How much leverage does BTSE offer? BTSE offers up to 100x Leverage on its futures products. 100x leverage is offered for Bitcoin and Ethereum, while most altcoins can be leveraged up to 20x. What is Initial Margin? Initial Margin is the minimum amount of funds you need in your margin trading wallets (either Cross Wallet or Isolated Wallets) to open a new trading position or to maintain an active order. This amount is specified in USDT or an equivalent value in another currency. How is Initial Margin Calculated? The Initial Margin required depends on the leverage you use for your trade. Higher leverage means you can control a larger position with a smaller amount of capital, which in turn lowers the percentage of Initial Margin required. Here's the formula to calculate it: Initial Margin = Notional Value × (Initial Margin Percentage + (Taker Fee Percentage × 2)) • Notional Value is calculated by multiplying the price at which your trade is filled (Fill Price) by the size of your position in the respective futures contract (Position Size). • Initial Margin Percentage is determined by your leverage, calculated as 1 divided by the leverage amount. • For example, using 100x leverage means the Initial Margin Percentage is 1/100, or 1%. Example Calculation: Let's break down an example where you want to place a limit order in the BTC-PERP (Bitcoin Perpetual Futures) market under the following conditions: • Limit Price: $30,000 (the price you want to buy/sell 1 BTC) • Position Crypto Size: 1 BTC • Leverage: 100x • Taker Fee Percentage: 0.05% Using the formula: • Notional Value = $30,000 (Fill Price) × 1 BTC (Position Crypto Size) = $30,000 • Initial Margin = $30,000 × (1/100 + 0.05% × 2) = $30,000 × (0.01 + 0.001) = $30,000 × 0.011 = $330 Therefore, the Initial Margin required to open this position would be 330 USDT. This calculation shows how leveraging your trade affects the Initial Margin required, allowing you to trade larger positions with a relatively small amount of capital. Remember, while higher leverage can amplify profits, it also increases the risk of losses. What is Maintenance Margin? Maintenance Margin is the minimum balance you need to maintain in your margin trading wallets (either Cross Wallet or Isolated Wallets) in order to keep your trading positions open. This balance is required to be in USDT or its equivalent value in another currency. How is Maintenance Margin Calculated? The amount of Maintenance Margin required is impacted by the risk limit level you select. Opting for a higher risk limit increases the Maintenance Margin percentage. If you decide to adjust your risk limit level at any point, the new Maintenance Margin percentage will apply to the entire size of your position. The formula for calculating Maintenance Margin is as follows: Maintenance Margin = Notional Value × (Maintenance Margin Percentage + Taker Fee Percentage + Funding Rate Percentage) • Notional Value is calculated by multiplying the current market price (Mark Price) by the size of your position in the futures contract (Position Size). • Funding Rate Percentage is adjusted based on the position's direction: assign 0% for negative percentages in long positions and for positive percentages in short positions to simplify When Do You Risk Liquidation? Your positions may be at risk of liquidation—either full or partial liquidation, or triggering a forced market buy/sell—when the market price (Mark Price) hits the Liquidation Price. This happens if the value of your margin wallet drops to or below the Maintenance Margin requirement. Example Calculation: Let's illustrate with an example in the BTC-PERP (Bitcoin Perpetual Futures) market, assuming you hold a long position under these conditions: • Mark Price: $30,000 • Position Crypto Size: 1 BTC • Maintenance Margin Percentage: 0.5% • Taker Fee Percentage: 0.05% • Funding Rate Percentage: 0.001% Using the formula: • Notional Value = $30,000 (Mark Price) × 1 BTC (Position Crypto Size) = $30,000 Maintenance Margin = $30,000 × (0.5% + 0.05% + 0.001%) = $30,000 × 0.551% = $165.3 Therefore, to keep this position open, you would need a Maintenance Margin of 165.3 USDT. This example demonstrates the importance of understanding and managing your Maintenance Margin to avoid the risk of liquidation, especially in volatile market conditions. What is Maintenance Margin? • Maintenance Margin is the minimum amount of USDT (or USDT Value) you must have in your margin wallets (Cross Wallet or Isolated Wallets) to keep a position open. • The Maintenance Margin Percentage is determined by your chosen risk limit level. Higher risk limit level, higher maintenance margin percentage. If you change the risk limit level, the entire position size will be applied to the new maintenance margin percentage. • When the Mark Price reaches the Liquidation Price, it means that your margin wallet value equals or falls below the maintenance margin requirement. Your position(s) will be liquidated, partially liquidated, or trigger a forced market buy/sell. Maintenance Margin = Notional Value * (Maintenance Margin% + Taker Fee% + Funding Rate%) *Assign 0 Funding Rate % for negative percentage in long position and positive percentage in short position. * Notional Value = Mark Price * Position Size If you have a long position with the following conditions in the BTC-PERP market: - Mark Price: 30,000 - Position Crypto Size: 1 BTC - Maintenance Margin%: 0.5% - Taker Fee%: 0.05% - Funding Rate%: 0.001% The maintenance margin would be: 30,000 * 1 * ( 0.5% + 0.05% + 0.001% ) = 165.3 USDT
{"url":"https://support.btse.com/en/support/solutions/articles/43000460018-leverage","timestamp":"2024-11-05T00:18:46Z","content_type":"text/html","content_length":"77838","record_id":"<urn:uuid:0ca3b956-c75c-4ff9-89ac-d7f1c0a3da73>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00076.warc.gz"}
1. (30 pts) For each of the following systems, determine whether it i 1. (30 pts) For each of the following systems, determine whether it is linear and whether it is time-invariant. Justify your answers. If it is LTI, find the impulse response function h(t). Each system is specified by the output y that is produced from an input r. \text { (a) } y(t)=x(t+7) \text { (b) } y(t)=x(3 t) \text { (c) } y(t)=|x(10)| y(t)=\int_{-\infty}^{\infty} I_{[0,+\infty)}(t-\tau) \exp (\tau-t) x(\tau) d \tau y(t)=\int_{-\infty}^{\infty} \frac{1}{1+\tau^{2}} x(\tau-t) d \tau y(t)=\int_{-1}^{0}(\tau-1) x(t+\tau) d \tau y(t)=\min (1, \max (-1, x(t-4))) n) Let (a1,. , ak) be a vector of k nonnegative reals and let (T1,.., Tk) E R*. y(t)=\underset{x \in \mathbb{R}}{\operatorname{argmin}} \sum_{i=1}^{k} a_{j}\left(z-x\left(t-\tau_{i}\right)\right)^{2} The argmin, is the value of z (the argument) that minimizes the expression. Fig: 1 Fig: 2 Fig: 3 Fig: 4 Fig: 5 Fig: 6 Fig: 7 Fig: 8 Fig: 9 Fig: 10 Fig: 11
{"url":"https://tutorbin.com/questions-and-answers/1-30-pts-for-each-of-the-following-systems-determine-whether-it-is-lin","timestamp":"2024-11-07T00:59:38Z","content_type":"text/html","content_length":"82605","record_id":"<urn:uuid:0b108994-bfc1-497a-9c37-a0b93ccc1d81>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00436.warc.gz"}
Maximum Fisherhood Ronald Fisher's shapeshifting conceptions of probability. Everyone was confused by randomness in the 1920s, and no one was more confused than Ronald Fisher. Fisher wrote a series of papers establishing much of modern statistics. But his internal philosophy about what probability means shifts with every paper, revealing his deep confusion about epistemology and inference. For someone who is infamous for his staunch dogmatism, Fisher was philosophically all over the map in the 1920s. He contradicts himself in each subsequent paper (though, of course, he never admits it). My most and least favorite Fisher paper is his 1922 magnum opus, “On the Mathematical Foundations of Theoretical Statistics.” This is heralded by statisticians as one of the most important papers in Statistics. It’s my least favorite because it defines the maximum likelihood method, of which I’ve never been a fan and which has been a mathematical mess for a century. For statistics, this paper has done more harm than good. It’s my favorite because I love the free-wheeling way Fisher writes. It’s clear he’s making things up as he goes to justify rigor in a field that cannot be rigorous. Fisher argues the role of statistics is data summarization. This had been its primary use: a way of tabulating bulk facts about the properties of the state so that those who ruled could make informed decisions. Fisher sought to make this tabulation of counts into rigorous mathematics. Let’s find out what Fisher thought, closely reading the first three paragraphs of Section 2. “...the object of statistical methods is the reduction of data. A quantity of data, which usually by its mere bulk is incapable of entering the mind, is to be replaced by relatively few quantities which shall adequately represent the whole, or which, in other words, shall contain as much as possible, ideally the whole, of the relevant information contained in the original data.” So far, so good. Now, how should you summarize data? Here’s where things get wild: “This object is accomplished by constructing a hypothetical infinite population, of which the actual data are regarded as constituting a random sample. The law of distribution of this hypothetical population is specified by relatively few parameters, which are sufficient to describe it exhaustively in respect of all qualities under discussion. Any information given by the sample, which is of use in estimating the values of these parameters, is relevant information.” All data must be assumed to be random. Not only are they random, but they are randomly sampled (whatever that may mean) from a “population.” This population is hypothetical (i.e., it does not exist) and is a relatively simple mathematical object. Sampling from the population is the same as sampling from a certain simple probability distribution with only a few parameters. The important differences between populations can be summarized by a few numbers. This set of assumptions about data is patently absurd and never true. However, for Fisher, it doesn’t need to be true. The purpose of this hypothetical population is data summarization. It need only encapsulate the important features of the data before the analyst. Fisher, with his frustrating overloquation, is just saying “All models are wrong, but some are useful.” “Since the number of independent facts supplied in the data is usually far greater than the number of facts sought, much of the information supplied by any actual sample is irrelevant.” Indeed, most of the information, whatever that is, is irrelevant to the facts we seek. Of course, what’s relevant and irrelevant is in the eye of the beholder. Does that make Fisher a Bayesian subjective probabilist? “It is the object of the statistical processes employed in the reduction of data to exclude this irrelevant information, and to isolate the whole of the relevant information contained in the The goal of a statistical algorithm is to remove all irrelevant information and find only the relevant information. What is relevant is clarified by creating a hypothetical, simple random model of the world and assuming all data is generated by it. The randomness flattens all uncertainty into stochastic variation around a small number of statistics. The statistician must model the world as a few simple facts corrupted by aberrations due solely to chance. Now I offhandedly quipped that Fisher might be considered a Bayesian for his subjectivity in this section. But he’s clearly not being a frequentist in this paper. How would the modern statistician characterize his proposed procedure? 1. I have a bunch of observations in front of me. 2. I hypothesize a model for this data. 3. I use some math to estimate the parameters of this model. 4. These parameters serve as my summary of the data. This sounds like exploratory data analysis to me! We make some untestable assumptions about the world in order to tell a story about data. Fisher of 1922 is much closer to John Tukey than the Fisher of 1925 who wrote The Design of Experiments. Fisher further expands upon his probabilitist beliefs in the next paragraph. “It should be noted that there is no falsehood in interpreting any set of independent measurements as a random sample from an infinite population; for any such set of numbers are a random sample from the totality of numbers produced by the same matrix of causal conditions: the hypothetical population which we are studying is an aspect of the totality of the effects of these conditions, of whatever nature they may be. The postulate of randomness thus resolves itself into the question, ‘Of what population is this a random sample?’ which must frequently be asked by every practical That first sentence is a doozy. So many clauses! What does he mean by independent here? Regardless, he’s laying his cards on the table and telling us that all data are a random sampling of something. This means that all of our experience is nothing more than the manifestation of random fluctuations of the universe. You might defend this position, but realize that you are making some very strong philosophical assertions. Natural randomness is a postulate for Fisher. All observations are random. Some, I suppose, are useful. In the remaining fifty odd pages, Fisher proceeds to write a bunch of formulae to derive the method of maximum likelihood. Let me include one more paragraph that still haunts statistics. “Readers of the ensuing pages are invited to form their own opinion as to the possibility of the method of the maximum likelihood leading in any case to an insufficient statistic. For my own part I should gladly have withheld publication until a rigorously complete proof could have been formulated; but the number and variety of the new results which the method discloses press for publication, and at the same time I am not insensible of the advantage which accrues to Applied Mathematics from the co-operation of the Pure Mathematician, and this co-operationis not infrequently called forth by the very imperfections of writers on Applied Mathematics.” Hilarious. Is maximum likelihood rigorous today? No! 100 years later, we still use the technique with little justification. It’s mostly harmless as it’s often just computing means or solving least-squares problems. And it’s often as good as anything else because data summarization is exploratory. We’d certainly add some forms of mathematical rigor. For example, Doob would show the method could be considered empirical risk minimization. While this gives a rigorous justification for the method in special contexts, it does not rigorously justify the assumptions. Doob’s theory is true only if the data are actually generated from one of Fisher’s hypothetical probability distributions. But this is almost never true. The assumptions of statistics are metaphysical and can never be made rigorous. You can never prove that all observations are generated by having god randomly generate an iid sample from a probability distribution governed by a few parameters. The mathematical foundations of statistics have their issues. The philosophical foundations are untenable. Isn't basically all of ML based on the assumption that there exists some unknown distribution over basically everything? Expand full comment "all observations are generated by having god randomly generate an iid sample from a probability distribution governed by a few parameters. " This world view is so confusing. "Random variables" in statistical world view seem to be super zombie which make everything rv. Any constant/object + random variable is a random variable. Random variable infects everything ! This worldview is good for mathematical analysis or exploration in some context. Generalizing this idea is so weird. Expand full comment 16 more comments...
{"url":"https://www.argmin.net/p/maximum-fisherhood","timestamp":"2024-11-03T17:07:20Z","content_type":"text/html","content_length":"170934","record_id":"<urn:uuid:2686eb1d-61e7-4007-94b5-bd7d48b21ee9>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00554.warc.gz"}
The top surface area of a square tabletop was changed so that one of Question Stats: 60% 40% (02:02) based on 2812 sessions Forget conventional ways of solving math questions. In DS, Variable approach is the easiest and quickest way to find the answer without actually solving the problem. Remember equal number of variables and independent equations ensures a solution. The surface area of a square tabletop was changed so that one of the dimensions was reduced by 1 inch and the other dimension was increased by 2 inches. What was the surface area before these changes were made? (1) After the changes were made, the surface area was 70 square inches. (2) There was a 25 percent increase in one of the dimensions. In the original condition, suppose one side is x. There is 1 variable(x), which should match with the number of equations. So you need 1 more equation. For 1) 1 equation, for 2) 1 equation, which is likely to make D the answer. In 1), from (x-1)(x+2)=70, x^2+x-72=0, (x-8)(x+9)=0, x is 8, which is unique and sufficient. In 2), from 2=0.25x, x is 8, which is unique and sufficient. Therefore, the answer is D. -> For cases where we need 1 more equation, such as original conditions with “1 variable”, or “2 variables and 1 equation”, or “3 variables and 2 equations”, we have 1 equation each in both 1) and 2). Therefore, there is 59 % chance that D is the answer, while A or B has 38% chance and C or E has 3% chance. Since D is most likely to be the answer using 1) and 2) separately according to DS definition. Obviously there may be cases where the answer is A, B, C or E.
{"url":"https://gmatclub.com/forum/the-top-surface-area-of-a-square-tabletop-was-changed-so-that-one-of-83624.html?kudos=1","timestamp":"2024-11-12T23:45:43Z","content_type":"application/xhtml+xml","content_length":"1028178","record_id":"<urn:uuid:802e8a69-97e8-4aba-aa33-a58c7cb47953>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00313.warc.gz"}
Surface Area of Cones Worksheets - 15 Worksheets.com Surface Area of Cones Worksheets About These 15 Worksheets These worksheets will help students understand and calculate the surface area of conical shapes. These worksheets guide students through the steps required to determine the total area covered by the surface of a cone, using specific geometric formulas. By working through these worksheets, students develop a deeper understanding of geometric principles and improve their mathematical problem-solving skills. Mathematical Skills Explored Worksheets focused on finding the surface area of cones help students develop and reinforce several key math skills essential for understanding and solving these problems accurately. One of the primary skills is the application of geometric formulas. To find the surface area of a cone, students must use the formula πr(r + l), where r is the radius of the base and l is the slant height. This exercise reinforces their ability to apply specific geometric formulas and enhances their understanding of the relationship between the different dimensions of a cone. Another critical skill developed through these worksheets is spatial reasoning. Students need to visualize the three-dimensional nature of a cone and understand how its two-dimensional base and slant height combine to form its surface area. This visualization helps students grasp the concept of surface area as the sum of the areas of the base and the curved surface, improving their ability to think about and manipulate three-dimensional objects. Attention to detail and precision in measurement and calculation are also emphasized in these exercises. Calculating the surface area of a cone requires accurate measurement of the radius and slant height, as well as precise computation using the formula. This practice helps students develop meticulousness and accuracy, which are essential skills in all areas of mathematics and many real-world Logical reasoning and sequential thinking are further strengthened as students follow a clear sequence of steps to solve these problems. They must first measure the relevant dimensions, then apply the formula to calculate the surface area. This structured approach to problem-solving helps students build strong logical reasoning skills, which are valuable not only in mathematics but also in fields such as science, engineering, and technology. Exercises on These Worksheets Surface area of cones worksheets typically contain a variety of problems that help students understand and apply the formula for calculating the surface area of cones. The types of problems and exercises on these worksheets are designed to reinforce the concepts of geometry, specifically focusing on the properties of cones and their surface areas. Basic Calculations – One of the most common types of problems involves straightforward calculations where students are given the radius and the slant height of a cone and are asked to find its surface area. These problems help students practice the formula for the surface area of a cone, which is the sum of the base area and the lateral surface area, represented by A = πr(r + l), where r is the radius and l is the slant height. Finding Missing Dimensions – Another typical exercise involves problems where students need to find a missing dimension given the surface area and one other dimension (either the radius or the slant height). For example, students might be given the surface area and the radius and asked to solve for the slant height. These types of problems require students to manipulate the surface area formula algebraically, enhancing their algebra skills alongside their understanding of geometry. Word Problems – Worksheets often include word problems that require students to apply their knowledge in real-world contexts. These problems might describe a scenario, such as designing a conical tent or a lampshade, and ask students to calculate the surface area needed for materials. These exercises develop students’ ability to translate textual information into mathematical problems and apply geometric concepts to practical situations. Composite Figures – Some advanced worksheets may include problems involving composite figures where the cone is combined with other shapes, such as cylinders or spheres. Students might be asked to calculate the total surface area of a structure that includes a conical section. These problems require a deeper understanding of how to break down complex figures into simpler parts and apply the surface area formulas accordingly. Problem Solving with Nets – Another type of exercise involves working with nets of cones. Students may be asked to draw the net of a cone, which includes a circular base and a sector of a circle for the lateral surface. They then use these nets to visualize and calculate the surface area. This type of exercise helps students develop spatial reasoning skills and understand the geometric properties of cones more concretely. Benefits of These Worksheets Learning how to calculate the surface area of cones offers numerous benefits for students, both in terms of academic development and practical application. Firstly, mastering this skill strengthens students’ understanding of geometry, particularly in the area of three-dimensional shapes. Calculating the surface area of cones requires familiarity with geometric formulas, the ability to manipulate algebraic expressions, and spatial reasoning. This foundational knowledge is critical for advancing in mathematics, as it underpins more complex concepts and problem-solving techniques. The process of learning to calculate surface areas enhances critical thinking and analytical skills. Students must interpret the given information, apply the correct formulas, and accurately perform calculations. This step-by-step problem-solving approach fosters attention to detail and logical reasoning, skills that are valuable not only in mathematics but also in various academic and professional fields. Additionally, students develop perseverance and the ability to tackle challenging problems, which are essential traits for success in any discipline. Real World Applications The ability to calculate the surface area of cones also has significant real-world applications. For instance, in engineering and architecture, understanding the surface area is crucial when designing conical structures such as roofs, towers, and chimneys. Accurate calculations ensure that the correct amount of materials is used, preventing wastage and reducing costs. This knowledge is particularly important in the construction industry, where precise measurements directly impact the quality and efficiency of a project. In manufacturing, especially in industries that produce conical objects like funnels, pipes, and storage tanks, calculating the surface area is vital for determining material requirements and production costs. For example, manufacturers need to know the surface area to apply coatings, paints, or other finishes to their products. This ensures uniform application and quality control, leading to better product durability and aesthetics. The skill is beneficial in everyday life and various other professions. In culinary arts, chefs and bakers often need to calculate the surface area of conical molds to determine the amount of ingredients required for recipes. Similarly, in environmental science, understanding the surface area of conical landforms can be important for studying erosion patterns and designing conservation
{"url":"https://15worksheets.com/worksheet-category/surface-area-of-cones/","timestamp":"2024-11-14T05:43:37Z","content_type":"text/html","content_length":"131902","record_id":"<urn:uuid:9165dbc5-5fb7-414b-b880-89dc9e0af0f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00368.warc.gz"}
Re: [seul-edu] [OT] summation of 1/2x [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: [seul-edu] [OT] summation of 1/2x On Wed, Apr 05, 2000 at 04:25:00PM +0100, jm wrote: > > > I got to thinking of this last week when a friend and I were trying to > > > remember what the summation of 1/x evaluated from x=0 to x=infinity > > > is. > > > >I would like to see what you found! ;-) > If I remember well, there is a trick to calculate the sum of 1/2x from 1 to > infinity > you just need a sheet of paper: > + first you cut in half the sheet of paper: you get 1/2 sheet and another > 1/2 sheet > + then you cut in half again one of the 1/2 sheet: you get 1/4 and 1/4 > + then you cut in half again... > so you can prove that 1/2+1/4+1/8+..+1/2n+.. = 1 Actually, that is 1/2^n (one over 2 to the n), not 1/2n. As a matter of fact, the series 1/n does not have a finite sum. But after adding some terms at the beginning, the sum increases so slowly that from a computer calculation, it may indeed seem like you have a convergence. After adding first 10000 terms, it all looks like 9.787..., and it seems that these digits are not changing any more. But after adding first 100000 terms you get 12.0901..., and so on. It keeps increasing, and it is not bounded. Of course, when you get so far that your computer thinks that 1/n = 0, you have the "limit" :-). Then you increase the precission, and your "limit" suddenly changes. This is a classical example that demonstrates shortcommings of numerical computations in math. Even better example is 1/(n*ln(n)) (ln stands for natural logarithm). This series is also divergent, but the sum increases even slower than the sum of 1/n. Jan Hlavacek (219) 434-7566 Department of Chemistry Jhlavacek@sf.edu University of Saint Francis http://199.8.81.3/Jhlavacek/
{"url":"https://archives.seul.org/seul/edu/Apr-2000/msg00060.html","timestamp":"2024-11-12T11:45:09Z","content_type":"text/html","content_length":"6045","record_id":"<urn:uuid:1976de41-6164-459f-a451-f3a1f582d0f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00597.warc.gz"}
Size of Infinity | Joe McCann What does it mean to count? When I ask you “How many apples are there” depending on however many apples you are counting, you’ll mentally provide some number. $1,2,3,4,\ldots$ etc. Counting is counting, you’ve done it since you were a child. Say I ask you “How many items are in the set $\{a,b,c\}$?, you might think “Uh, obviously $3$ dumbass”. You would be right, but I have a much more general question, what does it really, really, mean for there to be $3$ items in the set? What does it mean to count? Well for purpose of simplifying, lets say the empty set $\phi$ has $0$ items in it. Then the set that only has $\{1\}$ would have one item in it. By extension $\{1,2\}$ has two items, $\{1,2,3\}$ has three items, and we can continue so on and so forth. How could we extend this idea to sets that are not as straightforward though, like $\{a,b,c\}$? Those items aren’t numbers so can we even count Well we know that $\{1,2,3\}$ has $3$ items, and from before we know that if two sets have a bijection between them, then the two sets have the same cardinality which was our way of defining a “number of items”. So if we can find a bijection between our sets, then the two sets must have the same number of items! For example $$ f(a)&=1 \\ f(b)&=2 \\ f(c)&=3 $$ In this case $f:\{a,b,c\}\ rightarrow\{1,2,3\}$ is a bijection so we know they both must be the same size of $3$. Putting this into a nice theorem that we can use, we can say the following Theorem: A set $A$ has cardinality $|A|=n$ iff there exists an $$f:A\rightarrow \{1,2,\ldots, n\}$$ where $f$ is bijective. This is well and good, but it gets very interesting once we move into the land of the infinite. Infinite Cardinalities Now that we can compare the sizes of sets and know what it means for a set to be a certain size (defining “size” as cardinality), we can reasonably ask the question “How big are infinity sets?” Well pretty clearly they are bigger than any finite set, because an infinite set will just keep going forever and a finite set will not, but can we compare them to each other? Evens and Odds Let the even numbers be $E=\{2,4,6,8,\ldots\}$ and the odd numbers be $O=\{1,3,5,7,\ldots\}$. Are there more even numbers, odd numbers, or are they the same? Putting this in math notation, which of the following is true? $$ |E|&>|O| \\ |E|&<|O| \\ |E|&=|O|. $$ Intuition might tell you that they should be the same, as after every odd number there is an even number and vice versa, but lets see if we can construct a bijection to prove this. Theorem: $|E|=|O|$ Proof Consider the function $f:E\rightarrow O$ where $$ f(x)=x-1. $$ This function is injective as every even number subtracted by $1$ will be unique, and it is surjective as every odd number has an even above it. You could have also just showed $x+1$ was an inverse. Since $f$ is bijective, then $|E|=|O|$.QED The intuition was right^1! You might have noticed though that we started our even numbers with $2$, and well $0$ is an even number, so does that impact anything? Let’s say that $E_2=\{0,2,4,6\ldots\}$. Now we have that $E\subset E_2$, so will $|E_2|>|E|=|O|$? Let’s consider a function $f:O\rightarrow E_2$ where $f(x)=x-1$. You might notice that $f$ is exactly the same as our previous proof, and by the same logic it actually turns out that $|E_2|=|O|=|E|$! This is crazy as we have a proper subset being the same cardinality as its superset! In essense here when we add $0$ into $E_2$, we are effectively “pushing” all the items down one, but that is ok because there is an infinite amount of them! If you think that’s crazy, just watch this though! The Naturals and Countability Let’s consider how the natural numbers $\mathbb{N}$^[2] interact in terms of size now with the even numbers that we just showed. For the sake of brevity, for this problem we will say that $E$ includes $0$. Clearly since $$ \mathbb{N}=\{0,1,2,3,4,5,6,7,\ldots\}=E\cup O $$ You would be tempted to say that $|\mathbb{N}|>|E|$, however what happens when we introduce the function $$ f&:\mathbb {N}\rightarrow E \\ f(n)&= 2n $$ This function is injective, as $$ f(n_1)=f(n_2)\implies 2n_1=2n_2\implies n_1=n_2. $$ It is also surjective as given any even number, we can just divide by $2$ to find a value of $n$ such that $f(n)$ equals the given even. Since $f$ is surjective and injective, it must be bijective too! But hold on a minute buckaroo, that means then that $|\mathbb{N}|=|E|$??? Yes it does! In fact it also implies that $|\mathbb{N}|=|E|=|O|$! If we were to write out the function $f$, we would see this sort of pattern $$ 0&\rightarrow 0 \\ 1&\rightarrow 2 \\ 2&\rightarrow 4 \\ 3&\rightarrow 6 \\ 4&\rightarrow 8 \\ &\vdots $$ It almost looks like we are counting the even numbers, such that if I asked you for the the fourth even number was, you could tell me that it was $8$ (starting with $0^{\text{th}}$). We will soon see this is a fundemental quantity and give a name for it. Definition: Let $A$ be a set where $|A|=|\mathbb{N}|$. We say that $A$ is countably infinite, or countable for short, and is notated as $|A|=\aleph_0$ Woah what is that symbol $\aleph_0$? So I’ve heard, the story goes that Georg Cantor after working on infinite cardinalites wanted to come up with a symbol for $|\mathbb{N}|$ but believed greek letters were too overused. As such he decided to use the hebrew letter Aleph to represent the cardinality of the naturals, sometimes called aleph null. Spoiler alert, using Hebrew letters did not catch on and mathematicians continued to oversaturate Greek letters, oops. There is a very interesting result involving $\aleph_0$ and how fundemental it is, but I will get to that later. For now lets prove a series of small results that lead up to something cool. Lemma: Let $$ \mathbb{Z}+=\left\{1,2,3,4,5,\ldots\right\} $$ then $|\mathbb{N}|=|\mathbb{Z}+|$. Proof Consider the function $f:\mathbb{N}\rightarrow \mathbb{Z}_+$ where $$ f(x)=x+1. $$ This function is bijective in the same way as the previous examples.QED Lemma: Let $\mathbb{Z}-=\{-1,-2,-3,-4,\ldots\}$. $|\mathbb{Z}-|=|\mathbb{Z}_+|$. Proof Consider the function $f:\mathbb{Z}_+\rightarrow \mathbb{Z}_-$ where $$ f(x)=-x. $$ This function is pretty clearly bijective as we are just taking the items and adding a negative sign lmao.QED Notice how now we have a lot of sets that all have the same cardinality, $|\mathbb{N}|=|E|=|O|$ and these are also equal to the positive and negative integers. This actually translates, so we have that the set of odd numbers and negative integers are the same size for example. Lets continue with this example for a second and write out some pairings $$ -1&\rightarrow 1 \\ -2&\rightarrow 3 \\ -3 &\rightarrow 5 \\ -4&\rightarrow 7 \\ &\vdots $$ but wait a second, notice that we can match up all the negative numbers with all the odd numbers. Earlier on we showed that $$ 0&\rightarrow 0 \\ 1&\ rightarrow 2 \\ 2&\rightarrow 4 \\ 3&\rightarrow 6 \\ 4&\rightarrow 8 \\ &\vdots $$ which means that we can match up all the positive numbers with all the even numbers. What if we combined these two lists together though $$ 0&\rightarrow 0 \\ -1&\rightarrow 1 \\ 1&\rightarrow 2 \\ -2&\rightarrow 3 \\ 2&\rightarrow 4 \\ -3&\rightarrow 5 \\ 3&\rightarrow 6 \\ -4&\rightarrow 7 \\ 4&\rightarrow 8 \\ &\vdots $$ On the left hand side we are getting all the negative and positive integers, which is just $\mathbb{Z}$, and on the right we find that we have all the natural numbers $|\mathbb{N}|$. By combining these two functions, we have actually created a bijection between the integers and naturals! This means that the following theorem is true. Theorem: $|\mathbb{N}|=|\mathbb{Z}|$ This is kind of crazy, because even though the integers are infinite in two directions, we can still find a way to match everything up. For a set to be countable, what we effectively need is to find a way to list out every item of the set in a specified order, as then you can match up the naturals. The Rationals and Countability Remember that the set of rational numbers $\mathbb{Q}$ is the set of all fractions $\frac{a}{b}$ where $a,b\in\mathbb{Z}$. Now since we have infinite choices for the top and the bottom, and there are even infinite fractions between any two integers^2, you might think that $|\mathbb{Q}|>|\mathbb{N}|$. Since $\mathbb{N}\subset\mathbb{Q}$, we know that $|\mathbb{N}|\leq|\mathbb{Q}|$, but spoiling the results for you, we can show that Theorem: $|\mathbb{N}| = |\mathbb{Q}|$ The proof for this is fairly involved, but I will show you one way of listing out all the rational numbers that will in fact give you every single one. For the sake of simplicity we will only consider positive fractions, as negative fractions could just be inserted in between in the same way we did above for the integers. The idea is that we will list off all fractions sorted by the sum of their numerator and denominator, so start with all that sum to $1$, then $2$, then $3$, etc. etc. For each of these options, theres actually only a finite number of ways to represent this, so at some point we will reach every fraction (which would make the function that follows this order surjective). If we ever come across a fraction who is already in this list (for example $\frac{2}{4}$ when $\frac{1}{2}$ is there) we skip it, which will make our function also injective. The list will look like the following $$ \frac{0}{1},\frac{1}{1},\frac{1}{2},\frac{2}{1},\frac{1}{3},\frac{3}{1},\ldots $$ Notice we skip fractions such as $\frac{0}{2}$ as that appears in the start and we proceed to skip it. This means that $\mathbb{Q}$ is also a countable even though it seems to be incredibly large! At this point you might be thinking “Yeah but infinity is infinity, it’s obvious they are all the same size”, to which I say, just you wait for the next page Practice Problems Theorem: There are an infinite number of rational numbers $q$ such that $0<q<1$ Theorem: The set of square numbers $\{1,4,9,16,\ldots\}$ is countable 1. Unless you said otherwise but I won’t blame you infinity is tough 😝 ↩︎ 2. Try proving this! ↩︎
{"url":"https://wjmccann.com/course/settheory/sections/infinitesets/","timestamp":"2024-11-10T03:10:09Z","content_type":"text/html","content_length":"25414","record_id":"<urn:uuid:0fbc2866-3b2b-47b1-b848-5e914c048c05>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00452.warc.gz"}
Computation of unit sample unit step and sinusoidal response of the given LTI system and verifying its physical reliability and stability properties - Basic Simulation Lab Computation of unit sample, unit step and sinusoidal response of the given LTI system and verifying its physical reliability and stability properties Aim: To Unit Step And Sinusoidal Response Of The Given LTI System And Verifying Its Physical Realizability And Stability Properties. PC with windows (95/98/XP/NT/2000). MATLAB Software A discrete time system performs an operation on an input signal based on predefined criteria to produce a modified output signal. The input signal x(n) is the system excitation, and y(n) is the system response. The transform operation is shown as, LTI system If the input to the system is unit impulse i.e. x(n) = d(n) then the output of the system is known as impulse response denoted by h(n) where, h(n) = T[d(n)] we know that any arbitrary sequence x(n) can be represented as a weighted sum of discrete impulses. Now the system response is given by, For linear system (1) reduces to %given difference equation y(n)-y(n-1)+.9y(n-2)=x(n); Program - 1: Calculate and plot the impulse response and step response n = [-20:120]; title('impulse response'); title('step response'); n = [-20:120]; title('sin response'); Z plane Program - 2: Computation of Unit Sample response : %This program finds the unit sample response of the given discrete system a=input('enter the coefficient vector of input starting from the coefficient of x(n) term') b=input('enter the coefficient vector of output starting from the coefficient of y(n) term') n1=input('enter the lower limit of the range of impulse response') n2=input('enter the upper limit of the range of impulse response') title('Unit Sample response of the given discrete system y(n)-y(n-1)+0.9y(n-2)=x(n)') enter the coefficient vector of input starting from the coefficient of x(n) term 1 a = 1 enter the coefficient vector of output starting from the coefficient of y(n) term [1 -1 0.9] b = 1.0000 -1.0000 0.9000 enter the lower limit of the range of impulse response-50 n1 = -50 enter the upper limit of the range of impulse response50 n2 = 50 Program - 3: Computation of Unit Step response %This program finds the unit step response of the given discrete system a=input('enter the coefficient vector of input starting from the coefficient of x(n) term') b=input('enter the coefficient vector of output starting from the coefficient of y(n) term') n1=input('enter the lower limit of the range of impulse response') n2=input('enter the upper limit of the range of impulse response') title('Unit Step response of the given discrete system y(n)-y(n-1)+0.9y(n-2)=x(n)') enter the coefficient vector of input starting from the coefficient of x(n) term 1 a = 1 enter the coefficient vector of output starting from the coefficient of y(n) term [1 -1 0.9] b = 1.0000 -1.0000 0.9000 enter the lower limit of the range of impulse response-50 n1 = -50 enter the upper limit of the range of impulse response50 n2 = 50 Program - 4: Computation of Sinusoidal response %This program finds the Sinusoidal response of the given discrete system a=input('enter the coefficient vector of input starting from the coefficient of x(n) term') b=input('enter the coefficient vector of output starting from the coefficient of y(n) term') f=input('enter the sampling frequency') title('Sinusoidal input(f=50) for the discrete system y(n)-y(n-1)+0.9y(n-2)=x(n)') title('Sinusoidal(f=50) response of the discrete system y(n)-y(n-1)+0.9y(n-2)=x(n)') enter the coefficient vector of input starting from the coefficient of x(n) term 1 a = 1 enter the coefficient vector of output starting from the coefficient of y(n) term [1 -1 0.9] b = 1.0000 -1.0000 0.9000 enter the sampling frequency100 f = 100 a=input('enter the coefficients of numerator in the order of decreasing order of the variable z') b=input('enter the coefficients of denominator in the order of decreasing order of the variable z') R1=input('enter the lower bound of ROC') R2=input('enter the upper bound of ROC') if length(p)<=length(q) &amp; R2==inf disp('The system is causal') disp('The system is not causal') if R1<1&amp;R2>1 | length(i)==length(q) disp('The system is stable') disp('The system is unstable') 1.H(z)=z/(3z2-4z+1) ROC |z|>1 enter the coefficients of numerator in the order of decreasing order of the variable z a =1 0 enter the coefficients of denominator in the order of decreasing order of the variable z b=3 -4 1 enter the lower bound of ROC R1 =1 enter the upper bound of ROC R2 =Inf The system is causal. The system is unstable 2.H(z)=z/(3z2-4z+1) ROC 1/3<|z|<1 enter the coefficients of numerator in the order of decreasing order of the variable z a =1 0 enter the coefficients of denominator in the order of decreasing order of the variable z b=3 -4 1 enter the lower bound of ROC R1 =1/3 enter the upper bound of ROC R2 =1 The system is not causal. The system is unstable 3.H(z)= (z2-1.5z)/[z2-(5/6)z+1/6] ROC |z|>0.5 enter the coefficients of numerator in the order of decreasing order of the variable z a =1.0000 -1.5000 0 enter the coefficients of denominator in the order of decreasing order of the variable z b =1.0000 -0.8333 0.1667 enter the lower bound of ROC R1 =0.5000 enter the upper bound of ROC R2 =inf. The system is causal. The system is stable 4.H(z)= (z2-1.5z)/(z2-(5/6)z+1/6) ROC 1/3< |z|<0.5 enter the coefficients of numerator in the order of decreasing order of the variable z a =1.0000 -1.5000 0 enter the coefficients of denominator in the order of decreasing order of the variable z b =1.0000 -0.8333 0.1667 enter the lower bound of ROC R1 =1/3 enter the upper bound of ROC R2 =0.5. The system is not causal. The system is stable LOCATING THE POLES AND ZEROS IN S-PLANE AND Z-PLANE: The Z-transform converts a discrete time-domain signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation.The Z-transform, like many other integral transforms, can be defined as either a one-sided or two-sided transform. Bilateral Z-transform The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the function X(z) defined as Unilateral Z-transform Alternatively, in cases where x[n] is defined only for n = 0, the single-sided or unilateral Z-transform is defined as In signal processing, this definition is used when the signal is causal. The roots of the equation P(z) = 0 correspond to the 'zeros' of X(z) The roots of the equation Q(z) = 0 correspond to the 'poles' of X(z) The ROC of the Z-transform depends on the convergence PROGRAM:- ZEROS AND POLES IN S- PLANE clear all; close all; num=input('enter the numerator polynomial vector\n'); % [1 -2 1] den=input('enter the denominator polynomial vector\n'); % [1 6 11 6] [p z]=pzmap(H); disp('zeros are at '); disp('poles are at '); if max(real(p))>=0 disp(' All the poles do not lie in the left half of S-plane '); disp(' the given LTI systen is not a stable system '); disp('All the poles lie in the left half of S-plane '); disp(' the given LTI systen is a stable system '); Enter the numerator polynomial vector [1 -2 1] Enter the denominator polynomial vector [1 6 11 6] Transfer function: s^2 - 2 s + 1 s^3 + 6 s^2 + 11 s + 6 Zeros are at Poles are at All the poles lie in the left half of S-plane The given LTI system is a stable system Result: In this experiment computation of unit sample, unit step and sinusoidal response of the given lti system and verifying its physical reliability and stability properties Using MATLAB. Viva Questions: 1. What is Even Signal Ans: If x(t)= x(-t) then x(t) is Even signal. 2. What is Odd Signal Ans: If x(t)= -x(-t) then x(t) is Odd signal. 3. State the difference between a Signal and Sequence? Ans: Signal is a function varies with time, sequence consisting of number samples. 4. What is Static and Dynamic System Ans: Dynamic system output -constantly changing and dynamic system carry past and present inputs to get output, Static system carry only present input to generate output
{"url":"https://vikramlearning.com/jntuh/notes/basic-simulation-lab/computation-of-unit-sample-unit-step-and-sinusoidal-response-of-%C2%A0the-given-lti-system-and-verifying-%C2%A0its-physical-reliability-and-stability-properties/354","timestamp":"2024-11-13T19:22:23Z","content_type":"text/html","content_length":"45535","record_id":"<urn:uuid:57a80fc2-be71-45ec-b416-6ae7332ad5a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00891.warc.gz"}
Automated Building Footprint Extraction (Part 3): Model Architectures | Azavea Automated Building Footprint Extraction (Part 3): Model Architectures Automated Building Footprint Extraction (Part 3): Model Architectures In the first and second parts of this blog series, we discussed open datasets, and evaluation metrics for building footprint extraction. In this third and final part, we’re rounding out the series by reviewing model architectures for building footprint extraction including naive approaches, model improvement strategies, and three recent papers. Naive Approaches A popular, yet naive approach to building footprint extraction consists of three steps. First, a semantic segmentation model such as U-Net or DeepLab outputs a raster wherein each pixel indicates whether or not a building is present. Second, each connected component of building pixels is converted to a vectorized polygon using an off-the-shelf algorithm. Third, and optionally, a heuristic simplification algorithm such as Douglas-Peuker is applied to the polygons. A problem with this approach is that semantic segmentation models are unable to delineate the boundaries between objects of the same class. This means that a single polygon will be drawn around a group of buildings that share walls, such as a block of rowhouses. To handle this case, the semantic segmentation model can be replaced with an instance segmentation model such as Mask R-CNN. This model generates a separate raster mask for each instance of a class that is detected. Instance segmentation can delineate the boundaries of adjacent objects from the same class. (Source) Model Improvement Strategies More recent and sophisticated approaches to building footprint extraction leverage two general strategies for improving deep learning models. The first strategy is to train models “end-to-end” to directly produce the desired output from the input. For example, an end-to-end model for a self-driving car might predict a steering angle from an image. This is in contrast to a more traditional approach in which a model is trained to predict a map of the environment, which is then fed into a hard-coded motion planning algorithm to generate steering angles. By making the entire system trainable from data, it is possible to reduce errors that compound through the system, and optimize for the intended use case. This strategy has been applied to building footprint extraction by designing models that directly output polygons rather than rasters. The distinction between modular and end-to-end pipelines for self-driving (Source) The second strategy is to incorporate a stronger inductive bias by utilizing a priori knowledge about the domain in the model and/or loss function. For example, a convolutional neural network (CNN) “bakes in” the notion of translation invariance. Translation invariance says that “a cat is a cat regardless of where it is in the image” by using the same weights to compute each pixel in a feature map. This strategy has been applied to building footprint extraction by designing models that are biased toward predicting building-like polygons that have low complexity and preserve corner angles. Recent research on models for building footprint extraction In the rest of this blog, we summarize three papers on models specially designed for building footprint extraction, and then conclude the blog series. These articles were all published recently, and exemplify one or both of the two model improvement strategies mentioned above. The models from the first two papers were used as the evaluation baselines for PolyWorld, the model in the third paper. Therefore, we present the evaluation of all these methods together at the end. Although these three models are architecturally diverse, this blog does not contain an exhaustive review of the literature. For this, we recommend consulting the related work sections of these papers. Topological Map Extraction From Overhead Images PolyMapper, published at ICCV 2019 by Li, Wegner, and Lucci [1], is a model that combines CNNs and RNNs to perform object detection, instance segmentation, and polygon extraction in an end-to-end fashion. It can be applied to both buildings and roads. The first part of the model is similar to a Mask R-CNN, an instance segmentation model, which generates: 1. a feature map for the image 2. bounding boxes for each building 3. and a crop of the feature map for each box. Using the feature map for a bounding box, a fully convolutional model is used to generate a boundary mask and a vertex mask that is 1/8 the size of the input image. Each pixel in these masks stores the probability of being on a building boundary or vertex. The top K vertices are then selected and the one with the highest probability is selected as the first point in the polygon. An overview of PolyMapper taken from [1]. Note that the top row demonstrates how it works for buildings, and the bottom for roads. Next, an RNN with ConvLSTM cells is used to generate a sequence of vertices which are joined together to form a polygon. An RNN is used to generate a variable length sequence, which cannot be generated by a CNN. At each time step, the RNN takes the initial and the previous two points in the sequence, and the concatenation of the feature map, boundary mask, and vertex mask as input, and outputs a probability distribution over locations of the next vertex. When the last vertex matches the initial vertex, forming a closed polygon, the RNN outputs an end of sentence (EOS) token. The training loss combines losses over the bounding boxes, masks, and polygons. The PolyMapper RNN which outputs a sequence of vertices that form a polygon. (Source) PolyMapper can handle buildings that are touching, although it will not explicitly represent shared walls as such, and will duplicate them. It can also handle buildings with inner courtyards using polygons with holes, although it requires a modification that was not discussed here. The code for PolyMapper does not seem to be openly available. A comparison of polygons generated by instance segmentation (left) and PolyMapper, which uses fewer vertices and preserves right angles (right) (Source) Polygonal Building Segmentation by Frame Field Learning This paper was published at CVPR 2021 by Girard, Smirnov, Solomon, and Tarabalka [2]. The authors present a CNN that generates raster masks of polygon interiors and exteriors, and a frame field, an additional representation of contours and corners which facilitates downstream polygon extraction. Unlike the previous paper, this paper uses a polygon extraction process that is not learned As can be seen below, a U-Net model is trained to output a boundary mask, and a frame field. The model is trained with a sum of various losses which can be broken into three categories: segmentation losses, frame field losses, and output coupling losses that enforce mutual consistency between the different outputs. Tangent fields are a common representation for contours in an image, and consist of a 2D vector at each point that lies tangent to any contour nearby. They are good at modeling smooth curves, but struggle to represent sharp corners which are parameterized by two directions. Instead, frame fields (as instantiated in this paper) have a pair of vectors for each pixel. At corners, the vectors are aligned with tangents to the two constituent edges. Along edges, at least one vector is aligned to the edge. In the field of computer graphics, frame fields have been used to convert images of line drawings into vectorized graphics. Left: tangent field representation of part of a line drawing. Right: frame field representation of the same drawing which is better at capturing the angles of corners (Source) The output of the model is polygonized using a multi-step process which is depicted below. First, the boundary raster mask is thinned, and then converted to a set of contours using marching squares, a standard algorithm for extracting contours from rasters. Then, this graph is optimized using gradient descent to better align with the frame field and minimize complexity. Next, corners are detected and edges that connect corners are simplified. Finally, polygons are extracted and filtered to keep the ones that overlap with the polygon interior mask. The process for extracting polygons from the output of the model. (Source) An example of using gradient descent to optimize a rough polygon produced from the boundary mask into a polygon that is better aligned with the frame field. (Source) This method is able to handle buildings that are touching and buildings with courtyards by explicitly representing shared walls and generating polygons with holes. In addition, it runs about 10x faster than PolyMapper at inference time. The downside of this method is that the polygon extraction routine is complex and lacks the elegance of a model trained end-to-end. The source code is open source, but has a restrictive license that only permits its use for research. PolyWorld: Polygonal Building Extraction with Graph Neural Networks in Satellite Images The final model we review, PolyWorld, was published at CVPR 2022 by Zorzi, Bazrafkan, Habenschuss, and Fraundorfer [3]. This model predicts polygons end-to-end and consists of a CNN to extract vertices, a graph neural network to refine the position of these vertices, and an “optimal connection network” to prune, group, and sequence the vertices into polygons. An overview of the PolyWorld model which detects vertices and joins them together into polygons. (Source) The first part of the model, the vertex detection network, is a fully convolutional network that outputs a feature map and a vertex detection map. The vertex detection map classifies each pixel based on whether or not it contains a vertex. The mask is filtered using non-maximum suppression to find the N peaks with the highest probability. The positions of these peaks are used to select the corresponding feature embeddings from the feature map, which represent a set of candidate vertices. The vertex detection network outputs the positions and feature embedding for each candidate vertex. (Source) These vertex embeddings and corresponding positions are then passed to an attentional graph network (described below) to aggregate information across the vertices. The input to the graph network is a complete graph (ie. a graph where all nodes are connected to one another) with a node for each vertex. The output of the graph network is a feature vector for each vertex called a “matching descriptor” (the name of which will make more sense later) and a positional offset for each vertex that can be used to refine the predicted position. The input to a graph network is a graph structure and feature vectors for each node. Each layer updates the feature vector for each node as a function of its neighbors’ feature vectors. With each successive layer, information is propagated through the network via edge relationships. The activation function and associated weights are shared by all nodes. A CNN can be seen as a special case of a graph network where the graph is grid structured. In an attentional graph network, the activation function takes a weighted average of a function of the neighbors. These weights are computed using a self-attention layer which decides how much each node should “attend” to each of its neighbors. Using a complete graph structure, if most of the attention values are negligible, the graph structure is effectively computed dynamically as a function of the input. The attentional graph neural network refines positions and generates matching descriptors. (The figure shows a graph with a sparse structure, but communication with the author confirmed that a fully-connected graph is actually used.) (Source) Given the matching descriptors for each vertex, the “optimal connection network” outputs the optimal way of connecting the vertices together into a set of polygons. This network specifies the three components that comprise an optimization system: 1. A representation for solutions to the problem 2. A function that scores solutions 3. A mechanism for finding the solution with the best score The network represents a set of polygons as an adjacency matrix for a graph with a node for each vertex. Each node is matched with exactly one other node: the next node in a traversal of the polygon, or itself if the node is not part of a polygon. This means that each row and each column should have exactly one entry that is non-zero. A matrix of this form is called a permutation matrix because it permutes the rows of another matrix when it is applied as an operator. The optimal matching network represents a set of polygons using an adjacency matrix which is also a permutation matrix. There is an adjacency matrix for clockwise and counterclockwise traversal directions. (Source) The scoring function for permutation matrices uses a score matrix which contains the affinity between each pair of vertices which is based on the matching descriptors for those vertices. The score for a permutation matrix is then the sum over the element-wise product of the score matrix and the permutation matrix. It turns out that optimizing this scoring function is an instance of the assignment problem, a classic combinatorial optimization problem! The assignment problem is perhaps best introduced with a concrete example. We are given a set of agents (Paul, Dave, and Chris), and a set of tasks (clean bathroom, sweep floors, and wash windows), and need to assign each agent to a different task. Furthermore, there is a cost matrix, shown below, which contains the cost for each agent to perform each task. The goal is to find the assignment that minimizes the total cost. In this case, the optimal assignment is to have Paul clean the bathroom, Dave sweep the floor, and Chris wash the windows which has a cost of $6. Clean Bathroom Sweep Floors Wash Windows Paul $2 $3 $3 Dave $3 $2 $3 Chris $3 $3 $2 The optimal matching network solves the same problem except that vertices replace agents and tasks, and the score matrix replaces the cost matrix. In other words, the problem is to match each vertex with one other vertex in a way that minimizes cost. The assignment problem is typically solved using the Hungarian algorithm which runs in time cubic in the number of vertices. However, we cannot simply use the Hungarian algorithm during training since it is not differentiable or GPU efficient. Instead, we use the Sinkhorn algorithm which solves the same problem, but is differentiable and GPU efficient. The algorithm alternates between normalizing rows and columns, and is run for 100 iterations in the PolyWorld paper. Unlike in PolyMapper, there is no need to first detect objects with this method. Instead, vertices are grouped into multiple polygons as a side effect of solving the assignment problem! The optimal connection network uses the Sinkhorn algorithm to find the optimal permutation matrix given score matrices that represent the affinity between vertices. (Source) The use of a differentiable version of a classic algorithm as a module within a neural network is a powerful addition to the deep learning toolkit. A similar approach, and a likely inspiration for PolyWorld, was used in a model called SuperGlue. It uses the Sinkhorn algorithm to find an optimal matching between patches located within pairs of related images. These pairs could come from a sequence of image frames in a video, or a binocular camera system. Solving the correspondence problem is a first step in various downstream 3D computer vision tasks such as localization and structure from motion. An example of a correspondence problem solved by SuperGlue, a similar model to PolyMapper which also uses the Sinkhorn algorithm. The goal is to match corresponding features between two images of the same scene. (Source) Despite having better performance than the frame fields models on the CrowdAI dataset, PolyWold does not have the ability to generate polygons with holes, or handle buildings with shared walls. However, the authors offer some ideas for how the model could be modified to handle these cases. The model and inference (but not the training) source code is open source, but has a restrictive license that only permits its use for research. Here, we briefly summarize experimental results from the PolyWorld paper which compares the performance of PolyWorld to instance segmentation baselines, PolyMapper, and Frame Field Learning. The models were evaluated using the test set of CrowdAI using the various metrics described above. Overall, PolyWorld performs better than the other methods on most metrics, and the instance segmentation baselines perform the worst. These results are listed in the tables below. For a more intuitive understanding of how the methods perform, we can look at visualizations of their predictions on different images. Below are examples showing that PolyWorld can handle difficult cases including complex shapes, occluded corners, and curved shapes. Below is a comparison of PolyWorld and FFL. PolyWorld is apparently better at generating right angles and more parsimonious polygons with fewer vertices. To help create a map of all the world’s buildings, we can use machine learning to train models that output polygonal building footprints from satellite and aerial imagery. However, training these models requires a large amount of training data, which needs to have very high resolution. Thanks to open datasets such as Ramp, SpaceNet, OpenCities AI, and CrowdAI, researchers can compare the performance of different models, and release those models to the public. Predictions of building footprints can be evaluated using the usual metrics like IoU and AP, but there is a need for new metrics specially designed for this use case. For example, max tangent angle error and complexity-aware IoU are designed to penalize domain-relevant mistakes such as overly complex polygons and rounded corners. Semantic segmentation and instance segmentation can be used to extract building footprints, but the results are usually too sloppy for cartographic applications. By using end-to-end learning and incorporating domain-specific knowledge into models, researchers have developed a variety of more sophisticated model architectures that perform better. PolyMapper uses an RNN to sequentially connect vertices into polygons. Frame field learning uses a novel representation of contours that can better capture sharp corners, which is important for buildings. Finally, PolyWorld uses a differentiable solver for a combinatorial optimization problem to group vertices into polygons. Unfortunately, all of these approaches are difficult to implement, with many important details left out in this blog, and either aren’t open source or use restrictive licenses. In addition, many approaches cannot handle shared walls between buildings. Given these limitations, these newer methods may not be worthwhile for use cases such as population density estimation that do not require high fidelity. However, to generate cartographic-quality footprints for databases such as OSM, more sophisticated methods are worth a try. [1] Li, Zuoyue, Jan Dirk Wegner, and Aurélien Lucchi. “Topological map extraction from overhead images.” In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1715-1724. [2] Girard, Nicolas, Dmitriy Smirnov, Justin Solomon, and Yuliya Tarabalka. “Polygonal building extraction by frame field learning.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5891-5900. 2021. [3] Zorzi, Stefano, Shabab Bazrafkan, Stefan Habenschuss, and Friedrich Fraundorfer. “PolyWorld: Polygonal Building Extraction with Graph Neural Networks in Satellite Images.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1848-1857. 2022.
{"url":"https://www.azavea.com/blog/2022/11/09/automated-building-footprint-extraction-part-3-model-architectures/","timestamp":"2024-11-13T02:31:11Z","content_type":"text/html","content_length":"132308","record_id":"<urn:uuid:ae968a08-cb01-4ea4-bb6c-abc0d5c41169>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00485.warc.gz"}
Structured English Query Language SEQUEL for short — SQL for really short. Remember we had Relational Algebra: $\Pi_{\textrm{salpername}} (\sigma_{(\textrm{manager}=44)} (\textrm{Salesperson}))$ And we had relational calculus: $\{t | \exists x(s\in\textrm{Salesperson}\wedge s.\textrm{manager}=44\wedge t.\textrm{name}=s.\textrm{salpername}\}$ These are the foundational constructs for query expression. In terms of computational power: Codd proved that Relational Algebra is as powerful as Relational Calculus is as powerful as SQL. There are many flavors of SQL (each database implements various parts of the various standards) • MySQL (ANSI-SQL/PSM (persistent storage model)) • DB2 (SQL-PL (procedural language)) • Microsoft (T-SQL (transact)) • Oracle (PL/SQL) • PostgresQL (PL/pgSQL (based on PL/SQL)) • Teradata (SPL (stored procedural language)) It is a very high level language SELECT attributes $\Pi$ FROM relations $R$ WHERE condition; $\sigma$ What salespersons have manager with ID 44? SELECT salpersname FROM Salesperson WHERE Salesperson.manager_id = 44 ; This returns a relation, just like Relational Calculus and Relational Algebra. Find Names of Customers who live in Tokyo. SELECT cust_name FROM Customers WHERE Customers.city = ‘Tokyo’ ; What’s the result? Watabe Bros It’s a declarative language close to Relational Calculus. We declare what we want from the database We can relax the language a little bit: SELECT * FROM Customers WHERE city = ‘Tokyo’ ; The * returns all attributes, and if there is no ambiguity then we can remove Customer. from the where clause. SELECT does not eliminate duplicates like Projection in relational algebra. Just as in relational calculus you can name your result attributes whatever you like, you can do the same in SQL: SELECT cust_name as name FROM Customers WHERE Customers.city = ‘Tokyo’ ; You can also rename relations SELECT cust_name FROM Customers C WHERE C.city = ‘Tokyo’ ; No ‘as’ here. SQL allows simple math expressions. SELECT prod_id, prod_desc, price*1.07 as aftertax FROM Product; Where Clause This is where the magic happens in SQL. Attribute names come from the relations in the FROM clause • Can do comparison operations • Arithmetic • String operations • Pattern matching • Lots of functions • Atoms are separated by logic operators (just like relational calculus) • Case insensitive (except certain pattern matching operations) • String literals go in single quotes □ Single quote is the delimiter inside a string. where prod_desc like ‘%lamp‘ % is wildcard character that matches multiple characters – is wildcard character that matches only one character. where prod_desc not like ‘%lamp‘ not is a negation In total: Give me after tax price and description of non-lamps. SELECT prod_id, prod_desc, price*1.07 as aftertax FROM Product WHERE prod_desc not like ‘%lamp’ Find products with a margin of more than 60% SELECT P.prod_desc FROM product P WHERE (P.price) > (P.cost*1.6); margin is “sales price less the [costs], divided by the sales price”. Null Values What happens if: SELECT salpers_name FROM Salesperson WHERE manager_id > 20 or manager_id <= 20 Does Terry Cardon appear in the result set? Yes or no? Answer: No Truth logic in SQL is three-valued: True, False, Unknown We only return tuples when True. Testing for null: or manager_id is null or manager_id is not null Multi-relation Queries SELECT Product.prod_desc, Manufacturer.manufactr_name FROM Product, Manufacturer WHERE Product.manufactr_id = Manufacturer.manufactr_id; This is a theta join, and an equi-join. (MySQL doesn’t really have natural join, but some DBMSs do) Find customer-names who bought lamps: SELECT Customer.cust_name FROM Customer, Sale, Product WHERE Customer.cust_id = Sale.cust_id and Sale.prod_id = Product.prod_id and product.prod_desc like '%lamp'; Semantics of SQL queries SELECT $a_1,a_2,\ldots,a_k$ FROM $R_1,R_2,\ldots,R_3$ WHERE $\sigma_*$ Nested Loop: Answer = {} for x∈R_1 do for y∈R_2 do for z∈R_n do if σ_* then answer = answer ∪{a_1,a_2,…,a_k} Compute the gross revenue for each sale. SELECT P.prod_desc, P.price*S.qty AS revenue FROM sale S, product P WHERE S.prod_id = P.prod_id;
{"url":"https://timweninger.com/teaching/database-systems-concepts/structured-english-query-language/","timestamp":"2024-11-11T22:55:27Z","content_type":"text/html","content_length":"30004","record_id":"<urn:uuid:b02609cf-0a5a-4cce-937f-ace2ff33bced>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00879.warc.gz"}
53rd Annual Meeting of the APS Division of Atomic, Molecular and Optical Physics Bulletin of the American Physical Society 53rd Annual Meeting of the APS Division of Atomic, Molecular and Optical Physics Volume 67, Number 7 Monday–Friday, May 30–June 3 2022; Orlando, Florida Session Q02: Focus Session: Dynamical Gauge Fields in AMO Systems Hide Abstracts Focus Live Streamed Chair: Shraddha Agrawal, UIUC Room: Grand Ballroom A Thursday, Q02.00001: Realising a one-dimensional topological gauge theory in an optically dressed Bose-Einstein condensate June 2, Invited Speaker: Leticia Tarruell 8:00AM - Topological gauge theories describe the low energy properties of certain strongly correlated quantum systems through effective weakly interacting models. A prime example is the Chern-Simons 8:30AM theory of fractional quantum Hall states, where the emergence of anyonic excitations is explained by the coupling between weakly interacting matter particles and a density-dependent gauge field. While in traditional solid-state platforms such gauge theories are only convenient theoretical constructions, experimental atomic systems enable their direct implementation and are expected to provide a fertile playground to investigate their phenomenology without the need for strong interactions. In my talk, I will report on the first quantum simulation of a topological gauge theory by realising a one-dimensional reduction of the Chern-Simons theory (the chiral BF theory) in a Bose-Einstein condensate. Using the local conservation laws of the theory, we eliminate the gauge degrees of freedom in favour of chiral matter interactions, which we engineer by synthesising optically dressed atomic states with momentum-dependent scattering properties. We explore the key properties of the chiral BF theory: the formation of chiral solitons - self-bound states of the matter field that only exist for one propagation direction - and the emergence of an electric field generated by the system itself. Our results expand the scope of quantum simulation to topological gauge theories and pave the way towards implementing analogous field theories in higher dimensions. Thursday, Q02.00002: Wavepacket dynamics in Floquet topological systems June 2, Invited Speaker: Monika Aidelsburger 8:30AM - Periodic driving, also known as Floquet engineering, is a powerful experimental technique to realize topological lattice models with ultracold atoms in optical lattices. Here, we report on 9:00AM the realization of distinct topological models using periodic driving with bosonic atoms in a hexagonal optical lattice. We probe different topological regimes using a combination of spectroscopic measurements and local Hall deflections, thereby revealing the topological invariants that characterize the different topological regimes. Depending on the modulation parameters, we show that genuine out-of-equilibrium Floquet topological systems can be realized without any static analogue. These systems are characterized by a generalized bulk-boundary correspondence, which can support topological edge modes even if the Chern numbers of all bulk bands vanish. We reveal this connection by preparing localized wavepackets at the edge of the systems, which directly signals the presence of topological edge modes. Thursday, Q02.00003: Exploring Phase Diagrams of 1D Z[2] Lattice-Gauge Theory with Dynamical Matter June 2, Matjaz Kebric, Luca Barbiero, Umberto Borla, Sergej Moroz, Ulrich J Schollwoeck, Fabian Grusdt 9:00AM - Here, we study a one-dimensional lattice-gauge theory model where dynamical charges are coupled to gauge fields. Such models exhibit confinement and can be realized with modern quantum 9:12AM simulators. By adding nearest-neighbor interactions we uncover interesting phase transitions to different Mott states, which are strongly related to the filling. Remarkably, the confining electric field stabilizes a Mott state at the filling of n = 2/3 and destabilizes it for filling n = 1/2. On the other hand, adding superconducting terms instead of the nearest-neighbor interactions results in trivial to non-trivial topological transitions, which resemble behavior of the Kitaev chain. In our work we rely on the combination of the numerical DMRG calculations and analytical techniques, which are tractable for specific parameter values and limits. We also develop an effective mean-field theory model of our problem. This simple mean-field model correctly resembles the main features of the original model and offers deeper physical insights. Finally, we also discuss possible experimental realizations with quantum gases in optical lattices. Thursday, Q02.00004: Emergent Z[2] gauge theories and topological excitations in Rydberg quantum simulators June 2, Rhine Samajdar, Darshan G Joshi, Yanting Teng, Subir Sachdev 9:12AM - Strongly interacting arrays of Rydberg atoms provide versatile platforms for exploring exotic many-body phases and dynamics of correlated quantum systems. Motivated by recent experimental 9:24AM advances, we theoretically investigate the quantum phases that can be realized by such Rydberg atom simulators in two dimensions. We show that the combination of Rydberg interactions and appropriate lattice geometries naturally leads to emergent Z[2] gauge theories endowed with matter fields. Based on this mapping, we demonstrate how Rydberg platforms can be used to realize topological spin liquid states based solely on their native van der Waals interactions. We also discuss the nature of the fractionalized excitations of two distinct classes of such Z[2] quantum spin liquid states using both fermionic and bosonic parton theories and illustrate their rich interplay with proximate solid phases. Thursday, Q02.00005: Topological quantum Spin Liquid in a hexagonal Lattice of Rydberg Atoms with density-dependent Peierls Phases June 2, Simon Ohler, Michael Fleischhauer, Maximilian Kiefer-Emmanouilidis 9:24AM - We show that the nonlinear transport of bosonic excitations in a two-dimensional honeycomb lattice of spin-orbit coupled Rydberg atoms gives rise to disordered quantum phases which are 9:36AM candidates for topological quantum spin liquids. As recently demonstrated in [Lienhard et al., Phys. Rev. X, 10, 021031 (2020)] the spin-orbit coupling breaks time-reversal and chiral symmetries and leads to a tunable density-dependent complex hopping of the hard-core bosons or equivalently to complex XY spin interactions. We numerically investigate the phase diagram resulting from the competition between density-dependent and direct transport terms. In the regime where the two terms are comparable, we find a disordered quantum state that is absent in a mean-field description. This phase is characterized by a finite spin-gap, a large spin chirality as well as a many-body Chern number C=1. We therefore identify this phase as a topological spin liquid. Thursday, Q02.00006: Observing dynamical currents in a non-Hermitian momentum lattice June 2, Fabian Finger, Rodrigo Rosa-Medina, Francesco Ferri, Nishant Dogra, Katrin Kroeger, Rui Lin, Ramasubramanian Chitra, Tobias Donner, Tilman Esslinger 9:36AM - Dynamic transients are a natural ingredient of non-equilibrium quantum systems. A paradigmatic example is Dicke superradiance, describing the collectively enhanced population inversion of 9:48AM an ensemble of two-level atoms coupled to a single mode of light. In our experiment, we leverage superradiance in a quantum degenerate gas to engineer dynamical currents in a synthetic lattice geometry. Our experimental implementation is based on a spinor Bose-Einstein condensate coupled to a single mode of an ultrahigh finesse optical cavity. Two transverse laser fields induce cavity-assisted Raman transitions between discrete momentum states of two spin levels, which we interpret as tunneling in a momentum space lattice. As the cavity field depends on the local density and spin configuration, the tunneling rate evolves dynamically with the atomic and photonic states. By monitoring the cavity leakage, we gain real-time access to the emerging currents and benchmark their collective nature. Moreover, frequency-resolved measurements of the leaking photon field allow us to locally resolve individual tunneling events as well as cascaded dynamics. Our results provide prospects to explore dynamical gauge fields and transport phenomena in driven-dissipative quantum systems. Thursday, Q02.00007: Resonant dynamics of fermions in synthetic flux ladders with strong SU(n) interactions June 2, Mikhail Mamaev, Bhuvanesh Sundar, Thomas Bilitewski, Ana Maria Rey 9:48AM - We theoretically study the dynamics of strongly interacting fermionic alkaline earth atoms with n internal levels in an optical lattice. When treating the internal flavors as a synthetic 10:00AM dimension, the system realizes a synthetic ladder structure. We use laser driving to couple the internal levels and induce an effective magnetic flux piercing the ladder. The system dynamically generates chiral spin currents in response to the flux. While strong interactions with one atom per site tend to inhibit motion, we show that transport is enhanced at special integer and fractional ratios of the driving and interaction strength, reminiscent of the enhancement of longitudinal conductivity in the fractional quantum Hall effect. At these resonant points, tunneling is induced by multi-body resonances that are enabled by the flux. For some resonances the particle transport approaches that of an effectively non-interacting system, while other resonances yield non-thermal behavior due to non-trivial kinetic constraints upon the motion. Our results showcase the plethora of complex dynamical phenomena that strongly interacting SU(n) fermions exhibit in the presence of an effective magnetic flux, many of which can manifest on timescales well within reach of current-generation experiments. Engage My APS Information for The American Physical Society (APS) is a non-profit membership organization working to advance the knowledge of physics. Become an APS Member Renew Membership Librarians Submit a Meeting Abstract Join an APS Unit Authors Submit a Manuscript Get My Member Number Referees Find a Journal Article Update Contact Information Media Donate to APS Students © 2024 American Physical Society | All rights reserved | Terms of Use | Contact Us Headquarters 1 Physics Ellipse, College Park, MD 20740-3844 (301) 209-3200 Editorial Office 100 Motor Pkwy, Suite 110, Hauppauge, NY 11788 (631) 591-4000 Office of Public Affairs 529 14th St NW, Suite 1050, Washington, D.C. 20045-2001 (202) 662-8700
{"url":"https://meetings.aps.org/Meeting/DAMOP22/Session/Q02?showAbstract","timestamp":"2024-11-02T21:07:19Z","content_type":"text/html","content_length":"24847","record_id":"<urn:uuid:7a010c5a-5852-43ec-a31d-4599407bb764>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00069.warc.gz"}
Solving the task from The Weekly Challenge 233, where you need to sort numbers by two dimensions. Solving the task from The Weekly Challenge 233, where you need to sort numbers by two dimensions. Working with words in the Raku programming language A solution to the task 1 of the Weekly Challenge 233, where the goal is to find the words constructed from the same letters. A solution to the task 1 of the Weekly Challenge 233, where the goal is to find the words constructed from the same letters. A couple of tasks solved in Raku Two tasks from the Weekly Challenge 231 solved in the Raku programming language. Two tasks from the Weekly Challenge 231 solved in the Raku programming language. Calculator with Roman numbers using Raku Grammars Using Raku grammar, I created a simple calculator that works with Roman numbers, for example: `XXI + MCMXIX`. Using Raku grammar, I created a simple calculator that works with Roman numbers, for example: `XXI + MCMXIX`. Counting Fridays the 13th in Raku The Raku solution to the following task: Write a script to find out how many dates in the year are Friday 13th, assume that the current Gregorian calendar applies. The Raku solution to the following task: Write a script to find out how many dates in the year are Friday 13th, assume that the current Gregorian calendar applies. Solving Task 2 of the Weekly Challenge 204 with the help of ChatGPT Let us solve the second task from the Weekly Challenge 204. It is called ‘Reshape matrix’. Let us solve the second task from the Weekly Challenge 204. It is called ‘Reshape matrix’. Dialogues with ChatPGP about the Raku programming language. Solving Task 1 of the Weekly Challenge 204 Let us ask ChatGPT to find a solution and then correct it to make it more Raku-ish. This task, the machine solved the thing from the first attempt, but you can follow how we managed to make it better and more compact. Most (except one) of the generated code workes without compiler errors, so I will not concentrate on it here. Let us ask ChatGPT to find a solution and then correct it to make it more Raku-ish. This task, the machine solved the thing from the first attempt, but you can follow how we managed to make it better and more compact. Most (except one) of the generated code workes without compiler errors, so I will not concentrate on it here. Raku Challenge, Week 92, Issue 1 This week’s task has an interesting solution in Raku. So, here’s the task: You are given two strings $A and $B. Write a script to check if the given strings are Isomorphic. Print 1 if they are otherwise 0. This week’s task has an interesting solution in Raku. So, here’s the task: You are given two strings $A and $B. Write a script to check if the given strings are Isomorphic. Print 1 if they are otherwise 0. Raku Challenge Week 91 Here’s my Raku breakfast with the solutions of Week 91 of The Weekly Challenge. A couple of simple programs with Raku arrays. Here’s my Raku breakfast with the solutions of Week 91 of The Weekly Challenge. A couple of simple programs with Raku arrays. Raku Challenge, Week 85 Welcome back to another week of the Weekly Challenge, and today I’ll briefly describe my solutions to the Week 85. Task 1. Triplet Sum. Task 2. Power of Two Integers. Welcome back to another week of the Weekly Challenge, and today I’ll briefly describe my solutions to the Week 85. Task 1. Triplet Sum. Task 2. Power of Two Integers. Raku Challenge Week 4, Task 1: Printing π Let me return to the old challenges from last year and fill a few more gaps. The task for now is to write a program to output the same number of π digits as the size of the program. Let me return to the old challenges from last year and fill a few more gaps. The task for now is to write a program to output the same number of π digits as the size of the program. The weekly challenge 078: Leader element and Left rotation This week, The Weekly Challenge offered us a couple of simple tasks, so why not solve it on Monday. Task1: Leader Element. Task 2: Left Rotation. This week, The Weekly Challenge offered us a couple of simple tasks, so why not solve it on Monday. Task1: Leader Element. Task 2: Left Rotation. Programming with passion This week, I wrote a few programs solving the task of this week’s Weekly Challenge. I already explained the solution in the Raku programming language. In this post, I’d like to demonstrate other solutions. The key point is that they not only use different programming language but also approach the problem differently and implement different algorithms. This week, I wrote a few programs solving the task of this week’s Weekly Challenge. I already explained the solution in the Raku programming language. In this post, I’d like to demonstrate other solutions. The key point is that they not only use different programming language but also approach the problem differently and implement different algorithms. Lonely X — The Weekly Challenge 77, Task 2 The second task of this week’s challenge sounds like this: You are given m x n character matrix consists of O and X only. Write a script to count the total number of X surrounded by O only. Print 0 if none found. The second task of this week’s challenge sounds like this: You are given m x n character matrix consists of O and X only. Write a script to count the total number of X surrounded by O only. Print 0 if none found. Add up Fibonacci numbers — The Weekly Challenge 77, Task 1 The task today is: You are given a positive integer $N. Write a script to find out all possible combination of Fibonacci Numbers required to get $N on addition. You are NOT allowed to repeat a number. Print 0 if none found. The task today is: You are given a positive integer $N. Write a script to find out all possible combination of Fibonacci Numbers required to get $N on addition. You are NOT allowed to repeat a number. Print 0 if none found. A more idiomatic Raku solution A couple of days ago I published a straightforward solution to the Task 2 of Week 75 of The Weekly Challenge. Despite that solution perfectly works, I wasn’t satisfied with it and wanted a more Raku-ish code. Here is the next iteration of it. my @hist = 3, 2, 3, 5, 7, 5; my $max … Continue reading “A more idiomatic Raku solution” Largest Rectangle Histogram: The Raku Challenge Week 75, task 2 Hello, here is my solution to the Task 2 of Week 75 of the Weekly Challenge solved in the Raku programming language. You are given an array of positive numbers @A. Write a script to find the largest rectangle histogram created by the given array. Hello, here is my solution to the Task 2 of Week 75 of the Weekly Challenge solved in the Raku programming language. You are given an array of positive numbers @A. Write a script to find the largest rectangle histogram created by the given array. Coins Sum: The Raku Challenge Week 75, task 1 Here is my solution to the Task 1 of the Week 75 of the Weekly Challenge solved in the Raku programming language. You are given a set of coins @C, assuming you have infinite amount of each coin in the set. Write a script to find how many ways you make sum $S using the coins from the set @C. Here is my solution to the Task 1 of the Week 75 of the Weekly Challenge solved in the Raku programming language. You are given a set of coins @C, assuming you have infinite amount of each coin in the set. Write a script to find how many ways you make sum $S using the coins from the set @C. The weekly challenge nr 74 The Perl Weekly Challenge was renamed to The Weekly Challenge recently, so there’s a bigger chance that more solutions in other programming languages appear there. In the two Raku solutions in this post, you can see how you can use the built-in Bag data type. Task 1. Majority Element (Raku and C++ solutions). Task 2. First Non-Repeating Character (Raku solution). The Perl Weekly Challenge was renamed to The Weekly Challenge recently, so there’s a bigger chance that more solutions in other programming languages appear there. In the two Raku solutions in this post, you can see how you can use the built-in Bag data type. Task 1. Majority Element (Raku and C++ solutions). Task 2. First Non-Repeating Character (Raku solution). Raku challenge week 73 Here are my solutions to the tasks of Week 73 of the Perl Weekly Challenge: 1) Min Sliding Window and 2) Smallest Neighbour. Here are my solutions to the tasks of Week 73 of the Perl Weekly Challenge: 1) Min Sliding Window and 2) Smallest Neighbour.
{"url":"https://andrewshitov.com/category/programming-languages/raku-programming-language/raku-challenges/","timestamp":"2024-11-13T19:09:18Z","content_type":"text/html","content_length":"98113","record_id":"<urn:uuid:c6f08465-fa70-4e60-98cb-13a1d715028b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00763.warc.gz"}
Program to find LCM of two numbers Hello Everyone, LCM (Least Common Multiple) of two numbers is the smallest number which can be divided by both numbers. For example, LCM of 15 and 20 is 60, and LCM of 5 and 7 is 35. An efficient solution is based on the below formula for LCM of two numbers ‘a’ and ‘b’. a x b = LCM(a, b) * GCD (a, b) LCM(a, b) = (a x b) / GCD(a, b) Below is the implementation of the above idea: // C++ program to find LCM of two numbers #include <iostream> using namespace std; // Recursive function to return gcd of a and b long long gcd( long long int a, long long int b) if (b == 0) return a; return gcd(b, a % b); // Function to return LCM of two numbers long long lcm( int a, int b) return (a / gcd(a, b)) * b; // Driver program to test above function int main() int a = 15, b = 20; cout << "LCM of " << a << " and " << b << " is " << lcm(a, b); return 0; LCM of 15 and 20 is 60 In arithmetic and number theory, the least common multiple , lowest common multiple , or smallest common multiple of two integers a and b , usually denoted by lcm( a , b ) , is the smallest positive integer that is divisible by both a and b Since division of integers by zero is undefined, this definition has meaning only if a and b are both different from zero. However, some authors define lcm( a ,0) as 0 for all a , which is the result of taking the lcm to be the least upper bound in the lattice of divisibility. The lcm is the “lowest common denominator” (lcd) that can be used before fractions can be added, subtracted or compared. The lcm of more than two integers is also well-defined: it is the smallest positive integer that is divisible by each of them. A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10, so 10 is divisible by 5 and 2. Because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and −2 as well. The least common multiple of two integers a and b is denoted as lcm( a , b ). Some older textbooks use [ a , b ], uses a*.b . {lcm} (4,6) Multiples of 4 are: Multiples of 6 are: Common multiples of 4 and 6 are the numbers that are in both lists: In this list, the smallest number is 12. Hence, the least common multiple is 12.
{"url":"https://discuss.boardinfinity.com/t/program-to-find-lcm-of-two-numbers/6766","timestamp":"2024-11-10T05:26:41Z","content_type":"text/html","content_length":"18618","record_id":"<urn:uuid:50df09be-bf3e-4fc4-bbf7-75134919cc89>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00324.warc.gz"}
#python #sicp it is often clearer to think about recursive calls as functional abstractions. If you want to change selection, open document below and click on "Move attachment" 1.7 Recursive Functions the standard definition of the mathematical function for factorial: (n−1)!n!n!=(n−1)⋅(n−2)⋅⋯⋅1=n⋅(n−1)⋅(n−2)⋅⋯⋅1=n⋅(n−1)!(n−1)!=(n−1)⋅(n−2)⋅⋯⋅1n!=n⋅(n−1)⋅(n−2)⋅⋯⋅1n!=n⋅(n−1)! While we can unwind the recursion using our model of computation, <span> it is often clearer to think about recursive calls as functional abstractions. That is, we should not care about how fact(n-1) is implemented in the body of fact ; we should simply trust that it computes the factorial of n-1 . Treating a recursive call as a status not read reprioritisations last reprioritisation on suggested re-reading day started reading on finished reading on
{"url":"https://buboflash.eu/bubo5/show-dao2?d=1438532111628","timestamp":"2024-11-09T06:55:30Z","content_type":"text/html","content_length":"18167","record_id":"<urn:uuid:ca6d924c-13e8-48ef-8362-28b40e04ad16>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00560.warc.gz"}
Polynomial and Rational Functions MCQ Quiz [PDF] Questions Answers | Polynomial and Rational Functions MCQs App Download & e-Book Business Mathematics Practice Tests Business Mathematics Online Tests Polynomial and Rational Functions MCQ (Multiple Choice Questions) PDF Download The Polynomial and Rational Functions Multiple Choice Questions (MCQ Quiz) with Answers PDF (Polynomial and Rational Functions MCQ PDF e-Book) download to practice Business Mathematics Tests. Study Quadratic and Polynomial Functions Multiple Choice Questions and Answers (MCQs), Polynomial and Rational Functions quiz answers PDF for online certificate programs. The Polynomial and Rational Functions MCQ App Download: Free learning app for polynomial and rational functions, how to graph a parabola, graphing quadratic functions test prep for online college classes. The MCQ: The quadratic function is considered as; "Polynomial and Rational Functions" App Download (Free) with answers: Third-degree polynomial function; Four-degree polynomial function; First-degree polynomial function; Second-degree polynomial function; for online certificate programs. Practice Polynomial and Rational Functions Quiz Questions, download Apple eBook (Free Sample) for online Polynomial and Rational Functions MCQ (PDF) Questions Answers Download MCQ 1: The cubic function is considered as 1. first-degree polynomial function 2. second-degree polynomial function 3. third-degree polynomial function 4. four-degree polynomial function MCQ 2: The quadratic function is considered as 1. third-degree polynomial function 2. four-degree polynomial function 3. first-degree polynomial function 4. second-degree polynomial function MCQ 3: The function behaves as x assuming larger positive values and negative values is classified as 1. larger direction 2. ultimate direction 3. ultimate variables 4. smaller direction MCQ 4: The behavior of the function depends upon the behavior of 1. double degree 2. zero degree 3. higher degree 4. lower degree MCQ 5: The linear and quadratic functions are examples of 1. polynomial functions 2. variable function 3. mean function 4. constant function Business Mathematics Practice Tests Polynomial and Rational Functions Learning App: Free Download Android & iOS The App: Polynomial and Rational Functions MCQs App to learn Polynomial and Rational Functions Textbook, Business Mathematics MCQ App, and BBA Economics MCQ App. The "Polynomial and Rational Functions" App to free download iOS & Android Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with
{"url":"https://mcqslearn.com/applied/mathematics/polynomial-and-rational-functions-multiple-choice-questions.php","timestamp":"2024-11-04T20:33:31Z","content_type":"text/html","content_length":"95844","record_id":"<urn:uuid:479900dc-1329-4338-a005-c9c568b91400>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00351.warc.gz"}
What is Standard Deviation for Forex Trading Hi, everybody. Forex is the largest and most liquid among global financial markets. Recognized for its daily trading volume in the trillions of dollars, Forex is a financial market that offers us the opportunity to exchange different countries' currencies. One of the analysis methods required to achieve good results in trading this market is technical analysis. Technical analysts attempt to find trends, support and resistance levels, and other patterns from past price movements using charts, indicators, and other technical tools. By using this information, they predict future price movements. There are many technical analysis tools used in financial markets, and we can add technical indicators to that list. Indicators function as mathematical tools, allowing us to measure trends, momentum, and other price movements by examining past price actions. The Standard Deviation indicator is the subject of today's article. What is Standard Deviation? The term "Standard Deviation" that we often meet in financial markets is generally used to measure volatility and assess the predictability of price movements. Standard deviation measures how much the prices of a financial instrument usually deviate from the average. The higher the standard deviation, the more volatile and unpredictable the prices become. When trading in the Forex market, we can utilize standard deviation in various ways. For instance, we can use it to predict how much an asset's price is likely to change over a specific period. Also, we consider it when evaluating the probability of an asset's price rising or falling. High volatility means that the currency or asset's price may experience faster and larger fluctuations. Calculation of Standard Deviation In trading within financial markets, the need to calculate standard deviation arises with the aim of measuring volatility and evaluating the predictability of price movements. Standard deviation is a measure indicating how much the prices of a currency pair fluctuate within a specific period. The first step involves recording the closing prices of a particular currency pair over a period, for example, 20 days. Then, the average value of these closing prices is calculated. To compute the average value, the sum of these prices is taken and divided by the number of periods. To determine how much each closing price deviates from the average value, each price is subtracted from the average. The squared values of these deviations are calculated. The resulting squared deviation values are summed, and this sum is divided by the number of periods. Finally, the square root of this value is taken to calculate the standard deviation. Here is the formula used to calculate standard σ = √((1/N)Σ(Xᵢ - μ)²) In this formula: • σ represents the standard deviation. • ∑ represents the sum. • N is the total number of days in the period used (e.g. 20 days) • X[i], each day's closing price • μ is the average value of the period's closing prices. Don't worry, we don't need to perform these complex calculations ourselves, standard deviation is automatically calculated and displayed by trading platforms and software. Standard deviation is a measurement used to gauge how erratic the price movements of a currency pair are. A higher standard deviation value indicates that the currency pair's price is as erratic as ocean waves, while a lower value suggests that the currency pair's price is as stable as a calm lake. Using Standard Deviation in Forex Trading In trading in financial markets, standard deviation is more commonly used to assess volatility and improve risk management strategies. Standard deviation measures how volatile and how frequently the prices of a currency pair change. A high standard deviation indicates high volatility, suggesting that prices fluctuate more. On the other hand, a low standard deviation reflects a more stable market environment. During this time, we can use standard deviation to determine which currency pairs have higher volatility. When investing in a currency with a high standard deviation, we take more risk, while investing in a currency with a lower standard deviation involves less risk. This helps us manage our trading risk. Standard Deviation in the AUD/NZD chart In addition, standard deviation can be used to assess whether the prices of a currency pair will trend or reverse. If standard deviation is increasing, it indicates that the price trend is strengthening. If you observe a trend with high standard deviation, the trend is more likely to continue. Conversely, if you see a clear deviation in prices with high standard deviation, it could be a signal of a reversal. Indicators are used to determine optimal entry points by tracking price movements. Standard deviation, on the other hand, evaluates how much the current prices deviate from the average price, so we can estimate the probability of prices returning to the average value in the future. Keep in consideration. In financial market trading, the standard deviation indicator is a reasonable tool for measuring volatility and analyzing price movements. This indicator can be a useful tool for market analysis, but it cannot fully capture the complexity of price movements. Market dynamics are dependent on a variety of factors, and a single indicator can only provide a limited view. Standard deviation can sometimes give false signals. It can be risky to use the indicator, especially during periods of low liquidity or under the influence of news. Relying solely on the standard deviation indicator in trading leads to the incompleteness of your risk management strategies. It is important to combine standard deviation with other technical indicators to achieve better results when trading in financial markets. Trading success to you!
{"url":"https://www.forexeduline.com/2023/09/what-is-standard-deviation-for-forex.html","timestamp":"2024-11-12T09:44:01Z","content_type":"text/html","content_length":"150400","record_id":"<urn:uuid:622eb9da-3c3f-4e9b-98cc-61db93399759>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00446.warc.gz"}
Project #1 INFT 940 -- Design Project #1 Robust Analysis of Vibration Control System INFT 940, Spring 1997, Assigned 2/27/97, Due 4/3/97 1. At the bottom of this web page are shown the equations for a system to control mechanical vibration. The complete open-loop system is the series combination of those transfer functions. The quantities which are uncertain are the spring constant (k), damper constant (d), and the outer mass (m1). The uncertainty in these variables will be considered individually. The nominal value for each of these variables is denoted by the subscript 0 (e.g., k0). Variations in the variables will be with respect to those nominal values. Therefore, the range of values for a variable will have the form 2. The table below shows the range of variations in the three variables. Three sets of variation are shown for each of the three variables. The nine experiments are treated individually. The variables which are not being perturbed are held constant at their nominal values. For each experiment, the closed-loop characteristic equation will be considered an interval polynomial. Table of parameter values 3. For each of the experiments, perform a robustness analysis of the closed-loop system under the assumption of the characteristic equation being an interval polynomial. Kharitonov plots must be used to graphically support your analysis. Other methods, such as the Frequency Sweeping Function, discussed in Chapter 6, may also be used. 4. For variations in k and d, perform a root locus analysis and see if these results agree with those from the Kharitonov analysis. 5. Document your analysis in a typed report. Plots should be included to illustrate your discussion and conclusions. Click the icon to return to Dr. Beale's home page Latest revision was made on 05/08/01 08:28 PM
{"url":"https://people-ece.vse.gmu.edu/~gbeale/ece_940/prj_940_s97_01.html","timestamp":"2024-11-05T22:17:22Z","content_type":"text/html","content_length":"3728","record_id":"<urn:uuid:69f2bbe7-25b8-467d-9b4f-2f262de62c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00847.warc.gz"}
Andrew James (2002-05). Generalization of rotational mechanics and application to aerospace systems. Doctoral Dissertation. • This dissertation addresses the generalization of rigid-body attitude kinematics, dynamics, and control to higher dimensions. A new result is developed that demonstrates the kinematic relationship between the angular velocity in N-dimensions and the derivative of the principal-rotation parameters. A new minimum-parameter description of N-dimensional orientation is directly related to the principal-rotation The mapping of arbitrary dynamical systems into N-dimensional rotations and the merits of new quasi velocities associated with the rotational motion are studied. A Lagrangian viewpoint is used to investigate the rotational dynamics of N-dimensional rigid bodies through Poincar??e??s equations. The N-dimensional, orthogonal angularvelocity components are considered as quasi velocities, creating the Hamel coefficients. Introducing a new numerical relative tensor provides a new expression for these coefficients. This allows the development of a new vector form of the generalized Euler rotational equations. An N-dimensional rigid body is defined as a system whose configuration can be completely described by an N??N proper orthogonal matrix. This matrix can be related to an N??N skew-symmetric orientation matrix. These Cayley orientation variables and the angular-velocity matrix in N-dimensions provide a new connectionbetween general mechanical-system motion and abstract higher-dimensional rigidbody rotation. The resulting representation is named the Cayley form. Several applications of this form are presented, including relating the combined attitude and orbital motion of a spacecraft to a four-dimensional rotational motion. A second example involves the attitude motion of a satellite containing three momentum wheels, which is also related to the rotation of a four-dimensional body. The control of systems using the Cayley form is also covered. The wealth of work on three-dimensional attitude control and the ability to apply the Cayley form motivates the idea of generalizing some of the three-dimensional results to Ndimensions. Some investigations for extending Lyapunov and optimal control results to N-dimensional rotations are presented, and the application of these results to dynamical systems is discussed. Finally, the nonlinearity of the Cayley form is investigated through computing the nonlinearity index for an elastic spherical pendulum. It is shown that whereas the Cayley form is mildly nonlinear, it is much less nonlinear than traditional spherical
{"url":"https://vivo.library.tamu.edu/vivo/display/nd850b30a","timestamp":"2024-11-02T20:37:28Z","content_type":"text/html","content_length":"15700","record_id":"<urn:uuid:e04b4a4e-fa98-4525-ba1d-14395f00ce95>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00472.warc.gz"}
Data of two sites in Somalia underlying the paper: Multistability of model and real dryland ecosystems through spatial self-organization Olfa Jaïbi data matlab data ecological data arid ecosystems vegetation patterns self-organisation For both sites, topographical data was retrieved from the ALOS World 3D 30m (AW3D30, v. 2.1) digital raster elevation model. This model describes the height above sea level (in m, rounded to the nearest integer), at a ground resolution of approximately 30m at the equator. The elevation data was preprocessed for the removal of artifacts by applying a global soft-thresholding on its dual tree complex wavelet transform. Specifically, we set a threshold of 0.9 on the first five dual-tree complex wavelet transform levels. From the preprocessed data, we calculated the slope gradient (in %) and slope aspect (in degrees). We first extracted square DEM windows of 33 by 33 cells (i.e. approximately 990m × 990m), centered on the image windows. We then applied a least squares fitting procedure of an unconstrained quadratic surface on the unweighted elevation values. This gave the following datasets. <br> 4TU.ResearchData Mathematical Institute, Leiden University 2021-10-28 dataset matlab file. and read me file. 10.4121/16884913.v1 en The Haud pastoral region, Somalia; The Sool-Plateau pastoral area, Somalia CC0
{"url":"https://data.4tu.nl/export/dc/datasets/62e0d2e9-8ae0-4118-8862-50518b27f8c9","timestamp":"2024-11-11T23:39:08Z","content_type":"application/xml","content_length":"2489","record_id":"<urn:uuid:9e4875f2-a711-4c03-896d-3e3601c189d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00498.warc.gz"}