Variational Principle - (Mathematical Physics) - Vocab, Definition, Explanations | Fiveable

Variational Principle from class: Mathematical Physics

The variational principle is a fundamental concept in physics and mathematics that provides a method for finding the stationary points of a functional, typically to minimize or maximize a particular quantity. The principle asserts that the true physical trajectory of a system can be derived from a specific integral known as the action, leading to the equations of motion. Applying it yields significant results across different areas, from energy states in quantum mechanics to the dynamics of systems in classical mechanics.

5 Must Know Facts For Your Next Test

1. The variational principle leads to the Euler-Lagrange equations, which are crucial for determining the path taken by a physical system.
2. In quantum mechanics, the variational principle is used to approximate ground state energies by minimizing the expectation value of the energy operator with respect to a trial wavefunction.
3. The principle is connected to conservation laws through Noether's theorem, which relates symmetries in physics to conserved quantities.
4. Variational methods apply not only to classical and quantum mechanics but also to fields such as optics and fluid dynamics.
5. Variational principles underpin numerical methods, such as finite element analysis, which are employed in solving complex physical problems.

Review Questions

• How does the variational principle lead to the derivation of equations of motion in Lagrangian mechanics?
□ The variational principle asserts that the actual path taken by a system between two points in time makes the action stationary (often, but not always, a minimum). By formulating the action as the integral of the Lagrangian over time and applying this principle, one derives the Euler-Lagrange equations. These equations provide a systematic way to obtain equations of motion for various mechanical systems based on their kinetic and potential energies.

• Discuss how the variational principle is utilized in quantum mechanics for approximating ground state energies.
□ In quantum mechanics, the variational principle is applied by selecting a trial wavefunction that approximates the true ground state wavefunction of a system. The expectation value of the Hamiltonian calculated with this trial wavefunction gives an upper bound for the ground state energy. By adjusting parameters within the trial wavefunction and minimizing this energy expectation value, one can refine the approximation and obtain more accurate estimates of ground state energies.

• Evaluate the broader implications of applying variational principles across different fields of physics, including classical mechanics and quantum mechanics.
□ Applying variational principles across various fields emphasizes their foundational role in understanding physical systems. In classical mechanics, these principles streamline problem-solving through Lagrangian mechanics, while in quantum mechanics they provide powerful methods for energy approximation. Their use extends beyond these areas into optics and fluid dynamics, showcasing their versatility. The consistent application of variational principles highlights the interconnectedness of different physical theories and underpins numerical methods for complex problems, influencing advances in theoretical and applied physics.

© 2024 Fiveable Inc. All rights reserved.
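Fact 2 and the second review question describe the quantum variational method. A minimal numerical sketch, for the 1-D harmonic oscillator with a Gaussian trial wavefunction in natural units (an illustrative choice of system and trial family, not part of the text above):

```python
import numpy as np

# Variational method sketch for the 1-D harmonic oscillator,
# H = -(1/2) d^2/dx^2 + (1/2) x^2, in natural units hbar = m = omega = 1.
# With the Gaussian trial wavefunction psi_a(x) ~ exp(-a x^2), the energy
# expectation value works out analytically to
#     E(a) = <psi_a|H|psi_a> = a/2 + 1/(8a),
# and by the variational principle E(a) is an upper bound on the true
# ground-state energy for every a > 0.

def energy_expectation(a):
    return a / 2 + 1 / (8 * a)

# Minimise the bound over a grid of trial parameters.
a_values = np.linspace(0.05, 3.0, 10_000)
energies = energy_expectation(a_values)
best = np.argmin(energies)

print(f"optimal a = {a_values[best]:.3f}, E = {energies[best]:.6f}")
# The minimum sits at a = 1/2, giving E = 1/2 = hbar*omega/2: here the
# Gaussian family happens to contain the exact ground state, so the
# variational bound is attained exactly.
```

In a harder problem the trial family would not contain the exact ground state, and the minimized E(a) would sit strictly above the true energy; refining the family (more parameters) tightens the bound.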
{"url":"https://library.fiveable.me/key-terms/math-physics/variational-principle","timestamp":"2024-11-03T13:52:02Z","content_type":"text/html","content_length":"165291","record_id":"<urn:uuid:6ab64c0b-5de0-45b9-abb9-2fa3e5f1a5e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00670.warc.gz"}
Degrees of ambiguity for parity tree automata

An automaton is unambiguous if for every input it has at most one accepting computation. It is finitely (respectively, countably) ambiguous if for every input it has at most finitely (respectively, countably) many accepting computations, and boundedly ambiguous if there is a k ∈ N such that for every input it has at most k accepting computations. We consider parity tree automata (PTA) and prove that the problems of deciding whether a PTA is not unambiguous, not boundedly ambiguous, or not finitely ambiguous are co-NP-complete, and that the problem of deciding whether a PTA is not countably ambiguous is co-NP-hard.

Publication series
Name: Leibniz International Proceedings in Informatics, LIPIcs
Volume: 183
ISSN (Print): 1868-8969

Conference: 29th EACSL Annual Conference on Computer Science Logic, CSL 2021
Country/Territory: Slovenia
City: Virtual, Ljubljana
Period: 25/01/21 → 28/01/21

Funders
Funder number: Blavatnik Family Foundation

• Automata on infinite trees
• Degree of ambiguity
• Omega word automata
• Parity automata
{"url":"https://cris.tau.ac.il/en/publications/degrees-of-ambiguity-for-parity-tree-automata","timestamp":"2024-11-02T06:17:04Z","content_type":"text/html","content_length":"51367","record_id":"<urn:uuid:5413ba2d-2829-454a-b93b-5fe8f76120b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00670.warc.gz"}
Arturo Magidin
Associate Professor
404 Maxim Doucet Hall
Arturo Magidin's webpage
Ph.D.: 1998, University of California at Berkeley
Matemático: 1993, Universidad Nacional Autónoma de México

I got my PhD from the University of California at Berkeley in 1998, working under George M. Bergman; I started at UL Lafayette in 2005. My main research interest is in groups, often inspired by questions that come from General (also called Universal) Algebra; the latter is a way of unifying the study of many different types of algebraic structures. A group is a type of structure that was first defined to study symmetries, and groups are ubiquitous. In more recent years, I have concentrated on finite p-groups and nilpotent groups in general. My most recent published work (in collaboration with Martha Kilpack) goes back to General Algebra and considers questions of lattices and closure operators associated to groups and their subgroups. My current collaboration with Luise-Charlotte Kappe and William Cocke looks at generalizations of the Chermak-Delgado lattice and measure on a finite group.

Selected research publications:
{"url":"https://math.louisiana.edu/node/109","timestamp":"2024-11-05T10:03:25Z","content_type":"text/html","content_length":"34704","record_id":"<urn:uuid:fe76c0ff-d1d6-4edb-a32a-4fa04e249413>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00283.warc.gz"}
Battery Energy Storage

This study presents a polynomial time algorithm to solve the lossless battery charging problem, in which the optimal charging and discharging schedules are chosen to maximize total profit. Traditional solution approaches have relied on either approximations or exponential algorithms. By studying the optimality conditions of this problem, we are able to reduce it to …
{"url":"https://optimization-online.org/tag/battery-energy-storage/","timestamp":"2024-11-02T11:53:04Z","content_type":"text/html","content_length":"82727","record_id":"<urn:uuid:a47cb97e-8bd2-493c-a7e1-0bfb646cd8b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00486.warc.gz"}
New applications and properties of the symmetric cutoff rate of discrete-time channels with intersymbol interference

ISIT 1987
Conference paper

Summary form only given, as follows. Several new applications and properties of the symmetric cutoff rate, which have not been exploited previously, are presented. The channel model is the discrete-time channel with intersymbol interference (ISI), where the independent inputs are binary and of equal probability. The unit-sample response of the transversal filter model of the channel is a finite-length sequence of unit energy. The noise added to the filter output is white Gaussian with mean zero and variance N0/2, where N0 is the one-sided noise power spectral density. The symmetric cutoff rate, which is defined for unquantized maximum-likelihood sequence decoding (MLSD), is first derived for this channel model. Using random-coding arguments, a simple expression is then developed to obtain a computationally efficient, tight approximation of the minimum squared Euclidean distance, which determines performance bounds for uncoded binary signaling over ISI channels using MLSD. This novel approach does not require identification of a specific error sequence which produces that minimum distance. Next, a method to approximate the unit-sample response of so-called minimum-distance channels (MDCs) is described.
{"url":"https://research.ibm.com/publications/new-applications-and-properties-of-the-symmetric-cutoff-rate-of-discrete-time-channels-with-intersymbol-interference","timestamp":"2024-11-02T12:04:59Z","content_type":"text/html","content_length":"67223","record_id":"<urn:uuid:5d6a92c1-5e13-400d-aa9d-0beab440b40d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00730.warc.gz"}
Miura transform for W-algebras of exceptional type

The Miura transform for W-algebras of classical types can be found in e.g. Sec. 6.3.3 of Bouwknegt-Schoutens. Is there a similar explicit Miura transform for W-algebras of exceptional types, say, $E_6$? It's 20 years since the review by B-S, so I'd hope somebody worked this out ...

This post has been migrated from (A51.SE)

Answer: Yes, for the "quasi-classical" case (i.e. for the case when the $W$-algebra is commutative, which occurs when the level is either infinite or critical) it was defined by Drinfeld and Sokolov a long time ago; you can look at Section 4 of http://arxiv.org/PS_cache/math/pdf/0305/0305216v1.pdf for a good review. For the "quantum" case (i.e. for arbitrary level) it was studied by Feigin and Frenkel, but I am not sure what the right reference is; you can look for example at Section 4 of http://arxiv.org/PS_cache/hep-th/pdf/9408/9408109v1.pdf, but there should be more modern references. In fact, the main tool in the work of Feigin and Frenkel is the screening operators, which describe the $W$-algebra explicitly as a subalgebra of (the vertex operator algebra associated to) the Heisenberg algebra (where the embedding into the Heisenberg algebra is the Miura transformation).

Comments:

• I think he wants the fields for each exponent of $E_6$ together with their OPE. I don't think you'll find those, Yuji, at least at the principal nilpotent. In the case of the minimal nilpotent, Kac and Wakimoto have explicit formulas in [this paper](http://arxiv.org/abs/math-ph/0304011)

• Thanks everyone; I now got the generators at degree 2 and 5. Now I need those at degree 6, 8, 9 and 12 :p

• Thanks, I managed to get the generators. The degree-9 one was not so bad; but the degree-12 one, when dumped to a file, is about 100 MB as an expression. Oh Buddha.

• By improving the program the expression is now about 0.9 MB :)

• Since I don't believe in explicit formulas, I won't be able to say anything intelligent here :) One remark, though: you can describe the image of the W-algebra without the screening operators. It is just equal to the intersection over all simple roots of things like Virasoro$\otimes$Heisenberg of smaller rank (I hope it is clear what I mean)

• Yes you're right. Physicists cover their lack of deep thinking with lots of explicit calculation :p I've been using that approach to find generators of W(E6), but that's still quite messy. That's why I asked the question here.
{"url":"https://www.physicsoverflow.org/352/miura-transform-for-w-algebras-of-exceptional-type","timestamp":"2024-11-03T05:48:18Z","content_type":"text/html","content_length":"141849","record_id":"<urn:uuid:9a723288-656b-479e-b0cb-802696d1b715>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00355.warc.gz"}
Key stage 2 attainment, Academic year 2021/22

Geographic levels
Time period: 2010/11 to 2021/22

This file contains data on the key stage 2 disadvantage gap index in England.

Variable names and descriptions for this file are provided below:

disadvantage_gap_index: Disadvantage gap index
total_disadvantaged_pupils: Total number of eligible disadvantaged pupils at the end of key stage 2
total_other_pupils: Total number of eligible non-disadvantaged pupils at the end of key stage 2
version: The version of the data being presented

1. Figures for 2022 are based on revised data. Figures for other years are based on final data.
2. Data is not available for 2020 and 2021 as assessments were cancelled in these years due to the COVID-19 pandemic.
3. Includes only those pupils for whom a valid test level from 3-6 or teacher assessment level from W (working towards level 1) to 6 could be determined in reading, writing and maths from 2011/12 to 2014/15, or in maths and English in 2010/11, or for whom a valid scaled score could be determined in reading and maths from 2015/16 to 2018/19. This number may therefore differ from the total included in national test results, which include pupils recorded as A - absent or U - unable to access the test.
{"url":"https://explore-education-statistics.service.gov.uk/find-statistics/key-stage-2-attainment/2021-22/data-guidance","timestamp":"2024-11-09T22:51:23Z","content_type":"text/html","content_length":"385922","record_id":"<urn:uuid:ae800ff4-03ae-4e07-b8d1-fbeed7004394>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00724.warc.gz"}
Classifying Quadrilaterals by Four Sides

Answer the following questions using the applet.

1. Can you make a quadrilateral with four sides of 2 cm, 3 cm, 3 cm and 9 cm? Why or why not?
2. If a quadrilateral has four equal sides, what are its possible shapes?
3. If a quadrilateral has two pairs of equal opposite sides, what are its possible shapes?
4. If a quadrilateral has two pairs of equal adjacent sides, what is its shape?
5. Make a trapezium with the four sides (from top to bottom) being: (a) 4 cm, 4 cm, 4 cm, 6 cm; (b) 4 cm, 4 cm, 5 cm, 6 cm; (c) 4 cm, 3 cm, 5 cm, 7 cm; (d) 3 cm, 3 cm, 5 cm, 7 cm. What is the characteristic of trapeziums?
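Question 1 turns on the polygon inequality: four lengths close up into a quadrilateral exactly when every side is shorter than the sum of the other three. A short check (an illustration alongside the applet, not part of it):

```python
def can_form_quadrilateral(sides):
    """Four side lengths form a quadrilateral exactly when
    every side is shorter than the sum of the other three."""
    total = sum(sides)
    return all(s < total - s for s in sides)

# Question 1: 9 cm is not shorter than 2 + 3 + 3 = 8 cm, so no
# quadrilateral exists with these sides.
print(can_form_quadrilateral([2, 3, 3, 9]))   # False

# The side lists in question 5 all satisfy the inequality.
print(can_form_quadrilateral([4, 4, 4, 6]))   # True
print(can_form_quadrilateral([3, 3, 5, 7]))   # True
```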
{"url":"https://www.geogebra.org/m/F3X8eqKf","timestamp":"2024-11-07T12:14:44Z","content_type":"text/html","content_length":"90325","record_id":"<urn:uuid:170b7967-525b-4f45-bc1e-87cef3152988>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00518.warc.gz"}
An Explanation of the Basics and Units (2024)

Without doubt E = mc² is the world's most famous equation. This page explains what E = mc² means in simple terms, along with some of its consequences. The equation is derived directly from Einstein's Special Theory of Relativity, and other pages in this series deal with the mathematical and logical derivation. Here, though, we will examine the equation as it stands and keep the mathematics to a minimum.

Energy = mass x the speed of light squared

In other words:

E = energy (measured in joules, J)
m = mass (measured in kilograms, kg)
c = the speed of light (measured in metres per second, m/s), but this needs to be "squared".

Note that the case of each letter is important and it would be incorrect to show the equation as, for example, e = MC². This is because physicists use the case of letters, as well as the letters themselves, to denote particular physical entities, quantities and constants in equations. In order for the equation to be correct we need to "square" the term c (the speed of light), i.e. we multiply the speed of light by itself; hence c² is the same as c times c. This allows us to write the equation in another, slightly unusual, but equally correct way:

E = m x c x c

As a matter of interest, and to complete the terms used in the equation, the equals sign was only invented during the 16th century, by the Welsh mathematician Robert Recorde, apparently unhappy at having to write out "is equal to" in his work. He could have chosen any number of symbols but chose two parallel lines because, as he himself put it, "noe 2 thynges can be moare equalle". We will now examine each unit (i.e. letter) in the equation in turn before addressing the question of what the equation means, but if you want to see worked examples of the equation you can do so here.

E = Energy

The word "energy" is actually quite new. Its modern use dates from around the middle of the nineteenth century, when it was beginning to be realised that the power that drove many different processes could be explained by the concept of energy being transferred from one system and form to another. For example, the trains of the day were powered by coal. The coal was burned under a water-filled boiler to produce steam, which in turn pushed pistons attached to the wheels of the train; the wheels turned and the train was set in motion. In this example we start with locked-up ("latent") chemical energy in the coal. The chemical energy is turned into heat energy (sometimes called "thermal energy") by burning the coal and boiling the water. Finally, the thermal energy is turned into the energy of movement ("kinetic energy") by forcing the steam into pistons to drive the wheels:

Chemical energy → thermal energy → kinetic energy

[A moving steam train]

There are many other forms of energy, such as electrical, gravitational, nuclear, and strain energy, such as that found in springs. However, as different as all these types of energy seem, they can all be measured in the same way and thought of as the same thing. The unit that we use to measure energy, from whatever energy source, is the joule (J). Two ways in which we use this unit in everyday terms are:

The total amount of energy in a system: As noted above, one example is a lump of coal, which when burned will release a certain number of joules (J) of energy, mostly in the forms of heat and light. Another, perhaps more common example, is that it takes about 1 joule to raise an apple by 1 metre.

Energy used up over time: Most electrical devices have their power consumption rated in watts (W). A watt is a rate of energy consumption of one joule per second. So, if you have a light bulb in your room that's rated at 100 W, it's using energy at a rate of 100 joules every second.
To go back to the second example in the first bullet point, lifting an apple by 1 metre every second would mean a power output of 1 watt. For most people this would be quite easy and could be kept up for quite a long time, but now imagine lifting 100 apples a second, i.e. 100 W. This, in human terms, is a large power output, but nothing special for many electrical devices. It's not uncommon, for example, for a kettle to be rated at 2000 W or more. That's a lot of apples!

So, to summarise: energy comes in many forms, and it can be transferred from one system to another. The basic unit of measurement for energy is the joule.

m = Mass

Mass is strictly defined as a measure of a body's inertia, i.e. its resistance to acceleration. Another and simpler way of defining mass is to say that it's the total amount of matter in an object. This latter definition isn't strictly true, but is good enough for our purposes here. Mass is measured in kilograms (kg).

Note that mass isn't the same as weight, although it's often thought to be. Weight is actually a measure of the gravitational force (pull) felt by a body and is measured in newtons (N) (note that scientific units named after people are almost always in lower case when spelled out fully, hence newtons and not Newtons, watts and not Watts, etc.). For example, astronauts walking on the surface of the Moon have the same mass as on Earth but weigh only one sixth of what they would do back home. The reason for this is that while the mass of the astronauts hasn't changed, the pull of the Moon's gravity is only one sixth of the Earth's gravitational pull.

As with energy, the idea that mass is common to all objects is relatively new and again dates back to around the nineteenth century. Before that time different solids, liquids and gases were all thought to be only loosely connected in conceptual terms. As with energy, we now consider that mass is neither created nor destroyed, but is merely changed from one form to another; e.g. we can turn water from a solid (ice) into a liquid (water) and into a gas (steam), but its total mass doesn't change.

c = the Speed of Light

We use the letter c to represent the speed of light. The 'c' comes from the Latin word "celeritas", meaning swift, and it's a very apt choice: there is nothing faster than light. In a vacuum, such as space, it travels at close to 186,300 miles per second (300,000 km per second). That's about seven times around the Earth every second.

The speed of light was first accurately estimated by the Danish astronomer Ole Roemer (sometimes written as Rømer) during the 1670s. Up until that time everyone assumed that the speed of light was infinite, i.e. that light arrived at its destination instantly. This isn't such an unreasonable assumption given that when we look around us light does indeed appear to reach us instantly. However, during the seventeenth century it was discovered that there was a problem in calculating the orbital time of Io, the innermost moon of Jupiter. It sometimes took "too long" to make an orbit of the planet and at other times was "too quick". It was thought that the problem must be due to a wobble in the orbit of Io, but Roemer took a different, and very radical, view of the matter. He argued that light, instead of being everywhere instantly, had a finite speed and that this would explain the problem of Io. The Earth was known to travel around the Sun, which meant that sometimes the Earth was closer to Jupiter and sometimes further away. Roemer realised that when the Earth was on the opposite side of the Sun from Jupiter, the light from Io would have to travel further, and would therefore take longer to reach us, than when the two bodies were on the same side, providing, of course, that light has a speed in the first place.
During a meeting of the new Academy of Science in Paris in 1676, Roemer demonstrated that the amassed observational data of the astronomer Cassini indicated that Io would next appear at 5.25pm on 9th November of that year. He himself predicted, using his theory that light has a finite speed, that it wouldn't appear until 10 minutes and 45 seconds later. The day came and virtually every major observatory in Europe was ready to test the prediction. At 5.25pm, the time predicted by Cassini, Io wasn't visible. Even at 5.35pm Io wasn't visible. But at exactly 5.35pm and 45 seconds it appeared, just as Roemer had said it would. From this it was possible to make the first accurate measurement of the speed of light, and the calculated figure was within one percent of what we know it to be today.

You may think that that was the end of the matter and that Roemer was celebrated as a scientific genius, showered with honours and given a secure future. Sadly, that's far from what happened. Roemer was a young and relatively unknown astronomer when he made his discovery, while Cassini was a well-respected if egotistical elder scientist, who used his powerful friends to rubbish Roemer's ideas. Scientists, it seems, are human after all, and this wasn't the first, or sadly the last, time that an ego got in the way of a new discovery. Roemer eventually gave up science completely and later became the director of the port of Copenhagen and then head of the State Council of the Realm. It wasn't until 50 years later that further experiments convinced the scientific community that Roemer had been right all along.

Ole Roemer, 1644 - 1710

What Does the Equation Mean?

The equation tells us that energy and mass are, effectively, the same thing, and it also tells us how much energy is contained in a given mass, or vice versa. In other words, mass can be thought of as very tightly packed energy.
That energy and mass are equivalent is quite an extraordinary claim, and it seems to go against two laws that had been established by scientists before Einstein came along:

The Law of the Conservation of Mass: As we have seen, mass can be thought of as the quantity of matter in an object. The law of the conservation of mass states that mass is always conserved: whatever we do with matter in a closed system, we will always have the same amount of substance at the end. For example, if we burn a log, the wood gets lighter as the fuel it contains is used up. However, if we gather together the ashes, all of the tiny smoke particles and the water vapour produced by the burning process and then weigh everything, we find that the mass is exactly equal to the mass of the log that was burned. Mass is just mass, or so it seems, and while it can be chemically altered, such as by burning, the total amount in any system remains the same.

The Law of the Conservation of Energy: But what about the energy released in burning the log? The energy released in the burning process is "chemical energy", i.e. the breaking and reforming of chemical bonds between atoms and molecules. Burning the wood released the chemical energy locked up in it. No energy was created in the process and none was destroyed; it was just changed from one sort of energy (chemical bonds) to other forms of energy (heat and light). In other words the total amount of energy, just like the total amount of mass, remained the same. After many experiments, notably by the scientist for whom the unit of energy is named, James Prescott Joule (1818 - 1889), it was established that the total amount of energy in a closed system always remains the same. This is known as the law of the conservation of energy.

What Einstein showed via his now famous equation was that mass and energy are in fact the same thing. Converting one into the other doesn't therefore violate either of the two conservation laws.
Both quantities are conserved, although the state of the mass/energy may have changed. Each atom of a substance can be thought of as a little ball of tightly packed energy that can be released under certain circumstances. Likewise, we can take energy (such as particles of light, called photons) and turn it into matter. This was first achieved in the 1930s. That light can be turned into matter is perhaps a rather odd idea, but the picture below shows the first successful experiment in which this was done:

[Cloud chamber photograph of photon decay]

The picture shows the tracks of two matter particles that were "created" when a high-energy photon decayed, i.e. "fell apart", in a cloud chamber. The high-energy photon is not in the visible range and entered the chamber from the bottom of the picture.

The Cloud Chamber

A cloud chamber is a sealed tank filled with a gas, usually with a magnet to one side of it. When a particle such as an atom, electron or proton passes through the tank, it collides with some of the particles in the gas to produce little clouds that mark its path. For an electrically neutral particle, such as a neutron, the path will be straight. However, for any particle that is not electrically neutral, its path will be bent towards or away from the magnet that forms part of the apparatus.

The subject of turning matter into energy through both fusion and fission is dealt with in other pages in this series.

Einstein's Explanation of his Equation

Einstein speaking about the equation E = mc² (211 kB .MP3 file), from the soundtrack of the film Atomic Physics. Copyright © J. Arthur Rank Organisation, Ltd., 1948. The recording is old, and that, together with Einstein's accent, sometimes makes it difficult to hear the words properly. This is a transcript of the recording:

"It followed from the Special Theory of Relativity that mass and energy are both but different manifestations of the same thing - a somewhat unfamiliar conception for the average mind.
Furthermore, the equation E is equal to mc², in which energy is put equal to mass, multiplied with the [by the] square of the velocity of light, showed that very small amounts of mass may be converted into a very large amount of energy and vice versa. The mass and energy were in fact equivalent, according to the formula mentioned before [E = mc²]. This was demonstrated by Cockcroft and Walton in 1932 …"

Albert Einstein (1879 - 1955)

What do the Letters Stand For?

Each of the letters of E = mc² stands for a particular physical quantity. Writing them out in full we get:

Energy = mass x the speed of light squared

Click here for a quick and easy E = mc² calculator.
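The conversion the article describes is a one-line calculation; a simple sketch follows, where the figures come straight from the formula rather than from the page above:

```python
# Worked E = mc^2 examples.

C = 299_792_458  # speed of light in m/s (exact, by definition of the metre)

def mass_to_energy(mass_kg):
    """Energy in joules locked up in a given mass: E = m * c^2."""
    return mass_kg * C**2

def energy_to_mass(energy_joules):
    """Mass equivalent in kilograms of a given energy: m = E / c^2."""
    return energy_joules / C**2

# One gram of matter, fully converted, corresponds to roughly 9 x 10^13 J,
# which is why "very small amounts of mass may be converted into a very
# large amount of energy".
print(f"1 g of mass = {mass_to_energy(0.001):.3e} J")

# Conversely, the ~1 J needed to lift an apple by a metre corresponds to
# an almost immeasurably small amount of mass.
print(f"1 J of energy = {energy_to_mass(1.0):.3e} kg")
```

Because c² is such a huge number (about 9 × 10¹⁶ m²/s²), everyday amounts of energy correspond to utterly negligible amounts of mass, and tiny amounts of mass to enormous amounts of energy.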
{"url":"https://ebramu.shop/article/an-explanation-of-the-basics-and-units","timestamp":"2024-11-12T18:52:27Z","content_type":"text/html","content_length":"91369","record_id":"<urn:uuid:865ca66b-2501-43b6-bf5d-03fea47f7a9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00399.warc.gz"}
This class implements a 2D mesh generator.

Template Parameters

CDT must be a 2D constrained Delaunay triangulation, and the type CDT::Face should be a model of the concept DelaunayMeshFaceBase_2. The geometric traits class of the instance of CDT has to be a model of the concept DelaunayMeshTraits_2.

Criteria must be a model of the concept MeshingCriteria_2. This traits class defines the shape and size criteria for the triangles of the mesh. Criteria::Face_handle has to be the same as

Using This Class

The constructor of the class Delaunay_mesher_2 takes a reference to a CDT as an argument. A call to the refinement method refine_mesh() will refine the constrained Delaunay triangulation into a mesh satisfying the size and shape criteria specified in the traits class. Note that if, during the lifetime of the Delaunay_mesher_2 object, the triangulation is externally modified, any further call to its member methods may crash. Consider constructing a new Delaunay_mesher_2 object if the triangulation has been modified.

Meshing Domain

The domain to be meshed is defined by the constrained edges and a set of seed points. The constrained edges divide the plane into several connected components. The mesh domain is either the union of the bounded connected components that include at least one seed, or the union of the bounded connected components that do not contain any seed. Note that the unbounded component of the plane is never meshed.

See Also

Mesh_2/mesh_class.cpp, and Mesh_2/mesh_optimization.cpp.

The following functions are used to define seeds.

void clear_seeds ()
 Sets seeds to the empty set. More...

template<class InputIterator >
void set_seeds (InputIterator begin, InputIterator end, const bool mark=false)
 Sets seeds to the sequence [begin, end). More...

Seeds_const_iterator seeds_begin () const
 Start of the seeds sequence.

Seeds_const_iterator seeds_end () const
 Past the end of the seeds sequence.
The function set_criteria() scans all faces to recalculate the list of bad faces, that is, the faces not conforming to the meshing criteria. This function has an optional argument that makes it possible to skip this recalculation. The list of bad faces can then be filled by a call to set_bad_faces().

void refine_mesh ()
Refines the constrained Delaunay triangulation into a mesh satisfying the criteria defined by the traits.

const Criteria & get_criteria ()
Returns a const reference to the criteria traits object.

void set_criteria (Criteria criteria)
Assigns criteria to the criteria traits object.

void set_criteria (Criteria criteria, bool recalculate_bad_faces)
Assigns criteria to the criteria traits object. More...

template<class InputIterator >
void set_bad_faces (InputIterator begin, InputIterator end)
Sets the list of bad triangles directly, from the sequence [begin, end), so that the algorithm will not scan the whole set of triangles to find bad ones.

The Delaunay_mesher_2 class allows the meshing algorithm to be played step by step, for debugging or demos, using the following methods.

void init ()
This method must be called just before the first call to the step-by-step refinement method, that is, when all vertices and constrained edges have been inserted into the constrained Delaunay triangulation. More...

bool is_refinement_done ()
Tests if the step-by-step refinement algorithm is done. More...

bool step_by_step_refine_mesh ()
Applies one step of the algorithm, by inserting one point, if the algorithm is not done. More...
template<typename CDT , typename Criteria >
void CGAL::Delaunay_mesher_2< CDT, Criteria >::init ( )

This method must be called just before the first call to the step-by-step refinement method, that is, when all vertices and constrained edges have been inserted into the constrained Delaunay triangulation. It must be called again before any subsequent calls to the step-by-step refinement method if new vertices or constrained edges have been inserted since the last call.

template<typename CDT , typename Criteria >
bool CGAL::Delaunay_mesher_2< CDT, Criteria >::is_refinement_done ( )

Tests if the step-by-step refinement algorithm is done. If it returns true, subsequent calls to step_by_step_refine_mesh() will not insert any points, until some new constrained segments or points are inserted in the triangulation and init() is called again.

template<typename CDT , typename Criteria >
template<class InputIterator >
void CGAL::Delaunay_mesher_2< CDT, Criteria >::set_bad_faces ( InputIterator begin, InputIterator end )

Sets the list of bad triangles directly, from the sequence [begin, end), so that the algorithm will not scan the whole set of triangles to find bad ones. Use it if there is a non-naive way to find bad triangles.

Template Parameters
InputIterator must be an input iterator with value type Face_handle.

template<typename CDT , typename Criteria >
void CGAL::Delaunay_mesher_2< CDT, Criteria >::set_criteria ( Criteria criteria, bool recalculate_bad_faces )

Assigns criteria to the criteria traits object. If recalculate_bad_faces is false, the list of bad faces is left empty and the function set_bad_faces() should be called before refine_mesh().

template<typename CDT , typename Criteria >
template<class InputIterator >
void CGAL::Delaunay_mesher_2< CDT, Criteria >::set_seeds ( InputIterator begin, InputIterator end, const bool mark = false )

Sets seeds to the sequence [begin, end).
If mark==true, the mesh domain is the union of the bounded connected components including at least one seed. If mark==false, the domain is the union of the bounded components including no seed. Note that the unbounded component of the plane is never meshed.

Template Parameters
InputIterator must be an input iterator with value type Geom_traits::Point_2.

template<typename CDT , typename Criteria >
bool CGAL::Delaunay_mesher_2< CDT, Criteria >::step_by_step_refine_mesh ( )

Applies one step of the algorithm, by inserting one point, if the algorithm is not done. Returns false iff no point has been inserted because the algorithm is done.
{"url":"https://doc.cgal.org/4.8.2/Mesh_2/classCGAL_1_1Delaunay__mesher__2.html","timestamp":"2024-11-12T17:07:28Z","content_type":"application/xhtml+xml","content_length":"34923","record_id":"<urn:uuid:f97d20d8-98e0-4452-9e52-8159d16d41a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00501.warc.gz"}
If cos α + cos 2α = P and sin α + sin 2α = Q, then eliminate α

Trigonometry > if cos alpha + cos two alpha is equal to ...

1 Answers

Last Activity: 6 Years ago

For the sake of convenience I am taking α as x. Now we have
cos x + cos 2x = P
sin x + sin 2x = Q
Squaring and adding the two equations, then rearranging, we get
(sin²x + cos²x) + (sin²2x + cos²2x) + 2(sin x·sin 2x + cos x·cos 2x) = P² + Q²
Now use the following trigonometric identities:
sin²k + cos²k = 1 (identity 1; here k is any angle)
cos(A − B) = cos A·cos B + sin A·sin B (identity 2)
By identity 2, sin x·sin 2x + cos x·cos 2x = cos(2x − x) = cos x. So, going back to our question, we get
1 + 1 + 2 cos x = P² + Q²
From here we get
cos x = (P² + Q² − 2)/2
Now use it in the original equations to eliminate x.
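The elimination can be sanity-checked numerically; a short Python sketch (the test angle 0.7 rad and the function name are my own choices):

```python
import math

def cos_alpha_from_pq(p, q):
    """Recover cos(alpha) from P = cos a + cos 2a and Q = sin a + sin 2a."""
    return (p * p + q * q - 2) / 2

a = 0.7  # arbitrary test angle in radians
P = math.cos(a) + math.cos(2 * a)
Q = math.sin(a) + math.sin(2 * a)

# The derived identity should reproduce cos(a) exactly (up to rounding).
assert abs(cos_alpha_from_pq(P, Q) - math.cos(a)) < 1e-12
print("identity holds for a =", a)
```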
{"url":"https://www.askiitians.com/forums/Trigonometry/if-cos-alpha-cos-two-alpha-is-equal-to-p-and-sin_196769.htm","timestamp":"2024-11-07T00:00:02Z","content_type":"text/html","content_length":"183620","record_id":"<urn:uuid:09797acf-5050-41df-8229-036cc5c04fd4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00066.warc.gz"}
Geometry Syllabus

Course Overview

Course Goals and Contents: Geometry is everywhere around us, in both the natural and Anthropocene worlds, and we can gain practical knowledge and personal satisfaction from understanding its rules and operating principles. This course covers the foundational content of plane geometry including points, lines, planes, angles, parallel lines, the theorems for similar and congruent triangles, right triangles, two-column proofs, circles, quadrilaterals, areas, the distance formula and the midpoint formula. Students shall be proficient in the skills listed in the Algebra 1 course description.

Topic Areas:
• Points, lines, planes and angles.
• Theorems about angles and perpendicular lines.
• Parallel lines and planes.
• Properties of triangles: theorems of triangle similarity and congruence (including SSS, SAS, ASA and Hypotenuse-Leg theorems), the CPCTC rule, and the two-column approach to proof.
• Right triangles and their special properties.
• Circles: tangents, arcs and chords; central and inscribed angles.
• Quadrilaterals: properties of parallelograms.
• Areas of plane figures.
• Analytic geometry: distance and midpoint formulas.
• Marine navigation using NOAA charts.
• Elements of trigonometry and their applications.

Typical Class Session:

Review: Homework is assigned at the end of each class session. At the start of the following session the teacher provides a short review and then offers to work through problems the students may not have been able to solve. Homework is collected.

New Instruction: New content is presented with discussion, questions, and whiteboard examples. Students participate in solving additional problems at the whiteboard and at their desks.

Hands-On Activity: One of the following is typically done: mechanical drafting problems, math puzzles, math games, exercises with surveying equipment and other math challenges.
Students may work individually or in groups.

Quiz: A very short quiz provides a quick assessment of the student’s grasp of the new material.

Homework: Homework on the new material is assigned. Students are given a few minutes to begin the assignment.

Grading: 20% for classroom participation, 20% for homework completion, and 60% for quizzes.

Required Materials:
Calculator: Texas Instruments Model TI-30XS
Text: McDougal Littell’s Geometry, by Jurgensen, Brown and Jurgensen. ISBN 978-0-395-97727-9
{"url":"https://www.caerusacademy.com/geometry-syllabus.html","timestamp":"2024-11-01T23:11:49Z","content_type":"text/html","content_length":"33253","record_id":"<urn:uuid:2a34389f-5873-417c-ad58-50b84a014246>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00821.warc.gz"}
Convert micrometers to decimeters (um to dm)

Last Updated: 2024-11-03 18:32:33

Converting micrometers to decimeters is a straightforward task within the metric system. This conversion is particularly useful in fields like engineering, science, and manufacturing, where understanding and translating measurements across different scales is essential.

Historical Origin

Micrometers (µm): A micrometer, or micron, is a metric unit of length, equal to one-millionth of a meter. It's widely used in scientific contexts for measuring very small distances, such as the thickness of thin materials or the size of cells.

Decimeters (dm): A decimeter is another metric unit, representing one-tenth of a meter. Though not as commonly used as millimeters or centimeters, decimeters are useful for measurements that fall between the scale of a meter and smaller units.

Calculation Formula

The formula to convert micrometers to decimeters is:

\[ \text{Decimeters} = \text{Micrometers} \times \text{Conversion Factor} \]

Since there are 100,000 micrometers in a decimeter, the conversion factor is \(1 \times 10^{-5}\) (or 0.00001).

Example Calculation

For instance, to convert 50,000 micrometers to decimeters, the calculation is:

\[ \text{Decimeters} = 50{,}000 \times 0.00001 = 0.5 \text{ dm} \]

Why It's Needed and Use Cases

This type of conversion is essential in various technical and scientific applications, especially when dealing with measurements that span multiple scales, such as designing components in mechanical engineering or understanding biological structures in microbiology.

Common Questions (FAQ)

• Why are different metric units used for different scales? The metric system is designed with a range of units to conveniently express measurements at various scales, from the very small (like micrometers) to the large (like kilometers).

• Is the conversion always a fixed ratio?
Yes, within the metric system, conversions between units are based on fixed ratios, which are powers of 10. This makes the system easy to use and internally consistent.

• Can this conversion be used for very large or very small numbers? Absolutely. The formula applies no matter the size of the number, maintaining its accuracy across all scales.

In summary, converting micrometers to decimeters is a clear demonstration of the versatility and coherence of the metric system, enabling precise and consistent measurements across different scales of magnitude. This conversion is especially vital in fields requiring detailed understanding and manipulation of small-scale dimensions.
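Since 1 dm = 100,000 µm, the conversion is a one-liner; a small Python sketch (the function name is mine):

```python
def um_to_dm(micrometers):
    """Convert micrometers to decimeters: 1 dm = 100,000 um."""
    return micrometers / 100_000

print(um_to_dm(50_000))  # the worked example: 50,000 um → 0.5
print(um_to_dm(1))       # 1 um → 1e-05
```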
{"url":"https://calculator.fans/en/tool/um-to-dm-convertor.html","timestamp":"2024-11-06T17:48:45Z","content_type":"text/html","content_length":"12588","record_id":"<urn:uuid:03de68d6-a3b3-4517-9ccc-666b46a3e2f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00334.warc.gz"}
Ph.D. in Financial Mathematics

Financial Mathematics Program

The last decades have seen sophisticated mathematical techniques move to the center of the finance industry. In the 1980s the main investment banks hired mathematicians, physicists and engineers to become financial engineers. Gradually, the main skills defining this professional category are being clarified, and today many universities all over the world are designing programs to develop modeling and mathematical expertise in financial applications. In our country, the financial sector has enjoyed unparalleled expansion in the last decades, and more and more sophisticated financial instruments are expected to be introduced into the sector in the forthcoming ones. Already there are serious attempts to integrate derivative securities and markets into the Turkish financial system. These developments will lead to a demand for talented people trained in the field of financial mathematics. The Institute of Applied Mathematics is responding to this new trend in the Turkish finance industry by developing an interdisciplinary program that will introduce students to models and mathematical techniques used in option pricing, the pricing of other complex financial products, and some aspects of financial econometrics. The program combines the strengths of the departments of Mathematics, Business Administration, Economics, Statistics and Industrial Engineering of the Middle East Technical University.

Admission Requirements

The admission procedure of these programs will be implemented according to the “Academic Rules and Regulations Concerning Graduate Studies of METU”. However, some programs might have additional admission requirements. University graduates of any discipline willing to acquire expertise in financial mathematics are natural candidates for these programs. These programs are also open to graduates working in the financial and insurance industries.
In general, the applicants will be evaluated based on their success in their graduation fields, their LES (Graduate Education Examination) scores, English proficiency, and the result of a possible examination/interview given by the Institute. The program is equally suitable for students who have just finished their undergraduate education and for practitioners in the financial industry holding a Bachelor's degree. Applicants must have a strong academic background showing good analytical skills. The applicants are expected to have a working knowledge of calculus (including partial differentiation, Taylor series, Riemann-Stieltjes integrals), linear algebra (systems of equations, determinants, diagonalization of symmetric matrices, eigenvalues, etc.), the elementary theory of ordinary and partial differential equations, in addition to some basic knowledge of computer programming. Course work or job experience in probability is also recommended. A promising student lacking prerequisites may be admitted but required to take the summer Mathematics Preparatory Course (MPC) before beginning the program.

Ph.D. Program
• 7 elective courses
• Ph.D. Thesis (non-credit)
Total: 21 credits

Ph.D. Program on B.Sc.
• 5 core courses
• 9 elective courses
• 1 seminar course (non-credit)
• Ph.D. Thesis (non-credit)
Total: 42 credits

Core Courses for Ph.D. on B.Sc.

1. IAM 520 Financial Derivatives (3-0)3

This course is designed to provide a solid foundation in the principles of financial derivatives and risk management. It attempts to strike a balance between institutional details, theoretical foundations, and practical applications. The course equally emphasizes pricing and investment strategies in order to motivate students to start thinking about risk management in financial markets.
Parallel to the already increasing attempts to integrate derivative securities and markets into the Turkish financial system, it is believed that this course will fill a gap, and students will be exposed to a rather comprehensive coverage of theory and application in the derivatives area. This course is expected to give students a “competitive advantage” when they enter the job market, since “derivatives” is a “hot topic” nowadays and BA4825/5825/IAM520 is one of the very few courses offered on this topic in Turkey!

2. IAM 522 Stochastic Calculus for Finance (3-0)3

Discrete-time models: trading strategies, self-financing strategies, admissible strategies, arbitrage, martingales and viable markets, complete markets and option pricing. Optimal stopping problem and American options: stopping time, Snell envelope, American options, European options. Brownian motion and stochastic differential equations: Brownian motion, martingales, the stochastic integral and Itô calculus, the Ornstein-Uhlenbeck process, stochastic differential equations. The Black-Scholes model: the behavior of prices, self-financing strategies, the Girsanov theorem, pricing and hedging of options, hedging of calls and puts, American options, perpetual puts. Option pricing and partial differential equations: European option pricing and diffusions, partial differential equations and computation of expectations, numerical solutions, application to American options. Interest rate models: modelling principles, some classical models. Asset models with jumps: Poisson process, dynamics of risky assets, pricing and hedging of options. Simulation and algorithms for financial models.

3. IAM 524 Financial Economics (3-0)3

Competitive Models with Symmetric Information: Arbitrage and Martingales, Pricing and Hedging Contingent Claims, Consumption and Portfolio Decisions, Walrasian Equilibrium Theory and Term Structure of Interest Rates.
Strategic Models with Asymmetric Information: Market Microstructure: A Critique of the Walrasian Equilibrium: The Informational Role of Prices, The Rational Expectations Equilibrium (REE) Concept and Problems, Noisy REE: Aggregation and Transmission of Information, Trading Constraints and New Problems, Market Structure and Regulation, Asymmetric Information Models of Market Making, Homogeneous Information Models of Market Making, Intraday Transaction Prices and Volumes.

4. IAM 526 Time Series Applied to Finance (3-0)3

This course is concerned with recent developments in time series techniques for the analysis of financial markets. It provides a rigorous account of the time series techniques dealing with univariate and multivariate time series models. The techniques will be illustrated by a number of applications.

5. IAM 541 Probability Theory (3-0)3

The objective of this course is to initiate students to Probability Theory, in which the main tools are those of Measure Theory. The proposed outline constitutes the prerequisites for Stochastic Calculus and other studies in the domain of stochastic processes. The content of the course covers probability spaces, independence, conditional probability, product probability spaces, random variables and their distributions, distribution functions, mathematical expectation (integration with respect to a probability measure), Lp-spaces, moments and generating functions, conditional expectation, linear estimation, Gaussian vectors, various convergence concepts, the central limit theorem and laws of large numbers.
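As a taste of the Black-Scholes pricing covered in IAM 520 and IAM 522, here is a minimal Python sketch (the function and its parameter values are my own illustration, not part of the curriculum):

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Illustrative numbers: at-the-money call, 5% rate, 20% volatility, 1 year.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # → 10.45
```

The standard-normal CDF is built from `math.erf` so the sketch needs nothing beyond the standard library.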
{"url":"https://www.educaedu-turkiye.com/ph-d-in-financial-mathematics-doktora-programlari-943.html","timestamp":"2024-11-14T10:06:47Z","content_type":"text/html","content_length":"87233","record_id":"<urn:uuid:b3a82f1c-59a3-49b3-9143-90c0dfaa3297>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00103.warc.gz"}
Power Analysis for Structural Equation Modeling – Jihong Z. - Play Harder and Learn Harder

Power Analysis for Structural Equation Modeling

Showcase of current software used for power analysis of Structural Equation Modeling

The statistical power of a hypothesis test is the probability of detecting an effect, if there is a true effect to detect. Power, understood as the confidence in the conclusions drawn from the results, plays an important role in determining the minimum sample size (number of observations) required to detect an effect in an experiment. The power analysis of SEM is often ignored in applied research, partly because few guidelines exist for researchers to conduct power analysis in the SEM framework. This tutorial introduces two web tools which can be used to conduct power analysis for Structural Equation Modeling (SEM). An empirical example from my own research on the effect of time on students' academic motivations will also be presented for illustration. For privacy, only short names of the latent constructs are used.

Wang and Rhemtulla (2021) argued that there are two ways of planning sample size in SEM: (1) traditional rules of thumb, and (2) power analysis for SEM based on parameter effect size and model misspecification. To be more specific:

1. Rule-of-thumb-based absolute minimum sample size: N = 100 or N = 200
2. Model-complexity-based relative sample size: N = 5-10 or N = 3-6 per estimated parameter

These two ways of determining minimum sample sizes may not agree with each other and have little theoretical or empirical support. According to the objective of the hypothesis test in SEM, there are two distinct kinds of power: (1) the power of detecting model misspecification, and (2) the power of detecting specific effects within the model (i.e., whether one latent variable predicts another).
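The notion of power can be made concrete with a toy Monte Carlo experiment in Python (a plain regression slope rather than a latent-variable effect; the effect size 0.2 and N = 256 are illustrative only): simulate many datasets under a known effect, test the effect in each, and report the rejection rate.

```python
import random
import statistics

def mc_power(beta=0.2, n=256, reps=1000, crit=1.96, seed=1):
    """Monte Carlo power for detecting a simple regression slope.

    Simulates y = beta*x + e with standard-normal x and e, fits the
    slope by least squares, and counts how often |slope / SE| exceeds
    the two-sided normal critical value. A toy, not a full SEM.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [beta * xi + rng.gauss(0, 1) for xi in x]
        mx, my = statistics.fmean(x), statistics.fmean(y)
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        resid = [(yi - my) - b * (xi - mx) for xi, yi in zip(x, y)]
        se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
        if abs(b / se) > crit:
            hits += 1
    return hits / reps

power = mc_power()
print(f"estimated power for beta=0.2, N=256: {power:.2f}")
```

With these settings the analytic approximation puts the power near 0.89; the simulation should land close to that.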
In this tutorial, I call the former global power and the latter local power, since global power analyzes the model-level effect size of misfit while local power tests the effect size of a particular model parameter.

1 Global Power

As mentioned above, global power quantifies the degree of confidence in the hypothesis test of model misspecification. There are many fit indices on which power analysis for model misspecification can be based, including:

1. Satorra and Saris's (1985) χ² likelihood-ratio test
2. MacCallum et al.'s (1996) root-mean-square error of approximation (RMSEA) tests

For the first test, I recommend using WebPower, a free web tool, to calculate global power based on the chi-square likelihood-ratio test (LRT). The test examines misspecification by comparing the user model to the null model, under the hypothesis that the hypothesized model is the same as the null model. In most cases this hypothesis will be rejected. WebPower tests the global power of SEM based on the chi-square likelihood-ratio test. Inputting the chi-square statistic of 480.82 and varying the sample size from N = 1 to N = 300, we found that N = 5 is already enough for power up to 1. Thus, an SEM chi-square LRT against the null model normally sets a very low bar for detecting misspecification. In this case, perhaps doubling the sample size (N × 2 = 261 × 2 = 522) is a more reasonable rule of thumb for the minimum sample size.

2 Local Power

Wang and Rhemtulla (2021) recommend a Monte Carlo simulation approach to calculate power for detecting a target effect in SEM and introduced pwrSEM, a Shiny web app that estimates the power of a parameter in SEM.

Wang, Y. Andre, and Mijke Rhemtulla. 2021. “Power Analysis for Parameter Estimation in Structural Equation Modeling: A Discussion and Tutorial.” Advances in Methods and Practices in Psychological Science 4 (1): 2515245920918253.

Let's take a mixture confirmatory factor analysis with known classes (time = 0, time = 1) as an example.
The goal is to estimate the power of the time effects on five latent constructs with sample size N = 256. The model has been fitted to the data using the lavaan package beforehand. After opening the Shiny app, we can copy and paste the model syntax from lavaan into the app. The model specification is then visualized as follows:

Within this example model, InR, IdR, IGR, IER, and ER are five factors measured twice. Time is an indicator variable where time = 0 represents the pre-test and time = 1 represents the post-test. The next step is to input the model syntax and the corresponding parameter estimates generated by lavaan. Then, check the effects to be examined; in this case, the five time effects on ER, IER, IGR, IdR, and InR. The local powers of the five time effects on the latent factors are as follows:

We can vary the sample size to test how many observations are required for the target power of InR regressed on Time (for example, if the power target for the regression coefficient is .9, we slowly increase the sample size from 300 to 500). I tried two sample sizes in this case:

As the figures show, based on the Monte Carlo simulation, a sample size of N = 300 is expected to bring the power of the time effect on InR (InR ~ time) to 0.81, while N = 500 increases it to 0.9, which is enough power for a single regression coefficient. However, even with such a moderately large sample size, the other time effects on ER, IER, IGR, and IdR (e.g., ER ~ time) still stay at a low level because of their smaller estimated effect sizes. Thus, from this experiment we can conclude that to bring the power of the time effect on the latent variable InR up to 0.9, at least around 300-500 samples are required.

3 Conclusion

To summarize, in this post I illustrated two types of power, (1) local power and (2) global power, and how they can be calculated using pwrSEM and WebPower, respectively. It should be noted that these are not the only software options on the market.
Another newly published option is the semPower R package (see Jobst, Bader, and Moshagen 2021 for details), which can also provide local power based on RMSEA.

Jobst, Lisa J., Martina Bader, and Morten Moshagen. 2021. “A Tutorial on Assessing Statistical Power and Determining Sample Size for Structural Equation Models.” Psychological Methods.

Local power determines the minimum sample size for one specific effect in SEM. Global power determines the minimum sample size needed for testing model misspecification. Global power will give different results depending on the model misfit index chosen. As for local power, it is important to decide which effect in the model needs to be examined.

4 Discussion

There are few guidelines about how and what to report for local power and global power in the SEM literature. Global power seems a natural thing to report for every SEM paper, since all SEM research needs to deal with model misspecification before drawing any conclusion. Local power, on the other hand, is necessary when one effect within the SEM model is of central interest to the study. Some may argue that power in SEM is not important as long as the model fit is acceptable. However, acceptable model-data fit cannot guarantee that the sample size meets the requirement for detecting a specific effect. In other words, there may be an elevated false-positive rate when reporting significance tests in small-sample settings. More research is needed to investigate the relationship between power and local misfit indices to understand this problem. Please let me know your thoughts in the comments below.

BibTeX citation:
author = {Zhang, Jihong},
title = {Power {Analysis} for {Structural} {Equation} {Modeling}},
date = {2022-04-29},
url = {https://www.jihongzhang.org/posts/2022-04-29-power-analysis-for-sem/},
langid = {en}

For attribution, please cite this work as: Zhang, Jihong. 2022.
“Power Analysis for Structural Equation Modeling.” April 29, 2022.
{"url":"https://jihongzhang.org/posts/2022-04-29-power-analysis-for-sem/","timestamp":"2024-11-02T14:31:14Z","content_type":"application/xhtml+xml","content_length":"89590","record_id":"<urn:uuid:cc3fb78e-cc4b-4759-bc79-c13b132749fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00133.warc.gz"}
Nobel prize for physics: the full list of winners

The Nobel Prize in physics has been awarded 115 times and there are 219 Nobel Prize laureates. John Bardeen is the only laureate to receive the honour twice, in 1956 and 1972. The oldest physics laureate is Arthur Ashkin — 96 years old when he won in 2018 for his work in “optical tweezers and their application to biological systems”. The youngest is Lawrence Bragg, who was 25 years old when he was awarded the Nobel Prize with his father in 1915 — “for their services in the analysis of crystal structure by means of X-rays”. There have been three other father-son winners and one married couple — Pierre and Marie Curie — in 1903.

2021 “for groundbreaking contributions to our understanding of complex systems”
Syukuro Manabe and Klaus Hasselmann “for the physical modelling of Earth’s climate, quantifying variability and reliably predicting global warming”
Giorgio Parisi “for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales”

2020
Roger Penrose “for the discovery that black hole formation is a robust prediction of the general theory of relativity”
Reinhard Genzel and Andrea Ghez “for the discovery of a supermassive compact object at the centre of our galaxy”

2019 “for contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos”
James Peebles “for theoretical discoveries in physical cosmology”
Michel Mayor and Didier Queloz “for the discovery of an exoplanet orbiting a solar-type star”

2018 “for groundbreaking inventions in the field of laser physics”
Arthur Ashkin “for the optical tweezers and their application to biological systems”
Gérard Mourou and Donna Strickland “for their method of generating high-intensity, ultra-short optical pulses”

2017
Rainer Weiss, Barry C. Barish and Kip S. Thorne “for decisive contributions to the LIGO detector and the observation of gravitational waves”

2016
David J. Thouless, F. Duncan M. Haldane and J. Michael Kosterlitz “for theoretical discoveries of topological phase transitions and topological phases of matter”

2015
Takaaki Kajita and Arthur B. McDonald “for the discovery of neutrino oscillations, which shows that neutrinos have mass”

2014
Isamu Akasaki, Hiroshi Amano and Shuji Nakamura “for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”

2013
François Englert and Peter W. Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”

2012
Serge Haroche and David J. Wineland “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems”

2011
Saul Perlmutter, Brian P. Schmidt and Adam G. Riess “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”

2010
Andre Geim and Konstantin Novoselov “for groundbreaking experiments regarding the two-dimensional material graphene”

2009
Charles Kuen Kao “for groundbreaking achievements concerning the transmission of light in fibres for optical communication”
Willard S. Boyle and George E. Smith “for the invention of an imaging semiconductor circuit — the CCD sensor”

2008
Yoichiro Nambu “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”
Makoto Kobayashi and Toshihide Maskawa “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”

2007
Albert Fert and Peter Grünberg “for the discovery of Giant Magnetoresistance”

2006
John C. Mather and George F. Smoot “for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation”

2005
Roy J. Glauber “for his contribution to the quantum theory of optical coherence”
John L. Hall and Theodor W. Hänsch “for their contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique”

2004
David J. Gross, H. David Politzer and Frank Wilczek “for the discovery of asymptotic freedom in the theory of the strong interaction”

2003
Alexei A. Abrikosov, Vitaly L. Ginzburg and Anthony J. Leggett “for pioneering contributions to the theory of superconductors and superfluids”

2002
Raymond Davis junior and Masatoshi Koshiba “for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos”
Riccardo Giacconi “for pioneering contributions to astrophysics, which have led to the discovery of cosmic X-ray sources”

2001
Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman “for the achievement of Bose-Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates”

2000 “for basic work on information and communication technology”
Zhores I. Alferov and Herbert Kroemer “for developing semiconductor heterostructures used in high-speed- and optoelectronics”
Jack S. Kilby “for his part in the invention of the integrated circuit”

1999
Gerardus ’t Hooft and Martinus J.G. Veltman “for elucidating the quantum structure of electroweak interactions in physics”

1998
Robert B. Laughlin, Horst L. Störmer and Daniel C. Tsui “for their discovery of a new form of quantum fluid with fractionally charged excitations”

1997
Steven Chu, Claude Cohen-Tannoudji and William D. Phillips “for development of methods to cool and trap atoms with laser light”

1996
David M. Lee, Douglas D. Osheroff and Robert C. Richardson “for their discovery of superfluidity in helium-3”

1995 “for pioneering experimental contributions to lepton physics”
Martin L. Perl “for the discovery of the tau lepton”
Frederick Reines “for the detection of the neutrino”

1994 “for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter”
Bertram N. Brockhouse “for the development of neutron spectroscopy”
Clifford G. Shull “for the development of the neutron diffraction technique”

1993
Russell A. Hulse and Joseph H. Taylor junior “for the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation”

1992
Georges Charpak “for his invention and development of particle detectors, in particular the multiwire proportional chamber”

1991
Pierre-Gilles de Gennes “for discovering that methods developed for studying order phenomena in simple systems can be generalised to more complex forms of matter, in particular to liquid crystals and polymers”

1990
Jerome I. Friedman, Henry W. Kendall and Richard E. Taylor “for their pioneering investigations concerning deep inelastic scattering of electrons on protons and bound neutrons, which have been of essential importance for the development of the quark model in particle physics”

1989
Norman F. Ramsey “for the invention of the separated oscillatory fields method and its use in the hydrogen maser and other atomic clocks”
Hans G. Dehmelt and Wolfgang Paul “for the development of the ion trap technique”

1988
Leon M. Lederman, Melvin Schwartz and Jack Steinberger “for the neutrino beam method and the demonstration of the doublet structure of the leptons through the discovery of the muon neutrino”

1987
J. Georg Bednorz and K. Alexander Müller “for their important breakthrough in the discovery of superconductivity in ceramic materials”

1986
Ernst Ruska “for his fundamental work in electron optics, and for the design of the first electron microscope”
Gerd Binnig and Heinrich Rohrer “for their design of the scanning tunnelling microscope”

1985
Klaus von Klitzing “for the discovery of the quantised Hall effect”

1984
Carlo Rubbia and Simon van der Meer “for their decisive contributions to the large project, which led to the discovery of the field particles W and Z, communicators of weak interaction”

1983
Subramanyan Chandrasekhar “for his theoretical studies of the physical processes of importance to the structure and evolution of the stars”
William Alfred Fowler “for his theoretical and experimental studies of the nuclear reactions of importance in the formation of the chemical elements in the universe”

1982
Kenneth G. Wilson “for his theory for critical phenomena in connection with phase transitions”

1981
Nicolaas Bloembergen and Arthur Leonard Schawlow “for their contribution to the development of laser spectroscopy”
Kai M.
Siegbahn “for his contribution to the development of high-resolution electron spectroscopy” James Watson Cronin and Val Logsdon Fitch “for the discovery of violations of fundamental symmetry principles in the decay of neutral K-mesons” Sheldon Lee Glashow, Abdus Salam and Steven Weinberg “for their contributions to the theory of the unified weak and electromagnetic interaction between elementary particles, including, inter alia, the prediction of the weak neutral current” Pyotr Leonidovich Kapitsa “for his basic inventions and discoveries in the area of low-temperature physics” Arno Allan Penzias and Robert Woodrow Wilson “for their discovery of cosmic microwave background radiation” Philip Warren Anderson, Sir Nevill Francis Mott and John Hasbrouck Van Vleck “for their fundamental theoretical investigations of the electronic structure of magnetic and disordered systems” Burton Richter and Samuel Chao Chung Ting “for their pioneering work in the discovery of a heavy elementary particle of a new kind” Aage Niels Bohr, Ben Roy Mottelson and Leo James Rainwater “for the discovery of the connection between collective motion and particle motion in atomic nuclei and the development of the theory of the structure of the atomic nucleus based on this connection” Sir Martin Ryle and Antony Hewish “for their pioneering research in radio astrophysics: Ryle for his observations and inventions, in particular of the aperture synthesis technique, and Hewish for his decisive role in the discovery of pulsars” Leo Esaki and Ivar Giaever “for their experimental discoveries regarding tunnelling phenomena in semiconductors and superconductors, respectively” Brian David Josephson “for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects” John Bardeen, Leon Neil Cooper and John Robert Schrieffer “for their jointly developed theory of superconductivity, usually called the 
BCS-theory” Dennis Gabor “for his invention and development of the holographic method” Hannes Olof Gösta Alfvén “for fundamental work and discoveries in magnetohydrodynamics with fruitful applications in different parts of plasma physics” Louis Eugène Félix Néel “for fundamental work and discoveries concerning antiferromagnetism and ferrimagnetism which have led to important applications in solid state physics” Murray Gell-Mann “for his contributions and discoveries concerning the classification of elementary particles and their interactions” Luis Walter Alvarez “for his decisive contributions to elementary particle physics, in particular the discovery of a large number of resonance states, made possible through his development of the technique of using hydrogen bubble chamber and data analysis” Hans Albrecht Bethe “for his contributions to the theory of nuclear reactions, especially his discoveries concerning the energy production in stars” Alfred Kastler “for the discovery and development of optical methods for studying Hertzian resonances in atoms” Sin-Itiro Tomonaga, Julian Schwinger and Richard P. Feynman “for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles” Charles Hard Townes, Nicolay Gennadiyevich Basov and Aleksandr Mikhailovich Prokhorov “for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser-laser principle” Eugene Paul Wigner “for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles” Maria Goeppert Mayer and J. Hans D. 
Jensen “for their discoveries concerning nuclear shell structure” Lev Davidovich Landau “for his pioneering theories for condensed matter, especially liquid helium” Robert Hofstadter “for his pioneering studies of electron scattering in atomic nuclei and for his thereby achieved discoveries concerning the structure of the nucleons” Rudolf Ludwig Mössbauer “for his researches concerning the resonance absorption of gamma radiation and his discovery in this connection of the effect which bears his name” Donald Arthur Glaser “for the invention of the bubble chamber” Emilio Gino Segrè and Owen Chamberlain “for their discovery of the antiproton” Pavel Alekseyevich Cherenkov, Il’ja Mikhailovich Frank and Igor Yevgenyevich Tamm “for the discovery and the interpretation of the Cherenkov effect” Chen Ning Yang and Tsung-Dao (T.D.) Lee “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles” William Bradford Shockley, John Bardeen and Walter Houser Brattain “for their researches on semiconductors and their discovery of the transistor effect” Willis Eugene Lamb “for his discoveries concerning the fine structure of the hydrogen spectrum” Polycarp Kusch “for his precision determination of the magnetic moment of the electron” Max Born “for his fundamental research in quantum mechanics, especially for his statistical interpretation of the wave function” Waltzer Bothe “for the coincidence method and his discoveries made therewith” Frits Zernike” for his demonstration of the phase contrast method, especially for his invention of the phase contrast microscope” Felix Bloch and Edward Mills Purcell “for their development of new methods for nuclear magnetic precision measurements and discoveries in connection therewith” Sir John Douglas Cockcroft and Ernest Thomas Sinton Walton “for their pioneer work on the transmutation of atomic nuclei by artificially accelerated atomic particles” Cecil Frank Powell 
“for his development of the photographic method of studying nuclear processes and his discoveries regarding mesons made with this method” Hideki Yukawa “for his prediction of the existence of mesons on the basis of theoretical work on nuclear forces” Patrick Maynard Stuart Blackett “for his development of the Wilson cloud chamber method, and his discoveries therewith in the fields of nuclear physics and cosmic radiation” Sir Edward Victor Appleton “for his investigations of the physics of the upper atmosphere especially for the discovery of the so-called Appleton layer” Percy Williams Bridgman “for the invention of an apparatus to produce extremely high pressures, and for the discoveries he made therewith in the field of high pressure physics” Wolfgang Pauli “for the discovery of the Exclusion Principle, also called the Pauli Principle” Isidore Isaac Rabi “for his resonance method for recording the magnetic properties of atomic nuclei” Otto Stern “for his contribution to the development of the molecular ray method and his discovery of the magnetic moment of the proton” No Nobel Prize was awarded this year. The prize money was with 1/3 allocated to the Main Fund and with 2/3 to the Special Fund of this prize section. No Nobel Prize was awarded this year. The prize money was with 1/3 allocated to the Main Fund and with 2/3 to the Special Fund of this prize section. No Nobel Prize was awarded this year. The prize money was with 1/3 allocated to the Main Fund and with 2/3 to the Special Fund of this prize section. 
Ernest Orlando Lawrence “for the invention and development of the cyclotron and for results obtained with it, especially with regard to artificial radioactive elements”
Enrico Fermi “for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons”
Clinton Joseph Davisson and George Paget Thomson “for their experimental discovery of the diffraction of electrons by crystals”
Victor Franz Hess “for his discovery of cosmic radiation”
Carl David Anderson “for his discovery of the positron”
James Chadwick “for the discovery of the neutron”
No Nobel Prize was awarded this year. The prize money was with 1/3 allocated to the Main Fund and with 2/3 to the Special Fund of this prize section.
Erwin Schrödinger and Paul Adrien Maurice Dirac “for the discovery of new productive forms of atomic theory”
Werner Karl Heisenberg “for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen”
No Nobel Prize was awarded this year. The prize money was allocated to the Special Fund of this prize section.
Sir Chandrasekhara Venkata Raman “for his work on the scattering of light and for the discovery of the effect named after him”
Prince Louis-Victor Pierre Raymond de Broglie “for his discovery of the wave nature of electrons”
Owen Willans Richardson “for his work on the thermionic phenomenon and especially for the discovery of the law named after him”
Arthur Holly Compton “for his discovery of the effect named after him”
Charles Thomson Rees Wilson “for his method of making the paths of electrically charged particles visible by condensation of vapour”
Jean Baptiste Perrin “for his work on the discontinuous structure of matter, and especially for his discovery of sedimentation equilibrium”
James Franck and Gustav Ludwig Hertz “for their discovery of the laws governing the impact of an electron upon an atom”
Karl Manne Georg Siegbahn “for his discoveries and research in the field of X-ray spectroscopy”
Robert Andrews Millikan “for his work on the elementary charge of electricity and on the photoelectric effect”
Niels Henrik David Bohr “for his services in the investigation of the structure of atoms and of the radiation emanating from them”
Albert Einstein “for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect”
Charles Edouard Guillaume “in recognition of the service he has rendered to precision measurements in Physics by his discovery of anomalies in nickel steel alloys”
Johannes Stark “for his discovery of the Doppler effect in canal rays and the splitting of spectral lines in electric fields”
Max Karl Ernst Ludwig Planck “in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta”
Charles Glover Barkla “for his discovery of the characteristic Röntgen radiation of the elements”
No Nobel Prize was awarded this year. The prize money was allocated to the Special Fund of this prize section.
Sir William Henry Bragg and William Lawrence Bragg “for their services in the analysis of crystal structure by means of X-rays”
Max von Laue “for his discovery of the diffraction of X-rays by crystals”
Heike Kamerlingh Onnes “for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium”
Nils Gustaf Dalén “for his invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys”
Wilhelm Wien “for his discoveries regarding the laws governing the radiation of heat”
Johannes Diderik van der Waals “for his work on the equation of state for gases and liquids”
Guglielmo Marconi and Karl Ferdinand Braun “in recognition of their contributions to the development of wireless telegraphy”
Gabriel Lippmann “for his method of reproducing colours photographically based on the phenomenon of interference”
Albert Abraham Michelson “for his optical precision instruments and the spectroscopic and metrological investigations carried out with their aid”
Joseph John Thomson “in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases”
Philipp Eduard Anton von Lenard “for his work on cathode rays”
Lord Rayleigh (John William Strutt) “for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies”
Antoine Henri Becquerel “in recognition of the extraordinary services he has rendered by his discovery of spontaneous radioactivity”
Pierre Curie and Marie Curie, née Sklodowska “in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel”
Hendrik Antoon Lorentz and Pieter Zeeman “in recognition of the extraordinary service they rendered by their researches into the influence of magnetism upon radiation phenomena”
Wilhelm Conrad Röntgen “in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him”
Prediction of non-exercise activity thermogenesis (NEAT) using multiple linear regression in healthy Korean adults: a preliminary study

In modern populations, intake of high-calorie foods has increased, nutrient intake has become imbalanced, and energy expenditure (EE) has decreased owing to a lack of physical activity and exercise [ ]. Recently, due to social distancing during the novel coronavirus disease (COVID-19) pandemic, physical activity has decreased further, and sedentary behaviors such as television viewing [ ], computer use, and time spent reclining or lying down have increased. Studies report that decreased physical activity is not only associated with obesity [ ], cardiovascular disease, and mortality but also contributes to conditions such as cardiovascular disease, hypertension, diabetes, and cancer [ ].

To prevent obesity and promote health, total EE (TEE) must be increased. TEE is primarily composed of resting EE (REE), diet-induced thermogenesis (DIT), and physical activity-induced EE (AEE) [ ]. REE is the minimum metabolic activity necessary to sustain life, accounts for approximately 60% of TEE, and is strongly related to body size, particularly lean mass. DIT, which arises from digestion, absorption, and autonomic nervous activity following conversion of food to intermediate metabolites, accounts for about 10% of TEE and does not vary significantly between individuals. Finally, AEE can be subdivided into exercise-related activity thermogenesis (EAT) and non-exercise activity thermogenesis (NEAT), and accounts for approximately 30% or more of TEE [ ].

REE and DIT are relatively constant, whereas AEE is highly variable. In general, EAT has most commonly been used to increase TEE, but NEAT has recently been shown to be as effective at energy consumption as exercise [ ].
NEAT includes the EE of all physical activities except voluntary activities such as exercise, and it arises in many activities related to work and leisure [ ]. In previous studies, NEAT accounted for 6-10% of TEE in sedentary people but more than 50% in active people, and physical activities such as shaking the legs, cleaning, walking, and climbing stairs consumed 20% more energy than rest [ ]. NEAT can thus increase the EE of daily activities and is considered a good way to improve health. In addition, the need for NEAT has grown during the COVID-19 pandemic, when external activities are limited, making it essential to measure and evaluate NEAT accurately.

The first method to determine NEAT is to measure and evaluate an individual’s physical activity, amount of regular exercise, and occupational activity intensity [ ]. This method requires no special measuring equipment but is known to be inaccurate because of individual variation. The second method is to estimate the amount of activity by recording the intensity, duration, and frequency of physical activity with an accelerometer [ ]. It is convenient in the field but does not accurately reflect EE in special situations, such as walking while carrying an object or walking uphill. The third method measures EE by analyzing oxygen and carbon dioxide concentrations with a breathing gas analyzer [ ]. Although accurate, it is costly and time-consuming and therefore unsuitable for large-scale research. Lastly, the most standardized method of measuring EE is the isotope (doubly labeled water) method, in which labeled water is ingested and EE is calculated from the rate at which the isotopes are eliminated from the body over time [ ]. However, it cannot consistently resolve the intensity and frequency of individual activities. As described above, there are various methods for analyzing NEAT.
However, most of these values are obtained through laboratory measurements, making them difficult to apply in the field [ ]. It is therefore vital to develop a simple and accurate EE estimation equation covering various NEAT activities. Some studies have developed regression models using height, age, weight, and heart rate (HR) to estimate EE during exercise and rest, but almost no study, domestic or international, has presented a regression model for estimating EE across various NEAT activities. It is therefore essential to develop a regression model using multiple independent variables for NEAT. Thus, this study provides a regression model for predicting NEAT EE, derived by measuring EE during various NEAT activities in Korean adults (male and female), that is easy to apply in public health programs and in the field.

A total of 71 healthy adults (male = 29; female = 42) were included in the present study ( Table 1 ). Subjects who met one or more of the following exclusion criteria were not eligible to participate: unstable angina, recent cardiac infarction (within 4 weeks), uncompensated heart failure, severe valvular illness, pulmonary disease, uncontrolled hypertension, kidney failure, orthopedic/neurological limitations, cardiomyopathy, planned surgery during the research period, reluctance to sign the consent form, drug or alcohol abuse, or involvement in another study. Pre-screening surveys also confirmed no history of orthopedic disease or other medical issues over the past year. All subjects were fully acquainted with the nature of the study and informed of the experimental risks before signing a written consent form, and it was explicitly stated that they could withdraw from the study at any point. The researchers fully explained the study beforehand and received voluntary consent.
All study procedures were approved by the Institutional Review Board of Konkuk University (7001355-201903-HR-305) in Korea and were conducted in accordance with the Declaration of Helsinki.

Experimental design
All subjects were required to avoid strenuous exercise for 48 h and arrive at the laboratory early in the morning (8:00 AM) after overnight fasting (≥ 8 h), then rested for 30 min. Next, body composition, blood pressure, and resting HR were measured, followed by a standardized breakfast (2 pieces of bread (200 kcal), 1 boiled egg (80 kcal), 1 cup of orange juice (120 kcal), and 1 cup of water). The subjects rested comfortably after breakfast and participated in the experiment 2 h later.

Body composition
Body height, body mass index (BMI), body weight, fat-free mass (FFM), fat mass, and percent body fat were measured using bioelectrical impedance analysis equipment (InBody 770, InBody, Seoul, Korea). The participants wore light clothing, removed metal items, and stood upright and barefoot on the machine platform, with their feet on the platform electrodes and their hands gripping the handle electrodes.

Blood pressure and resting HR
After all subjects had rested for more than 30 min, blood pressure (systolic blood pressure [SBP] and diastolic blood pressure [DBP]) was measured twice using an automatic blood pressure monitor (HBP9020, Omron, Tokyo, Japan), and the average value was used for analysis. The blood pressure parameters were SBP, DBP, pulse pressure (PP = SBP - DBP), mean arterial pressure (MAP = DBP + PP/3), and rate pressure product (RPP = SBP × HR). Resting HR was measured using an HR monitor (V800, Polar, Helsinki, Finland).

NEAT measurement
NEAT was measured by indirect calorimetry using a wearable metabolic gas analyzer (K5, Cosmed, Rome, Italy). Calibration was performed using calibration gas (16% O2 and 5% CO2) before the measurements.
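The derived blood-pressure parameters above follow directly from their definitions (PP = SBP - DBP; MAP = DBP + PP/3; RPP = SBP × HR). A minimal sketch with illustrative values (not study data):

```python
def derived_bp(sbp: float, dbp: float, hr: float) -> dict:
    """Derived blood-pressure parameters as defined in the study."""
    pp = sbp - dbp        # pulse pressure (mmHg)
    map_ = dbp + pp / 3   # mean arterial pressure (mmHg)
    rpp = sbp * hr        # rate pressure product (mmHg x bpm)
    return {"PP": pp, "MAP": map_, "RPP": rpp}

# Example with typical resting values: SBP 120 mmHg, DBP 80 mmHg, HR 60 bpm
print(derived_bp(120, 80, 60))
```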
The measurement room had controlled humidity (50%) and temperature (23 ± 1 °C). Sitting, leg jiggling, standing, and walking were performed for 10 min each; walking was performed on a treadmill (S25T, STEX, Seoul, Korea) at speeds of 4.5 km/h and 6.0 km/h. Stair climbing was performed with the gas analyzer on a stepmill with a stair height of 20 cm (StairMaster Gauntlet, Core Health and Fitness, Washington, D.C.), and climbing up one stair at a time and climbing up two stairs at a time were each carried out for 1 min. After the measurement of each item was completed, sufficient rest was provided, and the next measurement was started once energy metabolism had returned to the resting level [ ].

Statistical analysis
Means and standard deviations were calculated for all measured parameters. The Shapiro-Wilk test verified the normal distribution of all outcome variables. To perform the linear regression analysis, we verified the independent variables by checking the regression coefficients (β-values). Regression analysis using the stepwise method was used to predict NEAT based on sex, age, height, weight, BMI, FFM, fat mass, percent body fat, SBP, DBP, MAP, PP, RPP, HR_rest, HR_average, and HR_sum. A two-tailed Student’s paired t-test was used to detect differences between the measured and predicted NEAT, and bias was calculated as the difference between the measured and predicted NEAT values. The authors rigorously checked the basic assumptions of a regression model (linearity, independence, continuity, normality, homoscedasticity, autocorrelation, and outliers). Statistical Package for the Social Sciences (SPSS) version 25.0 (IBM Corporation, Armonk, NY, USA) was used for the statistical analysis, and the level of significance (p-value) was set at 0.05.

Correlation between dependent variables and measured NEAT
To delete outlier data, observations with an absolute standardized residual ≥ 3 were removed, and a stepwise method was used to estimate the regression model.
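The study used SPSS’s stepwise procedure. As a rough illustration of the idea only, the sketch below runs a greedy forward selection on synthetic data; the variable names, effect sizes, and stopping rule here are invented for illustration and are not the study’s data or SPSS’s exact criteria:

```python
import numpy as np

def fit_r2(X, y):
    """Least-squares fit with intercept; returns R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedily add the predictor that raises R^2 most; stop when the
    gain falls below min_gain (a simplified stand-in for SPSS's F-based rule)."""
    chosen, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((fit_r2(X[:, chosen + [j]], y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        chosen.append(j)
        remaining.remove(j)
        best_r2 = r2
    return [names[j] for j in chosen], best_r2

# Synthetic sample of n = 71 (the study's sample size), with EE driven
# mainly by age and the weight x HR_average interaction.
rng = np.random.default_rng(0)
n = 71
age = rng.uniform(20, 60, n)
weight = rng.uniform(50, 90, n)
hr_avg = rng.uniform(70, 120, n)
ee = 1.4 - 0.013 * age + 0.00014 * weight * hr_avg + rng.normal(0, 0.05, n)

X = np.column_stack([age, weight, hr_avg, weight * hr_avg])
names = ["age", "weight", "HR_average", "weight*HR_average"]
sel, r2 = forward_stepwise(X, ee, names)
print(sel, round(r2, 3))
```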
To use only the variables with a large influence on each NEAT item in the regression model, gender, age, height, weight, BMI, FFM, fat mass, percent body fat, SBP, DBP, MAP, PP, RPP, HR_rest, HR_average, HR_sum, and various HR-related interactions were used as candidate variables to explain NEAT. The correlations between the measured NEAT and the dependent variables are presented in Table 2 .

Significance of regression models and the independent variables
We verified the significance of each model using the F-test and used a t-test to verify the significance of the regression coefficients of the independent variables. The results of the regression analysis for estimating the NEAT of each motion, based on the exploratory data analysis, are shown in Table 3 . The regression coefficients of the selected independent variables (age, weight, HR_average, weight × HR_average, weight × HR_sum, SBP × HR_rest, fat mass ÷ height^2, gender × HR_average, and gender × weight × HR_sum) for each motion were statistically significant when the integrated regression model was developed using the stepwise method.

Performance evaluation of regression models and regression equations
The coefficients of determination (R^2), adjusted coefficients of determination (adjusted R^2), and standard errors of estimates (SEE) were calculated for each regression model. The sitting EE regression model, developed from age, weight × HR_average, SBP × HR_rest, and gender × HR_average, had an explanatory power of 58.4% (R^2) and 55.9% (adjusted R^2), with an SEE of 0.32 kcal/min. The leg jiggling EE regression model, developed from age, weight, and gender × HR_average, had an explanatory power of 56.1% (R^2) and 54.2% (adjusted R^2), with an SEE of 0.34 kcal/min. The standing EE regression model, developed from age and gender × weight × HR_sum, had an explanatory power of 59.4% (R^2) and 58.2% (adjusted R^2), with an SEE of 0.32 kcal/min.
The 4.5 km/h walking EE regression model, developed from weight and weight × HR_sum, had an explanatory power of 70.2% (R^2) and 69.3% (adjusted R^2), with an SEE of 0.52 kcal/min. The 6.0 km/h walking EE regression model, developed from HR_average and weight × HR_average, had an explanatory power of 76.5% (R^2) and 75.8% (adjusted R^2), with an SEE of 0.62 kcal/min. The climbing-up-1-stair EE regression model, developed from age, weight × HR_average, SBP × HR_rest, and fat mass ÷ height^2, had an explanatory power of 56.4% (R^2) and 53.7% (adjusted R^2), with an SEE of 0.59 kcal/min. The climbing-up-2-stairs EE regression model, developed from age, SBP × HR_rest, fat mass ÷ height^2, and weight × HR_sum, had an explanatory power of 60.8% (R^2) and 58.5% (adjusted R^2), with an SEE of 0.74 kcal/min ( Table 4 ).

Difference between measured and predicted NEAT of Korean adults
In the present study, there was no significant difference between NEAT for each motion measured using the metabolic gas analyzer and NEAT for each motion predicted by the equations. The mean biases between the measured and predicted NEAT were: sitting = 0.003 kcal/min; leg jiggling = 0.004 kcal/min; standing = 0.003 kcal/min; 4.5 km/h walking = -0.005 kcal/min; 6.0 km/h walking = 0.003 kcal/min; climbing up 1 stair = 0.007 kcal/min; and climbing up 2 stairs = 0.004 kcal/min ( Table 5 ). The measured and predicted NEAT showed similar average values, and their correlation coefficients were significant (sitting: R = 0.764, p = 0.000; leg jiggling: R = 0.749, p = 0.000; standing: R = 0.771, p = 0.000; 4.5 km/h walking: R = 0.838, p = 0.000; 6.0 km/h walking: R = 0.874, p = 0.000; climbing up 1 stair: R = 0.751, p = 0.000; and climbing up 2 stairs: R = 0.780, p = 0.000).
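The agreement analysis used here (mean bias, two-tailed paired t-test, and Pearson correlation between measured and predicted values) can be reproduced with standard tools. The sketch below uses synthetic stand-in data, since the study’s raw measurements are not available:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured vs. predicted NEAT (kcal/min) of one motion.
rng = np.random.default_rng(1)
measured = rng.normal(1.5, 0.4, size=40)
predicted = measured + rng.normal(0.0, 0.25, size=40)  # imperfect predictions

bias = float(np.mean(measured - predicted))           # mean bias, as in the study
t_stat, p_val = stats.ttest_rel(measured, predicted)  # two-tailed paired t-test
r, _ = stats.pearsonr(measured, predicted)            # measured-predicted correlation
print(f"bias={bias:.3f} kcal/min, p={p_val:.3f}, R={r:.3f}")
```

A non-significant paired t-test (p above 0.05) together with a small bias and a high R is what the study reports for each motion.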
Oxygen consumption (VO2) is considered the most accurate variable for measuring the EE of physical activity and can be measured directly in the laboratory using a metabolic cart or respiratory gas analyzer. Portable devices are available for field measurements, but only for a limited period of time and with a limited number of subjects. Therefore, efforts are being made to find more feasible ways to estimate VO2 in field studies [ ]. In particular, it has been reported that individual characteristics such as age, sex, and weight should be considered, and the easily measurable HR is widely used to estimate VO2 [ ]. Most studies using HR have produced regression models that estimate the EE of exercise; no regression model estimating the EE of NEAT from HR has been reported. Therefore, we suggest a way to estimate NEAT EE using HR. In this work, a preliminary study was conducted to develop regression models for estimating the EE of various NEAT activities in Korean adults using easily measured variables. Based on the collected data, our study developed a regression model of NEAT for each motion:

sitting EE = 1.431 - 0.013 × age + 0.00014 × (weight × HR_average) - 0.00005 × (SBP × HR_rest) + 0.006 × (gender × HR_average);
leg jiggling EE = 1.102 - 0.011 × age + 0.013 × weight + 0.005 × (gender × HR_average);
standing EE = 1.713 - 0.013 × age + 0.0000017 × (gender × weight × HR_sum);
4.5 km/h walking EE = 0.864 + 0.035 × weight + 0.0000041 × (weight × HR_sum);
6.0 km/h walking EE = 4.029 - 0.024 × HR_average + 0.00071 × (weight × HR_average);
climbing up 1 stair EE = 1.308 - 0.016 × age + 0.00035 × (weight × HR_average) - 0.000085 × (SBP × HR_rest) - 0.098 × (fat mass ÷ height^2);
climbing up 2 stairs EE = 1.442 - 0.023 × age - 0.000093 × (SBP × HR_rest) - 0.121 × (fat mass ÷ height^2) + 0.0000624 × (weight × HR_sum).
Examining the correlations between the EE of the various NEAT activities and the dependent variables in Korean adults, the HR-related variables and several measured variables showed significant correlations with NEAT for each motion (e.g., age; weight; HR_average; weight × HR_average; weight × HR_sum; SBP × HR_rest; fat mass ÷ height^2; gender × HR_average; and gender × weight × HR_sum). Previously, Park et al. [ ] developed regression models of the EE of an exercise stress test using the HR of college students in their 20s (EE 1 (cal/min) = 100.127 + (s × -8577.731) + (w × 106.729) + (h × 12.580) + ((s × w) × 113.209) + ((w × h) × 38.847) + ((s × h) × 1.251) + ((s × h × w) × -0.23), where s = sex (male = 1, female = 0), h = heart rate (beats/min), w = weight (kg), R = 0.85; and EE 2 (cal/min) = 15289.276 + (s × 117.083) + (w × 102.905) + (h × 1883.398), with the same coding, R = 0.82). In addition, Charlot et al. [ ] developed a regression model to estimate the EE of exercise using HR (EE [kcal · h−1] = 171.62 + 6.87 × HR (bpm) + 3.99 × height (cm) + 2.30 × weight (kg) − 139.89 × sex (1 or 2) − 4.26 × resting HR (bpm) − 4.87 × HRmax (bpm), R = 0.879). Both estimation equations show high correlation coefficients of 0.80 or above, but they estimate the EE of exercise rather than of NEAT activity. Other studies, such as Bouchard and Trudeau [ ] and Levine et al. [ ], used an accelerometer to measure EE and showed a high correlation with measured values but underestimated or overestimated EE depending on exercise intensity. As in this study, research on estimating the EE of various NEAT activities using HR is scarce, and many further studies are needed. The HR-EE relationship is fairly linear regardless of age or gender, and HR-based estimation is effective owing to its low intra-individual variability.
However, errors in measurement methods and predictions have also been reported, and careful attention is needed when interpreting the results [ ]. Although more accurate measurements would be desirable given the low exercise intensity that characterizes NEAT, a regression model for each NEAT activity that ordinary people can easily apply is a convenient tool for managing health effectively. In conclusion, through preliminary experiments, we developed regression models using HR and multiple variables to estimate the EE of various NEAT activities in healthy Korean adults. The developed models are as follows: sitting EE = 1.431 - 0.013 × age + 0.00014 × (weight × HR_average) - 0.00005 × (SBP × HR_rest) + 0.006 × (gender × HR_average); leg jiggling EE = 1.102 - 0.011 × age + 0.013 × weight + 0.005 × (gender × HR_average); standing EE = 1.713 - 0.013 × age + 0.0000017 × (gender × weight × HR_sum); 4.5 km/h walking EE = 0.864 + 0.035 × weight + 0.0000041 × (weight × HR_sum); 6.0 km/h walking EE = 4.029 - 0.024 × HR_average + 0.00071 × (weight × HR_average); climbing up 1 stair EE = 1.308 - 0.016 × age + 0.00035 × (weight × HR_average) - 0.000085 × (SBP × HR_rest) - 0.098 × (fat mass ÷ height^2); and climbing up 2 stairs EE = 1.442 - 0.023 × age - 0.000093 × (SBP × HR_rest) - 0.121 × (fat mass ÷ height^2) + 0.0000624 × (weight × HR_sum). The bias between estimated NEAT and measured NEAT (sitting = -0.003; leg jiggling = 0.004; standing = 0.003; 4.5 km/h walking = -0.005; 6.0 km/h walking = 0.003; climbing up 1 stair = 0.007; and climbing up 2 stairs = 0.004) and the correlations (sitting: R = 0.764; leg jiggling: R = 0.749; standing: R = 0.771; 4.5 km/h walking: R = 0.838; 6.0 km/h walking: R = 0.874; climbing up 1 stair: R = 0.751; and climbing up 2 stairs: R = 0.780) were reasonable. However, this study has limitations as a preliminary study.
The sample size was small, gender-specific regression models could not be developed, and validity tests could not be performed. Therefore, further research is required to overcome these limitations.
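For readers who want to apply these equations, the models above translate directly into code. The following Python sketch implements two of them; the function names and example inputs are mine, and the 1/0 coding of gender is an assumption (following the Park et al. convention quoted earlier, not stated for these models), while the coefficients are copied from the text.

```python
# Sketch of two of the per-activity NEAT regression models quoted above.
# Function names and example inputs are illustrative; gender coding
# (male = 1, female = 0) is an assumed convention, not stated for these models.

def sitting_ee(age, weight, hr_avg, sbp, hr_rest, gender):
    """Estimated sitting EE per the model above."""
    return (1.431 - 0.013 * age + 0.00014 * (weight * hr_avg)
            - 0.00005 * (sbp * hr_rest) + 0.006 * (gender * hr_avg))

def walking_45_ee(weight, hr_sum):
    """Estimated EE for 4.5 km/h walking per the model above."""
    return 0.864 + 0.035 * weight + 0.0000041 * (weight * hr_sum)

# Illustrative (made-up) inputs:
print(round(sitting_ee(age=30, weight=70, hr_avg=72, sbp=120, hr_rest=65, gender=1), 3))
print(round(walking_45_ee(weight=70, hr_sum=5000), 3))
```

The remaining five models follow the same pattern, each a linear combination of the easily measured variables listed above.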
Coincidence Transformation

Introduction: This is a graphic exercise which asks you to make a coordinate transformation in order to transform one given plane shape into another.

• Description: transform a given 2D shape into another given one. Interactive exercises, online calculators and plotters, mathematical recreation and games.
• Keywords: interactive mathematics, interactive math, server side interactivity, affine_geometry, linear_algebra, translation, rotation, matrix, linear_maps
How about infinity plus one? Posted by: Gary Ernest Davis on: January 21, 2010 • In: Uncategorized Corina Silveira came from Argentina to carry out research for her doctorate in mathematics education at the University of Southampton in England. Corina was interested in how children developed an informal sense of number through their world experiences – particularly seeing adults deal with prices of goods and vehicles, and seeing telephone numbers, street numbers, and such. She had worked in Argentina on mismatches between children’s developing sense of numbers out of school, and the more structured work they did in school usually with much smaller numbers – typically up to 20 in the first semester or school term. Corina was interested to map out a picture of the development of children’s everyday number sense, and its relationship to known development in counting. We found a school in South Wonston, north of Southampton, where a grade 1 teacher and her students were happy to work with us. Among other things, Corina devised a game for about 4 children in which each child was dealt a small number of cards, each of which had a (different) number from 1 through 100. Children took turns in placing a card on the table and a hand was won by the child who put down the largest numbered card. There is an element of strategy in this game in that if a card on the table is higher than all your cards then you should play your lowest card. Corina was interested in an answer to the question: how do these 5 year old children know which is the largest number? A few times the card with “100” on it was dealt, and each time that happened the child who got the card started laughing. One boy laughed so hard he fell backwards onto the floor, crying out: “Oh, oh! It’s the biggest one of all!” Corina asked a group of children how they knew 63, on a card, was bigger than 29, as they said it was. 
One child pointed to the card with 63 and said: "Because it's got a 6." When Corina objected that the other card had a 9, the children jostled to tell her, with almost pity in their eyes that an adult would not know such a thing: "But the left hand's the boss!" These children could not read 63, but knew that the left-hand 6 meant it was a bigger number than 29. Among the children in this first grade was a boy named Tom. He was somewhat developmentally delayed, having relatively poor coordination, and being slow at both counting and reading. Tom had arrived at the school the year before and was, by his teacher's account, not well socialized at that point. Now, however, Tom was very keen to join in the card game. He told us very proudly that he was a good counter. "Would you like me to count for you?" he asked. Of course we agreed. So he counted: "One, two, three, four, five, seven, eleven, twenty, thirty!" He beamed. "I'm a good counter. I practice at home." That was Tom: happy to be in school, happy to be part of his grade of school mates, happy to be learning to read and count. It is common in England for children to be placed in groups, or sets, according to their general academic ability. Tom was in the bottom set. The top set in Tom's grade 1 consisted of 4 boys who were keen to know why we were in their classroom. I explained that we were interested in what they knew about big numbers, such as one hundred, one thousand or even one million. One of these top set boys said to me: "I will tell you what's the biggest number there is." "The biggest number there is?" I queried. "Yes. It's called infinity." "Infinity?" I repeated. "Yes, it's the biggest number there is." I nodded, looked him in the eye, and said: "Infinity plus one." "No, no, no", he said. "You can't do that … cause infinity is the biggest number." Another of the boys chimed in with: "Infinity plus two!" "No, no!" said the first boy.
"You can't do that…" Then the last boy in the group looked at us all and said: "Infinity plus infinity!" All the boys looked at each other, and at me, apparently wondering if this might make some sense. The teacher in this grade 1 class was wonderful. Friendly and supportive, with activities for each group as challenging as she could find or devise. She taught them reading, writing, arithmetic, geography, social studies, science and who knows what else. Yet how could this excellent teacher deal individually in mathematics – grade 1 mathematics – with Tom, who could not yet confidently and reliably count to 10, and the top set boys, who were pondering infinity? Corina and I found 4 and 5 year olds who knew confidently that there would be no house in their street numbered 1024 because they lived in a very short street, or who knew that 2,500 was a large number because that's what someone's mother paid for her car. We also found children who knew that the number on their house was the same as the number on their trash bin, but did not know why. Kids learn about numbers by learning to count. They also learn about them as artifacts in the adult world. In the early years of schooling teachers generally stick to counting up to small numbers. But the children have another sense of number and magnitude and meaning that comes from the way numbers are used in the world outside of school. Only by listening to their stories will we know what they know, and how we can help advance their knowledge and thinking. Kids need listening to as much as they need teaching.
Cellular Automation | Testing Waters Conway's Game of Life, also known as the Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is the best-known example of a cellular automaton. The "game" is actually a zero-player game, meaning that its evolution is determined by its initial state, needing no input from human players. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbours, which are the cells that are directly horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur: Any live cell with fewer than two live neighbours dies (referred to as underpopulation or exposure[1]). Any live cell with more than three live neighbours dies (referred to as overpopulation or overcrowding). Any live cell with two or three live neighbours lives, unchanged, to the next generation. Any dead cell with exactly three live neighbours will come to life. The initial pattern constitutes the 'seed' of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed — births and deaths happen simultaneously, and the discrete moment at which this happens is sometimes called a tick. (In other words, each generation is a pure function of the one before.) The rules continue to be applied repeatedly to create further generations. 
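The four update rules above translate directly into code. Here is a minimal Python sketch of one tick, using a sparse set-of-live-cells representation (my own illustration, independent of the DesignScript version below):

```python
# One Game of Life tick on a sparse set of live-cell coordinates.
from collections import Counter

def step(live):
    """live: set of (x, y) tuples. Returns the next generation."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # → True
```

Because births and deaths are computed from the counts of the previous generation only, each generation is a pure function of the one before, exactly as the text describes.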
// Conway's Game of Life
def goL(l1 : bool[]..[])
{
    mdRw = 0..List.FirstItem(List.Count(l1<1>)) - 1;
    bln1 = List.GetItemAtIndex(List.ShiftIndices(l1, [1, 0, -1])<1><2>, List.ShiftIndices(mdRw, [1, 0, -1]));
    bln2 = List.Flatten(List.Transpose(List.Transpose(List.Transpose(bln1<1><2>))<1>)<1><2>, -1);
    // tru1 counts live cells in each 3x3 block (the cell itself plus its 8 neighbours)
    tru1 = List.CountTrue(bln2<1><2>);
    // Index 4 is the centre cell: a live cell survives with 2 or 3 live neighbours
    // (3 or 4 live cells counting itself); a dead cell is born with exactly 3.
    bln3 = List.GetItemAtIndex(bln2<1><2>, 4) ? (tru1 < 3 || tru1 > 4 ? false : true) : (tru1 == 3 ? true : false);
    return bln3;
};

// Iterations
def GoL(bl : bool[]..[], n : int)
{
    return = [Imperative]
    {
        b = bl;
        for (c in (1..n))
        {
            a = goL(b);
            b = a;
        }
        return b;
    };
};

// Initial Configuration
x = 20;
y = 26;
allPnt = Point.ByCoordinates((0..x)<1>, (0..y)<2>);
flsPnt = List.Flatten([allPnt[1][y-3], allPnt[2][y-4], allPnt[3][y-(2..4)]], -1);
blnLst = List.Contains(flsPnt, allPnt<1><2>);
pnt1 = List.FilterByBoolMask(allPnt, GoL(blnLst, n));
gol1 = [GeometryColor.ByGeometryColor(List.Clean(Cuboid.ByLengths(pnt1["in"], 1, 1, 1), false), Color.ByARGB(255, 255, 153, 51)),
From Fractalization to Fundamental Physics: The 2024 Brauer Lectures - Department of Mathematics From Fractalization to Fundamental Physics: The 2024 Brauer Lectures The Department is thrilled to have had Professor Peter J. Olver from the University of Minnesota give our 2024 Brauer Lectures, held from February 27th to 29th. These lectures serve as a tribute to the esteemed legacy of Alfred T. Brauer, whose profound influence within our department encompassed the years 1944 to 1966 following his courageous escape from Nazi Germany. More information about Professor Brauer and the lecture series can be found at the Brauer Lectures Website. During the event, Professor Olver delivered a series of three lectures, delving into diverse topics ranging from the dynamics of periodic dispersive systems exhibiting fractal behavior at irrational times, to the mathematical intricacies of reassembling fragmented objects. In addition, he discussed recent developments related to Noether’s theorem, shedding light on the pivotal role of invariant variational problems in the contemporary formulation of fundamental physics. We extend our heartfelt gratitude to Professor Greg Forest for his organization of this enlightening lecture series.
B.9. Mathematics

Perl can do just about any kind of mathematics you can dream up.

B.9.1. Advanced Math Functions

All of the basic mathematical functions (square root, cosine, logarithm, absolute value, and many others) are available as built-in functions; see the perlfunc manpage for details. Some others (like tangent or base-10 logarithm) are omitted, but those may be easily created from the basic ones, or loaded from a simple module that does so. (See the POSIX module for many common math functions.)

B.9.2. Imaginary and Complex Numbers

Although the core of Perl doesn't directly support them, there are modules available for working with complex numbers. These overload the normal operators and functions, so that you can still multiply with * and get a square root with sqrt, even when using complex numbers. See the Math::Complex module.

B.9.3. Large and High-Precision Numbers

You can do math with arbitrarily large numbers with an arbitrary number of digits of accuracy. For example, you could calculate the factorial of two thousand, or carry out other computations far beyond native number precision; see the Math::BigInt and Math::BigFloat modules.

Copyright © 2002 O'Reilly & Associates. All rights reserved.
Pointwise estimates for 3-monotone approximation

We prove that for a 3-monotone function F ∈ C[-1, 1], one can achieve the pointwise estimates

|F(x) − Ψ(x)| ≤ c ω_3(F, ρ_n(x)), x ∈ [−1, 1],

where ρ_n(x) := 1/n^2 + √(1 − x^2)/n and c is an absolute constant, both with Ψ a 3-monotone quadratic spline on the nth Chebyshev partition, and with Ψ a 3-monotone polynomial of degree ≤ n. The basis for the construction of these splines and polynomials is the construction of 3-monotone splines providing an appropriate order of pointwise approximation, half of whose nodes are prescribed and the other half are free, but "controlled".

Funders and funder numbers:
• Natural Sciences and Engineering Research Council of Canada
• Ministerio de Ciencia e Innovación, MTM 2011-2763

Keywords:
• 3-monotone approximation by piecewise polynomials and splines
• 3-monotone polynomial approximation
• Degree of pointwise approximation
• Shape preserving approximation
Quadrant refers to one of the four sections of the Cartesian plane created by the intersection of the x- and y-axes. They are numbered 1 through 4, beginning with the top right quadrant and moving counterclockwise around the plane. Each of the four quadrants is labelled on the plane below. The general quadratic equation in one variable is $$ax^2+bx+c=0,$$ where $$a≠0$$. The solutions are given by the quadratic formula: $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$. A quadratic expression or function contains one or more terms in which the variable is raised to the second power, but no variable is raised to a higher power. Examples of quadratic expressions include $$3x^2+7$$ and $$x^2+2xy+y^2-2x+y+5$$. A quadrilateral is a polygon with four sides. Quartiles are the values that divide an ordered data set into four (approximately) equal parts. It is only possible to divide a data set into exactly four equal parts when the number of data values is a multiple of four. There are three quartiles. The first, the lower quartile (Q[1]), divides off (approximately) the lower 25% of data values. The second quartile (Q[2]) is the median. The third quartile, the upper quartile (Q[3]), divides off (approximately) the upper 25% of data values. A quotient is the result of dividing one number or algebraic expression by another. See also remainder.
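As an illustration (not part of the glossary itself), the quadratic formula above can be evaluated numerically; this small Python sketch returns the real roots when the discriminant is non-negative:

```python
# Evaluate the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a).
import math

def quadratic_roots(a, b, c):
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    d = b * b - 4 * a * c  # discriminant
    if d < 0:
        return ()  # no real roots
    r = math.sqrt(d)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(quadratic_roots(1, -5, 6))  # x^2 - 5x + 6 = 0 → (3.0, 2.0)
```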
Report on the International Workshop on Gödel's Incompleteness Theorems
Date: 2021-09-10
From August 16th to August 20th, 2021, the International Workshop on Gödel's Incompleteness Theorems, hosted by the School of Philosophy of Wuhan University, was successfully held online. The incompleteness theorems published by Gödel in 1931 profoundly impacted the fields of logic, philosophy, mathematics, theoretical computer science, etc. The motivation of this workshop was to celebrate the 90th anniversary of the publication of Gödel's incompleteness theorems and to promote communication about the latest research on them. The workshop comprised 24 talks in ten sessions, with three generations of international scholars from 13 countries. Speakers included internationally renowned logicians and philosophers such as Saul Aaron Kripke (Academician of the American Academy of Arts and Sciences), Lev D. Beklemishev (Academician of the Russian Academy of Sciences), Wilfried Sieg (Academician of the American Academy of Arts and Sciences), Harvey Friedman (Distinguished Professor of Mathematics, Philosophy, and Computer Science at Ohio State University), Michael Rathjen (Professor in the Department of Mathematics, University of Leeds), Julia F. Knight (Professor in the Department of Mathematics, University of Notre Dame), and Matthias Baaz (Professor in the Department of Mathematics, Vienna University of Technology). The organizers included Prof. Yong Cheng (School of Philosophy at Wuhan University), Prof. Albert Visser (Royal Netherlands Academy of Arts and Sciences), Prof. Andreas Weiermann (Department of Mathematics at Ghent University, Belgium), and Prof. Yue Yang (Department of Mathematics at the National University of Singapore).
At 3 pm on August 16th, Qizhu Tang (Vice President of Wuhan University), Dianlai Li (Dean of the School of Philosophy of Wuhan University), Professor Matthias Baaz (Executive Vice Chairman of the Kurt Gödel Society), and Professor Albert Visser (Academician of the Royal Netherlands Academy of Arts and Sciences) attended the opening ceremony and gave speeches. In his speech, Vice President Qizhu Tang pointed out that it is of great academic significance to hold an international symposium on the occasion of the 90th anniversary of the publication of Gödel's incompleteness theorems. On behalf of Wuhan University, Vice President Tang welcomed participants from different countries and wished that all attendees would enjoy exchanging their ideas. In his speech, Dean Dianlai Li pointed out that the incompleteness theorems have had a profound and vital impact on logic and philosophy; it is a great honour for the School of Philosophy of Wuhan University to host this workshop, and he wished the conference a great success. Professor Matthias Baaz summarized Gödel's academic contributions and the significance and impact of the incompleteness theorems. On behalf of the Kurt Gödel Society, he congratulated Wuhan University on hosting this workshop and wished the workshop a big success. Prof. Albert Visser comprehensively and profoundly analyzed the main directions and questions of current research on the incompleteness theorems, and said he looked forward to wonderful discussions of these questions. Finally, the opening ceremony ended with an English-language introductory video about Wuhan University.
The content of this workshop covered nine themes in research on incompleteness: different proofs of incompleteness, incompleteness and provability logic, incompleteness and self-reference, the limits of the applicability of the incompleteness theorems, incompleteness and computability theory, Hilbert's program and incompleteness, incompleteness and the philosophy of mathematics, the intensionality of incompleteness, and incompleteness in weak arithmetic. Each online talk lasted 1 hour and 15 minutes, including one hour for the lecture and 15 minutes for Q&A. Attendees were active and provided excellent discussions. The first session started at 3:30 pm on August 16th, including two talks, presided over by Prof. Albert Visser from the Royal Netherlands Academy of Arts and Sciences. The first lecture was given by Professor Volker Halbach from the Department of Philosophy, University of Oxford, UK. He talked about self-reference and intensionality in metamathematics. He first discussed the philosophical importance of intensionality, then intensionality in metamathematics and its relations to other intensionalities in logic. He also discussed the sources of intensionality. Finally, he summarized the current state of research and some unsolved problems. The second lecture was given by Professor Stanislaw Krajewski from the Department of Philosophy, University of Warsaw, Poland. He talked about some consequences of the incompleteness theorems. He discussed several philosophical issues related to the incompleteness theorems, emphasizing the question of whether humans' understanding of natural numbers can be compiled into computer programs. The speaker analyzed in detail the anti-mechanist arguments based on the incompleteness theorems. He concluded that Gödel's incompleteness theorems alone do not imply that the human mind is not a machine.
Finally, he argued that we cannot define the human understanding of natural numbers on the basis of the incompleteness theorems. The second session started at 8 pm on August 16th and included two lectures, presided over by Professor Andreas Weiermann from the Department of Mathematics, Ghent University, Belgium. The third lecture was given by Professor Michael Rathjen from the Department of Mathematics, University of Leeds, UK. He talked about Hilbert's program and (semi-)intuitionism. The talk pointed out that although Gödel's incompleteness theorems are often considered to falsify Hilbert's program, Hilbert's method of adding ideal elements to prove specific mathematical propositions has been successfully applied in some situations. In particular, in the (semi-)intuitionistic framework, classical non-intuitionistic principles can be added to a theory without forfeiting conservativity for elementary statements. Professor Rathjen also determined the proof-theoretic strength of some constructive set theories within the framework of intuitionistic logic. The fourth lecture was given by Professor Harvey Friedman from the Department of Mathematics of Ohio State University. He is a Distinguished Professor of Mathematics, Philosophy, and Computer Science, was listed in the Guinness World Records as the world's youngest professor, and was named one of the "Top 100 American Scientists Under 40". He talked about aspects of incompleteness. He first discussed the different general forms of the first incompleteness theorem (G1). Afterwards, various forms of the second incompleteness theorem (G2) based on the concept of interpretation were discussed. Next, Professor Friedman gave a new proof of G2 based on explicitly remarkable sets, which cleanly separates the auxiliary construction from the direct verification. Finally, he discussed the most recent examples of Tangible Incompleteness from ZFC with various large cardinal hypotheses.
The third session of the workshop started at 9 am on August 17th and included two talks. The host, Albert Visser (Royal Netherlands Academy of Arts and Sciences), could not attend due to the time difference, so he introduced the two speakers, Professors Juliet Floyd and Saul Aaron Kripke, through a video recording. The session itself was hosted by Sergei N. Artemov (Distinguished Professor of the City University of New York). The fifth lecture was given by Professor Juliet Floyd from the Department of Philosophy at Boston University. She talked about "Truth in Early Wittgenstein and Gödel". The talk pointed out that the work of Gödel and the early Wittgenstein was influenced to a certain extent by Russell's theory of truth. She introduced Russell's theory of truth and then discussed the views of Gödel and Wittgenstein on it. Professor Floyd analyzed the different understandings of the concept of truth between Gödel and Wittgenstein and emphasized that these different understandings are philosophically comparable. The sixth lecture was given by Professor Saul Aaron Kripke of the City University of New York (Schock Prize winner; member of the American Academy of Arts and Sciences and the European Academy of Arts and Sciences; Distinguished Professor of Philosophy). He talked about "A Model-Theoretic Approach to Gödel's Theorem". The talk pointed out that Gödel's famous incompleteness theorem is unusual in being a purely proof-theoretic argument. Usually, when one proves that a statement is unprovable from some axioms, one produces a model in which the statement is false. One version of Gödel's second incompleteness theorem has long been known: it is impossible to prove within Gödel-Bernays set theory that it has a well-founded model. It might be hard to do this for arithmetic, where nonstandard models are never well-ordered. In this talk, he showed that this difficulty can be overcome, yielding a model-theoretic version of Gödel's theorem.
The fourth session of the workshop was held at 3:30 pm on August 17th and included three talks, chaired by Professor Yue Yang from the Department of Mathematics, National University of Singapore. The seventh lecture was given by Professor Saeed Salehi from the Department of Mathematics, Tabriz University, Iran. The topic is "Some Fairies in the Incompleteness Wonderland". The landscape of incompleteness (if there is such a land) is indeed a wonderland, full of surprises, beauties, and fairies. Some fairies are the pretty proofs of the first incompleteness theorem given by Gödel (1931), Rosser (1936), Kleene (1936, 1950), Chaitin (1970), and Boolos (1989). In this talk, he studied some properties of these proofs, such as their constructivity and independence (the Rosser property), and examined whether they deliver Gödel's second incompleteness theorem (G2) or could be derived from it. The eighth lecture was given by Lev D. Beklemishev, an Academician of the Russian Academy of Sciences. The topic is "Strictly positive provability logics: recent progress and open questions". The talk dealt with the fragment of propositional modal logic consisting of implications of formulas built up from the variables and the constant "true" by conjunction and diamonds only; such fragments are called strictly positive. Strictly positive logics have recently attracted attention both in the description logic and in the provability logic communities for their combination of efficiency and sufficient expressivity. Moreover, strictly positive logics allow for alternative interpretations that are quite natural from a proof-theoretic point of view. He presented recent results and remaining open questions in this area. The ninth lecture was given by Professor Julia F. Knight from the Department of Mathematics of the University of Notre Dame (Charles L. Huisking Professor of Mathematics). The topic is "Completions of PA and ω-models of KP".
The talk considered analogies between a computable binary branching tree whose paths represent the completions of first-order Peano Arithmetic and a computable infinitely branching tree whose paths represent the complete diagrams of ω-models of Kripke-Platek set theory. She also discussed what is computed by the paths through these trees. In both settings, there is self-awareness of the kind that Gödel used for his Incompleteness Theorems. The fifth session of this workshop was held at 3:30 pm on August 18th, including two talks, chaired by Professor Andreas Weiermann, Department of Mathematics, Ghent University, Belgium. The tenth lecture was given by Dr. Juan P. Aguilera, from the Department of Mathematics, Ghent University, Belgium. The topic is "The Pi^1_2 Consequences of a theory". The talk gave a categorical definition of the Pi^1_2-norm of a theory T, a specific well-foundedness preserving functor on the category of ordinals. This is an analogue of the Pi^1_2 ordinal for T for Pi^1_2 notions. He showed that for Pi^1_2-sound, recursively enumerable extensions of ACA_0, this norm is well defined and recursive and captures all the Pi^1_2 consequences of T. The eleventh lecture was given by Professor David Fernandez-Duque from the Department of Mathematics, Ghent University, Belgium. His topic is "When Ackermann meets Goodstein". The classical Goodstein process consists of writing a number in terms of the base-2 exponential, then iteratively raising the base and subtracting one. Similar processes can be defined by writing natural numbers in terms of other functions, such as the Ackermann function. In this talk, he discussed Ackermannian variants of the Goodstein process and showed how the proof-theoretic strength of termination varies wildly as we modify how numbers are represented. The sixth session of the workshop was held at 8 pm on August 18th, including three lectures, chaired by Professor Yong Cheng from the School of Philosophy of Wuhan University. 
The twelfth lecture was given by Professor Wilfried Sieg (Academician of the American Academy of Arts and Sciences) from the Department of Philosophy of Carnegie Mellon University. He talked about "Gödel in AProS". He opened with the question: how can one prove Gödel's incompleteness theorems in an automated proof system? Professor Sieg presented the proof search system AProS and studied how to give an abstract proof of the incompleteness theorems in it. The research showed that one can construct a clear formal proof of the incompleteness theorems and related theorems in the system. The thirteenth lecture, "On the hierarchy of natural theories", was given by Dr. James Walsh from the School of Philosophy, Cornell University. It is a well-known empirical phenomenon that natural axiomatic theories are pre-well-ordered by consistency strength. However, without a precise mathematical definition of "natural", it is unclear how to study this phenomenon mathematically. James discussed some strategies for addressing this problem that have been developed recently. These strategies emphasize connections between reflection principles and ordinal analysis and draw on analogies with recursion theory. The fourteenth lecture, "Tight Theories", was given by Professor Ali Enayat from the Department of Philosophy, University of Gothenburg, Sweden. A first-order theory T is said to be *tight* if no two different deductively closed extensions of T (in the same language) are bi-interpretable. Albert Visser established the tightness of PA (Peano arithmetic) in his 2006 paper "Categories of theories and interpretations". The tightness of certain other foundational theories, including Z_2 (second-order arithmetic), ZF (Zermelo-Fraenkel set theory), and KM (Kelley-Morse theory of classes), was established in Enayat's paper "Variations on a Visserian theme"; the same paper includes a conjecture that no deductively closed proper subtheory of PA, Z_2, ZF, or KM is tight.
In this talk, he presented recent results that provide partial evidence for the veracity of this conjecture. The seventh session of the workshop was held at 3:30 pm on August 19th, including two lectures chaired by Professor Yue Yang from the National University of Singapore. The fifteenth lecture was given by Professor Taishi Kurahashi from the Graduate School of System Informatics, Kobe University, Japan. His topic is "Inclusions between quantified provability logics". Quantified provability logic QPL_Sigma(T) is known to be heavily dependent on the theory T and the Sigma_1 definition Sigma(v) of T. The talk investigated several consequences of inclusion relations between quantified provability logics and showed that such inclusion relations rarely hold. Moreover, Kurahashi gave a necessary and sufficient condition for the inclusion relation between quantified provability logics with respect to Sigma_1 arithmetical interpretations. The sixteenth lecture was given by Dr. Balthasar Grabmayr, Department of Computer Science, Humboldt University, Germany, on "A Step Towards Absolute Versions of Metamathematical Results". He argued that there is a gap between the mathematical formulation of the second incompleteness theorem (G2) and the informal version used in philosophical discussions. To fill this gap, we need to study the conditions under which G2 holds. He analyzed the relationship between the validity of G2 and the choice of grammar system and coding method: he gave definitions of acceptable grammar systems and acceptable coding methods and proved that G2 is valid under acceptable grammar systems and coding methods, while under non-acceptable grammar systems and coding methods G2 may fail. The eighth session of the seminar was held at 8 pm on August 19th, including three lectures, chaired by Professor Andreas Weiermann, Department of Mathematics, Ghent University, Belgium. The seventeenth lecture was given by Dr.
Anton Freund from the Department of Mathematics, Darmstadt University of Technology, Germany. He talked about "Independence without computational strength". In the spirit of Hilbert's program, we can ask whether all universal statements about the natural numbers are decided by Peano arithmetic. While Gödel's incompleteness theorems provide a negative answer, mathematical examples are scarce in the purely universal realm, despite the work of H. Friedman and S. Shelah. Anton presented a related result: Kruskal's theorem can be cast into an axiom scheme that transcends Peano arithmetic but does not add provably total functions. The only known independence proof is a reduction to Gödel's theorem. In this respect, the new axioms behave like universal statements (they are in fact Sigma_2) and are very different from other examples of mathematical independence. The eighteenth lecture was given by Professor Matthias Baaz from the Department of Mathematics, Vienna University of Technology, Austria. The topic is "Incompleteness and attempted proofs of consistency". In this lecture, he analyzed the origins of the completeness and consistency problems, as well as the technical background that convinced the Hilbert school that a positive solution was possible. He described attempted consistency proofs of the Hilbert school (especially those using the epsilon calculus and Herbrand's theorem) and described how they provide valuable information if seen from another point of view. The nineteenth lecture, on "Arithmetization-free Gödel's Second Incompleteness Theorem for finitely axiomatizable theories", was given by Dr. Fedor Pakhomov, Department of Mathematics, Ghent University, Belgium. The motivation of this research is to find a general abstract form of the second incompleteness theorem. Visser proved that no consistent, sequential, finitely axiomatizable theory T can interpret its predicative comprehension PC(T).
This provides an arithmetization-free version of the Second Incompleteness Theorem. In the talk, Fedor showed that, in fact, this result holds in far greater generality: no finitely axiomatizable theory T can one-dimensionally interpret its predicative comprehension PC(T). The ninth session of the seminar was held at 3:30 pm on August 20th, including two lectures, chaired by Professor Yong Cheng from the School of Philosophy of Wuhan University. The twentieth lecture, on "On two topics dear to Kurt Gödel", was given by Dr. Sam Sanders from the Philosophy Department of Ruhr University Bochum, Germany. He discussed two topics in mathematical logic in the spirit of Gödel's celebrated legacy. First, he discussed the construction of models of higher-order arithmetic in which the real numbers form a countable set, i.e. there is an injection/bijection into the natural numbers. Second, he discussed the implications for the foundations of mathematics and physics. The twenty-first lecture was given by Professor Juliette C. Kennedy from the Department of Mathematics and Statistics, University of Helsinki, Finland. The topic is "Gödel and the Scope Problem: From Incompleteness to Extended Constructibility". Gödel once asked whether we can develop an absolute notion of definability. In this talk, Juliette offered an implementation of Gödel's suggestion for definability in the direction of extended constructibility. This involved considering Gödel's constructible hierarchy L, which is built over first-order logic, and asking: if we vary the underlying logic, i.e. if we replace the first-order logic in the construction of L by another logic, do we get L back? The last session of the seminar was held at 8 pm on August 20th, containing three talks, chaired by Prof. Albert Visser (Academician of the Royal Netherlands Academy of Arts and Sciences). The twenty-second lecture, on "Minimal Logics for Incompleteness", was given by Professor Joost J.
Joosten from the Department of Philosophy, University of Barcelona, Spain. In this talk, he started out in the propositional realm and discussed various approaches to describing formal provability, its generalizations, and its limitations. After that, he noted that moving from a full logic to a closed fragment or to a strictly positive fragment brings a drop in complexity for the corresponding decision problems, for example from PSPACE to PTIME. Never, however, has this decrease been as drastic as in the case of quantified provability logic, where one goes from Pi_2-complete to decidable. He discussed the main dynamics behind this decrease. The twenty-third lecture was given by Professor Emil Jerabek of the Czech Academy of Sciences on "Hereditarily bounded sets". Vaught's set theory VS is one of the weakest and simplest essentially undecidable theories. In contrast, the finite fragments VS_k of VS are not essentially undecidable. However, known proofs of this fact are rather indirect: each VS_k is interpretable in any theory with a pairing function, and there exist decidable theories with pairing. The decidable extensions of VS_k obtained in this way are quite unnatural, e.g., they are incompatible with extensionality. In this talk, the speaker showed that Th(H_k), the theory of sets hereditarily of size at most k, is decidable and can be presented by a transparent explicit axiom set. Emil also characterized elementary equivalence of tuples in models of Th(H_k), showed that it enjoys a form of quantifier elimination, and determined its computational complexity. The last talk was given by Professor Pavel Pudlak of the Czech Academy of Sciences on "Incompleteness theorems for weak theories of arithmetic and some stronger versions of the incompleteness theorem". During the talk, the speaker first discussed some stronger versions of G2, then discussed G2 for the weakest reasonable arithmetical theory.
To do that, he proved a stronger version of the incompleteness theorem asserting that for any definable initial segment of integers I without a largest element, it is consistent to assume that a proof of contradiction lies in I. Finally, he discussed some other applications of this strengthening. At midnight on August 20th, the organizing committee chair, Professor Yong Cheng from the School of Philosophy of Wuhan University, reviewed and summarized the workshop and thanked all speakers for offering such wonderful talks and discussions. With this, the five-day international workshop on Gödel's Incompleteness Theorems came to a successful conclusion. (By Jinghui Tao)
A Quest for a Civil Time Polar Sundial (Assembled 5 Jan 2003 by Mac Oglesby)

Courtesy of Dave Bell, this site is a temporary home for certain materials which relate to the quest for a civil time polar sundial, as discussed during recent weeks on the Sundial Mailing List and in private email messages. These materials are gathered here to make it easier for interested sundial list members to access the photos and drawings whose size exceeds that allowed for attachments. I apologize in advance if I have misrepresented or misunderstood anyone, and I take full responsibility for any typos and other errors which occur as a result of my editing.

The Problem

Dialist John Close asked for help in designing a civil time polar sundial that didn't use analemmas, half analemmas, or unfolded analemmas. He wondered if a gnomon could be constructed which would satisfy his requirements. The dialists he contacted gave differing advice as to the possibility of such a gnomon: some said yes, some said no, and some said maybe. That's about where we still are, but the journey has been worthwhile.

The Journey

To clarify our thinking, let's consider what a polar sundial is, and perhaps mention what the sundial user must do in order to obtain civil time. A very strict definition of a polar might be that it has a flat dial face which lies parallel to the Earth's axis and perpendicular to the plane of the meridian, has straight hour lines, and has either a pointed post gnomon, or a straight edge gnomon parallel to the Earth's axis. (The edge gnomon may be replaced by a taut cable.) [NOTE: Commenting on my draft, Fer de Vries writes, "I call any dial parallel to the earth's axis a polar dial. Also the east and west facing dial, and many more, are polar. But this is arbitrary of course." I agree with Fer, knowing that others will disagree.] This strict polar dial would commonly be delineated to show solar time, but never as many as 12 hours in a day.
Though it may easily be designed to show zonal solar time, to get civil time one must consult a graph or table of EoT values, which are then applied to the dial's reading. As we alter the strict definition, our polar dial becomes a modified polar. One useful modification would be to allow the dial plate to be rotated around an axis parallel to the Earth's, such as the edge of the gnomon, or an edge of the dial face. [Fer writes, "In my definitions it stays a polar dial if you rotate the plate. The equivalent horizontal dial is always at latitude 0 degrees. The pole style always is parallel to the dial."] If the dial is turned 15 degrees, the time on the dial will be 1 hour earlier or later, depending upon which way it is rotated. To change the dial time by 4 minutes, turn it 1 degree, etc. Thus we can easily make allowance for summer time, and/or longitudinal offset, and/or EoT, although, since the EoT value is constantly changing, one would need to reset the dial now and then. H. Robert Mills, in his book "Practical Astronomy," details such an arrangement on pages 106-109, where he suggests using a wedge to rotate the dial plate. Also, his polar dial has each end of the dial plate folded up 90 degrees, which shortens the length of the dial and makes it usable for a full 12 hours. Graphic 1 shows a modified polar dial which, once properly installed, will give either solar time or zonal solar time without further adjustment. The dial may be periodically rotated on its mounting post to show civil time directly. Having hour lines only, the dial doesn't allow for highly precise time readings, so frequent adjustments aren't necessary. This dial, entitled Sun Bather, uses 2 gnomons, color coded to indicate which set of hour lines to look at. If the green gnomon is casting the shadow, look at green hour lines, etc. The time shown is just before 8. A different approach is to use hour lines which have the EoT correction factored in.
If the hour "lines" are full analemmas, the dial may be confusing to read, especially if the curves overlap. If half-analemmas (from solstice to solstice, for example) are used, then one needs 2 dial plates and has to swap them twice a year. Another choice is to use "unfolded" analemmas in conjunction with date lines. Christopher St J.H. Daniel's polar sundial at Otley indicates civil time as well as solar time (see Graphic 2). This dial plate is not flat, which allows a wider range of hours than a traditional polar. (I'm not certain about the source of this photo, but I believe it came from John Davis.) A very different approach is the cycloid dial of Thijs J. de Vries, designed about 1980 (see Graphic 3). An article by Fred Sawyer on this dial appeared in the December 1998 Compendium. On this dial an adjustment for civil time may be made not only by rotating the entire dial, but, since the hour lines are equally spaced, they may also be shifted east or west, relative to the gnomon.

The Journey Continues

Let's return to the problem of designing a shaped gnomon which could be used to solve John Close's puzzle. Bill Gottesman circulated a preliminary sketch of a possibility (see Graphic 4). The axis of the 3D gnomon is parallel to the Earth's, and civil time is read along a line of hour points perpendicular to the gnomon's axis. We wait for someone to show that Bill's design will work, or to prove that it cannot. Quite recently Thibaud Taudin-Chabot and Fer J. de Vries sent me some very fascinating material from 20-year-old issues of the Journal of The Dutch Sundial Society. Graphic 5 shows a model by Willem Bits which uses a shaped gnomon (in this case a curved wire in space), has equidistant and straight hour lines, and straight date lines. Solar time is read where the gnomon's shadow intersects the current date. The hour lines may be slid east or west for zonal solar time, or shifted every day or so to show civil time.
(Of course, this dial can also be rotated around a polar axis.) I mistakenly thought that this dial could give civil time directly, but such is not the case. Perhaps someone can figure out how to modify the gnomon so this dial will display civil time, or else prove that it cannot be done. The final group of exhibits in this collection from The Dutch Sundial Society consists of 5 pages and a diagram. These items are labeled as Graphic 6 through Graphic 11, just for identification purposes. I don't read Dutch, so I asked Fer de Vries for comments. Gracious as always, Fer provided the following text, in two parts. Graphic 6 Graphic 7 Graphic 8 Graphic 9 Graphic 10 Graphic 11

Part 1 of Text by Fer J. de Vries

The dial you mentioned [see Graphic 5] was an idea of Willem Bits. He started after he had seen the polar dial of Thijs de Vries, the dial recently spoken about on the list. A polar dial is the same as a horizontal dial at latitude 0, and then it's rather easy to visualize the process. The first goal of Bits was to make equidistant hour lines. Place a vertical gnomon with length g1 on the north-south line. At one o'clock this points to hour line 1 at distance x1 from the n-s line. Place a second gnomon on the north-south line with length g2, shorter than g1. This second gnomon has to point to hour line 2 at a distance x2 from the n-s line that is twice the distance of hour line 1. Do this for all the hours and the hour lines are equidistant, and we have a number of gnomons on the n-s line. (Shall we call them hour gnomons?) The distance between all these gnomons is arbitrary: you may choose any space between the gnomons, but keep them on the n-s line. If you do this process for all the times and not only for the hours you get a plate with a curved edge on the n-s line. But you can't read the time. What gnomon has to be used? So incorporate the date and also draw date lines.
For each hour line and its related gnomon the points for the dates on that hour line may be calculated. Connect all the points for a certain date with a curve and you have the date lines also. Now the dial may be read at the intersection of the date line and the shadow of the edge of the plate on the n-s line. See [Graphic 6] for some possible solutions. Many shapes are possible, depending on where you place the gnomons. Keep in mind that the pattern isn't in the east-west direction but in an arbitrary direction. The hour lines are equidistant. If you place the hour gnomons also equidistant then the date line for 0 degrees is a straight line, else this date line is curved. If the distance between the gnomons is the same as the distance between the hour lines, the angle of the equinox line is 45 degrees. This was published in our bulletin in March 1981.

Part 2 of Text by Fer J. de Vries

In the previous part you saw a polar dial with equidistant hour lines and the equinox line as a straight line, but all other date lines as curves. The gnomons for each hour are on a n-s line and all equidistant apart. Otherwise the equinox line isn't straight.

n = the hour (6 to 18)
q = equidistant distance between hour lines
gn = height of gnomon (g0, g1 .... g6)

We now have: gn = (12 - n) * q / tan(n*15)

For hours 6 and 18 this gives g6 = g18 = 0. For hour = 12 (noon) the equation fails; take n = 12.0001, and then g0 (the noon gnomon) is about 3.82 q.

The distance between a point on the straight equinox line and a point on another date line is delta = tan(decl) * (n-12) * q / sin(n*15)

Because of the equidistant hour lines any correction for longitude and EoT can be made by shifting the hour lines or the combination of gnomons and date lines. But it can't have a built-in EoT correction.

Second problem to solve: now we try to make all the date lines straight.
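Before turning to that, the gnomon-height formula above can be checked numerically. This little sketch (my own, with q = 1; function names are mine) confirms that the resulting hour lines come out exactly one q apart, and reproduces the noon limit of about 3.82 q:

```python
import math

def gnomon_height(n, q=1.0):
    """Bits' formula: g_n = (12 - n) * q / tan(15 n degrees), hours 6..18, n != 12."""
    return (12 - n) * q / math.tan(math.radians(15 * n))

def hour_line_x(n, q=1.0):
    """Distance of the hour-n line from the noon line: g_n * tan(15 n degrees)."""
    return gnomon_height(n, q) * math.tan(math.radians(15 * n))

# The tangent cancels, so x_n = (12 - n) * q: equidistant hour lines.
print([round(hour_line_x(n), 6) for n in (8, 9, 10, 11)])  # [4.0, 3.0, 2.0, 1.0]

# Exactly at n = 12 the formula fails, but just off noon it gives about 3.82 q.
print(round(abs(gnomon_height(12.0001)), 3))  # 3.82
```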
For a normal polar dial the distance of a date point from the equinox line is: y = g tan(decl) / cos(n*15)

We want to have y as a constant value, to get lines parallel to the equinox line, and name that value k. We then have: g tan(decl) / cos(n*15) = k and g = k * cos(n*15) / tan(decl)

We choose for k: k = k1 * tan(decl), which is allowed because for a certain date decl is constant and also tan(decl) is constant. And then gn = k1 * cos(n*15), in which k1 = k / tan(decl) is a constant.

Again place a number of such gnomons on the n-s line. Doing so we have parallel date lines, but no longer equidistant hour lines. Only the gnomons g0, g6 and g18 are correct, and those hour lines stay where they were. The gnomons g6 and g18 have height 0. See also the attached example. [see Graphic 7, fig. IV] But Graphic 12 is also such an example. This last one comes from an article by P. Oyen in "Zonnetijdingen", nr. 5, 1997, the bulletin of our Belgian friends. The other hour lines are shifted and we want to shift them back, as may be seen in fig. VII on page 702 [see Graphic 8]. This shift is in the x and y direction. Well, do this including the appropriate gnomon for that hour and all the hour lines become equidistant again. And the final gnomon wire gets its nice curve. But still the EoT isn't built in and I don't see a possibility for that. At page 704 [Graphic 10] and 705 [Graphic 11] Bits gave the formulas. The gnomon g12 is H. For any other gnomon gn = H cos(N*15). Distance of the date line AB = .... = H tan(decl). That is a line parallel to the equinox line. The shadow length of a gnomon for decl = 0 is: SL = .... = H sin(N*15), and the distance AN = H sin(N*15)*tan(alpha). This alpha may be chosen at will. At page 705 [Graphic 11] the shift is described. The non-linear hour line _._._. has to be shifted to ........
The final formulas you need for the wire are:

(a) for the x shift
(b) for the y shift

Hn for the height of the hour gnomon
An for the place of that hour gnomon

With g12 the hour lines can be calculated. And alpha is your own choice. [end of text by Fer de Vries]

Although we haven't (yet) found a gnomon shape which gives civil time directly on a polar dial, speaking personally, I've learned several new things about polar sundials. And there always seem to be new lessons around every bend in the road. If any reader has new thoughts on this problem of civil time on a polar dial, please post to the sundial mailing list, or write to me at <oglesby@sover.net>.

After the Conclusion - the Quest Continues!

On 10 January 03, Fer J. de Vries sent some additional information to the Sundial Mailing List. He wrote: As a contribution to the discussion about the polar dial I made a drawing of the construction of such a dial. In this picture [see Graphic 13] I try to explain what I did. At the left from top to bottom you find a series of gnomons for each hour starting at noon, with the appropriate hour line with date points for the solstices and equinoxes. The lengths of the gnomons are as in Bits' formula (Graphic 11, bottom): gn = g * cos t. Now all the hour lines with date points are equal; only the place and length of the gnomons are different. In the middle top figure these gnomons and hour lines are placed on a north-south line, using the formula an = g * sin t * tan alpha, with alpha = 45. The distances between the foot points of the gnomons aren't equal. In that case we get the pattern as by Oyen. The shape of the line through all the endpoints of the hour gnomons is a semicircle. The second half of the pattern is added in the figure middle bottom. The difference with the figure by Bits is that he had all the gnomons equidistant, as may be seen at the left in Graphic 8.
Now shift the gnomons and hour lines with date points according to the formula by Bits for delta-x and delta-y, and the result is in the figure bottom right. The foot points of the hour gnomons are drawn in red. The foot points for 6 and 18 are arbitrary because the gnomon length is 0. Now it's easy to draw a side view of the wire. We know the distance of the foot points from the center point and the lengths of the hour gnomons. The figure at top right shows roughly the result. Reading the time before about 7 am and after 5 pm is difficult because the length of the hour gnomon is small. If I didn't make a mistake, I conclude that the principle by Bits is all right but his drawings are somewhat misleading. [end of text by Fer de Vries]
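As a postscript, the parallel-date-line construction from Part 2 (gn = k1 * cos(n*15)) can also be verified numerically. In this sketch of mine, the date-line offset y = g * tan(decl) / cos(n*15) collapses to the constant k1 * tan(decl), independent of the hour, which is exactly what "parallel to the equinox line" means:

```python
import math

def gnomon_height(n, k1=1.0):
    """Gnomon height for parallel date lines: g_n = k1 * cos(15 n degrees)."""
    return k1 * math.cos(math.radians(15 * n))

def date_offset(n, decl_deg, k1=1.0):
    """Offset of the date line from the equinox line at hour n."""
    return (gnomon_height(n, k1) * math.tan(math.radians(decl_deg))
            / math.cos(math.radians(15 * n)))

# The cosine cancels, so the offset is the same at every hour:
offsets = [date_offset(n, 23.44) for n in (7, 9, 12, 15)]
print(all(abs(o - offsets[0]) < 1e-12 for o in offsets))  # True
```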
ViPErLEED work segments

ViPErLEED operates using a set of self-contained work segments (see also the RUN parameter). The three main segments, following the logic of calculations using the tensor-LEED approximation, are:
1. Reference calculation: Full-dynamic LEED calculation, which outputs a set of theoretical beams for a given structure and the "Tensors".
2. Delta-amplitudes calculation: The delta amplitudes specify how parameter changes affect the scattering amplitudes within the tensor-LEED approximation. This calculation is based on the Tensors and a set of parameter variations specified by the user. The output of a delta-amplitudes calculation is a set of "Delta files".
3. Structure search: Uses the Delta files to vary the theoretical beams and looks for a set of parameters such that the \(R\) factor between the theoretical beams and a given set of experimental beams is minimized.
Which of these segments should be executed must be specified using the RUN parameter, using the segment numbers in the list above or a contraction of their names. More information on the allowed contractions is found in the documentation for RUN. Besides these three main segments, there are also the following minor segments, which are inserted automatically during normal ViPErLEED execution when appropriate (but can also be explicitly selected via RUN):
• Initialization: Always runs at the beginning. Reads and checks input files, runs the symmetry search, and generates derived input files if appropriate.
• Superposition calculation: Automatically runs after the search. Generates a set of theoretical beams for the actual best-fit configuration based on the tensor-LEED approximation.
• \(R\)-factor calculation: Automatically runs after the reference-calculation segment if an experimental-beams file is present, and after the superposition calculation. Calculates the \(R\) factor per beam and for the entire set of beams, and outputs an Rfactor_plots_<section>.pdf file.
Further specialized segments include:
• Error calculation: Based on a given reference structure (i.e., after a reference calculation), calculates one-dimensional error curves for variation of a single parameter. Effectively, this produces delta amplitudes for variations of a single parameter, and outputs the \(R\) factor for every single configuration along that axis.
• Full-dynamic optimization: Optimizes parameters that are not accessible to the tensor-LEED approximation, like BEAM_INCIDENCE, V0_IMAG, or unit-cell scaling. This is achieved by performing multiple full-dynamic (i.e., "reference") calculations, but without producing Tensor files. The behavior is controlled by the OPTIMIZE parameter.
The pages listed above cover normal operation, in which the theoretical beams correspond to only one surface structure. If multiple structures coexist on the sample, the same segments need to be executed, but their behavior is somewhat different, as described under:
• Domain calculations: Reference calculations are run separately for the different domains (if necessary) and delta amplitudes are generated independently. The search then combines the optimization of the different structures (weighted by their area fraction) for the best overall \(R\) factor with respect to the experimental beam set.
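As a hedged illustration of selecting segments (this fragment is my own sketch; the exact syntax accepted for RUN should be verified against the PARAMETERS documentation), a calculation running the three main segments in sequence might be requested in the PARAMETERS file as:

```
RUN = 1-3    # reference calculation, delta amplitudes, structure search
```

Initialization is prepended automatically, and the superposition and R-factor segments are inserted after the search as described above.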
Math Example: Ratios with Double Number Lines: Example 6

This example presents a three-part ratio of 3:2:2 for orange, lemon, and raspberry juice. Given 6 cups of lemon juice, students need to determine the amounts of orange and raspberry juice required. The solution shows that 9 cups of orange juice and 6 cups of raspberry juice are needed to maintain the ratio. By using a different three-part ratio and starting with a non-primary ingredient, this example challenges students to think flexibly about ratios and their applications. It demonstrates how double number lines can be used effectively even when the given information doesn't align with the first part of the ratio, reinforcing the versatility of this problem-solving tool. Providing multiple worked-out examples is essential for students to develop a comprehensive understanding of ratios and double number lines. Each new example builds upon previous knowledge while introducing new variations, helping students to see patterns and relationships across different scenarios. This approach enhances their ability to apply the concept in various situations and improves their critical thinking skills.

Teacher Script: "Now we have a juice mixture with a 3:2:2 ratio of orange, lemon, and raspberry. If we have 6 cups of lemon juice, how can we use our double number line to figure out how much orange and raspberry juice we need? Notice how we start with the lemon juice amount this time. Can you explain why the orange juice line increases by 3s while the lemon/raspberry line increases by 2s? How does this help us solve the problem efficiently?"

For a complete collection of math examples related to ratios, click on this link: Math Examples: Double Number Lines Collection.
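The double-number-line reasoning in this example is a single scaling step: divide the known amount by its share of the ratio to get the scale factor, then multiply every part by it. A small sketch (my own illustration, not part of the resource):

```python
def scale_recipe(ratio, known_part, known_amount):
    """Scale a multi-part ratio from one known ingredient amount."""
    factor = known_amount / ratio[known_part]   # 6 cups / 2 parts = 3
    return {part: parts * factor for part, parts in ratio.items()}

ratio = {"orange": 3, "lemon": 2, "raspberry": 2}
print(scale_recipe(ratio, "lemon", 6))
# {'orange': 9.0, 'lemon': 6.0, 'raspberry': 6.0}
```

The scale factor of 3 is exactly the "jump" between tick marks on the double number line: the lemon/raspberry line counts by 2s while the orange line counts by 3s.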
On energy, Laplacian energy and $p$-fold graphs

For a graph $G$ having adjacency spectrum ($A$-spectrum) $\lambda_n\leq\lambda_{n-1}\leq\cdots\leq\lambda_1$ and Laplacian spectrum ($L$-spectrum) $0=\mu_n\leq\mu_{n-1}\leq\cdots\leq\mu_1$, the energy is defined as $E(G)=\sum_{i=1}^{n}|\lambda_i|$ and the Laplacian energy is defined as $LE(G)=\sum_{i=1}^{n}|\mu_i-\frac{2m}{n}|$. In this paper, we give upper and lower bounds for the energy of $KK_n^j,~1\leq j \leq n$, and as a consequence we generalize a result of Stevanovic et al. [More on the relation between energy and Laplacian energy of graphs, MATCH Commun. Math. Comput. Chem. {\bf 61} (2009) 395-401]. We also consider the strong double graph and the strong $p$-fold graph to construct some new families of graphs $G$ for which $E(G)> LE(G)$.

Keywords: spectra of graphs; energy; Laplacian energy; strong double graph; strong $p$-fold graph
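The two quantities defined in the abstract are straightforward to compute for small graphs. The sketch below (my own, using numpy; not part of the paper) evaluates both definitions for $K_2$, where the adjacency eigenvalues are $\pm 1$ and the Laplacian eigenvalues are $0$ and $2$ with $2m/n = 1$, so $E = LE = 2$:

```python
import numpy as np

def energies(A):
    """Return (E, LE) for a graph given by its (symmetric 0/1) adjacency matrix A."""
    n = A.shape[0]
    m = A.sum() / 2                       # number of edges
    lam = np.linalg.eigvalsh(A)           # adjacency spectrum
    L = np.diag(A.sum(axis=1)) - A        # Laplacian matrix
    mu = np.linalg.eigvalsh(L)            # Laplacian spectrum
    E = float(np.abs(lam).sum())
    LE = float(np.abs(mu - 2 * m / n).sum())
    return E, LE

A = np.array([[0, 1], [1, 0]], dtype=float)  # K_2
print(energies(A))  # E ≈ 2.0, LE ≈ 2.0
```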
Kids.Net.Au - Encyclopedia > Hipparchus

Hipparchus (Greek Hipparchos) (circa 194 B.C. - circa 120 B.C.) was a Greek astronomer, mathematician and geographer. Hipparchus was born in Nicaea (Greek Nikaia), in the ancient district of Bithynia (modern-day İznik in the province of Bursa, Turkey). The exact dates of his life are not known for certain, but he is believed to have made his observations from 162 B.C. to 126 B.C. The date of his birth (circa 190 B.C.) was calculated by Delambre, based on clues in his work. Nothing is known about his youth either. Most of what is known about Hipparchus comes from Strabo's Geographica (Geography), from Pliny the Elder's Naturalis historia (Natural History) and from Ptolemy's Almagest. He probably studied in Alexandria. His main original works are lost. His only preserved work is the Commentary on the Phaenomena of Eudoxus and Aratus (or Commentary on Aratus), a commentary in 2 books on a poem by Aratus, which describes the constellations and the stars which comprise them. This work contains many measurements of stellar positions and was translated by Karl Manitius (In Arati et Eudoxi Phaenomena, Leipzig, 1894). All his other works were lost in the burning of the Great Royal Alexandrian Library in 642. For his achievements he is recognized as the originator and father of scientific astronomy. He is believed to be the greatest Greek observational astronomer, and many regard him as the greatest astronomer of ancient times, although Cicero gave preference to Aristarchus of Samos, and some place Ptolemy of Alexandria in this position instead. Hipparchus had in 134 B.C.
ranked stars in six magnitude classes according to their brightness: he assigned the value of 1 to the 20 brightest stars, to weaker ones a value of 2, and so forth down to stars of class 6, which can barely be seen with the naked eye. This scale was later adopted by Ptolemy, and modern astronomers, with telescopes, photographic plates and other light-measuring devices, extended it and put it on a quantitative basis in terms of the flux density j of a star's light at the Earth. Such measurements showed that the flux density of a star of apparent magnitude 1^m is a hundred times greater than that of a star of magnitude 6^m. If we take into account the property of the eye that its response is proportional to the logarithm of the stimulus, we get Pogson's physiological law (also called Pogson's ratio) from 1854 (other sources say 1858): <math>m - m_0 = -2.5 \log_{10}{\left(\frac{j}{j_0}\right)}</math> Hipparchus made many astronomical instruments, which were used for a long time for naked-eye observations. About 150 B.C. he made the first astrolabion, which may have been an armillary sphere or the predecessor of the planar instrument astrolabe, which was improved in the 3rd century by Arab astronomers and brought by them to Europe in the 10th century. With an astrolabe Hipparchus was among the first able to measure geographical latitude and time by observing stars. Previously this was done in the daytime by measuring the shadow cast by a gnomon, but the way the gnomon was used changed during his time: it was set in a metallic hemisphere divided inside into concentric circles and used as a portable instrument, named the scaphion, for determining geographical coordinates from measured solar altitudes. With this instrument Eratosthenes of Cyrene around 220 B.C. had measured the length of Earth's meridian, and after that it was used to survey smaller regions as well.
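Pogson's relation quoted above can be checked numerically (an illustrative sketch, not part of the original article): a flux ratio of 1:100 corresponds to a difference of exactly 5 magnitudes, matching the step from magnitude 1 to magnitude 6.

```python
import math

def magnitude_difference(j, j0):
    # Pogson's law: m - m0 = -2.5 * log10(j / j0)
    return -2.5 * math.log10(j / j0)

# A star whose flux is 100 times smaller is 5 magnitudes weaker
# (e.g. magnitude 6 versus magnitude 1).
print(magnitude_difference(1, 100))  # ~5.0
```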
Hipparchus proposed to determine the geographical longitudes of several cities at solar eclipses. An eclipse does not occur simultaneously at all places on Earth, and the difference in longitude can be computed from the difference in the local times at which the eclipse is observed. His method would have given more accurate data than any previous one, had it been correctly carried out, but it was never properly applied, and for this reason maps remained rather inaccurate until modern times. Ptolemy reported that Hipparchus invented an improved type of theodolite with which to measure angles. We know that Hipparchus compiled one of the first catalogues of stars, and also compiled the first trigonometric tables. He tabulated values for the chord function, which gave the length of the chord for each angle. In modern terms, the chord of an angle equals twice the sine of half the angle, i.e. chord(A) = 2 sin(A/2). He had a method of solving spherical triangles. The theorem in plane geometry called Ptolemy's theorem was developed by Hipparchus; it was later elaborated on by Carnot. Hipparchus was the first to show that the stereographic projection is conformal, and that it transforms circles on the sphere that do not pass through the center of projection into circles on the plane. This was the basis for the astrolabe. Hipparchus is perhaps most famous for having been the first to measure the precession of the equinoxes. There is some suggestion that the Babylonians may have known about precession, but it appears that Hipparchus was the first to really understand and measure it. According to al-Battani, Chaldean astronomers had distinguished the tropical and sidereal year. He stated that around 330 B.C. they had an estimate for the length of the sidereal year of S[K] = 365^d 6^h 11^m (= 365.2576388^d), with an error of about 110^s. This phenomenon was probably also known to Kidinnu around 314 B.C. Biot and Delambre attribute the discovery of precession also to ancient Chinese astronomers. Hipparchus mostly used simple astronomical instruments such as the gnomon, the astrolabe and the armillary sphere. Before him, Meton, Euctemon and their students had determined the solstice points in 440 B.C. (431 B.C.). In 146 B.C., in Alexandria, Hipparchus independently determined the equinoctial point. He used Archimedes' observations of solstices, and himself made several observations of the solstices and equinoxes. From these observations, a year later in 145 B.C., he independently determined the length of the tropical year to be T[H] = 365 + 1/4 - 1/300 = 365.24666...^d = 365^d 5^h 55^m 12^s, which differs from the actual value (modern estimate) T = 365.24219...^d = 365^d 5^h 48^m 45^s by only 6^m 27^s (other quoted figures: 6^m 15^s; or, with T = 365.2423^d = 365^d 5^h 48^m, by only 7^m). We do not know the precision he could actually achieve; most probably he could not measure to the second, so the value of his discovery should be taken as 365^d 5^h 55^m. Before him, Chaldean astronomers knew that the lengths of the seasons are not equal. Hipparchus measured the full length of winter and spring to be 184 1/2 days, and of summer and autumn 180 1/2 days. In the geocentric view, which he preferred, he explained this fact by assuming that the Earth does not sit at the centre of the Sun's orbit around it, but lies eccentrically, at 1/24 r from the centre. With his estimate of the lengths of the seasons he tried to determine, as we would today, the eccentricity of the Earth's orbit, and according to Dreyer he got the incorrect value e = 0.04166 (which is too large). The question remains whether he is really the author of this estimate. After that, from 141 B.C. to 126 B.C., he worked mostly on the island of Rhodes, again in Alexandria and in Syracuse, and around 130 B.C. in Babylon, during which period he made many precise and lasting observations.
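Two quantitative claims above can be verified with a short script (an illustrative sketch, not part of the original article): the modern chord–sine relation chord(A) = 2 sin(A/2), and the arithmetic behind Hipparchus' tropical-year value 365 + 1/4 − 1/300 days.

```python
import math

# Chord of an angle on a unit circle: chord(A) = 2 * sin(A / 2).
def chord(angle_rad):
    return 2.0 * math.sin(angle_rad / 2.0)

# The chord of 60 degrees on a unit circle equals the radius, i.e. 1.
assert abs(chord(math.radians(60)) - 1.0) < 1e-12

# Hipparchus' tropical year: 365 + 1/4 - 1/300 days = 365^d 5^h 55^m 12^s.
hipparchus = 365 + 1 / 4 - 1 / 300   # 365.24666... days
modern = 365.24219                   # modern estimate quoted in the text

# Difference from the modern value in minutes: about 6.4 min (~6^m 27^s).
diff_minutes = (hipparchus - modern) * 24 * 60
print(round(hipparchus, 5), round(diff_minutes, 2))
```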
When he measured the length of the gnomon's shadow at the solstice he determined the length of the tropical year, and he recorded the times of risings and settings of known bright stars. From all of these measurements he found in 134 B.C. the length of the sidereal year to be S[H] = 365^d 6^h 10^m (365.2569444...^d), which differs from today's S = 365.2563657...^d = 365^d 6^h 9^m 10^s by 50^s. Hipparchus also had measurements of the times of solstices from Aristarchus dating from 279 B.C. and from the school of Meton and Euctemon dating from 431 B.C. This was a long enough period of time to allow him to calculate the difference between the length of the sidereal year and the tropical year, and led him to the discovery of precession. When he compared both lengths, he saw that the tropical year is shorter than the sidereal year by about 20 minutes. He was the first in history to correctly explain this by the retrograde movement of the vernal point γ along the ecliptic, by about 45", 46" or 47" (36" or 3/4' according to Ptolemy) per annum (today's value is Ψ' = 50.387", 50.26"), and he showed that the Earth's axis is not fixed in space. After that, in 135 B.C., prompted by a nova in the constellation Scorpius, he measured with an equatorial armillary sphere the ecliptic coordinates of about 850 stars (the figures 1600 or 1080, often quoted elsewhere, are false), and by 129 B.C. he had made the first big star catalogue. This map allowed him to look for changes in the sky; to our great sadness it is not preserved today. His star map was thoroughly revised only much later, in 964 by al-Sufi and in 1437 by Ulugh Beg. Later, Halley would use his star catalogue to discover proper motions. His work speaks for itself; another sad fact is that we know almost nothing of his life, as Hoyle already stressed.
In his star map Hipparchus recorded the position of every star on the basis of its celestial latitude (its angular distance from the ecliptic) and its celestial longitude (its angular distance from an arbitrary reference point, by astronomical custom the vernal equinox). The system of his star map was also transferred to maps of the Earth. Before him, longitudes and latitudes had been used by Dicaearchus of Messana, but they did not acquire their meaning in a coordinate grid until Hipparchus. By comparing his own measurements of the position of the equinoxes relative to the star Spica during a lunar eclipse at the time of equinox with those of Euclid's contemporaries Timocharis of Alexandria (circa 320 B.C.-260 B.C.) and Aristyllus 150 years earlier, and with the records of Chaldean astronomers, especially Kidinnu's records, he observed that the equinox had moved 2° relative to Spica. He also noticed this motion in other stars. He obtained a value of not less than 1° in a century. The modern value is 1° in 72 years. He also knew the works Phainomena (Phenomena) and Enoptron (Mirror of Nature) of Eudoxus of Cnidus, who had his school near Cyzicus on the southern coast of the Sea of Marmara, and, through Aratus' astronomical epic poem Phenomena, Eudoxus' sphere, made of metal or stone, on which were marked the constellations, the brightest stars, the tropic of Cancer and the tropic of Capricorn. These comparisons puzzled him, because he could not reconcile Eudoxus' detailed statements with his own observations and those of his time. From all this he concluded that the coordinates of the stars and of the Sun had changed systematically: their celestial latitudes β remained unchanged, but their celestial longitudes λ had decreased, as if the equinoctial points, the intersections of the ecliptic and the celestial equator, were moving along the ecliptic at a steady rate of 1/100' per year. After him, many Greek and Arab astronomers confirmed this phenomenon.
Ptolemy compared his catalogue with those of Aristyllus, Timocharis and Hipparchus and with the observations of Agrippa and Menelaus of Alexandria from the early 1st century, and finally confirmed Hipparchus' empirical finding that the poles of the celestial equator circle the pole of the ecliptic once in one Platonic year, approximately 25,777 years. The angular radius of these circles is equal to the inclination of the ecliptic. In this time the equinoctial points traverse the whole ecliptic, moving 1° per century, the same rate Hipparchus had found. Because of this agreement, Delambre, P. Tannery and other French historians of astronomy wrongly jumped to the conclusion that Ptolemy had derived his star catalogue from Hipparchus' by simple extrapolation. This view held until 1898, when Marcel Boll and others found that Ptolemy's catalogue differs from Hipparchus' not only in the number of stars but in other respects as well. Ptolemy named the phenomenon thus simply because the vernal point γ leads the Sun. The Latin praecedere means to precede or overtake, and today the word also connotes to twist or to turn. The name itself shows that the phenomenon was discovered in practice before its theoretical explanation; otherwise it would have been given a better term. Many later astronomers, physicists and mathematicians occupied themselves with this problem, practically and theoretically, and the phenomenon opened many new promising avenues in several branches of celestial mechanics: Thabit's theory of trepidation and oscillation of the equinoctial points, Newton's law of universal gravitation, which explained it in full, Euler's kinematic equations and Lagrange's equations of motion, d'Alembert's dynamical theory of the motion of the rigid body, some algebraic solutions for special cases of precession, Flamsteed's and Bradley's difficulties in making precise telescopic star catalogues, Bessel's and Newcomb's
measurements of precession, and finally the precession of the perihelion in Einstein's General Theory of Relativity. Lunisolar precession causes the motion of the point γ along the ecliptic, in the direction opposite to the Sun's apparent annual motion, and the circulation of the celestial pole. This circle becomes a spiral because of the additional influence of the planets. This is planetary precession, in which the ecliptic plane swings about its central position by ±4° in 60,000 years. The angle between the ecliptic and the celestial equator, ε = 23° 26', is decreasing by 0.47" per annum. Besides this, the point γ slides along the equator by p = 0.108" per annum, at present in the same direction as the Sun. The sum of the precessions gives an annual general precession in longitude of Ψ = 50.288", which gives rise to the tropical year. Hipparchus described the motion of the Sun and obtained a value for the eccentricity. It was known that the seasons were of unequal length, not something that would be expected if the Sun moved around the Earth in a circle at uniform speed (of course today we know that the planets move in ellipses, but this was not discovered until Kepler published his first two laws of planetary motion in 1609). His solution was to place the Earth not at the center of the Sun's motion, but at a distance from the center. This model of the Sun's motion described the actual motion of the Sun fairly well. Hipparchus also studied the motion of the Moon, obtained more accurate measurements of some periods of the motion than existed previously, and undertook to find the distances and sizes of the Sun and the Moon. According to Strabo of Amaseia in Pontus, he determined the length of the synodic month, around 139 B.C. in Babylon, to within 23/50^s = 0.46^s. He determined the Moon's horizontal parallax.
He discovered the irregularity in the lunar movement which changes the mean lunar longitude and is today called the equation of the center, with the value I = 377' sin m + 13' sin 2m, where m is the mean anomaly of the Moon. Delambre, in his Histoire de l'Astronomie Ancienne (1817), concluded that Hipparchus knew and used a true (celestial) equatorial coordinate system, directly with right ascension and declination (or its complement, polar distance). Later, Otto Neugebauer (1899-1990), in his A History of Ancient Mathematical Astronomy (1975), rejected Delambre's claims. Hipparchus is believed to have died on the island of Rhodes. The Hipparcos Space Astrometry Mission, an astrometric project of the European Space Agency (ESA), was named after him. All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/hi/Hipparchus","timestamp":"2024-11-12T16:39:06Z","content_type":"application/xhtml+xml","content_length":"54891","record_id":"<urn:uuid:3396e942-3d6a-4d98-af55-43c95e67c19c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00341.warc.gz"}
Error with convex >= affine

I'm working on a model that is related to my previous post. A slight modification is applied as:

variable a(N)
variable p(N, M)
expression b(N)
obj = obj + a * Constant;
subject to
for i = 1:N
    b(i) = sum(p(i));
end
for i = 1:length(list)
    b(list(i)) = max(b(list(i)), 1e-3);
end
non-zero_threshold*a <= b <= U.*a;

As in the model, I'd like to modify some values that are indexed by a provided list. If a value is <= 1e-3, then it will be set to 1e-3. For example, given a list like [2, 3, 5], if b(2), b(3), or b(5) is zero, then it will be set to 1e-3. However, I got an error saying:

b >= non-zero_threshold*a - error: {convex} >= {real affine}

As described here, I think that the max() function should be convex, and thus the problem should also be convex, but sometimes things go wrong and sometimes it works well. Am I missing something, or am I misunderstanding the usage of the max() function in cvx? Thank you in advance!

The formulation I gave works when b is a variable, not a nonlinear expression, such as max. When b is nonlinear, one of the two inequalities will be going in the wrong direction to be convex. So you will also have to deal with the non-convex direction of max. You can see how to handle max, used in a non-convex way, in section 2.3 of https://www.fico.com/en/resource-access/download/3217 . If you need more help on this, I will refer you to https://or.stackexchange.com/ , where your questions should be written out mathematically, and not in terms of CVX. I really don't know what problem you are trying to solve (formulate), but your formulation attempt looks rather a mess as of now. I will also note that the statement does nothing. That is the min of a vector. That is not an objective function statement, which would be minimize(something), where something needs to evaluate to a real scalar, not a vector. Thank you Mark! I will read the materials you provided carefully.
{"url":"https://ask.cvxr.com/t/error-with-convex-affine/8652","timestamp":"2024-11-02T22:06:38Z","content_type":"text/html","content_length":"16529","record_id":"<urn:uuid:563b5799-21da-402f-90be-d369ec1bfe6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00826.warc.gz"}
CFA Level 1: Measures of Location

Level 1 CFA® Exam: Measures of Location

In this lesson, we're going to deal with measures of central tendency and measures of location. Measures of location are a broader category that includes measures of central tendency. Both allow us to characterize an entire population or its sample. A population consists of all elements of a group. We can see it as a set of all the members of the group we're interested in. A descriptive characteristic of a population is called a parameter. A parameter can be, for example, a mean value. A sample is a subset of a population. A sample is usually selected randomly, and a random sample is another key term in statistics. A sample is described by a so-called sample statistic (statistic for short), for example, a sample mean. Measures of central tendency help us determine where the center of the analyzed data is. The most common measures of central tendency include the arithmetic mean, the mode, and the median. Measures of central tendency are a sub-type of measures of location: measures of location not only tell us where data are centered but also provide information on observations in other locations, i.e. on data location (or distribution) more generally. Measures of central tendency and measures of location help to determine the similarities between the elements of a dataset, whereas the differences are examined using measures of dispersion, skewness, and peakedness, which you're going to learn about in the next lessons. For now, let's focus on measures of central tendency. The measures of central tendency that we're going to discuss for your level 1 CFA exam include:
• arithmetic mean,
• median,
• mode,
• weighted mean,
• geometric mean, and
• harmonic mean.
The arithmetic mean can be most simply defined as the sum of all observations divided by the number of observations. If we're dealing with a sample, the arithmetic mean can be computed using the following formula:

\(\bar{X}=\frac{\sum_{i=1}^{n}X_i}{n}\)

Note: When you multiply the arithmetic mean by the number of observations, the result will be the sum of the observations. Also, remember that the sum of deviations of the individual elements of a set from the mean equals zero. The arithmetic mean is a popular and frequently used measure, as it is easy to calculate and can be interpreted intuitively. We should remember, however, that it doesn't always properly reflect the characteristics of a dataset. One of the reasons is that the arithmetic mean takes into account all elements of a dataset, including outliers. Outliers are extreme observations, that is, observations extremely different from the majority of observations for a variable. So, outliers take either extremely high or extremely low values. In the case of a large difference between the highest or the lowest value and the central value, the arithmetic mean can also be very high or very low, which may distort the characteristics of the examined data. On the other hand, the fact that the mean includes all elements of a dataset is an advantage when compared to such measures of central tendency as the mode or the median. When we detect outliers, the first thing we should do is check for possible errors in our data. We might have drawn outliers from a different population or recorded an outlier erroneously. If there are no errors, we generally have two options: either we leave the data as they are, that is, we don't remove the outliers, or we remove the outliers. If we decide to remove outliers, two solutions come in handy:
• the trimmed mean, and
• the winsorized mean.
The following example shows how the trimmed mean and winsorized mean are calculated.
Example 1 (trimmed mean vs winsorized mean)

Our sample dataset includes the following 60 numbers: -111, -33, -12, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 327, 576, 5012

You notice the outliers in the dataset and decide to remove them. What will the dataset look like for the 10% trimmed mean and the 90% winsorized mean?

Another measure of central tendency is the median. The median is the value of the middle element of a set; it divides a dataset into two equal parts, so that half of the observations are smaller than the median and the other half are greater.
• When there is an even number of elements in a set, the median is the arithmetic mean of the two neighboring middle numbers.
• When there is an odd number, the median is the element in the middle.
An advantage of the median is the fact that it ignores extreme values, so it is insensitive to extreme deviations. What is more, we can determine the median even if we don't know all observations precisely. To find it, we need to sort the items in the set from the smallest to the greatest value. Then, we need to determine the location of the middle value. When there is an odd number of observations, the median equals the value of the item in the middle. So, for a number of observations equal to \(n\), where \(n\) is an odd number, the position of the middle value is determined using the following formula:

\(\text{position of middle value}=\frac{n+1}{2}\)

When there is an odd number of observations, the location of the middle value is clearly defined and all we need to do is read the value from the set. However, when the number of observations is even, we need to take the arithmetic mean of the two values closest to the middle.
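The two adjusted means can be sketched in a few lines of Python. Note that conventions differ on whether the stated percentage is removed in total or from each tail; the sketch below (invented for illustration, not from the lesson) assumes the stated fraction is split evenly between the two tails, and uses a small made-up dataset with two obvious outliers.

```python
def trimmed_mean(data, fraction):
    """Drop fraction/2 of the observations from each end, then average the rest."""
    xs = sorted(data)
    k = int(len(xs) * fraction / 2)
    kept = xs[k:len(xs) - k]
    return sum(kept) / len(kept)

def winsorized_mean(data, fraction):
    """Replace fraction/2 of each tail with the nearest remaining value, then average."""
    xs = sorted(data)
    k = int(len(xs) * fraction / 2)
    xs[:k] = [xs[k]] * k              # pull the low tail up
    xs[len(xs) - k:] = [xs[-k - 1]] * k  # pull the high tail down
    return sum(xs) / len(xs)

data = [-100, 2, 3, 4, 5, 6, 7, 8, 9, 500]   # two obvious outliers
print(trimmed_mean(data, 0.2))     # mean of 2..9 -> 5.5
print(winsorized_mean(data, 0.2))  # -100 -> 2 and 500 -> 9, mean -> 5.5
```

Both adjusted means land near the bulk of the data, while the plain arithmetic mean of this dataset (44.4) is badly distorted by the outliers.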
Where this is the case, the median can be calculated using the following formula:

\(\text{median}=\frac{\text{value at position }\frac{n}{2}+\text{value at position }(\frac{n}{2}+1)}{2}\)

Now, let's discuss a few other measures of central tendency, that is:
• the weighted mean,
• the geometric mean, and
• the harmonic mean.
The weighted mean is often used in portfolio analysis or the portfolio approach. For example, we can calculate the rate of return achieved by a mutual fund that holds securities of 10 different companies. Knowing the rates of return on the individual stocks, we also need to know their weights in the portfolio to be able to compute the weighted mean return. The weighted mean equals the sum of the products of the values of the observations and their weights:

\(\bar{X}_w=\sum_{i=1}^{n}w_iX_i\)

Note that the sum of all weights must always equal 1. In the case of the weighted mean return, by the values of the observations we mean the returns on the individual stocks. Another measure of central tendency is the geometric mean. The geometric mean is often used to compute the average rate of return over a series of periods or to calculate a growth rate. The geometric mean of a set of observations is given by the following formula:

\(G=\sqrt[n]{X_1 X_2\cdots X_n}\)

Of course, every observation should be greater than or equal to zero. Very often we use the geometric mean while analyzing rates of return. When calculating a rate of return, the formula for the geometric mean looks as follows:

\(R_G=\sqrt[n]{(1+R_1)(1+R_2)\cdots(1+R_n)}-1\)

The last type of mean we're going to discuss here is the harmonic mean. The harmonic mean has fewer applications than the arithmetic mean and the geometric mean. It can be used, however, to determine the average purchase price paid for stocks if we bought them in several periods for the same amount, or to calculate the average time necessary for the production of a given product. The harmonic mean uses the reciprocals of the values of the observations.
It can be represented with this expression:

\(H=\frac{n}{\sum_{i=1}^{n}\frac{1}{X_i}}\)

The harmonic mean equals the number of observations divided by the sum of the reciprocals of the observations. Measures of location are a broader concept than measures of central tendency. The latter, as their name suggests, refer to the middle of the data. Measures of location provide us with information on observations in different locations, not only in the center. Therefore, measures of central tendency are a sub-type of measures of location. For your level 1 CFA exam, we're going to take a look at measures of location such as:
• percentiles,
• quartiles,
• quintiles, and
• deciles.
Let's start with percentiles. A given percentile is a value below which a given percentage of observations is located. Percentiles divide a set into a hundred parts. For example, if we take the 30th percentile, 30% of observations have a lower value and 70% a higher value than this percentile. The location of the \(y\)-th percentile in a sorted dataset of \(n\) observations can be determined as follows:

\(L_y=(n+1)\times\frac{y}{100}\)

A quartile is a quarter of a population: for example, the first quartile is the value below which 25% of the observations are located. A quintile is a fifth of a population. For example, the first quintile is a value below which 20% of the dataset is located. Note that we can calculate the location of the third quintile as the location of the 60th percentile and the location of the fourth quintile as the location of the 80th percentile. The last measure we're going to discuss is the decile. A decile is a tenth of the population. Take the following example: the fourth decile is equal to the value of the 40th percentile. The fifth decile is the median of the population. Below the 7th decile there is 70% of the dataset, and 30% of the dataset has a higher value than this decile.

Example 3 (measures of location)

Suppose you have the following data on the annual rates of return of a fund:

Year | Rate of return (%)
2018 | -19
2019 | -9
2022 | -1

Calculate and interpret the rate of return as the arithmetic mean and geometric mean. Then, determine the median, the mode, the first quartile, and the eighth decile.

1.
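The means and the percentile-location rule described above can be illustrated with Python's standard statistics module (Python 3.8+; the numerical values below are invented for illustration):

```python
import statistics

# Geometric mean return: n-th root of the product of (1 + R_i), minus 1.
returns = [0.10, -0.05]                       # two annual returns
growth = [1 + r for r in returns]
geo_return = statistics.geometric_mean(growth) - 1   # ~0.0223, i.e. ~2.23% p.a.

# Weighted mean return: sum of weight * return, with weights summing to 1.
weights = [0.5, 0.3, 0.2]
stock_returns = [0.08, 0.12, 0.02]
weighted = sum(w * r for w, r in zip(weights, stock_returns))  # ~0.08

# Harmonic mean: n divided by the sum of reciprocals.
harmonic = statistics.harmonic_mean([2, 3, 6])  # 3 / (1/2 + 1/3 + 1/6) = 3

# Location of the y-th percentile among n sorted items: (n + 1) * y / 100.
loc_median = (9 + 1) * 50 / 100                 # the 5th value is the median of 9 items
print(geo_return, weighted, harmonic, loc_median)
```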
Measures of central tendency and measures of location help to determine the similarities between the elements of a dataset. 2. The arithmetic mean is the sum of all observations divided by the number of observations. 3. The sum of deviations of the individual elements of a set from the mean equals zero. 4. Outliers are extreme observations, that is, observations extremely different from the majority of observations for a variable. 5. If we decide to remove outliers, we can use either the trimmed mean or the winsorized mean. 6. The median is the value of the middle element of a set. 7. The mode is the most frequently occurring element of a set. 8. If we can identify one most frequent value, we're dealing with a unimodal distribution. 9. The weighted mean is often used in portfolio analysis or the portfolio approach. 10. The harmonic mean can be used to determine the average purchase price paid for stocks if we bought them in several periods for the same amount. 11. A given percentile is a value below which a given percentage of observations is located. 12. To find the location of a percentile we need to sort the data set in ascending order.
{"url":"https://soleadea.org/cfa-level-1/measures-of-location","timestamp":"2024-11-13T21:37:33Z","content_type":"text/html","content_length":"173173","record_id":"<urn:uuid:20414a72-f4d1-42dd-a2e8-130a19423406>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00598.warc.gz"}
A box with an initial speed of 5 m/s is moving up a ramp. The ramp has a kinetic friction coefficient of 5/7 and an incline of (3π)/8. How far along the ramp will the box go? | HIX Tutor

A box with an initial speed of 5 m/s is moving up a ramp. The ramp has a kinetic friction coefficient of 5/7 and an incline of (3π)/8. How far along the ramp will the box go?

Answer 1

The distance is approximately 1.07 m.

Take the direction up and parallel to the plane as positive. The kinetic friction force is F_r = μ_k N, with μ_k = 5/7, and on the incline the normal force is N = mg cos θ, with θ = 3π/8. While the box moves up, both the friction force and the component of gravity along the plane act down the slope, so Newton's second law gives a net force of −mg sin θ − μ_k mg cos θ and hence an acceleration of

a = −g(sin θ + μ_k cos θ)

The negative sign indicates a deceleration. Using the equation of motion v² = v₀² + 2ad with final speed v = 0 and v₀ = 5 m/s gives d = v₀² / (2g(sin θ + μ_k cos θ)) ≈ 1.07 m.

Answer 2

To find the distance along the ramp the box will go, you can use the following steps:
1. Calculate the deceleration of the box along the ramp from the components of gravity and friction (both act against the motion while the box moves up).
2. Use the kinematic equation to find the distance traveled by the box along the ramp.
First, calculate the magnitude of the deceleration:
a = g * sin(θ) + μ_k * g * cos(θ)
• g is the acceleration due to gravity (approximately 9.8 m/s²)
• θ is the angle of the incline (3π/8)
• μ_k is the coefficient of kinetic friction (5/7)
Then, calculate the distance using the kinematic equation:
d = (v_i² - v_f²) / (2 * a)
• v_i is the initial velocity (5 m/s)
• v_f is the final velocity (0 m/s, because the box eventually stops)
• a is the deceleration calculated earlier
After obtaining the values, plug them into the equation to find the distance traveled by the box along the ramp.
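The stated result can be reproduced with a short script (assuming g = 9.8 m/s²; while the box moves up the slope, both the gravity component and friction decelerate it):

```python
import math

g = 9.8                  # m/s^2
mu_k = 5 / 7             # kinetic friction coefficient
theta = 3 * math.pi / 8  # incline angle in radians
v_i = 5.0                # initial speed, m/s

# Magnitude of the deceleration while moving up: gravity component plus friction.
a = g * (math.sin(theta) + mu_k * math.cos(theta))

# v_f^2 = v_i^2 - 2*a*d with v_f = 0  =>  d = v_i^2 / (2*a)
d = v_i ** 2 / (2 * a)
print(round(d, 2))  # 1.07 (meters)
```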
Python Program to Check number representation is in Binary - Quescol

Python Program to Check number representation is in Binary

In this tutorial you will learn how to write a program in the Python programming language to check whether a given number's representation is binary or not. We are not going to check the base of the number here. As you know, we can represent any number in binary format using only 0s and 1s; for example, the binary representation of 2 is 0010. So we will only check whether the given input number contains just the digits 0 and 1, or any other digits as well.

What is Binary Number Representation?

Binary number representation refers to a way of representing numbers using only two symbols or digits—typically 0 and 1. This base-2 numeral system forms the foundation of all modern computer systems because it directly corresponds to the off-and-on states of electronic switches, such as transistors.

Key Characteristics of Binary Number Representation
1. Base-2 System:
□ Unlike the decimal system, which is a base-10 system using digits from 0 to 9, the binary system uses only two digits, 0 and 1. Each digit in this system is called a “bit” (short for binary digit).
2. Positions and Values:
□ Each position in a binary number represents a power of 2, with the rightmost position representing 2^0, the next one 2^1, then 2^2, and so on. For example, the binary number 101 is calculated as 1×2^2 + 0×2^1 + 1×2^0 = 5 in decimal.
3. Applications:
□ Binary numbers are essential for computers and digital systems. They help in encoding data, performing arithmetic, and managing operations within microprocessors and other digital circuits.
4. Addition and Subtraction:
□ Binary arithmetic is straightforward because it involves only two digits. Rules for binary addition include:
☆ 0+0=0
☆ 1+0=1 (and vice versa)
☆ 1+1=10 (where 10 in binary represents 2 in decimal)
□ Subtraction follows similar rules but requires borrowing in the case of 0−1.
5.
Advantages in Computing:
□ Using only two states (on and off) simplifies the design of electronic components. This simplicity makes it easier to design reliable and efficient hardware.
6. Storage and Transmission:
□ Binary data is easy to store and transmit because it requires less information to represent states or values. It also adapts well to error-checking and correction techniques, which are essential in data communications and storage.

How will our program behave?

In the program below, if someone gives an input made up only of 0s and 1s, the program will report that the given number is in binary format. If the input contains any other digit, such as 2, 3, or anything else, the program will report that the given number is not in binary format.

Program to check given number representation is in binary or not

num = int(input("please give a number : "))
while num > 0:
    j = num % 10
    if j != 0 and j != 1:
        print("num is not binary")
        break
    num = num // 10
if num == 0:
    print("num is binary")

please give a number : 5
num is not binary

Program Explanation
1. Input from User:
□ The program starts by asking the user to input a number. The input is converted to an integer using int().
2. Loop to Check Each Digit:
□ A while loop runs as long as num is greater than 0. This loop is used to check each digit of the number from right to left.
3. Digit Extraction and Check:
□ Inside the loop, the last digit of num is extracted using num % 10 and stored in j.
□ The program then checks whether j is 0 or 1. If j is neither 0 nor 1, the program prints “num is not binary” and breaks out of the loop, ending the execution.
4. Updating the Number:
□ If the digit is either 0 or 1, num is updated by removing the last digit using integer division by 10 (num // 10).
5. Final Check for Binary:
□ If the loop completes and num becomes 0 (indicating all digits were checked and were either 0 or 1), the program prints “num is binary”.
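An equivalent approach (my own sketch, not part of the original tutorial) is to treat the input as a string and test each character, which also preserves leading zeros like the "0010" example above:

```python
def looks_binary(s: str) -> bool:
    """Return True if s is non-empty and made up only of the characters '0' and '1'."""
    return s != "" and set(s) <= {"0", "1"}

print(looks_binary("0010"))  # True  — only 0s and 1s
print(looks_binary("5"))     # False — contains a digit other than 0 or 1
```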
Dhanbad to Phusro distance

Distance in KM: The distance from Dhanbad to Phusro is 55.938 km
Distance in Miles: The distance from Dhanbad to Phusro is 34.8 miles
Straight-line distance in KM: The straight-line distance from Dhanbad to Phusro is 43.7 km
Straight-line distance in Miles: The straight-line distance from Dhanbad to Phusro is 27.2 miles
Travel Time: 1 hr 5 min

Dhanbad latitude and longitude: Latitude 23.7956201, Longitude 86.43036310000002
Phusro latitude and longitude: Latitude 23.7627297, Longitude 86.00236310000003
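The straight-line figure above can be checked with the haversine great-circle formula (my own sketch, not part of the original page), using the listed coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

d = haversine_km(23.7956201, 86.43036310000002, 23.7627297, 86.00236310000003)
print(round(d, 1))  # ≈ 43.7, matching the straight-line distance above
```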
[in] SIDE is CHARACTER*1
= 'R': compute right eigenvectors only;
= 'L': compute left eigenvectors only;
= 'B': compute both right and left eigenvectors.

[in] EIGSRC is CHARACTER*1
Specifies the source of eigenvalues supplied in (WR,WI):
= 'Q': the eigenvalues were found using DHSEQR; thus, if H has zero subdiagonal elements, and so is block-triangular, then the j-th eigenvalue can be assumed to be an eigenvalue of the block containing the j-th row/column. This property allows DHSEIN to perform inverse iteration on just one diagonal block.
= 'N': no assumptions are made on the correspondence between eigenvalues and diagonal blocks. In this case, DHSEIN must always perform inverse iteration using the whole matrix H.

[in] INITV is CHARACTER*1
= 'N': no initial vectors are supplied;
= 'U': user-supplied initial vectors are stored in the arrays VL and/or VR.

[in,out] SELECT is LOGICAL array, dimension (N)
Specifies the eigenvectors to be computed. To select the real eigenvector corresponding to a real eigenvalue WR(j), SELECT(j) must be set to .TRUE.. To select the complex eigenvector corresponding to a complex eigenvalue (WR(j),WI(j)), with complex conjugate (WR(j+1),WI(j+1)), either SELECT(j) or SELECT(j+1) or both must be set to .TRUE.; then on exit SELECT(j) is .TRUE. and SELECT(j+1) is .FALSE..

[in] N is INTEGER
The order of the matrix H. N >= 0.

[in] H is DOUBLE PRECISION array, dimension (LDH,N)
The upper Hessenberg matrix H.

[in] LDH is INTEGER
The leading dimension of the array H. LDH >= max(1,N).

[in,out] WR is DOUBLE PRECISION array, dimension (N)
[in] WI is DOUBLE PRECISION array, dimension (N)
On entry, the real and imaginary parts of the eigenvalues of H; a complex conjugate pair of eigenvalues must be stored in consecutive elements of WR and WI. On exit, WR may have been altered since close eigenvalues are perturbed slightly in searching for independent eigenvectors.

[in,out] VL is DOUBLE PRECISION array, dimension (LDVL,MM)
On entry, if INITV = 'U' and SIDE = 'L' or 'B', VL must contain starting vectors for the inverse iteration for the left eigenvectors; the starting vector for each eigenvector must be in the same column(s) in which the eigenvector will be stored.
On exit, if SIDE = 'L' or 'B', the left eigenvectors specified by SELECT will be stored consecutively in the columns of VL, in the same order as their eigenvalues. A complex eigenvector corresponding to a complex eigenvalue is stored in two consecutive columns, the first holding the real part and the second the imaginary part.
If SIDE = 'R', VL is not referenced.

[in] LDVL is INTEGER
The leading dimension of the array VL. LDVL >= max(1,N) if SIDE = 'L' or 'B'; LDVL >= 1 otherwise.

[in,out] VR is DOUBLE PRECISION array, dimension (LDVR,MM)
On entry, if INITV = 'U' and SIDE = 'R' or 'B', VR must contain starting vectors for the inverse iteration for the right eigenvectors; the starting vector for each eigenvector must be in the same column(s) in which the eigenvector will be stored.
On exit, if SIDE = 'R' or 'B', the right eigenvectors specified by SELECT will be stored consecutively in the columns of VR, in the same order as their eigenvalues. A complex eigenvector corresponding to a complex eigenvalue is stored in two consecutive columns, the first holding the real part and the second the imaginary part.
If SIDE = 'L', VR is not referenced.

[in] LDVR is INTEGER
The leading dimension of the array VR. LDVR >= max(1,N) if SIDE = 'R' or 'B'; LDVR >= 1 otherwise.

[in] MM is INTEGER
The number of columns in the arrays VL and/or VR. MM >= M.

[out] M is INTEGER
The number of columns in the arrays VL and/or VR required to store the eigenvectors; each selected real eigenvector occupies one column and each selected complex eigenvector occupies two columns.

[out] WORK is DOUBLE PRECISION array, dimension ((N+2)*N)

[out] IFAILL is INTEGER array, dimension (MM)
If SIDE = 'L' or 'B', IFAILL(i) = j > 0 if the left eigenvector in the i-th column of VL (corresponding to the eigenvalue w(j)) failed to converge; IFAILL(i) = 0 if the eigenvector converged satisfactorily. If the i-th and (i+1)th columns of VL hold a complex eigenvector, then IFAILL(i) and IFAILL(i+1) are set to the same value.
If SIDE = 'R', IFAILL is not referenced.

[out] IFAILR is INTEGER array, dimension (MM)
If SIDE = 'R' or 'B', IFAILR(i) = j > 0 if the right eigenvector in the i-th column of VR (corresponding to the eigenvalue w(j)) failed to converge; IFAILR(i) = 0 if the eigenvector converged satisfactorily. If the i-th and (i+1)th columns of VR hold a complex eigenvector, then IFAILR(i) and IFAILR(i+1) are set to the same value.
If SIDE = 'L', IFAILR is not referenced.

[out] INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
> 0: if INFO = i, i is the number of eigenvectors which failed to converge; see IFAILL and IFAILR for further details.
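DHSEIN's underlying method is inverse iteration. A minimal NumPy sketch of the idea (my own illustration, not the LAPACK implementation) for a real eigenvalue of a small upper Hessenberg matrix:

```python
import numpy as np

def inverse_iteration(H, mu, iters=50, eps=1e-8):
    """Approximate the eigenvector of H for the (approximate) eigenvalue mu.

    Repeatedly solves (H - mu*I) v_new = v and renormalizes; the tiny shift
    eps keeps the system from being exactly singular, loosely analogous to
    the small perturbation of close eigenvalues mentioned for WR above.
    """
    n = H.shape[0]
    v = np.ones(n) / np.sqrt(n)        # simple starting vector (INITV = 'N' analogue)
    A = H - (mu + eps) * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(A, v)
        v /= np.linalg.norm(v)
    return v

# A symmetric tridiagonal matrix: upper Hessenberg with real eigenvalues.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
mu = np.max(np.linalg.eigvalsh(H))       # pick the largest eigenvalue
v = inverse_iteration(H, mu)
residual = np.linalg.norm(H @ v - mu * v)  # tiny if v is an eigenvector
```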
NumPy for beginners

I will show you the NumPy functions that will help you most in your data processing and algorithms. Nowadays, NumPy is one of the most used Python packages to manage and transform structured data. This growth is due to its continuous improvement and its differentiation from standard Python lists.

What are the advantages of NumPy?
• The memory space occupied by a data vector is much smaller due to its architecture, so when you have multi-dimensional arrays that take up a lot of space, this is an important advantage.
• NumPy provides very optimized functions, which save us many steps we would otherwise have to program ourselves.

In short, the main advantages are its speed and efficiency.

Most used commands

Following, I will show and explain the commands that are most used and can help you in the world of vector computing. First, we must import the package, which is usually already installed in many environments; otherwise, you can use pip to install it.

import numpy as np

Create a list

We are going to create a Python list and a NumPy array so that we can check the type of each one.

python_list = [1,2,3,4,5,6,7,8,9,10]
numpy_list = np.array(python_list)
# First Type: <class 'list'>
# Second Type: <class 'numpy.ndarray'>

As you can see, in NumPy these data structures are not known as 'lists'; they are 'numpy.ndarray' (n-dimensional array) objects, similar to the arrays of other programming languages. Arrays can also be created directly from a standard Python list without an explicit cast.

Sequential Values

Create an array with the first 10 values.
numpy_array_0 = np.arange(10)
[0 1 2 3 4 5 6 7 8 9]

Create an array with numbers between 10 and 20 (20 not included)
numpy_array_10 = np.arange(10,20)
[10 11 12 13 14 15 16 17 18 19]

Create an array with numbers between 0 and 20 but with a step of 2
numpy_array_steps = np.arange(0,20,2)
[ 0 2 4 6 8 10 12 14 16 18]

Continuous Values

Create an array with 9 evenly spaced numbers between 0 and 10.
numpy_array_cnt_step = np.linspace(0,10,9)
[ 0. 1.25 2.5 3.75 5. 6.25 7.5 8.75 10. ]

Static values

Although these functions seem to be rare, these arrays are often used for independent variables in predictive models.

Create an array filled with 10 zeros or 10 ones
numpy_zeros = np.zeros(10)
numpy_ones = np.ones(10)

Create an array by choosing random values from a normal distribution (Gaussian)
numpy_random_float = np.random.randn(10)
[ 0.51734484 -0.84932945 -1.16675107 -0.47245504 1.22452811 0.44823052 -1.13637128 -1.34081725 -0.38066925 1.99223281]

If you want to do the same with integer numbers, you can just use the next function.
numpy_random_int = np.random.randint(1,5,10)
[2 1 2 4 1 3 4 3 3 3]

Statistical calculations

If you want to obtain statistical results based on the array data, there are a lot of functions:
• max()
• min()
• mean()
• sum()
• std()

Moreover, if you want to perform operations between two or more arrays, you can also use mathematical operations such as '+', '-', '/'...

We have our array; however, if we want to use its data we must use indexes.

Get value by index
# Array<1,3,7,1,6,7,1,3,0>
numpy_value = numpy_array[4]

Get the first or last 3 positions
numpy_value = numpy_array[:3]
numpy_value = numpy_array[-3:]

If we want to get each element that matches a condition, it is very similar to Pandas.
condition = numpy_array > 3
numpy_value = numpy_array[condition]
[ 7,6,7 ]

NumPy is very easy to use and at the same time gives us the ability to manipulate very large lists more comfortably. Also, all the commands shown above can be used with N-dimensional arrays.
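As a quick illustration of that last point (my own example, not from the original article), the same indexing and statistics work on a 2-D array:

```python
import numpy as np

matrix = np.arange(12).reshape(3, 4)  # 3x4 array containing 0..11
col_means = matrix.mean(axis=0)       # per-column means: [4. 5. 6. 7.]
big_values = matrix[matrix > 6]       # boolean masking returns the matches, flattened
print(col_means)   # [4. 5. 6. 7.]
print(big_values)  # [ 7  8  9 10 11]
```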
Universal Gravitation - Problems – The Physics Hypertextbook
1. Verify the inverse square rule for gravitation with the following chain of calculations…
1. Determine the centripetal acceleration of the moon. (Assuming the moon is held in its orbit by the gravitational force of the Earth, you are then also calculating the acceleration due to gravity of the Earth at the moon's orbit.)
2. Determine the ratio of the radius of the moon's orbit to the radius of the Earth.
3. Use the results of a. and b. to calculate the acceleration due to gravity on the surface of the Earth.
4. How does this value compare to the generally accepted value of g? Are the results of your calculations in close enough agreement with experimental observations to verify the inverse square rule for gravitation? Discuss briefly.
2. Estimate the value of the universal gravitational constant from the following approximate measurements taken during the original Cavendish experiment (and converted into SI units for us)…
□ one hundred kilogram fixed and one kilogram rotating masses
□ ten centimeter separation between fixed and rotating masses
□ one millionth newton of force on each of the rotating masses
3. Check it out.
1. Determine the acceleration due to gravity (g) on the surface of the Earth from Newton's law of universal gravitation.
2. How does this value compare to the standard acceleration due to gravity (g)?
3. Are the results of your calculation close enough to the standard value to verify the distance-dependent portion of Newton's law of universal gravitation? Discuss briefly.
4. Jupiter is about eleven times larger in diameter and three hundred times more massive than the Earth. How does the gravitational field on Jupiter compare to that on Earth?
1. What would happen to objects on the Earth's surface if…
1. the Earth's gravitational field gradually disappeared?
2. the Earth's gravitational field was fine, but the Earth slowly stopped rotating?
2.
What effect, if any, would removing the Earth's core have on the gravitational field at its surface? (Assume the size and shape of the Earth does not change; that is, assume the Earth was partially hollowed out.)
3. The Earth has a radius about twice as great and a mass ten times greater than the planet Mars. How does the acceleration due to gravity on Mars compare to that on Earth?
1. Determine the following quantities for a 10 kg frozen turkey…
1. its mass on the surface of the Earth
2. its weight on the surface of the Earth
3. its mass in orbit one Earth radius above the surface of the Earth
4. its weight in orbit one Earth radius above the surface of the Earth
5. its mass on the surface of the moon
6. its weight on the surface of the moon
(Note: The word "weight" in these questions refers to the gravitational force, not the apparent weight.)
2. Astrology
1. Calculate the force of gravity between a 3.0 kg newborn baby and a 60 kg doctor standing 0.25 m away.
2. Calculate the force of gravity between a 3.0 kg newborn baby and the planet Jupiter when it is nearest to the Earth.
3. What is the ratio of the force of gravity from Jupiter on the baby compared to the force of gravity from the doctor on the baby?
4. What is the likelihood that astrology (assuming it had any validity) could be explained as a result of planetary gravitation at the moment of your birth? (Keep in mind that Jupiter is the largest planet and that it is rarely as far from the Earth as its nearest approach.)
3. Walking on the moon
1. Calculate the gravitational field strength on the surface of the moon.
2. How does the gravitational field on the surface of the moon compare to the gravitational field on the surface of the Earth?
3. Describe the effect that the moon's reduced gravity would have on your athletic abilities. Identify one sport or athletic event in which your abilities would get better and one in which your abilities would get worse.
4. Weightlessness
1.
Calculate the weight of a 75 kg astronaut on the surface of the Earth.
2. Calculate the same astronaut's weight aboard a space station as it orbits 3.5 × 10^5 m above the Earth's surface.
3. According to common wisdom, objects in outer space are "weightless". Why then isn't the answer to the second part of this question zero? What's wrong with the common wisdom?
5. The purpose of this problem is to determine the possible nature of the planetoid LV-426 from the 1979 science fiction horror film Alien. Begin by reading the beginning of Scene 21 from the revised final screenplay. The crew of the interstellar mining ship Nostromo receive a mysterious transmission and locate the source.
INT. BRIDGE 21
Dallas, Kane, Ripley and Ash stand around the illuminated map table. Lambert sits at the radio directional console.
We all hear that, Lambert? She switches on the audio system. Static. Then... An ungodly sound. Eight seconds worth. Good God. Doesn't sound like any radio signal I've heard. Maybe it's a voice. Well we'll soon know. Can you hone in on that? What was the position? Alright, I've found the quadrant. Ascension 6 minutes 20 seconds, declination 39 degrees 2 seconds. Okay, put that on the screen for me. Lambert punches buttons. One of the viewscreens flickers, and a small far off light appears. Alright, well, that's it. It's a planetoid. 1200 kilometers. It's tiny. Any rotation? About two hours. What about gravity? Point eight six. You can walk on it.
"Alien" by Walter Hill and David Giler. Based on a screenplay by Dan O'Bannon. Story by Dan O'Bannon and Ronald Shusett, 1978
Use the data given in the scene quoted above to determine the following quantities for LV-426…
1. its mass
2. its average density (assuming it's spherical)
3. the centripetal acceleration of a point on its equator
Speculate on the nature of LV-426.
4. What kind of material is it possibly made of?
6.
The most massive exoplanet found to date is the brown dwarf HR 2562 b orbiting a star 111 light years from our Sun. Using the information in the table below, determine…
1. its average density…
1. in kg/m^3
2. as a multiple of Earth's average density
3. as a multiple of Jupiter's average density
2. its surface gravity…
1. in m/s^2
2. as a multiple of Earth's surface gravity
3. as a multiple of Jupiter's surface gravity

Planetary parameters (Sources: ^1 NASA Exoplanet Archive, ^2 Executive Committee of the International Astronomical Union)

quantity   HR 2562 b^1                            Earth^2              Jupiter^2
mass       9535 Earth masses (30 Jupiter masses)  5.97217 × 10^24 kg   1.89812 × 10^27 kg
radius     12.4 Earth radii (1.11 Jupiter radii)  6,378,100 m          71,492,000 m

Note for pedants: Very massive planets like Jupiter and HR 2562 b don't really have well defined surfaces like the Earth does. The word "surface" for these objects refers to any position where the atmospheric pressure is equal to that of the Earth.
7. What separation between two earth-like exoplanets would result in the same gravitational force as two bowling balls in contact? Give your answer in meters and light years.

Two roughly spherical objects compared

quantity              bowling ball         earth
mass                  15.00 lb (6.804 kg)  1.317 × 10^25 lb (5.972 × 10^24 kg)
radius, equatorial    —                    6,378,100 m
radius, polar         —                    6,356,800 m
diameter              8.500 in (21.59 cm)  —
gravitational force   some value           the same value
separation            ?

1. Determine the height h above the surface of a planet of radius r and mass m at which the gravitational field will be one-half its surface value.
2. The purpose of this problem is to determine the possible nature of the planet Krypton. Begin by reading the introduction from the 1950s TV series Superman.
Faster than a speeding bullet. More powerful than a locomotive. Able to leap tall buildings in a single bound. Look, up in the sky! It's a bird! It's a plane! It's Superman!
Yes, it's Superman, strange visitor from another planet who came to Earth with powers and abilities far beyond those of mortal men. Superman, who can change the course of mighty rivers, bend steel with his bare hands, and who, disguised as Clark Kent, mild-mannered reporter for a great metropolitan newspaper, fights a never-ending battle for truth, justice and the American way.
Superman's strength is partly attributed to the gravity of his home planet, Krypton. The people of Krypton evolved to stand, walk, and lift ordinary objects in Krypton's strong gravitational field. When Superman came to Earth, he found that his Kryptonian physique was sort of over-designed. He could "leap tall buildings in a single bound", for example. This is much like when humans go to the moon. They find themselves strong enough to do all sorts of things they couldn't do on Earth — like run effortlessly with long strides while wearing an 80 kg (180 lb) space suit, for example.
Think of how high a typical human can jump on Earth. Assume Superman can only jump as high as that on Krypton. Then consider how high Superman can jump on Earth. Use this knowledge to determine the physical characteristics of Krypton. (State all values on Krypton in comparison to their values on Earth. Do not state them with a number and a unit.)
1. Derive an expression that relates height jumped to the acceleration due to gravity when takeoff speed is constant.
1. Use the expression derived in part a to compare g on the surface of Krypton to g on the surface of the Earth.
2. Derive an expression that relates g on the surface of a spherical planet to the density and radius of the planet (instead of the mass and radius, which is the usual way it is stated).
1. Use the expression derived in part b to determine the radius of Krypton assuming it has the same average density as the Earth. How likely is one to find a terrestrial planet with a radius like this?
2.
Use the expression derived in part b to determine the average density of Krypton assuming it has the same radius as the Earth. How likely is one to find a terrestrial planet with a density like this? 3. Is there anything else you would like to say about Superman or Krypton?
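For problem 2 near the top of this set, a quick order-of-magnitude check (my own sketch, not part of the problem set) using Newton's law of universal gravitation solved for G:

```python
# Approximate Cavendish-experiment numbers from problem 2, in SI units.
F = 1.0e-6        # N, force on each rotating mass
r = 0.10          # m, separation between fixed and rotating masses
m_small = 1.0     # kg, rotating mass
m_large = 100.0   # kg, fixed mass

# F = G * m1 * m2 / r^2, solved for G:
G_est = F * r**2 / (m_small * m_large)
print(G_est)  # ~1e-10, the same order of magnitude as the accepted 6.674e-11
```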
Percentages: Calculate the Percentage

Percent worksheets where students are given two numbers and must determine what percentage one number is of the other number.

Calculate the Percentage: Tens and Hundreds
Calculate the Percentage: Fifties
Calculate the Percentage: Twenties
Calculate the Percentage: Small Fractions
Calculate the Percentage: Twelves and Sixteens
Calculate the Percentage: Small Numbers 1
Calculate the Percentage: Small Numbers 2
Calculate the Percentage: Small Numbers 3
Calculate the Percentage: Larger Numbers 1
Calculate the Percentage: Larger Numbers 2
Calculate the Percentage: Larger Numbers 3
Calculate the Percentage: Larger Numbers 4

How to Find Percentage

A percentage is a fraction with a denominator of 100. Given two numbers, a percentage can be found either by performing decimal division or by converting the two numbers into an equivalent fraction with 100 as the denominator. These worksheets begin with simple examples of fractions with power-of-ten denominators that can be easily changed to percentages, and gradually progress through other denominators to exercise a student's percentage-changing skills. Both the equivalent-fraction form of the percent and the basic division problem are shown, to reinforce the conversion to the percentage answer both conceptually and procedurally.
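The decimal-division method described above can be sketched as (my own example, not part of the worksheet page):

```python
def percent_of(part, whole):
    """What percentage of `whole` is `part`? Decimal division, then scale by 100."""
    return part / whole * 100

print(percent_of(12, 16))  # 75.0 — i.e., 12/16 = 75/100
print(percent_of(1, 8))    # 12.5
```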
Belongs to (∈)

Belongs to

The symbol that expresses the phrase “to be a member of” in set theory is called the belongs-to symbol. In set theory, elements (or members) are collected on the basis of one or more common properties to form a set. So, each element is a member of that set. Hence, we simply say that the element belongs to the set.

The Italian mathematician Giuseppe Peano used the Greek lunate epsilon ($∈$) to express the phrase “belongs to” symbolically in set theory. It helps us to express the relationship between an element and its set in mathematical form. Let's learn how to use the epsilon symbol in set theory from two understandable examples.

Basic example

The numbers $0$, $2$, $5$, $8$ and $9$ are collected to form a set $N$ in this example.
1. The number $0$ is a member of set $N$. Hence, it is expressed as $0 \,∈\, N$.
2. The number $2$ is an element of set $N$. So, it is written as $2 \,∈\, N$.
3. The number $5$ belongs to set $N$. Therefore, the relationship between them is written as $5 \,∈\, N$.
4. The number $8$ is in set $N$ and it is expressed as $8 \,∈\, N$.
5. The number $9$ lies in set $N$ and it is written as $9 \,∈\, N$.

Thus, we use the epsilon symbol in set theory to express the relationship between an element and a set in mathematical form.

Advanced example

The lowercase letters $a$, $b$, $c$ and $d$ are collected to form a set $Y$ in this example.
1. The letter $a$ is a member of set $Y$. Hence, it is expressed as $a \,∈\, Y$.
2. The letter $b$ is an element of set $Y$. So, it is written as $b \,∈\, Y$.
3. The letter $c$ belongs to set $Y$. Therefore, the relationship between them is written as $c \,∈\, Y$.
4. The letter $d$ is in set $Y$ and it is expressed as $d \,∈\, Y$.
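Python's `in` operator mirrors the ∈ relation directly (my own illustration, not from the original page):

```python
N = {0, 2, 5, 8, 9}   # the set N from the basic example
print(5 in N)         # True  — 5 ∈ N
print(7 in N)         # False — 7 ∉ N
```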
Coyote Blog

Yes, I am like an addict on Tesla but I find the company absolutely fascinating. Books and HBS case studies will be written on this saga some day (a couple are being written right now but seem to be headed for Musk hagiography rather than a real accounting a la business classics like Barbarians at the Gate or Bad Blood). I still stand by my past thoughts here, where I predicted in advance of results that 3Q2018 was probably going to be Tesla's high water mark, and explained the reasons why. I won't go into them all. There are more than one. But I do want to give an update on one of them, which is the growth and investment story. First, I want to explain that I have nothing against electric vehicles. I actually have solar panels on my roof and a deposit down on an EV, though it is months away from being available. What Tesla bulls don't really understand about the short position on Tesla is that most of us don't hate on the concept -- I respect them for really bootstrapping the mass EV market into existence. If they were valued in the market at five or even ten billion dollars, you would not hear a peep out of me. But they are valued (depending on the day, it is a volatile stock) between $55 and $65 billion.
Like Popular Mechanics magazine covers from the sixties and seventies (e.g. a flying RV! a mile long blimp will change logging!) he spins exciting visions that geeky males in particular resonate with. Long time readers will know I identify as one of this tribe -- my most lamented two lost products in the marketplace are Omni Magazine and the Firefly TV series. So I see his appeal, but I have also seen his BS -- something I think a lot more people have caught on to after his embarrassing Boring Company tunnel reveal. Anyway, after a couple thousand words of introduction, here is the update: In my last post linked above, I argued that Tesla is a growth company that is not investing in growth. Sure, it is seeing growth in current quarters due to investments made over the last decade, but there is little evidence it is actually spending money to do anything new. It stopped managing itself like a growth company trying to maintain its first-mover advantage. Tesla has explicitly chosen to pursue a strategy that needs a TON of capital. Everyone understands, I think, that building a new major automobile franchise takes a ton of investment -- that's why they are not popping up all the time. But Tesla actually has made choices that increase the capital needed even beyond these huge numbers. Specifically, they chose not just to manufacture cars, but to also own the sales and service network and to own the fueling network. Kia was the last major new brand in the US that I can remember, but when it started it relied on 3rd parties to build and operate the dealer/service network and relied on Exxon and Shell to build out and operate the fueling network. So Tesla has pursued a strategy that they need all the capital of Kia and of the Penske auto group and of Exxon. Eek. And for years, they were valiantly trying to pull it off. They created showrooms in malls and created a new online selling process. They built some service locations but as has been proven of late, not enough. 
They built a supercharger network. It was a gutsy call that seemed to be paying off. And then something weird happened. Somewhere in late 2017 or early 2018 they stopped raising capital and greatly slowed down both R&D and capital investment.

• They slowed expanding the service network at the very time that their installed base of cars was going up exponentially and they were getting bad press for slow service. Elon Musk promised that Tesla would create its own body shops but nothing has been done on this promise.
• They slowed the Supercharger network expansion at the same time their installed base has dramatically increased and at the same time new competitive networks were begun by major players like
• They stopped expanding the Model 3 production line at the same time it was clear the current factory could produce only about 5,000 cars per week (with some quality tradeoffs at that) and Musk continued to promise 10,000 a week
• They promised production in China by the end of this year but so far the only investment has been a groundbreaking ceremony in a still muddy field
• They promised huge European sales but only just now got European regulatory approval for sales, dragging their feet for some reason on this approval despite lots of new EV competition starting to hit the European market.
• They pumped up excitement with new product concepts like the semi and the coupe and the pickup truck but there is no evidence they have a place to build them or even have started to tool up.
• Everyone thinks of Tesla as having leadership in battery technology but that is the one area they have actually outsourced, to Panasonic.
• Through all of this, through all these huge needs for capital and despite Tesla's soaring stock price and fanboy shareholders begging to throw money at the company, they have not raised any capital for a year.

Since my initial post, we have seen a few new pieces of news: 1.
Tesla still has not raised capital and in fact faces a $1 billion bond repayment in just over 30 days. 2. Tesla admitted that it has not even started working on a refreshed design for the aging Model S and X, despite increasing EV competition coming at this high end from Audi, Porsche, and others. These refreshes should have been started years ago. 3. In fact, Tesla announced it was cutting back on production of the S and X. Ostensibly this was to focus on the Model 3. Most skeptics think this is BS, and the real reason is falling demand. But it doesn't matter -- growth companies with great access to the capital markets don't make these kinds of tradeoffs. This is further proof that Tesla is no longer managing itself like a growth company. These cuts are particularly troubling because the S and X are where Tesla gets most of its gross margins -- the Model 3 margins are much worse. 4. Tesla laid off 7% of its work force. Again, this is not the act of a company that is behind in implementing its growth initiatives, growth initiatives that perhaps 80% of its stock market valuation depends on.

Tesla has always had an execution problem, or more rightly an over-promising problem. But it was still actually investing and doing stuff, even if it was disorganized and behind in doing so. Now, however, it is a company valued as an exponential growth company that is no longer managing itself like a growth company. It has billions of investments that are overdue -- in new products, in product refreshes, in the service network, in a second generation supercharger -- that should have been started 2-3 years ago and for which there isn't any major activity even today.

As a disclosure, Tesla stock is one of the most dangerous in the world to trade, either way. You really need to understand it before you trade it and no one really understands it. I have a couple of long-dated put options on Tesla that I consider more of a bar bet than anything else.
I also have a couple of cheap short-dated calls as I usually do in the runup to the quarterly Tesla earnings call. Musk is great at the last minute stock pump during earnings call week, and the stock often pops only to fall soon afterwards as people dig into the numbers. But again, these are "investments" that are less than 0.1% of my portfolio.

Postscript: When I wrote "Tesla is a growth company that is not investing in growth" I was picturing the Jim Cramer cameo in Iron Man -- "That's a weapons company that doesn't make any weapons!" Of course it took a work of fiction to see Jim Cramer advocate for the short side. Doubly ironic given Musk sometimes styles himself as the real life Tony Stark.

I have written in the context of both the new marijuana stocks (e.g. Tilray or Canopy) and Tesla in EV's that the market is putting a whole lot of value -- in some cases 90+% of their current market value -- on these companies being first movers in potentially large and lucrative new industries. It is hard to predict early on where in an industry's value chain the profits will be, or if the industry will be profitable at all. Who will make money in marijuana -- the growers? the retailers? the folks that package the raw material into consumer products? The early marijuana entrants are focusing on cultivation, but in tobacco do the cultivators or the cigarette makers who buy from them make the most money? And as anyone at Myspace could tell you, being first is not always a guarantee of success, and in some ways can be a disadvantage. Second movers can avoid all the first mover's costly mistakes. I thought of all this seeing the infographic below on changing leaders in the Internet world. Almost all the top 20 companies in the first year are largely irrelevant today -- AOL and Yahoo are technically still in business but only because they have been bought up by Verizon in a group of other dogs they seem intent on collecting.
I will begin by saying that few things in government aggravate me more than corporate relocation subsidies. They are an entirely negative sum game. I believe that subsidies are misguided and lead to a misallocation of capital, but at least things like EV subsidies create an EV industry, even if it is uneconomic. But relocation subsidies are payments to create nothing -- their entire purpose is to move economic activity that would happen anyway across some imaginary line on a map. Locally, we had a $100 million subsidy to a developer to move a mall approximately 1 mile. Pure insanity.

However, it is hard for me to blame the managers of public companies who seek these subsidies. I own my own company and can easily eschew such pork (if it were ever offered to me) but the CEO of a public company would be failing in their fiduciary duty to their shareholders to not accept government money that the drunken sailors in government are so gleefully trying to stuff in corporate pockets. With this money so available, it is important that corporate management make location decisions considering these subsidies but not solely focused on them. The contrast between Amazon and Tesla (including the former SolarCity) helps explain my point.

In finding new headquarters locations, Amazon's most important considerations were likely:

• Ability to attract great management and developer talent, who seem to be attracted more to hipster areas with lots of Starbucks and sushi than to areas with low cost housing.
• As they incur regulatory scrutiny, closeness to the national government
• Access to domestic and international partners
• Access to capital

Note these criteria do not include access to low cost labor and real estate. These do not really matter much for its headquarters offices. These DO matter for distribution centers and warehouses, which is why these are located not in the center of high cost cities but in low cost suburban or rural areas.
In this context, then, splitting its headquarters between New York and Washington DC makes a ton of sense. Now let's think about Tesla. Tesla was looking for manufacturing locations for solar panels and cars. This is in an era when few even consider anywhere in the US a viable long-term option, but Tesla selected New York state and southern California. I can tell you from sad personal experience that both these places are among the most expensive and hardest places to do business in the country. Seriously, in SoCal Tesla took over a facility that Toyota couldn't make work. These make absolutely no sense as long-term locations for manufacturing, but Tesla came here nonetheless, in part for big fat subsidies and in part to ingratiate itself with two powerful state governments (in addition to subsidies, California reciprocated by giving Tesla a special sweetheart deal upping its zero emission vehicle credits). I am reminded of this because Bloomberg has the whole, sad tale of Tesla in New York here.

I am not much on memes but I thought I would try my hand just this once...

I promised I would not post any more Tesla for a while, and to some extent I am keeping that promise -- no updates here on the SEC investigation or the 420 tweet. But since I have been critical of Tesla in the past, I thought I would acknowledge that there are good things in Tesla that could and should be saved. The problem is that Tesla is saddled with a bunch of problems that are NOT going to be solved by going private. In fact, going private could only make things worse -- given that Tesla already has too much debt and its debt is rated barely above junk bonds, piling on more debt just to save Elon Musk from short sellers is not a good plan. Here is what I would suggest:

1. Find the right role for Elon Musk. Musk HAS to be part of the company, without him its stock would go to about zero tomorrow. But right now he is CEO, effective head of media relations, factory manager, and chief engineer.
Get him out of day to day management (and off Twitter) and hire real operating people who know what they are doing.

2. Get rid of the dealerships. Tesla tried to do something different, which is own all the dealerships rather than franchise them out. This is fine if one has some sort of vision for doing sales and service differently, but Tesla really doesn't. It does the same things as other car dealerships but just slower since it has not been able to build out capacity fast enough. And this decision has cost them a ton of growth capital they desperately need, because they have had to build out dealerships most car companies get for "free" because the capital for the dealerships is provided by third-party entrepreneurs. Also, the third-party entrepreneurs bring other things to the table, for example many of them tend to have experience in the car sales business and a high profile in their local markets with government and media.

3. If possible, find a partner for the charging network. All traditional car companies get their fueling networks for free because the network is already built out by the oil companies. Tesla is building its own, and again this is sucking up a lot of capital. It is also dangerous, because Tesla has chosen to pursue a charging standard that may not become the industry standard (this is already happening in Europe) and Tesla risks being stuck with the Betamax network. Tesla should see if it can shift this to a third party, perhaps even in joint venture with other EV companies.

4. Do an equity raise. To my mind, it is absolute madness Tesla did not do this earlier in the year. Their stock was trading at $350 and at a $50+ billion valuation at the same time they were burning cash at a rate of $3 billion or so a year. Musk says he can skate through without more capital but he has said this before and it was not true.
Given the enthusiasm for his stock, there is just no reason to run cash poor when there are millions of Tesla fanboys just waiting to throw money at the company. Even a $5 billion raise would have been only 10% dilution. Musk says he wants to burn the shorts but ask any Tesla short out there what they would most fear, and I think they would all say an equity capital raise. $3-5 billion would get Tesla at least through 2019 no matter how bad the cash burn remained and give the company space to solve its operational problems.

5. Get someone who knows how to build cars building the cars. I have written about this before -- it is always hard when you are trying to be a disruptor of an industry to decide what to disrupt and what industry knowledge to incorporate. In retrospect, Musk's plan to ignore how cars are built and do it a different way is not working. Not only are there cost and throughput issues, but there are growing reports of real quality issues in Model 3s. This has to be fixed ASAP.

6. Bring some sanity to the long-term product roadmap. This may be a bit cynical, but Tesla seems to introduce a new product every time Musk needs to divert the public's attention, his equivalent of yelling "Squirrel!" There is the semi, a pickup truck, a roadster and probably something else I have forgotten about. Even the Model 3 lineup is confusing, with no one really knowing what Tesla is going to focus on, and whether the promised $35,000 Model 3 will ever actually be built. This confusion doesn't work well with investors at all, but Tesla has been able to make it work with customers, increasing the buzz around the company because no one ever seems to know what it will do next. But once real competitors start coming out from GM, Volvo, Jaguar, BMW and others, this is not going to work. Customers that are currently captive to Tesla will have other options. Let's start with the semi.
The demo was a beautiful product, but frankly there is no way Tesla is going to have the time or the money to actually produce this thing. Someone like Volvo is going to beat them to the punch. They need to find a JV partner who can actually build it.

Update: If I had a #7, it would be: Invent a time machine and go back and undo the corrupt SolarCity buyout, in which Tesla bailed out Musk's friends and family and promptly proceeded to essentially shut down the company. Tesla shareholders got nothing from the purchase except a lot of debt.

Nearly 8 years ago (can it be so long?) I wrote a series of articles about what I called the electric vehicle mileage fraud at the EPA. Rather than adopt sensible rules for giving electric vehicles an equivalent mpg rating, they used a horrible unscientific methodology that inflated the metric by a factor of three (in part by ignoring the second law of thermodynamics). All the details are still online here. I am not omniscient so I don't know people's true motivations, but one is suspicious that the Obama administration wanted to promote electric vehicles and put their thumb on the scale of this metric (especially since the EPA in the Clinton Administration had already crafted a much better methodology). To be fair, smart people screw this up all the time -- even Eric Schmidt screwed it up.

Take for example the Tesla Model 3, which has been awarded an eye-popping eMPG of between 120 and 131. Multiplying these figures by .365 (as described in my linked article) gets us the true comparative figure of 44 to 48. This means that in terms of total energy consumption in the system, the Tesla is likely better than most gasoline-powered vehicles sold but less energy efficient than top hybrids (the Prius is listed as 53-58 mpg).
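The correction described above is simple enough to check by hand. Here is a minimal sketch, assuming the author's 0.365 factor from his linked article (roughly the fuel-to-electricity conversion efficiency the EPA's eMPG leaves out); the function name is mine, for illustration:

```python
# Sketch of the author's eMPG correction: the EPA's figure ignores the
# power plant's conversion step, so he scales it by ~0.365 to get a
# number comparable with a gasoline car's mpg.
CORRECTION_FACTOR = 0.365  # author's factor (assumption, from his linked article)

def comparable_mpg(epa_empg: float) -> float:
    """Scale the EPA's eMPG down to a total-energy-consumption basis."""
    return epa_empg * CORRECTION_FACTOR

# The Model 3's rated 120-131 eMPG becomes roughly 44-48:
print(round(comparable_mpg(120)))  # 44
print(round(comparable_mpg(131)))  # 48
```

On this basis the Model 3's 44-48 sits below the Prius's listed 53-58 mpg, which is the comparison the paragraph above draws.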
At the end of the day, electric cars feel cheaper to fuel in part because they are efficient, but perhaps more because there is no little dial with rotating dollar numbers on the electric cables one attaches to charge them (also, there are still places where one can skim electricity for charging without paying). Basically, I have been a voice in the wilderness on this, but I just saw this note on the Tesla Model 3 and its operating costs from Anton Wahlman writing at Seeking Alpha:

there are attractive and spacious hatchbacks yielding at least 55 MPG for under $25,000, without taxpayer funding needed. Just to be conservative and give the opposite side of the argument the benefit of the doubt, I’ll refer to these as 50 MPG cars, even though they perform a little better. Rounding down is sufficient for this exercise, as you will see below.... To find out [the price to charge a Tesla], you can go to Tesla’s Supercharger price list, which is available online: Supercharging. As you can see in the table above, the average is close to the $0.24 per kWh mark. So how far does that $0.24 take you? The Tesla Model 3 is rated at 26 kWh per 100 miles according to the U.S. Department of Energy: 2018 Tesla Model 3 Long Range. In other words, almost four miles per kWh. It’s close enough that we can round it up to four miles, just to give Tesla some margin in its favor. That squares with the general rule of thumb in the EV world: A smaller energy-efficient EV will yield around 4 miles per kWh, whereas a larger EV will yield around 3 miles per kWh. That means that at $0.24 per kWh, the Tesla Model 3 costs $0.06 per mile to drive. How does that compare to the gasoline cars? At 50 MPG and today’s nationwide average gasoline price of $2.65, that’s $0.05 per mile. In other words, it’s cheaper to drive the gasoline car than the Tesla Model 3.
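Wahlman's comparison boils down to two divisions. A quick sketch using only the figures from the quote above (nothing else is assumed):

```python
# Cost per mile: Model 3 on Supercharger rates vs. a 50 MPG gasoline car.
supercharger_rate = 0.24   # $/kWh, average from Tesla's Supercharger price list
model3_mi_per_kwh = 4.0    # 26 kWh / 100 mi, rounded up in Tesla's favor
ev_cost_per_mile = supercharger_rate / model3_mi_per_kwh

gas_price = 2.65           # $/gallon, nationwide average at the time
hybrid_mpg = 50.0          # conservative figure for the top hybrids
gas_cost_per_mile = gas_price / hybrid_mpg

print(f"Model 3:    ${ev_cost_per_mile:.3f}/mile")   # $0.060/mile
print(f"50 MPG car: ${gas_cost_per_mile:.3f}/mile")  # $0.053/mile
```

Six cents a mile versus a bit over five: the gasoline hybrid edges out the Model 3 at Supercharger prices, which is the quote's conclusion.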
This result that the Tesla is slightly more expensive to fuel than the top hybrids is exactly what we would expect IF the EPA used the correct methodology for its eMPG. However, if you depended on the EPA's current eMPG ratings, this would come as an enormous shock to you. Electric vehicles have other issues, the main one being limited range combined with long refueling times. But there are some reasons to make the switch even if they are not more efficient. 1. They are really fun to drive. Quiet and incredibly zippy. 2. From a macro perspective, they are the easiest approach to shifting fuel sources. It may be easier to deploy natural gas to cars via electricity, and certainly EV's are the only way to deploy wind or solar to transportation.

Tesla agreed to give Elon Musk what is potentially the richest executive compensation package ever. I will give my (*gasp*) cynical reason why I think they did this. I can show you in one chart (Tesla Model 3 production, from Bloomberg):

I would argue that Elon Musk is the only one in the world who can run a company with so many spectacular failures to meet commitments and still have investors and customers coming back and begging for more. A relatively large percentage of Teslas get delivered with manufacturing defects and their customers sing their praises (even while circulating delivery defect checklists). Tesla keeps publishing Model 3 production hockey sticks (apparently with a straight face) and consistently misses (each quarter pushing back the forecast one quarter) and investors line up to buy more stock. Tesla runs one of the least transparent major public companies in this country (so much so that people like Bloomberg have to spend enormous efforts just to estimate what is going on there) and no one is fazed.
Competitors like Volvo and Volkswagen and Toyota and even GM have started to push their EV technology past Tesla and actually sell more EV's than does Tesla (with the gap widening), and investors still treat Tesla like it has a 10-year unassailable lead on competition. All because Elon Musk can stand up at a venue like SXSW, wave his hands, spin big visions, and the stock goes up $3 billion the next day. Exxon-Mobil has a long history of meeting promises and reveals its capital spending plans in great detail, yet it misses on earnings by a few cents and loses $40 billion in market cap. GE lost over half its market value when investors got uncomfortable with their lack of transparency and their failures to meet commitments. Not so at Tesla, in large part because Elon Musk is PT Barnum reincarnated, or given the SpaceX business, he is Delos D. Harriman made flesh.

Disclosure: I don't currently have any position in TSLA but over the last 2 years I have sold short when it reaches around $350 (e.g. after Elon Musk speaks) and bought to cover around $305 (e.g. when actual operational or financial data is released). Sort of the mirror image of BTFD.

There are two problems with electric vehicles. Neither are unsolvable in the long-term, but neither are probably going to get solved in the next 5 years.

1. Energy Density. 15 gallons of gasoline weighs 90 pounds and takes up 2 cubic feet. This will carry a 40 mpg car 600 miles. The Tesla Model S 85 kWh battery pack weighs 1200 pounds and will carry the car 265 miles (from this article the cells themselves occupy about 4 cubic feet if packed perfectly but in this video the whole pack looks much larger). We can see that even with what Musk claims is twice the energy density of other batteries, the Tesla gets 0.22 miles per pound of fuel/battery while the regular car can get 6.7. That is a difference in energy density of 30x. Some of this is compensated for by heavy and bulky things the electric car does not need (e.g.
coolant system) but it is still a major problem in car design.

2. Charge Time. In my mind this is perhaps the single barrier that could, if solved, make electric cars ubiquitous. People complain about electric car range, but really EV range is not that much shorter than the range of traditional cars on a tank of gas. The problem is that it is MUCH faster to refill a tank of gas than it is to refill a battery with a full charge. Traditionally it takes all night to charge an electric car, but 2 minutes at the pump to "charge" a gasoline engine. The fastest current charging claim is Tesla's, which claims that the supercharger sites they have built on many US interstate routes will charge 170 miles of range in 30 minutes, or 5.7 miles per minute. A traditional car (the same one used in point 1) can add 600 miles of range in 2 minutes, or 300 miles per minute -- 52 times faster than the electric car. This is the real reason EV range is an issue for folks.

Interestingly, Fisker (which failed in its first foray into electric cars) claims to have a solid state battery technology that gets at both these issues, particularly #2: “Fisker’s solid-state batteries will feature three-dimensional electrodes with 2.5 times the energy density of lithium-ion batteries. Fisker claims that this technology will enable ranges of more than 500 miles on a single charge and charging times as low as one minute—faster than filling up a gas tank.” Forget all the other issues. If they can really deliver on the last part, we will all be driving electric vehicles in 20 years. However, having seen versions of this same article for literally 30 years about someone or other's promised breakthrough in battery technology that never really lived up to the hype, I will wait and see.

Well, I got dis-invited yet again from giving my climate presentation. I guess I should be used to it by now, but in this case I had agreed to actually do the presentation at my own personal expense (e.g.
no honorarium and I paid my own travel expenses). Since I was uninvited 2 days prior to the event, I ended up eating, personally, all my travel expenses. There are perhaps folks out there in the climate debate living high off the hog from Exxon or Koch money, but if so that is definitely not me, so it came out of my own pocket. I have waited a few days after this happened to cool off to make a point about the state of public discourse without being too emotional about it.

I don't want to get into the details of my presentation (you can see it here at Claremont McKenna College) but it is called "Understanding the Climate Debate: The Lost Middle Ground" (given the story that follows, this is deeply ironic). The point of the presentation is that there is a pretty mainstream skeptic/lukewarmer position that manmade warming via greenhouse gasses is real but greatly exaggerated. It even suggests a compromise legislative approach implementing a carbon tax offset by reductions in some other regressive tax (like payroll taxes) and accompanied by a reduction in government micro-meddling in green investments (e.g. ethanol subsidies, Solyndra, EV subsidies, etc).

I am not going to name the specific group, because the gentleman running the group's conference was probably just as pissed off as I was at the forces that arrayed themselves to have me banned from speaking. Suffice it to say that this is a sort of trade group that consists of people from both private companies and public agencies in Southern California. Attentive readers will probably immediately look at the last sentence and guess whence the problem came. Several public agencies, including the City of Los Angeles, voiced EXTREME displeasure with my being asked to speak. The opposition, particularly from the LA city representative, called my presentation "the climate denier workshop" [ed note: I don't deny there is a climate] and the organizer who invited me was sent flat Earth cartoons.
Now, it seems kind of amazing that a presentation that calls for a carbon tax and acknowledges 1-1.5 degrees C of man-made warming per century could be called an extremist denier presentation. But here is the key to understand -- no one who opposed my presentation had ever bothered to see it. This despite the fact that I sent them both a copy of the CMC video linked above as well as this very short 4-page summary from Forbes. But everyone involved was more willing to spend hours and hours arguing that I was a child of Satan than they were willing to spend 5 minutes acquainting themselves with what I actually say. In fact, I would be willing to bet that the folks who were most vociferous in their opposition to this talk have never actually read anything from a skeptic.

It is a hallmark of modern public discourse that people frequently don't know the other side's argument from the other side itself, but rather from their own side (Bryan Caplan, call your office). This is roughly equivalent to knowing about Hillary Clinton's policy positions solely from listening to Rush Limbaugh. It is a terrible way to be an informed adult participating in public discourse, but unfortunately it is a practice being encouraged by most universities. Nearly every professor is Progressive or at least left of center. Every speaker who is not left of center is banned or heckled into oblivion. When a speaker who disagrees with the Progressive consensus on campus is let through the door, the university sponsors rubber rooms with coloring books and stuffed unicorns for delicate students. There are actually prominent academics who argue against free speech and the free exchange of diverse ideas on the theory that some ideas (i.e. all the ones they disagree with) are too dangerous to be allowed a voice in public. Universities have become cocoons for protecting young people from challenging and uncomfortable ideas.
I will take this all as a spur to do a next generation video or video series for YouTube -- though YouTube has started banning videos not liked by the Left, there is still room there to have a public voice. I just bought a nice new microphone so I guess it is time to get to work. I am presenting in Regina next week (high 22F, yay!) but after that I will start working on a video.

Postscript: You know what this reminds me of? Back when I was a kid, forty years ago growing up in Texas, from time to time there would be a book-banning fight in the state. Perhaps there still are such fights. Generally some religious group will oppose a certain classic work of literature because it taught some bad moral lesson, or had bad words in it, or something. But you know what often became totally clear in such events? That the vast vast majority of the offended people had not actually read the book, or if they had, they could not remember any of it. They were participating because someone else on their side told them they should be against the book, probably also someone else who had never even read the thing. But I don't think that was the point. The objective was one of virtue-signalling, to reinforce ties in their own tribe and make it clear that they did not like some other tribe. At some point the content of the book became irrelevant to how the book was perceived by both tribes -- which is why I call this "post-modern" in my title.

Phoenix businesses add hundreds of jobs every week. However, the only jobs that ever get subsidized are in sexy businesses. That is because the subsidies themselves make zero sense, from an economic or public policy standpoint. The point is not to create jobs, but to create press releases and talking points for politicians and their re-election campaigns. And there is little that is sexier to politicians spending taxpayer money to get themselves re-elected than solar and Apple computer.
Which brings us to this plant in Mesa (a suburb of Phoenix), which I am calling the Graveyard of Cronyism. This plant was built by First Solar to build solar panels. I would have to quit my day job and work full-time to figure out all the ways this plant was subsidized by taxpayers -- special feed-in tariffs for First Solar customers, government tax breaks for solar panel purchases, direct government subsidies and grant programs for solar panel purchases, the DOE loan guarantee program for solar... etc. In addition, the City of Mesa committed $10 million in infrastructure improvements to lure First Solar to the site. I can't find what economic development incentives there were but there must have been tax abatements. In addition, the company was promised a further $20 million in economic development funds from the County, but fortunately (unlike most such deals) the funds were tied to hitting employment milestones and were never paid. First Solar never produced a single panel at the plant before it realized it had no need for it.

More recently, Apple and sapphire glass manufacturer GT Advanced bought the empty plant from First Solar. And again there was much rejoicing among politicians locally. Think of it -- two great press release opportunities for politicians in just three years for the same plant! I never feel like we get the whole story on the development deals offered for these things, but this is what we know: Brewer and the Arizona Legislature approved tax breaks related to sales taxes on energy at manufacturing plants. The state also put the Apple/GT plant into a special tax zone that pays a 5 percent commercial property tax rate. Most Arizona companies pay a 19 percent rate this year and an 18.5 percent rate next year. [In addition,] Apple was slated to received [sic] $10 million from the Arizona Competes Fund for the Mesa factory.
The Arizona Commerce Authority — the privatized state economic development agency which administers the $25 million sweeten-the-deal fund along with Gov. Jan Brewer — said neither Apple nor GT Advanced (Nasdaq: GTAT) have received any money.

Well, it turns out that artificial sapphire sounds really cool (a pre-requisite for crony deals) but it is not so great for cell phones. Apple went another way and did not use the technology on the iPhone 6 -- not just for timing reasons but because there are real issues with its performance. So a second crony buys the plant and does not even move in. What's next? I am thinking the best third tenant at the sexy-crony nexus would be an EV battery plant, or even better yet Tesla. It is too bad Fisker Motors went out of business so soon or they would be the perfect next crony fail for this site.

Frequent readers of this blog will know that I am enormously skeptical of most fuel and efficiency numbers for electric vehicles. Electric vehicles can be quite efficient, and I personally really enjoy the driving feel of an electric car, but most of the numbers published for them, including by the government, are garbage. I have previously written a series of articles challenging the EPA's MPGe methodology for electric cars. In just a bit, I am going to challenge some numbers in a recent WSJ article on electric vehicles, but first let me give you an idea of why I don't trust many people on this topic. Below is a statement from Fueleconomy.gov, which bills itself as the official government source for fuel economy information (this is a public information site, not a marketing site). In reference to electric vehicles, it writes this: Energy efficient.
Electric vehicles convert about 59–62% of the electrical energy from the grid to power at the wheels—conventional gasoline vehicles only convert about 17–21% of the energy stored in gasoline to power at the wheels. The implication, then, is that electric vehicles are 3x more energy efficient than cars with gasoline engines. I hope engineers and scientists can see immediately why this statement is total crap, but for the rest, here is the problem in short: electricity has to be produced, often from a fossil fuel. That step, of converting the potential energy in the fuel to usable work, is the least efficient step of the entire fuel-to-work process. Even in the most modern of plants it runs at less than 50% conversion efficiency. So the numbers for the gasoline cars include this inefficient step, but for the electric vehicle it has been shuffled off stage, back to the power plant, which is left out of the calculation. Today I want to investigate this statement, which startled me: Factor in the $200 a month he reckons he isn't paying for gasoline to fill up his hulking SUV, and Mr. Beisel says "suddenly the [Nissan Leaf] puts $2,000 in my pocket." Yes, he pays for electricity to charge the Leaf's 24-kilowatt-hour battery—but not much. "In March, I spent $14.94 to charge the car" and a bit less than that in April, he says. This implies that on a cost-per-mile basis, the EV is over 13x more efficient than gasoline cars. Is this a fair comparison? For those who do not want to read a lot of math, I will preview the answer: the difference in fuel cost per mile is at best 2x, and is driven not by using less fossil fuel (the electric car likely uses a bit more, when you go all the way back to the power plant) but by using lower-cost, less-refined fossil fuels (e.g. natural gas in a large power plant instead of gasoline in a car). Let's start with his estimate of $14.94.
Assuming that is the purchased power into his vehicle charger, that the charger efficiency is 90%, and the cost per kWh in Atlanta is around $0.11, this implies that 122.24 usable kWh are going into the car. Using an estimate of 3.3 miles per kWh for the Leaf, we get 403 miles driven per month, or 3.7 cents per mile in electricity costs. This is very good, and nothing I write should imply that the Leaf is not an efficient vehicle. But its efficiency advantage is over-hyped. Now let's take his $200 a month for his Ford Expedition, which has an MPG around 15. Based on fuel prices in Atlanta of $3.50 a gallon, this implies 57 gallons per month and 857 miles driven. The cost is 23.3 cents per mile. Already we see one difference -- the miles-driven assumptions are different. Either he, like a lot of people, doesn't have a reliable memory for how much he spent on gas, or he has changed his driving habits with the electric car (not unlikely given the shorter range). Either way, the total dollar costs he quotes are apples and oranges. The better comparison is 23.3 cents per mile for the Expedition vs. 3.7 cents a mile for the Leaf, a difference of about 6x. Still substantial, but already less than half the 13x difference implied by the article. But we can go further, because in a Nissan Leaf, he has a very different car from the Ford Expedition. It is much smaller, can carry fewer passengers and less cargo, cannot tow anything, and has only 25% of the Expedition's range. With an electric motor, it offers a very different driving experience. A better comparison would be to a Toyota Prius, the c version of which gets 50 MPG. It is similar in most of these categories except that it has a much longer range, but we can't fix that in the comparison, so just keep that difference in mind. Let's look at the Prius for the same distance we calculated with his Leaf, about 403 miles. That would require about 8.1 gallons in a Prius at $3.50, which would be about $28 in total, or 7 cents a mile.
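The arithmetic above is easy to check. This is just a sketch: the electricity rate, charger efficiency, miles-per-kWh figure, and gas price are all the assumptions stated in the post, not measured data.

```python
# Rough reproduction of the cost-per-mile arithmetic above.
# All inputs are the post's assumptions, not measured values.

ELEC_RATE = 0.11       # $/kWh in Atlanta (assumed)
CHARGER_EFF = 0.90     # charger efficiency (assumed)
LEAF_MI_PER_KWH = 3.3  # Leaf efficiency estimate
GAS_PRICE = 3.50       # $/gallon in Atlanta (assumed)

# Nissan Leaf: $14.94 of electricity in a month
leaf_bill = 14.94
usable_kwh = leaf_bill / ELEC_RATE * CHARGER_EFF  # ~122 kWh into the battery
leaf_miles = usable_kwh * LEAF_MI_PER_KWH         # ~403 miles
leaf_cpm = leaf_bill / leaf_miles                 # ~3.7 cents/mile

# Ford Expedition: $200/month of gasoline at 15 MPG
exp_miles = 200 / GAS_PRICE * 15                  # ~857 miles -- a different distance!
exp_cpm = 200 / exp_miles                         # ~23.3 cents/mile

# Toyota Prius c at 50 MPG, driven the same ~403 miles as the Leaf
prius_cpm = GAS_PRICE / 50                        # 7.0 cents/mile

print(f"Leaf:       {leaf_cpm * 100:.1f} cents/mile")
print(f"Expedition: {exp_cpm * 100:.1f} cents/mile")
print(f"Prius c:    {prius_cpm * 100:.1f} cents/mile")
```

Note that the Expedition's 857 miles and the Leaf's 403 miles are different distances, which is exactly why the raw dollar totals in the article cannot be compared directly.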
Note that while the Leaf is still better, the difference has been reduced to just under 2x. Perhaps more importantly, the annual fuel savings has been reduced from over $2,200 vs. the Expedition (which drove twice as many miles) to $159 a year vs. the Prius driving the same number of miles. So the tradeoff is $159 a year in savings but with a much more limited range (forgetting for a moment all the government crony-candy that comes with the electric car). $159 is likely a real savings but could be swamped by differences in long-term operating costs. The Prius has a gasoline engine to maintain which the Leaf does not, though Toyota has gotten those things pretty reliable. On the other hand, the Leaf has a far larger battery pack than the Prius, and there are real concerns that this pack (which costs about $15,000 to manufacture) may have to be replaced long before the rest of the car is at end of life. Replacing a full battery pack after even 10 years would add about $1,200 a year (based on discounted values at 8%) to operating costs, swamping the fuel cost advantage. Also note that a 2x difference in fuel costs per mile does not imply a 2x difference in fuel efficiency. Gasoline is very expensive vs. other fuels on a cost-per-BTU basis (due to taxes that are especially high for gasoline, blending requirements, refining intensity, etc.). Gasoline, as one person said to me way back when I worked at a refinery, is the filet mignon of the barrel of oil -- if you can find a car that will feed on rump steak instead, you will save a lot of money even if it eats the same amount of meat. A lot of marginal electric production (and it is the margin we care about for new loads like electric cars) is natural gas, which is perhaps a third (or less) the cost of gasoline per BTU. My guess is that the key driver of this 2x cost-per-mile difference is not using less fuel per se, but the ability to use a less expensive, less-refined fuel.
Taking a different approach to the same problem, based on the wells-to-wheels methodology described in my Forbes article (which in turn was taken directly from the DOE), the Nissan Leaf has a real eMPG of about 42 (36.5% of the published 115), less than the Prius's 50. This confirms the findings above: for fossil-fuel-generated electricity, the Leaf uses a bit more fossil fuel than the Prius but likely uses much less expensive fuels, so it is cheaper to drive. If the marginal electrical fuel is natural gas, the Leaf also likely generates a bit less CO2. From environmental blog the Thin Green Line: McDonald's has been a frequent target on this blog, and many others related to health and environmental issues. But mark it on your calendar: This post is in praise of Micky D's, for installing EV charging stations at a new West Virginia location. Yes, it's just about the strangest place you could pick, given that the Huntington, WV, location is not on a throughway connecting EV early-adopter towns like New York, D.C., or San Francisco. The location clearly has more to do with its proximity to partner American Electric Power's Columbus, Ohio, headquarters -- but we'll give kudos where kudos are due. With 58 million people eating at McDonald's everyday, the burger chain isn't a bad spot to enable electric vehicle drivers to charge up. 99% of West Virginia's electricity comes from coal, so it's interesting to see environmentalists championing the switch from gasoline to coal. Notwithstanding the fact that the fossil fuel use of electric vehicles is being grossly under-estimated, charging up your EV in WV is a great way to take positive steps to increase your CO2 footprint. After several posts yesterday, I rewrote my thoughts on EV's and the new EPA mileage numbers.
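The wells-to-wheels haircut used above is just a multiplicative factor. The 36.5% figure is the post's number (generation and transmission losses folded together), not something derived here:

```python
# The DOE wells-to-wheels adjustment used above, as a one-liner.
# The 36.5% factor is the post's figure, not derived in this sketch.

WELLS_TO_WHEELS = 0.365

def adjusted_empg(published_empg: float) -> float:
    """Scale an EPA MPGe figure down to a wells-to-wheels equivalent."""
    return published_empg * WELLS_TO_WHEELS

print(adjusted_empg(115))  # Leaf: ~42, vs. the Prius's 50
print(adjusted_empg(99))   # the EPA sticker figure scales to ~36
```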
I am more convinced than ever that this standard borders on outright fraud, particularly when the DOE published what should be the correct methodology way back in the Clinton Administration and the EPA has ignored this advice and gone with a methodology that inflates the MPG (equivalent) of EV's by a factor of nearly 3. For example, they list the Nissan Leaf with an MPGe of 99, but by the DOE methodology the number should be 36. The full article is in Forbes.com and is here. An excerpt: The end result is startling. Using the DOE's apples-to-apples methodology, the MPGe of the Nissan Leaf is not 99 but 36! Now, 36 is a good mileage number, but it is pretty pedestrian compared to the overblown expectations for electric vehicles, and is actually lower than the EPA-calculated mileage of a number of hybrids and even a few traditional gasoline-powered vehicles like the Honda. Supporters of the inflated EPA standards have argued that they are appropriate because they measure cars on their efficiency of using energy in whatever form is put in their tank (or batteries). But this is disingenuous. The whole point of US fuel economy standards is not power train efficiency per se, but to support an energy policy aimed at reducing fossil fuel use. To this end, the more sophisticated DOE standard is a much better reflection of how the Nissan Leaf affects US fossil fuel use. The only reason not to use this standard is that the EPA, and the Administration in general, has too many chips on the table behind electric vehicles, and simply can't afford an honest accounting. Despite years and hundreds of millions of dollars of effort on electric vehicles, competitors are coming out of the woodwork to beat it to market with an all-electric sedan -- and, from the specs, seem to be beating it on price and features as well. Miles Electric has confirmed that it's working on a family sedan-sized all-electric car for release in North America sometime next year.
The car -- which will be released under a different, unknown brand name -- will be a first for the company, which specializes in neighborhood cars that only go up to about 25 miles per hour. The sedan will have a top speed of around 80 miles per hour, and a 100-mile range. It will also require 8-12 hours to fully recharge a dead lithium-ion battery. Miles is currently running the vehicle through crash tests, and expects to see about 300 of them on the road in California sometime next year. The going rate for one of these? About $45,000. Radical shifts in technology often obsolete first-mover and scale advantages. The winners in the market for diesel-electric locomotives (GM and GE) were totally different players from those who dominated the steam locomotive market (Alco, Baldwin, Lima and others). It will be interesting to see if such a change occurs in the auto market. The one the government did not support, plan for, or subsidize. It increasingly looks like hybrids, particularly newer plug-in hybrids, will be the high-MPG, low-emission technology winner for the foreseeable future. The US and California governments, among others, have subsidized (and at times mandated) pure electric vehicles, hydrogen vehicles, natural gas powered vehicles, and fuel cell powered vehicles. While some governments have come along with ex-post-facto promotions of hybrids (e.g. the ability to use the carpool lane), hybrids have been developed and won in the market entirely without government help and, in places like California, effectively in the face of government opposition (because regulators were stuck on zero-emission vehicles, low-emission vehicles were opposed). Plug-in hybrids have many of the advantages of electric vehicles without the range problems. They use standard gasoline, so they avoid the new fuel-distribution issues of natural gas and hydrogen. And fuel cell technology may be great one day but is not there yet.
I was reminded of all this by this article by Stephen Bainbridge on why the EV-1 failed. Update: This reminds me of the 19th-century transcontinental railroads -- UP, SP, NP, GN, etc. Only one of these transcontinentals did not get any federal land grants or government financing -- the Great Northern of James J. Hill -- and not coincidentally the GN was the only one not to go bankrupt at the close of the century.

Why First-Mover Advantage in a New Industry Isn't Always An Advantage
Relocation Subsidies, Short-Term Thinking, And Why Bezos is Smarter than Musk
Fixing Tesla
The Electric Vehicle Mileage Fraud, Updated: Tesla Model 3 Energy Costs Higher than A Prius, Despite Crazy-High eMPG Rating
Why Tesla Agreed to Pay Elon Musk So Much
Here Are the Two Problems With EV's
Well, I Was Uninvited to Speak on Climate -- A Post-Modern Story of Ignorance and Narrow-Mindedness
The Graveyard of Cronyism
Bringing Skepticism (and Math) to Electric Vehicle Fuel Numbers
Environmentalists Praising Use of Coal
More Thoughts on EV MPG
GM's Design Problems in a Nutshell
And the Winning Low Emissions Technology is...
2022 – Feature Column

Hyperoperations, Distributivity, and the Unreasonable Effectiveness of Multiplication
Anil Venkatesh, Adelphi University
In 1915, the paper “Note on an Operation of the Third Grade” by Albert A. Bennett appeared in the Annals of Mathematics. A terse two-page note, it was largely neglected until the early 2000s… Iterated Operations: Everyone knows…

Decoding, Gerrymanders, and Markov Chains
By: David Austin, Grand Valley State University
Tagged: cryptography, gerrymandering, Markov chains, persi diaconis
In 2009, crypto miner James Howells of southern Wales mistakenly threw away a hard drive containing 8,000 bitcoins. That’s over $100 million even in today’s sinking crypto landscape. Thirteen years later, Howells has a plan, backed by venture capital…

Predicting friendships and other fun machine learning tasks with graphs
Noah Giansiracusa, Bentley University
Tagged: data science, graph theory, Machine learning, social media
Social media platforms connect users into massive graphs, with accounts as vertices and friendships as edges... Artificial intelligence (AI) breakthroughs make the news headlines with increasing frequency these days. At least for the time being, AI…

Eight-dimensional spheres and the exceptional $E_8$
What is the $E_8$ lattice that appears in Viazovska's proof? What makes it special? How do you use it to pack spheres?
Eight-dimensional spheres and the exceptional $E_8$
Ursula Whitcher, Mathematical Reviews (AMS)
In Helsinki this summer, Ukrainian mathematician Maryna Viazovska was awarded a Fields Medal "for the proof that…

Applied Algebra: A Variety Show
By: Courtney Gibbons, Hamilton College
Tagged: algebraic geometry, games, groebner bases, ring theory, sudoku
I’m pretty sure the etiquette of puzzle creation insists that a “good” puzzle has a unique solution—but bear with me! I promise I’m breaking the rules of etiquette for a good reason! My interest in applied algebra was a long time…

Statistical Concepts and Intersectionality
Tagged: intersectionality, Kimberlé Crenshaw, law, simpson's paradox
We can formulate this situation into an example of Simpson’s Paradox. When employee outcomes were examined overall, there was no evidence of discrimination between men and women. However, if employee outcomes were to be further broken down by race, there would have been a very clear discrepancy between the Black…

Wordle is a game of chance
William Casselman, University of British Columbia
Tagged: games, information theory, language, probability, wordle
Many of the proposed strategies use the notions introduced by Claude Shannon to solve problems of communication… The game Wordle, which is found currently on the New York Times official Wordle site, can be played by anybody with…

Designing supersymmetry
Tagged: Adinkras, physics, supersymmetry
Studying supersymmetry in physically realistic situations requires a tremendous amount of physical and mathematical sophistication. We’re going to simplify as much as possible: all the way down to zero spatial dimensions!
Designing supersymmetry
Ursula Whitcher, Mathematical Reviews (AMS)
Mathematicians and physicists both love symmetry, but depending on who you’re talking…

Geometric Decompositions
Joe Malkevitch, York College (CUNY)
A remarkable theorem involving decompositions is that if one has two plane simple polygons of the same area, it is possible to decompose either of the polygons into polygonal pieces that can be reassembled to form the other polygon… Introduction: When looking at…

The Origins of Ordinary Least Squares Assumptions: Some Are More Breakable Than Others
Sara Stoudt, Bucknell University
Tagged: linear regression, ordinary least squares
When we start to think more about it, more questions arise. What makes a line “good”? How do we tell if a line is the “best”? Introduction: Fitting a line to a set…
Constraints and concepts (since C++20)

This page describes the core language feature adopted for C++20. For named type requirements used in the specification of the standard library, see named requirements. For the Concepts TS version of this feature, see here.

Class templates, function templates, and non-template functions (typically members of class templates) may be associated with a constraint, which specifies the requirements on template arguments; these requirements can be used to select the most appropriate function overloads and template specializations. Named sets of such requirements are called concepts. Each concept is a predicate, evaluated at compile time, and becomes a part of the interface of a template where it is used as a constraint:

#include <concepts>
#include <cstddef>
#include <string>

// Declaration of the concept "Hashable", which is satisfied by any type 'T'
// such that for values 'a' of type 'T', the expression std::hash<T>{}(a)
// compiles and its result is convertible to std::size_t
template<typename T>
concept Hashable = requires(T a)
{
    { std::hash<T>{}(a) } -> std::convertible_to<std::size_t>;
};

struct meow {};

// Constrained C++20 function template:
template<Hashable T>
void f(T) {}

// Alternative ways to apply the same constraint:
// template<typename T>
//     requires Hashable<T>
// void f(T) {}
//
// template<typename T>
// void f(T) requires Hashable<T> {}
//
// void f(Hashable auto /*parameterName*/) {}

int main()
{
    using std::operator""s;
    f("abc"s); // OK, std::string satisfies Hashable
    // f(meow{}); // Error: meow does not satisfy Hashable
}

Violations of constraints are detected at compile time, early in the template instantiation process, which leads to easy-to-follow error messages:

std::list<int> l = {3, -1, 10};
std::sort(l.begin(), l.end());
// Typical compiler diagnostic without concepts:
//   invalid operands to binary expression ('std::_List_iterator<int>' and
//   'std::_List_iterator<int>')
//       std::__lg(__last - __first) * 2);
//                 ~~~~~~ ^ ~~~~~~~
// ... 50 lines of output ...
//
// Typical compiler diagnostic with concepts:
//   error: cannot call std::sort with std::_List_iterator<int>
//   note: concept RandomAccessIterator<std::_List_iterator<int>> was not satisfied

The intent of concepts is to model semantic categories (Number, Range, RegularFunction) rather than syntactic restrictions (HasPlus, Array). According to ISO C++ core guideline T.20, "The ability to specify meaningful semantics is a defining characteristic of a true concept, as opposed to a syntactic constraint."

A concept is a named set of requirements. The definition of a concept must appear at namespace scope. The definition of a concept has the form:

template < template-parameter-list >
concept concept-name attr(optional) = constraint-expression;

attr - sequence of any number of attributes

// concept
template<class T, class U>
concept Derived = std::is_base_of<U, T>::value;

Concepts cannot recursively refer to themselves and cannot be constrained:

template<typename T>
concept V = V<T*>; // error: recursive concept

template<class T> concept C1 = true;
template<C1 T> concept Error1 = true;
// Error: C1 T attempts to constrain a concept definition
template<class T> requires C1<T> concept Error2 = true;
// Error: the requires-clause attempts to constrain a concept

Explicit instantiations, explicit specializations, or partial specializations of concepts are not allowed (the meaning of the original definition of a constraint cannot be changed).

Concepts can be named in an id-expression. The value of the id-expression is true if the constraint expression is satisfied, and false otherwise.

Concepts can also be named in a type-constraint, for example in a constrained template parameter declaration. In a type-constraint, a concept takes one less template argument than its parameter list demands, because the contextually deduced type is implicitly used as the first argument of the concept.
template<class T, class U>
concept Derived = std::is_base_of<U, T>::value;

template<Derived<Base> T>
void f(T); // T is constrained by Derived<T, Base>

A constraint is a sequence of logical operations and operands that specifies requirements on template arguments. They can appear within requires expressions or directly as bodies of concepts. There are three types of constraints:
1) conjunctions
2) disjunctions
3) atomic constraints

The constraints associated with a declaration are determined by normalizing a logical AND expression whose operands are taken in a specific order. This order determines the order in which constraints are instantiated when checking for satisfaction.

A constrained declaration may only be redeclared using the same syntactic form. No diagnostic is required:

// These first two declarations of f are fine
template<Incrementable T>
void f(T) requires Decrementable<T>;

template<Incrementable T>
void f(T) requires Decrementable<T>; // OK, redeclaration

// Inclusion of this third, logically-equivalent-but-syntactically-different
// declaration of f is ill-formed, no diagnostic required
template<typename T>
    requires Incrementable<T> && Decrementable<T>
void f(T);

// The following two declarations have different constraints:
// the first declaration has Incrementable<T> && Decrementable<T>
// the second declaration has Decrementable<T> && Incrementable<T>
// Even though they are logically equivalent.
template<Incrementable T>
void g(T) requires Decrementable<T>;

template<Decrementable T>
void g(T) requires Incrementable<T>; // ill-formed, no diagnostic required

The conjunction of two constraints is formed by using the && operator in the constraint expression:

template<class T>
concept Integral = std::is_integral<T>::value;
template<class T>
concept SignedIntegral = Integral<T> && std::is_signed<T>::value;
template<class T>
concept UnsignedIntegral = Integral<T> && !SignedIntegral<T>;

A conjunction of two constraints is satisfied only if both constraints are satisfied. Conjunctions are evaluated left to right and short-circuited (if the left constraint is not satisfied, template argument substitution into the right constraint is not attempted: this prevents failures due to substitution outside of immediate context).

template<typename T>
constexpr bool get_value() { return T::value; }

template<typename T>
    requires (sizeof(T) > 1 && get_value<T>())
void f(T); // #1

void f(int); // #2

void g()
{
    f('A'); // OK, calls #2. When checking the constraints of #1,
            // 'sizeof(char) > 1' is not satisfied, so get_value<T>() is not checked
}

The disjunction of two constraints is formed by using the || operator in the constraint expression. A disjunction of two constraints is satisfied if either constraint is satisfied. Disjunctions are evaluated left to right and short-circuited (if the left constraint is satisfied, template argument substitution into the right constraint is not attempted).

template<class T = void>
    requires EqualityComparable<T> || Same<T, void>
struct equal_to;

Atomic constraints

An atomic constraint consists of an expression E and a mapping from the template parameters that appear within E to template arguments involving the template parameters of the constrained entity, called its parameter mapping. Atomic constraints are formed during constraint normalization.
E is never a logical AND or logical OR expression (those form conjunctions and disjunctions, respectively).

Satisfaction of an atomic constraint is checked by substituting the parameter mapping and template arguments into the expression E. If the substitution results in an invalid type or expression, the constraint is not satisfied. Otherwise, E, after any lvalue-to-rvalue conversion, shall be a prvalue constant expression of type bool, and the constraint is satisfied if and only if it evaluates to true.

The type of E after substitution must be exactly bool. No conversion is permitted:

template<typename T>
struct S
{
    constexpr operator bool() const { return true; }
};

template<typename T>
    requires (S<T>{})
void f(T); // #1

void f(int); // #2

void g()
{
    f(0); // error: S<int>{} does not have type bool when checking #1,
          // even though #2 is a better match
}

Two atomic constraints are considered identical if they are formed from the same expression at the source level and their parameter mappings are equivalent.

template<class T> constexpr bool is_meowable = true;
template<class T> constexpr bool is_cat = true;

template<class T>
concept Meowable = is_meowable<T>;

template<class T>
concept BadMeowableCat = is_meowable<T> && is_cat<T>;

template<class T>
concept GoodMeowableCat = Meowable<T> && is_cat<T>;

template<Meowable T>
void f1(T); // #1

template<BadMeowableCat T>
void f1(T); // #2

template<Meowable T>
void f2(T); // #3

template<GoodMeowableCat T>
void f2(T); // #4

void g()
{
    f1(0); // error, ambiguous:
           // the is_meowable<T> in Meowable and BadMeowableCat forms distinct atomic
           // constraints that are not identical (and so do not subsume each other)

    f2(0); // OK, calls #4, more constrained than #3
           // GoodMeowableCat got its is_meowable<T> from Meowable
}

Constraint normalization

Constraint normalization is the process that transforms a constraint expression into a sequence of conjunctions and disjunctions of atomic constraints.
The normal form of an expression is defined as • The normal form of an expression (E) is the normal form of E; • The normal form of an expression E1 && E2 is the conjunction of the normal forms of E1 and E2. • The normal form of an expression E1 || E2 is the disjunction of the normal forms of E1 and E2. • The normal form of an expression C<A1, A2, ... , AN>, where C names a concept, is the normal form of the constraint expression of C, after substituting A1, A2, ... , AN for C's respective template parameters in the parameter mappings of each atomic constraint of C. If any such substitution into the parameter mappings results in an invalid type or expression, the program is ill-formed, no diagnostic required. template<typename T> concept A = T::value || true; template<typename U> concept B = A<U*>; // OK: normalized to the disjunction of // - T::value (with mapping T -> U*) and // - true (with an empty mapping). // No invalid type in mapping even though // T::value is ill-formed for all pointer types template<typename V> concept C = B<V&>; // Normalizes to the disjunction of // - T::value (with mapping T-> V&*) and // - true (with an empty mapping). // Invalid type V&* formed in mapping => ill-formed NDR • The normal form of any other expression E is the atomic constraint whose expression is E and whose parameter mapping is the identity mapping. This includes all fold expressions, even those folding over the && or || operators. User-defined overloads of && or || have no effect on constraint normalization. Requires clauses The keyword requires is used to introduce a requires-clause, which specifies constraints on template arguments or on a function declaration. 
template<typename T> void f(T&&) requires Eq<T>; // can appear as the last element of a function declarator template<typename T> requires Addable<T> // or right after a template parameter list T add(T a, T b) { return a + b; } In this case, the keyword requires must be followed by some constant expression (so it's possible to write requires true), but the intent is that a named concept (as in the example above) or a conjunction/disjunction of named concepts or a requires expression is used. The expression must have one of the following forms: • a primary expression, e.g. Swappable<T>, std::is_integral<T>::value, (std::is_object_v<Args> && ...), or any parenthesized expression • a sequence of primary expressions joined with the operator && • a sequence of aforementioned expressions joined with the operator || template<class T> constexpr bool is_meowable = true; template<class T> constexpr bool is_purrable() { return true; } template<class T> void f(T) requires is_meowable<T>; // OK template<class T> void g(T) requires is_purrable<T>(); // error, is_purrable<T>() is not a primary expression template<class T> void h(T) requires (is_purrable<T>()); // OK Partial ordering of constraints Before any further analysis, constraints are normalized by substituting the body of every named concept and every requires expression until what is left is a sequence of conjunctions and disjunctions on atomic constraints. A constraint P is said to subsume constraint Q if it can be proven that P implies Q up to the identity of atomic constraints in P and Q. (Types and expressions are not analyzed for equivalence: N > 0 does not subsume N >= 0). Specifically, first P is converted to disjunctive normal form and Q is converted to conjunctive normal form. 
P subsumes Q if and only if: • every disjunctive clause in the disjunctive normal form of P subsumes every conjunctive clause in the conjunctive normal form of Q, where • a disjunctive clause subsumes a conjunctive clause if and only if there is an atomic constraint U in the disjunctive clause and an atomic constraint V in the conjunctive clause such that U subsumes V; • an atomic constraint A subsumes an atomic constraint B if and only if they are identical using the rules described above. Subsumption relationship defines partial order of constraints, which is used to determine: • the best viable candidate for a non-template function in overload resolution • the address of a non-template function in an overload set • the best match for a template template argument • partial ordering of class template specializations • partial ordering of function templates If declarations D1 and D2 are constrained and D1's associated constraints subsume D2's associated constraints (or if D2 is unconstrained), then D1 is said to be at least as constrained as D2. If D1 is at least as constrained as D2, and D2 is not at least as constrained as D1, then D1 is more constrained than D2. 
template<typename T>
concept Decrementable = requires(T t) { --t; };
template<typename T>
concept RevIterator = Decrementable<T> && requires(T t) { *t; };

// RevIterator subsumes Decrementable, but not the other way around
template<Decrementable T>
void f(T); // #1
template<RevIterator T>
void f(T); // #2, more constrained than #1

f(0);       // int only satisfies Decrementable, selects #1
f((int*)0); // int* satisfies both constraints, selects #2 as more constrained

template<class T>
void g(T); // #3 (unconstrained)
template<Decrementable T>
void g(T); // #4

g(true); // bool does not satisfy Decrementable, selects #3
g(0);    // int satisfies Decrementable, selects #4 because it is more constrained

template<typename T>
concept RevIterator2 = requires(T t) { --t; *t; };

template<Decrementable T>
void h(T); // #5
template<RevIterator2 T>
void h(T); // #6

h((int*)0); // ambiguous

Keywords: concept, requires

Defect reports

The following behavior-changing defect reports were applied retroactively to previously published C++ standards.

CWG 2428 (applied to C++20): as published, attributes could not be applied to concepts; the corrected behavior is that they are allowed.

See also

Requires expression (C++20): yields a prvalue expression of type bool that describes the constraints
6.9 Inductance

Learning Objectives

By the end of this section, you will be able to do the following:

• Calculate the inductance of an inductor
• Calculate the energy stored in an inductor
• Calculate the emf generated in an inductor

Induction is the process in which an emf is induced by changing magnetic flux. Many examples have been discussed so far, some more effective than others. Transformers, for example, are designed to be particularly effective at inducing a desired voltage and current with very little loss of energy to other forms. Is there a useful physical quantity related to how effective a given device is? The answer is yes, and that physical quantity is called inductance.

Mutual inductance is the effect of Faraday's law of induction for one device upon another, such as the primary coil in transmitting energy to the secondary in a transformer. See Figure 6.39, where simple coils induce emfs in one another. In the many cases where the geometry of the devices is fixed, flux is changed by varying current. We therefore concentrate on the rate of change of current, $\Delta I/\Delta t$, as the cause of induction. A change in the current $I_1$ in one device, coil 1 in the figure, induces an $\text{emf}_2$ in the other. We express this in equation form as

6.34 $\text{emf}_2 = -M\frac{\Delta I_1}{\Delta t},$

where $M$ is defined to be the mutual inductance between the two devices. The minus sign is an expression of Lenz's law. The larger the mutual inductance $M$, the more effective the coupling. For example, the coils in Figure 6.39 have a small $M$ compared with the transformer coils in Figure 6.28. Units for $M$ are $(\text{V}\cdot\text{s})/\text{A} = \Omega\cdot\text{s}$, which is named a henry (H), after Joseph Henry. That is, $1\ \text{H} = 1\ \Omega\cdot\text{s}$. Nature is symmetric here.
If we change the current $I_2$ in coil 2, we induce an $\text{emf}_1$ in coil 1, which is given by

6.35 $\text{emf}_1 = -M\frac{\Delta I_2}{\Delta t},$

where $M$ is the same as for the reverse process. Transformers run backward with the same effectiveness, or mutual inductance $M$.

A large mutual inductance $M$ may or may not be desirable. We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance $M$ is to counterwind coils to cancel the magnetic field produced. (See Figure 6.40.)

Self-inductance, the effect of Faraday's law of induction of a device on itself, also exists. When, for example, current through a coil is increased, the magnetic field and flux also increase, inducing a counter emf, as required by Lenz's law. Conversely, if the current is decreased, an emf is induced that opposes the decrease. Most devices have a fixed geometry, and so the change in flux is due entirely to the change in current $\Delta I$ through the device. The induced emf is related to the physical geometry of the device and the rate of change of current. It is given by

6.36 $\text{emf} = -L\frac{\Delta I}{\Delta t},$

where $L$ is the self-inductance of the device. A device that exhibits significant self-inductance is called an inductor, and given the symbol in Figure 6.41. The minus sign is an expression of Lenz's law, indicating that the emf opposes the change in current. Units of self-inductance are henries (H), just as for mutual inductance.
The larger the self-inductance $L$ of a device, the greater its opposition to any change in current through it. For example, a large coil with many turns and an iron core has a large $L$ and will not allow current to change quickly. To avoid this effect, a small $L$ must be achieved, such as by counterwinding coils as in Figure 6.40.

A 1 H inductor is a large inductor. To illustrate this, consider a device with $L = 1.0\ \text{H}$ that has a 10 A current flowing through it. What happens if we try to shut off the current rapidly, perhaps in only 1.0 ms? An emf, given by $\text{emf} = -L(\Delta I/\Delta t)$, will oppose the change. Thus an emf will be induced given by

$\text{emf} = -L\frac{\Delta I}{\Delta t} = -(1.0\ \text{H})\left[\frac{-10\ \text{A}}{1.0\ \text{ms}}\right] = 10{,}000\ \text{V}.$

The positive sign means this large voltage is in the same direction as the current, opposing its decrease. Such large emfs can cause arcs, damaging switching equipment, and so it may be necessary to change current more slowly.

There are uses for such a large induced voltage. Camera flashes use a battery, two inductors that function as a transformer, and a switching system or oscillator to induce large voltages. (Remember that we need a changing magnetic field, brought about by a changing current, to induce a voltage in another coil.) The oscillator system will do this many times as the battery voltage is boosted to over one thousand volts. (You may hear the high-pitched whine from the transformer as the capacitor is being charged.) A capacitor stores the high voltage for later use in powering the flash. (See Figure 6.42.)

It is possible to calculate $L$ for an inductor given its geometry (size and shape) and knowing the magnetic field that it produces. This is difficult in most cases because of the complexity of the field created. So, in this text, the inductance $L$ is usually a given quantity.
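The switch-off arithmetic above is easy to verify; a minimal sketch in Python (all numbers taken from the example in the text):

```python
# Magnitude of the emf induced when a 10 A current through a 1.0 H
# inductor is shut off in 1.0 ms: |emf| = L * (dI/dt), from eq 6.36.
L = 1.0       # self-inductance, H
dI = 10.0     # change in current, A
dt = 1.0e-3   # switching time, s

emf = L * dI / dt
print(f"{emf:.0f} V")  # 10000 V
```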
One exception is the solenoid, because it has a very uniform field inside, a nearly zero field outside, and a simple shape. It is instructive to derive an equation for its inductance. We start by noting that the induced emf is given by Faraday's law of induction as $\text{emf} = -N(\Delta\Phi/\Delta t)$ and, by the definition of self-inductance, as $\text{emf} = -L(\Delta I/\Delta t)$. Equating these yields

6.37 $\text{emf} = -N\frac{\Delta\Phi}{\Delta t} = -L\frac{\Delta I}{\Delta t}.$

Solving for $L$ gives

6.38 $L = N\frac{\Delta\Phi}{\Delta I}.$

This equation for the self-inductance $L$ of a device is always valid. It means that self-inductance $L$ depends on how effective the current is in creating flux; the more effective, the greater $\Delta\Phi/\Delta I$ is.

Let us use this last equation to find an expression for the inductance of a solenoid. Since the area $A$ of a solenoid is fixed, the change in flux is $\Delta\Phi = \Delta(BA) = A\,\Delta B$. To find $\Delta B$, we note that the magnetic field of a solenoid is given by $B = \mu_0 n I = \mu_0\frac{NI}{\ell}$. (Here, $n = N/\ell$, where $N$ is the number of coils and $\ell$ is the solenoid's length.) Only the current changes, so that $\Delta\Phi = A\,\Delta B = \mu_0 NA\frac{\Delta I}{\ell}$. Substituting $\Delta\Phi$ into $L = N\frac{\Delta\Phi}{\Delta I}$ gives

6.39 $L = N\frac{\Delta\Phi}{\Delta I} = N\frac{\mu_0 NA\frac{\Delta I}{\ell}}{\Delta I}.$

This simplifies to

6.40 $L = \frac{\mu_0 N^2 A}{\ell}\quad\text{(solenoid)}.$

This is the self-inductance of a solenoid of cross-sectional area $A$ and length $\ell$. Note that the inductance depends only on the physical characteristics of the solenoid, consistent with its definition.

Example 6.7 Calculating the Self-Inductance of a Moderate Size Solenoid

Calculate the self-inductance of a 10.0 cm long, 4.00 cm diameter solenoid that has 200 coils.

This is a straightforward application of $L = \mu_0 N^2 A/\ell$, since all quantities in the equation except $L$ are known. Use the following expression for the self-inductance of a solenoid:

6.41 $L = \frac{\mu_0 N^2 A}{\ell}$

The cross-sectional area in this example is $A = \pi r^2 = (3.14\ldots)(0.0200\ \text{m})^2 = 1.26\times 10^{-3}\ \text{m}^2$, $N$ is given to be 200, and the length $\ell$ is 0.100 m. We know the permeability of free space is $\mu_0 = 4\pi\times 10^{-7}\ \text{T}\cdot\text{m/A}$. Substituting these into the expression for $L$ gives

6.42 $L = \frac{(4\pi\times 10^{-7}\ \text{T}\cdot\text{m/A})(200)^2(1.26\times 10^{-3}\ \text{m}^2)}{0.100\ \text{m}} = 0.632\ \text{mH}.$

This solenoid is moderate in size. Its inductance of nearly a millihenry is also considered moderate.

One common application of inductance is used in traffic lights that can tell when vehicles are waiting at the intersection. An electrical circuit with an inductor is placed in the road under the place a waiting car will stop. The body of the car increases the inductance and the circuit changes, sending a signal to the traffic lights to change colors. Similarly, metal detectors used for airport security employ the same technique. A coil or inductor in the metal detector frame acts as both a transmitter and a receiver. The pulsed signal in the transmitter coil induces a signal in the receiver.
The self-inductance of the circuit is affected by any metal object in the path. Such detectors can be adjusted for sensitivity and also can indicate the approximate location of metal found on a person. (See Figure 6.43.)

Energy Stored in an Inductor

We know from Lenz's law that inductances oppose changes in current. There is an alternative way to look at this opposition that is based on energy. Energy is stored in a magnetic field. It takes time to build up energy, and it also takes time to deplete energy; hence, there is an opposition to rapid change. In an inductor, the magnetic field is directly proportional to current and to the inductance of the device. It can be shown that the energy stored in an inductor $E_{\text{ind}}$ is given by

6.43 $E_{\text{ind}} = \frac{1}{2}LI^2.$

This expression is similar to that for the energy stored in a capacitor.

Example 6.8 Calculating the Energy Stored in the Field of a Solenoid

How much energy is stored in the 0.632 mH inductor of the preceding example when a 30.0 A current flows through it?

The energy is given by the equation $E_{\text{ind}} = \frac{1}{2}LI^2$, and all quantities except $E_{\text{ind}}$ are known. Substituting the value for $L$ found in the previous example and the given current gives

6.44 $E_{\text{ind}} = \frac{1}{2}LI^2 = 0.5\,(0.632\times 10^{-3}\ \text{H})(30.0\ \text{A})^2 = 0.284\ \text{J}.$

This amount of energy is certainly enough to cause a spark if the current is suddenly switched off. It cannot be built up instantaneously unless the power input is infinite.
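Examples 6.7 and 6.8 can be reproduced in a few lines; the following sketch (plain Python, SI units, numbers from the examples) recomputes the solenoid inductance and the stored energy:

```python
import math

# Solenoid from Example 6.7: 10.0 cm long, 4.00 cm diameter, 200 turns.
mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
N = 200                    # number of turns
r = 0.0200                 # solenoid radius, m (4.00 cm diameter)
length = 0.100             # solenoid length, m
I = 30.0                   # current from Example 6.8, A

A = math.pi * r**2             # cross-sectional area, m^2
L = mu0 * N**2 * A / length    # eq 6.40
E_ind = 0.5 * L * I**2         # eq 6.43

print(f"L = {L * 1e3:.3f} mH")      # ~0.632 mH
print(f"E_ind = {E_ind:.3f} J")     # ~0.284 J
```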
The Power of Christmas

Noel and Merrie received some Christmas presents. The number of presents Noel received was a power of 3. The number of presents Merrie received was a power of 2. The numbers of Christmas presents they received were consecutive numbers. How many presents did they receive?

(One commenter connects the puzzle to Catalan's conjecture.)
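A brute-force search (not part of the Transum page) confirms the possibilities; by Mihăilescu's theorem (formerly Catalan's conjecture), 8 and 9 are the only consecutive perfect powers greater than 1, so no larger pairs exist. The bound `LIMIT` is arbitrary:

```python
# Brute-force search: a power of 3 (Noel) and a power of 2 (Merrie)
# that are consecutive integers, up to an arbitrary bound.
LIMIT = 10**6

powers_of_3 = [3**k for k in range(0, 13) if 3**k <= LIMIT]
powers_of_2 = [2**k for k in range(0, 20) if 2**k <= LIMIT]

# (Noel, Merrie) pairs whose counts differ by exactly 1; whether
# 3**0 = 1 "counts" as receiving presents is up to interpretation.
pairs = sorted((a, b) for a in powers_of_3 for b in powers_of_2
               if abs(a - b) == 1)
print(pairs)  # [(1, 2), (3, 2), (3, 4), (9, 8)]
```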
Winter Term 2017/2018

Hosts: Prof. Dr. R. Klein (FU), Prof. Dr. R. Kornhuber (FU), Prof. Dr. C. Schütte (FU/ZIB)
Location: Freie Universität Berlin, Institut für Mathematik, Arnimallee 6, 14195 Berlin-Dahlem, Room: 031 ground floor
Time: The seminar takes place on Thursday at 4:00 pm

Thursday, 19.10.2017: Cluster discussion
Discrete/continuous deterministic/stochastic/hybrid models
Data-driven multiscale modelling

Thursday, 26.10.2017: Cluster discussion
Beyond equilibrium
Seamless Numerics

Thursday, 02.11.2017: Cluster discussion
Atmospheric Moist Processes
Coarse Graining / homogenization

Thursday, 16.11.2017: Lecture
Wiebke Schubotz, Max Planck Institute for Meteorology, Hamburg
High definition clouds and precipitation - an overview of the HD(CP)² project

The project HD(CP)² (High Definition Clouds and Precipitation for advancing Climate Prediction) addresses the lack of understanding of cloud and precipitation (CP) processes, which is one of the foremost problems of climate simulations and climate predictions. In its first funding phase, the project leveraged rapid developments in simulation and measurement science (through its modeling and observation modules) and thus provided new insights to resolve the CP roadblock. This resulted in a significantly improved representation of clouds and precipitation in the ICON (Icosahedral non-hydrostatic) model that is used for hindcast simulations in HD(CP)². This model is currently utilized at a scale of 150 m horizontal resolution over regions as diverse as central Europe, the tropical Atlantic and the northern Atlantic. In its second funding phase, the work of the modeling and observation modules is utilized in several synthesis modules that investigate various topics such as the fast cloud adjustments to aerosols, convective organization of clouds or the influence of land surface heterogeneity. Data from observation campaigns is made available through the project's own database, SAMD, to the scientific community.
Thursday, 30.11.2017: Colloquium

1) Thomas von Larcher, Freie Universität Berlin: "What is .. Scale similarity and self organisation in turbulent flows?"

Self organisation in turbulent flows leads to the emergence of coherent vortices at different scales. Such coherent structures have been highlighted by flow visualisation methods, for example by defining vortices as areas where the vorticity magnitude is greater than the rate of strain (the so-called Q-criterion). To enable self-similar extrapolation of structures to small, unresolved scales, a quantitative description of self-similar structures needs to be achieved. One challenging aspect is that the geometry of coherent structures can be variable; with increasing vorticity level, one typically sees an evolution from ribbon-like structures to elongated tubes. In order to be used in this context, pattern recognition techniques need to be able to detect structures despite their being stretched or rotated. Furthermore, intense small-scale structures are not randomly distributed in space and time but rather form clusters of inertial-range extent, leading to an intermittent flow organization. With increasing Reynolds number, the intermittency becomes more pronounced and fluctuations in velocity gradients become more extreme, with longer tails in their probability distribution. Non-local scale interactions appear to also impact intermittency, to an extent that scales with the Reynolds number. Studying the organization in turbulent flows using data-driven methodologies will be part of project B07.

2) Nikki Vercauteren, Freie Universität Berlin: "What is .. Turbulence closure models based on scale similarity principles?"

In the range of turbulent flow prediction tools, Large Eddy Simulations (LES) stand in the middle, between direct numerical simulations (DNS), where all the scales of motion are resolved, and Reynolds-Averaged Navier-Stokes (RANS) methods, where all the turbulent scales are modeled.
In LES, all the large, energy-containing scales that one can computationally afford to capture on a numerical grid (the resolved scales) are simulated, and the dynamics of the small turbulent eddies (subgrid scales) that cannot be captured, and their effect on the larger scales, are parameterized based on resolved, or filtered, quantities. With scale similarity in the inertial range of turbulence, a subgrid-scale (SGS) model which respects the scale similarity should in principle be applicable at different filter scales. This principle is exploited in the dynamic SGS model to determine numerical coefficients in the model. The most widely used version is the dynamic Smagorinsky model, in which the dynamic procedure is applied to determine the appropriate value of the Smagorinsky model coefficient. Usually the appropriate coefficient is determined as the one that most accurately represents energy transfer across scales, and calculating it involves averaging over directions of statistical homogeneity of the flow (for example over flow trajectories). With a refined dynamic procedure, scale-dependent coefficients are used to mitigate the assumption of scale invariance; this has proven to be useful in the vicinity of the lower boundary, where the subgrid scales account for a large portion of the flow, and in stably stratified conditions. Another, structural approach is to treat the turbulent flow as a set of flow structures moving in a Lagrangian frame and to track their interactions. This is used in coherent vortex simulation (CVS) methods, in which a wavelet filter decomposes the solutions of the Navier-Stokes equations. Wavelets are then dynamically selected to track the flow evolution with a reduced number of modes. Quantifying self-similarity in such coherent vortex approaches would give a way to extrapolate ensembles of coherent flow structures from a coarse grid to generate unresolved fluctuations, thereby defining a new turbulence closure based on self-similarity principles.
This is part of the aims of project B07.

Thursday, 11.01.2018: Lecture

Yuri Podladchikov, University of Lausanne
Direct numerical simulations and parametrization of Thermo-Hydro-Mechanical-Chemical fully coupled flows

Unavoidable upscaling from millimeters to hundreds of kilometers of subsurface flows of porous fluids requires validation by natural observations, laboratory or numerical experiments. We use hints from effective media relationships applied to homogenization of pore-scale models to develop fully coupled continuum models and investigate them at a wide range of scales by systematic numerical simulations. The most important features of the solutions are the spontaneous development of spatial and temporal localization into traveling shock or solitary waves or self-similar spreading solutions typical for degenerate parabolic equations. We identify the key features of numerical solutions to collapse the numerical data to simple functional relationships between averaged properties of the flows essential for building the upscaled models.

Thursday, 25.01.2018: Colloquium

Vyacheslav Boyko, Freie Universität Berlin: "What is .. dynamical system identification with FEM-BV-VARX?"

System identification (SI) is applied as an alternative to analytical modeling. The objective is to determine a mathematical model that is able to characterize the output dynamics induced by a given input. Depending on the data and the research task, several questions can be answered: Is there a relationship between the variables? How can this relationship be quantified? What amount of output signal energy can be described by a specific input? What type of nonlinearity describes the system? Can we predict the output, and how accurate will it be? In this context one important step is the parameter estimation. To accomplish this, the FEM-BV-VARX method will be introduced.
One of its strengths is solving the regression and classification tasks simultaneously, providing a solution as a set of systems operating locally in time. The aim of this talk is to explain the general idea of SI, starting from linear system theory and going on to nonlinear spatio-temporal SI and analysis. For the latter, the theory, methods and examples are presented in a way that gives an idea of the iterative SI process, its difficulties and its capabilities using FEM-BV-VARX.

Thursday, 01.02.2018: Poster Session

Thursday, 22.02.2018: Lecture

Edriss S. Titi, Texas A&M University and The Weizmann Institute of Science
On Recent Advances of the 3D Euler Equations by Means of Examples

In this talk we will use a basic example of shear flow to demonstrate some of the recent advances in the three-dimensional Euler equations. Specifically, this example was introduced by DiPerna and Majda to show that a weak limit of classical solutions of the Euler equations may, in some cases, fail to be a weak solution of the Euler equations. We use this shear flow example to provide non-generic, yet nontrivial, examples concerning the immediate loss of smoothness and ill-posedness of solutions of the three-dimensional Euler equations, for initial data that do not belong to C^{1,α}. Moreover, we show by means of this shear flow example the existence of weak solutions for the three-dimensional Euler equations whose vorticity has a nontrivial density concentrated on a non-smooth surface (vortex sheet). This is very different from what has been proven for the two-dimensional Kelvin-Helmholtz (Birkhoff-Rott) problem, where minimal regularity implies the real analyticity of the interface. Furthermore, we use this shear flow to provide explicit examples of non-regular solutions of the three-dimensional Euler equations that conserve the energy, an issue which is related to the Onsager conjecture.
Eventually, we will discuss the recent remarkable work of De Lellis and Székelyhidi concerning the wild weak solutions of the Euler equations and their non-uniqueness. In particular, we propose the following ruling-out criterion for non-physical weak solutions of the Euler equations: "Any weak solution which is not a vanishing viscosity limit of weak solutions of the Navier-Stokes equations should be ruled out". We will use this shear flow, and other solutions of the Euler equations with certain spatial symmetry, to provide nontrivial examples for the use of this ruling-out criterion. This is a joint work with Claude Bardos.

Tuesday, 27.02.2018
Wednesday, 28.02.2018
Finding and replacing characters

Hi everyone, I am struggling to find a way to replace characters for sorting purposes. For example, I want to replace the following characters, where one zero is added after the letter "A" whenever there are three numbers and two zeros are added after "A" whenever there are two numbers. Also, nothing should be done if there are four numbers after "A". I really appreciate it if you can help me.

@2-mins-summary ,

You will need to use capture groups and conditional substitutions. I have also added negative lookahead and negative lookbehind assertions, though it can probably be done without them, because I wanted to limit it to 1-4 digits. You weren't 100% clear as to whether these 1-4 digits would always be after an A, or always after a letter, or always starting in the second column of the line, or some combination, so I made mine restrict it to any group of 1-4 digits, anywhere on the line, as long as there isn't a digit before or a digit after (which would have meant it was really 5+ digits).

FIND = (?<!\d)\d(\d)?(\d)?(\d)?(?!\d)
REPLACE = (?1:0)(?2:0)(?3:0)$0
SEARCH MODE = Regular Expression

Basically, the logic puts each individual digit after the first digit into a numbered group (1-3), as long as it's exactly 1 thru exactly 4 digits with non-digits surrounding; then the replacement says "if there isn't a digit for the Nth group, use a 0 at the beginning instead" 3 times, for the three optional digits – so it will insert no 0s before a four-digit number, one 0 before a three-digit number, two 0s before a two-digit number, and three 0s before a one-digit number.

If you need it to be more restrictive, you will need to be more clear about examples that shouldn't match; if you need it to be more permissive, you need to give more examples of data that should match that my expression does not.
In other words, my answer is only as good as the example you gave and the assumptions I made (intentionally or unknowingly) based on the example data you provided.

edit: fixed typos in the original FIND

@peterjones Really appreciate it. I mean it always should be after an A. Could you please help me with it?

@2-mins-summary said in Finding and replacing characters:

> it always should be after an A

A quick mod to Peter's for this would be to start off his Find expression with A\K

> it always should be after an A

FIND = (?<=A)\d(\d)?(\d)?(\d)?(?!\d)

Everything else stays the same. I just had to change the negative lookbehind that said "anything that doesn't contain a digit" to be a positive lookbehind that said "only match if the character before is an A".

So the B's don't change but the A's do. For example:

Team A1 54
Team A36 63
Team A282 88
Team A1482 98

becomes

Team A0001 54
Team A0036 63
Team A0282 88
Team A1482 98

where zeros are added after the letter A whenever there are fewer than four digits.

Thank you

@peterjones Thank you very much
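For readers outside Notepad++: Python's re module lacks Boost's conditional replacements, but the same padding can be sketched with a replacement function (the name `pad_after_a` is my own, not from the thread):

```python
import re

# Zero-pad the 1-4 digits that follow an "A" out to four digits.
# Groups of 5+ digits are left untouched, as in the Boost version.
def pad_after_a(text):
    return re.sub(r"A(\d{1,4})(?!\d)",
                  lambda m: "A" + m.group(1).zfill(4),
                  text)

print(pad_after_a("Team A1 54"))     # Team A0001 54
print(pad_after_a("Team A282 88"))   # Team A0282 88
print(pad_after_a("Team A1482 98"))  # Team A1482 98 (unchanged)
```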
Stokes stream function

The Stokes stream function is a mathematical representation of the trajectories of particles in a steady flow of fluid over an object. In other words, a plot of the Stokes stream function results in the streamlines seen in the diagram below.

To derive the Stokes stream function, we make the following assumptions:

• The flow of the fluid is axisymmetric, i.e. defined along the polar axis (z-axis), and therefore the velocity components are independent of φ.
• The fluid is incompressible and therefore has a constant density.

Eq12 of the previous article becomes:

$\frac{1}{r^2}\frac{\partial(r^2 u_r)}{\partial r}+\frac{1}{r\sin\theta}\frac{\partial(u_\theta\sin\theta)}{\partial\theta}=0$  (eq13)

Question: Show that $\nabla\cdot\boldsymbol{u}=0$.

Answer: The divergence of a vector field in spherical coordinates is

$\nabla\cdot\boldsymbol{u}=\frac{1}{r^2}\frac{\partial(r^2 u_r)}{\partial r}+\frac{1}{r\sin\theta}\frac{\partial(u_\theta\sin\theta)}{\partial\theta}+\frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}$

If the flow is axisymmetric, the last term vanishes and the divergence is exactly the LHS of eq13, so we can write eq13 as:

$\nabla\cdot\boldsymbol{u}=0$  (eq14)

Eq14 is needed later for the derivation of the differential equation $E^2(E^2\psi)=0$.

George Stokes developed the solution to eq13 by defining the Stokes stream function ψ, where:

$u_r=\frac{1}{r^2\sin\theta}\frac{\partial\psi}{\partial\theta}$  (eq15)

$u_\theta=-\frac{1}{r\sin\theta}\frac{\partial\psi}{\partial r}$  (eq16)

Question: Show that eq15 and eq16 satisfy eq13.

Answer: Substituting eq15 and eq16 into eq13,

$\frac{1}{r^2}\frac{\partial}{\partial r}\left(\frac{1}{\sin\theta}\frac{\partial\psi}{\partial\theta}\right)-\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(\frac{1}{r}\frac{\partial\psi}{\partial r}\right)=\frac{1}{r^2\sin\theta}\left[\frac{\partial}{\partial r}\left(\frac{\partial\psi}{\partial\theta}\right)-\frac{\partial}{\partial\theta}\left(\frac{\partial\psi}{\partial r}\right)\right]$

Since $\frac{\partial}{\partial r}\left(\frac{\partial\psi}{\partial\theta}\right)=\frac{\partial}{\partial\theta}\left(\frac{\partial\psi}{\partial r}\right)$, eq15 and eq16 satisfy eq13.

With reference to the diagram at the top of the page, the flow velocity u, which is defined in the z-direction, varies at different distances from the surface of the sphere. On the surface of the sphere (r = a), the no-slip condition gives

$u_r=u_\theta=0$  (eq17)

At r = ∞, we assume that the flow velocity of the fluid is uniform.
The flow velocity at any point in the fluid can be deconstructed into its radial and polar components (see above diagram), where:

$u_r=u\cos\theta$  (eq18)

$u_\theta=-u\sin\theta$  (eq19)

Substituting eq15 into eq18 and eq16 into eq19, we have

$\frac{1}{r^2\sin\theta}\frac{\partial\psi}{\partial\theta}=u\cos\theta$  (eq20)

$\frac{1}{r\sin\theta}\frac{\partial\psi}{\partial r}=u\sin\theta$  (eq21)

Integrating eq21 (the integration function of θ is fixed by eq20 and can be taken as zero), we get:

$\psi=\frac{1}{2}ur^2\sin^2\theta$  (eq22)

At r = ∞, r² ≫ a², and eq22 describes the uniform flow far from the sphere. Eq17 and eq22 express the boundary conditions on the Stokes stream function for an incompressible fluid at r = a and r = ∞ respectively. In the next few articles, we shall define the Stokes stream function at a ≤ r ≤ ∞.
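As a quick numerical sanity check (not part of the article), the far-field stream function of eq22 can be differentiated per eq15 and eq16 to recover the uniform-flow components of eq18 and eq19; the sample point and free-stream speed below are arbitrary:

```python
import math

U = 2.0  # free-stream speed (arbitrary for this check)

def psi(r, th):
    """Far-field Stokes stream function, eq22: psi = (1/2) U r^2 sin^2(theta)."""
    return 0.5 * U * r**2 * math.sin(th)**2

def velocities(r, th, h=1e-6):
    """u_r and u_theta from psi via eq15 and eq16, using central differences."""
    dpsi_dth = (psi(r, th + h) - psi(r, th - h)) / (2 * h)
    dpsi_dr = (psi(r + h, th) - psi(r - h, th)) / (2 * h)
    u_r = dpsi_dth / (r**2 * math.sin(th))
    u_th = -dpsi_dr / (r * math.sin(th))
    return u_r, u_th

r, th = 3.0, 0.7
u_r, u_th = velocities(r, th)
print(abs(u_r - U * math.cos(th)) < 1e-5)   # eq18 recovered
print(abs(u_th + U * math.sin(th)) < 1e-5)  # eq19 recovered
```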
Multi-vari Studies and Families of Variation

Multi-vari analysis is applied at the beginning of the ANALYZE phase to reduce the focus from all inputs (x's) creating the variation to a much smaller group of variables. Use multi-vari studies among the FOV's to find specific areas to hone in on. Take the example below. The MSA has passed at this point and the Study Variation has been quantified. The remaining variation of the total is related to the PROCESS. This is NOT the same as multivariate analysis.

Within the process, the team came up with 5 sources of variation to examine. The GB/BB then ran multi-vari charts with ANOVA tests on four of the sources and used an F-test and a 2-sample t-test on another source. From here, the data showed a significant source of variation from plant to plant. Within that, it was found that Plant C was the highest contributor to variation. The GB/BB continued to mine the data further to examine the same original FOV's within only Plant C. At this point, the GB/BB may have found that certain machines, operators, a particular part, or a shift was the primary contributor.

Recall, the focus of Six Sigma is on VARIATION reduction, not only shifting the mean. This could even result in slightly reducing the performance of the mean if it results in drastically reduced variation. If Machine A ran an average of 23,492 pcs/hr with a standard deviation of 4,891 pcs/hr and Machine B ran an average of 23,400 pcs/hr with a standard deviation of 10 pcs/hr, then the latter is probably preferred. Although both are important, the team's goal is to stabilize and make the process more repeatable, controlled, and predictable before trying to shift the mean to a more desirable target. In other words, make the process more PRECISE before making the process more ACCURATE, but ultimately both are desired.

The purpose of FOV

Almost every set of long-term data contains rational subgroups.
It is very important that a GB/BB understands how to dissect the data to understand the variation created by the families of variation. Remember, by this stage the Measurement System variation has been quantified and the remaining variation is the Process Variation. Each subgroup being analyzed contributes to the total Process Variation. The key is to analyze the FOV to identify the vital few inputs (KPIV's) that are creating most of the process variation. Quantifying and visually showing the team members the magnitude of the variation created by each of the subgroups is extremely relevant to the team so they can focus on the right areas to reduce variation and improve the mean. Frequently it isn't as simple as that, and variables can have interaction and confounding effects on one another, in which case a DOE becomes important. Using simple Multi-vari Charts These charts are a graphical method of presenting ANOVA (analysis of variance) data in a comprehensive visual manner. These charts are often used to gain a qualitative understanding of the inputs' (x's) contributions to the process and the interactions of the inputs prior to more time-consuming numerical analysis. Team members tend to find these charts easier to grasp than statistical results and often have a deeper level of interest in the data when it is presented in this format. The chart displays the mean at each level of every factor but also shows the spread of the data. They are quick and inexpensive to generate with a potentially high reward of discovering something lurking in the data. Multi-vari charts are a graphical representation of potential Key Process Input Variables (KPIV's) and their relationship to the Effects (Y's). They are used to drill down into the "vital few" inputs that are creating most of the variation so the team can focus on the highest impact improvements.
Multi-vari studies classify variation sources* as: • Positional – variation within a single unit (or piece) • Cyclical – variation between (among) unit-to-unit repetition over a short time period • Temporal – variation over longer periods of time (drifts, trends) *Material variation is sometimes cited as another key variation source, but Six Sigma certification questions are usually looking for Positional, Cyclical, and/or Temporal. It is not so important to categorize into one of the above but more important to recognize and study the sources of variation in as many ways as possible. As mentioned earlier, multi-vari charts are generally inexpensive ways to find quick insight about where, and where not, to focus improvement efforts. Other methods such as surveys and SPC can take more resources to find this same information. There is no reason a GB/BB should not take the time to run these charts. The analysis is easy and quick once you get familiar with the software; the data collection and organization are hard and time-consuming. Putting the hard work up front to collect enough meaningful data will pay dividends as you proceed to the IMPROVE phase. You will know where to prioritize improvements to get the most for your resources. You can't fix everything...tackle the vital few sources of variation. The Process 1) Create the Families of Variation tree (which are rational sub-groups making up the Process Variation). Such as: • part-part, piece-piece • shift-shift, • machine-machine, • form-form, • lot-lot, • batch-batch, • facility-facility, • operator-operator, • tool-tool, • mold-mold, • month-month • heat-heat, • supplier-supplier, • house-house, • car-car • across length, or top to bottom of a piece (form of positional variation) • across width of a piece (form of positional variation) Determine which rational subgroups will show relationships and interactions of the key inputs.
2) Use as many graphical techniques as possible, within reason, to illustrate sources of variation (such as Boxplots, Scatter Plots, SPC, etc.). 3) Create multi-vari charts for all three families. This will eliminate subjective reasons or validate them and help to show the relative impact of the sources. This helps a GB/BB narrow their statistical tests to a smaller set of variables. 4) Analyze the graphs. What are they telling you? You may need to collect more data where you suspect there is a lot of variation, narrow that collection to the suspect groups, and try to confirm the results. Remember, a good sampling plan ensures that a broad spectrum of data is collected that includes relevant sources of variation (noise). In other words, collect data on several lots, batches, shifts, machines, tools, etc. Depending on how easily and economically the data can be collected, strive to get as much as possible within each rational subgroup. The chart below helps visualize that the means are relatively consistent from facility to facility but the variation is wide within each facility. The variation among facilities appears to be less of a concern (see the spread of the red line) than the variation within some of the facilities (see the spread of the data points within each facility). ANOVA will quantify the variation between facilities and within facilities to confirm the graph. The graph is more powerful for showing the team; the statistics are your friend as the proof. An F-value for Facilities that is below the F-critical value indicates the variation between the facilities is not significant relative to the variation within them. The opposite is true if the F-value is greater than the F-critical value. A more discriminating investigation of positional variation may include examining mold-mold variation within a particular facility, or press-press variation within a particular facility. About Multi-vari Charts Multi-vari charts show the following three categories, which can reflect time-related elements of the process.
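To make the F-test logic concrete, here is a minimal one-way ANOVA computed from scratch (the facility readings are made up for illustration; F is the between-group mean square divided by the within-group mean square):

```python
from statistics import mean

# Hypothetical output readings (pieces/hr) from three facilities.
groups = {
    "Facility A": [100, 104, 96, 101, 99],
    "Facility B": [102, 98, 105, 97, 103],
    "Facility C": [99, 101, 100, 103, 97],
}

grand_mean = mean(x for g in groups.values() for x in g)
k = len(groups)                           # number of groups
n = sum(len(g) for g in groups.values())  # total observations

# Between-group sum of squares: how far each facility mean sits from the grand mean.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
# Within-group sum of squares: spread of readings inside each facility.
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

f_value = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_value:.3f}")  # compare against F-critical for (k-1, n-k) df
```

Here the small F-value reflects exactly the situation in the chart: facility means are nearly identical while within-facility spread dominates.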
1) Cyclical (such as Batch to Batch, Lot to Lot, Piece to Piece) 2) Temporal (time comparisons) 3) Positional (such as across width or across the diameter) These sources of variation are between (or among) unit-to-unit repetition over a short time period such as: • 1st Shift - 2nd Shift - 3rd Shift • Lot to Lot (within a shift, or an hour) • Hour to Hour There are 3 factors (Plant X, Y, and Z) and 2 levels (Part 1 and Part 2). The y-axis is the pieces/hr that each part ran at (the output) at the respective plants. The blue dots are the mean values (in pieces/hr) of the two parts at each plant. These are sources of variation that occur over periods of time (time frames). With this type of analysis, the goal is to identify drifts or trends related to time events such as: • Hour-Hour • Day-Day • Week-Week • Winter-Spring-Summer-Fall • Month to Month.... • Q1-Q2-Q3-Q4 The chart indicates a more consistent output at 1PM than the other two times (tighter spread), but efficiency is declining from 10AM to 3PM with a few concerning low outliers at 3PM. An F-test for variances will likely not pass, as the three time frames visually appear to have significantly different variation. Perhaps the low outlier at 10AM and the lower outliers at 3PM are false readings (or special causes). Removing them would show a fairly similar mean and variance for all three time frames. Multi-vari Study - Download Click here to download a .pdf presentation with over 1,000 slides on multi-vari analysis and other Six Sigma and Lean Manufacturing principles...plus 180+ practice certification questions. Multi-vari studies have limitations, as do most tools. Understanding these and the pitfalls within each tool is important for a GB/BB. The following is a list of potential pitfalls when using multi-vari studies: 1. Confounding of inputs or multicollinearity is present. A DOE should be performed to further examine the interactions of the inputs. 2.
Interactions may exist within the data but are not shown when studying one "x" at a time. 3. The data collected may cover too narrow a range and not represent all the input "x" behaviors that influence "Y", the output.
{"url":"https://www.six-sigma-material.com/Multi-vari.html","timestamp":"2024-11-02T23:24:56Z","content_type":"text/html","content_length":"59162","record_id":"<urn:uuid:26ea39cb-d376-41e6-a137-5a8378f99661>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00696.warc.gz"}
An isosceles triangle has sides A, B, and C with sides B and C being equal in length. If side A goes from (2 ,9 ) to (8 ,5 ) and the triangle's area is 48 , what are the possible coordinates of the triangle's third corner? | HIX Tutor An isosceles triangle has sides A, B, and C with sides B and C being equal in length. If side A goes from #(2 ,9 )# to #(8 ,5 )# and the triangle's area is #48#, what are the possible coordinates of the triangle's third corner? Answer 1 #(161/13, 235/13)# or #(-31/13, -53/13)#. Let the third corner be at #(x,y)#. Thus, the vertices of the #Delta# are #(x,y), (2,9), and (8,5)#. Hence, the area of the #Delta# is given by #1/2|D|#, where #D = x(9-5)+2(5-y)+8(y-9) = 4x+6y-62#. But this area is #48#. #:. |2x+3y-31|=48#. #:. 2x+3y-31=+-48, i.e., # # 2x+3y=79............(1), or, 2x+3y=-17............(2)#. Also, the length of side #B=sqrt{(x-2)^2+(y-9)^2}#, &, that of #C=sqrt{(x-8)^2+(y-5)^2}#. Then, #B=C.................."[Given]"#, #rArr x^2+y^2-4x-18y+85=x^2+y^2-16x-10y+89#. # rArr 12x-8y=4, or, 3x-2y=1....................(3)#. Solving #(1), &, (3)," gives, "(x,y)=(161/13,235/13)#, whereas #(2), &, (3)" give "(x,y)=(-31/13,-53/13)#. Thus, the third corner is #(161/13,235/13)#, or, #(-31/13,-53/13)#.
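The two candidate corners can be checked numerically with the shoelace formula and the B = C condition (note the second corner is (−31/13, −53/13); one spot in the answer drops the minus sign):

```python
from fractions import Fraction as F

def area(p, q, r):
    """Shoelace formula for the area of a triangle with vertices p, q, r."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

def dist_sq(p, q):
    """Squared distance between points p and q (no square root needed for B = C)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

a1, a2 = (F(2), F(9)), (F(8), F(5))  # endpoints of side A
corners = [(F(161, 13), F(235, 13)), (F(-31, 13), F(-53, 13))]

for c in corners:
    print(area(a1, a2, c), dist_sq(a1, c) == dist_sq(a2, c))
```

Both candidates give area 48 and satisfy B = C exactly (using rational arithmetic avoids any floating-point doubt).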
{"url":"https://tutor.hix.ai/question/an-isosceles-triangle-has-sides-a-b-and-c-with-sides-b-and-c-being-equal-in-leng-25-8f9afa420f","timestamp":"2024-11-02T02:54:11Z","content_type":"text/html","content_length":"577959","record_id":"<urn:uuid:8011cc2a-e734-4ab7-9941-a6514ee057f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00400.warc.gz"}
Cognition and Individual Differences lab A Bayesian perspective on the Reproducibility Project: Psychology We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors - a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis - for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
@article{etz2016bayesian,
  title = {{A} {B}ayesian perspective on the {R}eproducibility {P}roject: {P}sychology},
  author = {Etz, Alexander and Vandekerckhove, Joachim},
  year = {2016},
  journal = {PLoS ONE},
  volume = {11},
  pages = {e0149794}
}
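The abstract's evidence threshold (Bayes factor > 10 counting as "strong") can be made concrete with a common BIC-based approximation, BF10 ≈ exp((BIC0 − BIC1)/2). The log-likelihood values below are hypothetical, purely to illustrate how a study can be "significant" yet provide only weak evidence:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k free parameters, n observations."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fitted models on n = 50 observations:
# H0 (null, 1 parameter) vs H1 (alternative, 2 parameters).
n = 50
bic0 = bic(log_likelihood=-70.0, k=1, n=n)
bic1 = bic(log_likelihood=-67.0, k=2, n=n)

# BF10 > 1 favors H1; by the conventional scale, BF10 > 10 is "strong".
bf10 = math.exp((bic0 - bic1) / 2)
print(f"BF10 ~ {bf10:.2f}")  # here, only weak evidence for H1
```

This is only a rough unit-information-prior approximation, not the procedure used in the paper, but it shows the key point: a better-fitting alternative model can still yield a Bayes factor well below the "strong" threshold.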
{"url":"https://cidlab.com/paper/36","timestamp":"2024-11-11T17:08:27Z","content_type":"text/html","content_length":"8037","record_id":"<urn:uuid:07dd3292-0edd-4ca5-9a08-bd12938dbae8>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00151.warc.gz"}
15-853: Algorithms in the "Real World" Carnegie Mellon University, Computer Science Department Fall 2002 Instructors: Guy Blelloch and Bruce Maggs Time: Monday and Wednesday 10:30 - 11:50 (1st class Wednesday Sept. 11) Place: 4615a Wean Hall Credit: 12 Units Prerequisites: An advanced undergrad course in algorithms (15-451 or equivalent will suffice). Office Hours: Guy on Mondays 3-4pm, or any other time I'm free if you drop by. Course Overview: This course covers how algorithms and theory are used in "real-world" applications. The course will cover both the theory behind the algorithms and case studies of how the theory is applied. It is organized by topics and the topics change from year to year. This year we plan to cover the following topics. Error Correcting Codes Error correcting codes are perhaps the most successful application of algorithms and theory to real-world systems. Most of these systems, including DVDs, DSL, Cell Phones, and wireless, are based on early work on cyclic codes, such as the Reed-Solomon codes. We will cover cyclic codes and their applications, and also talk about more recent theoretical work on codes based on expander graphs. Such codes could well become part of the next generation of applications, and also are closely related to other theoretical areas. Graph Separators Most graphs in practice have small separators, i.e. they can be separated into two almost equal sized parts by removing a relatively small number of edges or vertices. Such graphs include the link structure of the web, the internet connectivity graphs, graphs arising from finite-element meshes, map graphs, and many more. Many algorithms can make use of small separators to greatly improve efficiency. This is true both in theory and in practice. We will cover the state-of-the-art in algorithms for finding graph separators, and for making use of graphs with small separators.
Facility location, path planning, and distribution are all of huge importance in industry. Distributors can save 100s of millions of dollars a year by improving the location of facilities, or the scheduling of their fleets of trucks, trains or planes. We will look at case studies of how algorithms are currently used for such problems, and also study recent theory on this class of problems. Privacy in Data Gathering data that contains private information is becoming much more prevalent in today's world. Even before 9/11 such data was being collected by many private and public entities. We will look at algorithms that can be used to process this data to gather various statistics without revealing private data. Algorithms for Indexing and Searching Searching large databases, such as the web, has become increasingly important. There are many interesting algorithmic techniques that can be used to improve the efficiency and quality of such searches. We will look at how standard search engines store and retrieve data, how Google's pagerank works, and at various techniques for clustering data, such as Latent Semantic Indexing. Algorithms for Networking and Data Distribution Many interesting algorithms are used in networking. These include algorithms for finding fast routes, algorithms for load-balancing users across servers, and algorithms for finding the best servers based on locality.
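Since the indexing and searching unit mentions Google's pagerank, here is a minimal power-iteration sketch on a toy four-page link graph (the graph is invented for illustration; real PageRank runs the same iteration at web scale):

```python
# Toy PageRank by power iteration. links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = len(links)
damping = 0.85

rank = [1.0 / n] * n
for _ in range(100):
    # Every page keeps a baseline (1-d)/n, plus shares of rank from its in-links.
    new = [(1 - damping) / n] * n
    for page, outgoing in links.items():
        for target in outgoing:
            new[target] += damping * rank[page] / len(outgoing)
    rank = new

print([round(r, 3) for r in rank])  # page 2, with the most in-links, ranks highest
```

Because every page here has at least one out-link, the ranks stay a probability distribution (they sum to 1), and page 3, which nothing links to, ends up with only the baseline rank.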
A small sample of companies that sell products that use various algorithms:
• Optimization (Trick's List): CPLEX, CAPS Logistics, IBM OSL, Astrokettle, APC, Carmen Systems, Lindo Systems, LogicTools
• Geometry and Meshing (Owen's List): Fluent, Geomagic, Pointwise, Ansys, FEGS, CFDRC, Marc, Femsys, AVL
• Biology (Netsci's list): Celera, Curagen, HGSI, MLNM, Hyseq, Genset, Incyte, Variagenics
• Cryptography (Rivest's List): Algorithmic Research, RSA Security, Entrust, Cryptomathic, Netegrity, InterTrust, Zero Knowledge, Mach 5
Requirements and Grading Criteria We will spend 2 to 4 lectures on each topic. Each student will be expected to complete a set of assignments, take the final, and help either grade a homework or take scribe notes for one of the lectures. SCRIBE NOTES AND GRADING The scribe notes and grading will work as follows. We will be asking for 2 or 3 volunteers to grade each assignment, and 2 volunteers to take scribe notes for certain lectures (the ones that have not been given in previous years). All students need to volunteer for ONE of these two tasks during the semester. TAKE HOME FINAL: The 48-hour take home final will be given out over a period of 4 days starting Dec 11 (Dec 11, 12, 13, 14). You can pick it up on any of those 4 days at noon, and return it by noon two days later. READINGS: Readings will vary from topic to topic and you should look at the Readings, Notes and Slides page to see what they are. Grade partitioning: Relevant Books See the lists within each of the topic pages Help on giving presentations: Guy Blelloch, guyb@cs.cmu.edu.
{"url":"https://www.cs.cmu.edu/afs/cs/project/pscico-guyb/realworld/www/index02.html","timestamp":"2024-11-04T18:53:55Z","content_type":"text/html","content_length":"11896","record_id":"<urn:uuid:698ca0a4-82e3-42da-aefa-aa571b1e662e>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00085.warc.gz"}
Mathematical Database - Teaching Module - Countability - www.mathdb.org
Teaching Module: Countability
Target audience: Form 4-7 students who are interested in mathematics.
Time spent: 1.5 hours for each lesson.
Brief description: In this module we attempt to guide students to explore the infinite by first introducing to them the notions of sets and mappings. This is followed by the concept of countability and some related results. The module consists of a total of 8 lessons. In lesson 1 the notion of sets is introduced and a vast number of examples are given. In lessons 2 and 3 students are guided to compare the "size" of sets, along which the notions of one-to-one correspondence and mappings are introduced. After that we put forward the concept of countability in lessons 4 and 5 as a first step to deal with the infinite, where students will be guided to appreciate how infinite sets differ considerably from finite ones. Some further applications and explorations of countability will be dealt with in lessons 6 to 8. In particular, we will show that there can be no set with "maximum size".
The teachers' guide of this teaching module is available here.
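As a small taste of the module's central idea, that two sets have the same "size" when a one-to-one correspondence exists between them, here is the standard bijection between the natural numbers and the integers (my illustration, not from the module):

```python
def nat_to_int(n):
    """Bijection N -> Z: 0, 1, 2, 3, 4, ... maps to 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

# First few values: every integer appears exactly once,
# so the integers are countable.
print([nat_to_int(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```

Odd inputs hit the positive integers and even inputs hit zero and the negatives, so no integer is missed and none is repeated.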
{"url":"https://www.mathdb.org/module/countability/content.htm","timestamp":"2024-11-06T18:14:05Z","content_type":"text/html","content_length":"28520","record_id":"<urn:uuid:a78401be-77fb-409c-ab2e-b68c7133ad8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00015.warc.gz"}
If the Quadratic equations x²−5x+6=0 and x²+kx+4k=0 have a root... | Filo
Question asked by Filo student
If the quadratic equations x²−5x+6=0 and x²+kx+4k=0 have a root in common, then the sum of the possible values of k is
Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors.
Uploaded on: 4/2/2023
Connect with our Mathematics tutors online and get a step by step solution of this question.
Question Text: If the quadratic equations x²−5x+6=0 and x²+kx+4k=0 have a root in common, then the sum of the possible values of k is
Updated On: Apr 2, 2023
Topic: Integration
Subject: Mathematics
Class: Class 11
Answer Type: Video solution: 1
Upvotes: 58
Avg. Video Duration: 3 min
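For reference (not part of the page), the answer can be computed exactly: x²−5x+6 factors as (x−2)(x−3), and substituting each root into x²+kx+4k=0 gives k = −x²/(x+4):

```python
from fractions import Fraction

# Roots of x^2 - 5x + 6 = 0 are x = 2 and x = 3.
# If x is also a root of x^2 + kx + 4k = 0, then k(x + 4) = -x^2.
ks = [Fraction(-x * x, x + 4) for x in (2, 3)]
print(ks, sum(ks))  # [-2/3, -9/7], sum = -41/21
```

Exact rational arithmetic confirms the two possible values of k and their sum, −41/21.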
{"url":"https://askfilo.com/user-question-answers-mathematics/if-the-quadratic-equations-and-have-a-root-in-common-then-34373732343734","timestamp":"2024-11-14T20:54:40Z","content_type":"text/html","content_length":"311233","record_id":"<urn:uuid:ab30ec72-32a3-4058-b91e-3150577d8a31>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00388.warc.gz"}
How to use this tool? This free online converter lets you convert code from Matlab to Kotlin in a click of a button. To use this converter, take the following steps - 1. Type or paste your Matlab code in the input box. 2. Click the convert button. 3. The resulting Kotlin code from the conversion will be displayed in the output box. The following are examples of code conversion from Matlab to Kotlin using this converter. Note that you may not always get the same code since it is generated by an AI language model which is not 100% deterministic and gets updated from time to time. Example 1 - Is String Palindrome Program that checks if a string is a palindrome or not. Example 2 - Even or Odd A well commented function to check if a number is odd or even. Key differences between Matlab and Kotlin
• Syntax. Matlab: uses a syntax that is similar to traditional programming languages, but with a focus on mathematical operations and matrix manipulation. Kotlin: uses a modern, concise syntax that is similar to Java, but with additional features such as null safety and extension functions.
• Paradigm. Matlab: is primarily a procedural language, but also supports object-oriented programming. Kotlin: is a modern, multi-paradigm language that supports both object-oriented and functional programming.
• Typing. Matlab: is dynamically typed, meaning that variable types are determined at runtime. Kotlin: is statically typed, meaning that variable types are determined at compile time.
• Performance. Matlab: is optimized for numerical computations and has good performance for these types of operations. Kotlin: is a general-purpose language and does not have the same level of optimization for numerical computations as Matlab.
• Libraries and frameworks. Matlab: has a large number of built-in libraries and toolboxes for numerical computation, data analysis, and visualization. Kotlin: has a growing ecosystem of libraries and frameworks, particularly for Android development, but may not have as many options for scientific computing as Matlab.
• Community and support. Matlab: has a large and active community, with many resources available for learning and troubleshooting. Kotlin: has a growing community, particularly in the Android development space, but may not have as many resources available for scientific computing as Matlab.
• Learning curve. Matlab: has a relatively low learning curve for those with a background in mathematics or engineering, but may be more challenging for those without this background. Kotlin: has a moderate learning curve for those with a background in Java or other object-oriented languages, but may be more challenging for those without this background.
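As an illustration of what the converter's Example 2 output might look like, here is a sketch of a typical well-commented Kotlin even/odd function (this is representative output, not the tool's exact result):

```kotlin
// Checks whether a number is even or odd, the Kotlin counterpart of a
// Matlab function built on mod(n, 2).
fun isEven(n: Int): Boolean {
    // A number is even when dividing by 2 leaves no remainder.
    return n % 2 == 0
}

fun main() {
    println(isEven(4)) // true
    println(isEven(7)) // false
}
```

Note the static typing (`Int`, `Boolean`) that the comparison table above calls out as a key difference from Matlab's dynamic typing.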
{"url":"https://www.codeconvert.ai/matlab-to-kotlin-converter","timestamp":"2024-11-10T22:26:14Z","content_type":"text/html","content_length":"32696","record_id":"<urn:uuid:70512e40-a972-4cc7-944a-591b1d2f2bfe>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00802.warc.gz"}
How to perform Bader charge analysis for metallic interface? I want to calculate the charge density difference (CDD) for an Al and Fe interface. For the direct calculation of the CDD it is necessary to determine the charge density of the constituent blocks at the same real-space points (FFT grid) at which the whole system is determined. This requirement implies calculating the isolated surface slabs and the interface in the same supercell, with the same FFT grid. This calculation is not feasible, since the isolated slabs in the same supercell would require too large a vacuum gap, and consequently too large a number of plane waves. The computational cost prohibits such a calculation. Can you please suggest a solution for this problem? How to perform Bader charge analysis for metallic interface?. Available from: https://www.researchgate.net/post/How_t ... _interface [accessed Aug 11, 2017].
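For context, when the three calculations can be done on a common grid, the CDD itself is a simple pointwise subtraction, Δρ = ρ(interface) − ρ(slab A) − ρ(slab B). A minimal sketch (the tiny flattened grids below are invented numbers purely for illustration; in practice each array comes from a DFT code's volumetric output and all three must share the same supercell and grid dimensions):

```python
# Pointwise charge density difference on a shared real-space (FFT) grid.
def cdd(rho_total, rho_a, rho_b):
    assert len(rho_total) == len(rho_a) == len(rho_b), "grids must match"
    return [t - a - b for t, a, b in zip(rho_total, rho_a, rho_b)]

# Hypothetical flattened 2x2x2 grids (arbitrary units).
rho_interface = [1.00, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30]
rho_al_slab   = [0.55, 0.45, 0.40, 0.35, 0.05, 0.05, 0.05, 0.05]
rho_fe_slab   = [0.05, 0.05, 0.05, 0.05, 0.50, 0.40, 0.30, 0.20]

delta = cdd(rho_interface, rho_al_slab, rho_fe_slab)
print([round(d, 2) for d in delta])
```

The whole difficulty raised in the question is precisely that the three ρ arrays normally cannot be produced on one common grid at an affordable plane-wave cutoff.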
{"url":"https://henkelmanlab.org/forum/viewtopic.php?t=3162&sid=cfc3a16e51ba99aef405a7950482e723","timestamp":"2024-11-10T18:47:48Z","content_type":"text/html","content_length":"18904","record_id":"<urn:uuid:42470f99-562a-4187-824f-c17d0d07b076>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00283.warc.gz"}
Fractions Worksheets Coloured Fractions Dive into the world of fractions with our free "Coloured Fractions" worksheet, designed specifically for Grade 3 and Grade 4 students. This engaging and educational resource is perfect for helping young learners understand and visualize fractions in a fun and interactive way. Download the "Coloured Fractions" worksheet for free by clicking the button at the bottom of the page! What is the "Coloured Fractions" Worksheet? The "Coloured Fractions" worksheet is a vibrant and interactive tool that helps students grasp the concept of fractions through colouring. Each shape on the page is divided into different fractional parts, and students are given specific instructions on how to colour each part. How to Use the "Coloured Fractions" Worksheet Step-by-Step Instructions: 1. Download and Print: □ Click the button at the bottom of the page to download your free "Coloured Fractions" worksheet. □ Print the worksheet for use in the classroom or at home. 2. Understand the Instructions: □ Each shape on the worksheet is divided into fractional parts. □ Below each shape, there are instructions detailing how to colour each fraction of the shape. 3. Colour the Fractions: □ Students follow the instructions to colour each part of the shape according to the specified fraction. □ For example, if a circle is divided into 8 parts with the instructions "1/8 orange, 2/8 brown, 3/8 purple, and 2/8 yellow," students will colour one part orange, two parts brown, three parts purple, and two parts yellow. 4. Check Understanding: □ After colouring, students can review their work to ensure they have correctly followed the fractional instructions. Benefits of Using the "Coloured Fractions" Worksheet • Visual Learning: □ Helps students visualize fractions, making the concept more tangible and easier to understand. • Engaging Activity: □ Combines art and math, making learning fractions enjoyable and interactive.
• Skill Reinforcement: □ Reinforces students' understanding of fractions through a hands-on activity. • Classroom Flexibility: □ Can be used as part of a math lesson, in math centers, or as a homework activity. Why Download the "Coloured Fractions" Worksheet? For Students: • Improved Understanding: □ Enhances comprehension of fractions through a fun and engaging colouring activity. • Confidence Boost: □ Helps build confidence in handling fractions by providing clear, step-by-step instructions. For Teachers: • Teaching Aid: □ A valuable resource for teaching fractions in an interactive and enjoyable manner. • Versatile Use: □ Suitable for individual practice, group work, or as an additional classroom activity. • Easy Preparation: □ Simply download, print, and you're ready to go, saving valuable preparation time. Get Your Free "Coloured Fractions" Worksheet Today! Help your students master fractions with our free "Coloured Fractions" worksheet. This resource is ideal for Grade 3 and Grade 4 students who are learning about fractions in their math lessons. Click the button below to download your copy of the "Coloured Fractions" worksheet and bring a splash of colour to your students' learning experience!
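The example colouring instructions above work because the fractional parts cover the whole shape; a quick arithmetic check with Python's exact fractions:

```python
from fractions import Fraction

# Parts of the example circle: 1/8 orange, 2/8 brown, 3/8 purple, 2/8 yellow.
parts = [Fraction(1, 8), Fraction(2, 8), Fraction(3, 8), Fraction(2, 8)]
print(sum(parts))  # 1 -> the eight parts exactly cover the shape
```

Any valid instruction set for a shape should sum to one whole in the same way.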
{"url":"https://www.smartboardingschool.com/coloured-fractions","timestamp":"2024-11-10T02:02:14Z","content_type":"text/html","content_length":"1050491","record_id":"<urn:uuid:39c2e360-ee6b-4c83-a35e-55b1dd2b6439>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00589.warc.gz"}
Peter Ritchie's Blog - By Reference in C# I became aware recently that there were many C# compiler errors that do not have a corresponding documentation page. That documentation is open-source and I chose to spend some time contributing some pages for the community. Looking at a language feature from the perspective of its compile-time errors is rather enlightening, so I thought I'd write a bit about these features in hopes of offering a better understanding for my readers. C# compiler errors can be categorized (arbitrarily) by different areas of C# syntax, and I started to focus on one category at a time. One of those areas involves referenced variables. C# has always had ref arguments, but ref return, ref locals, ref structs, and ref fields have been additions to the syntax. The declaration of a variable in C# influences its syntax in a couple of ways: binding and accessibility. Accessibility is whether an identifier is visible at compile-time in a given context. Binding is how an identifier or name is bound at run-time to resources like data and code. Binding uniquely affects the compile-time correctness of any particular usage of a ref variable. Binding affects the compile-time usage of an identifier because of the run-time lifetime of the resources it is bound to. You're probably familiar with a static method accessing instance data and the errors caused in this context. ref variables have a similar context when they are stack allocated. Heap-allocated objects (objects bound to the heap) can have their lifetime extended to be long-lived because the heap shares the same lifetime as the application. Variables bound to stack-allocated resources cannot have their lifetime extended beyond a specific scope. The stack is a sequential collection of elements with elements implicitly partitioned by a shared scope. The most recognized scope is probably a method call or method/lambda body. Local variables bound to stack elements do not have a lifetime beyond the method call.
A reference to a stack object cannot be assigned to a variable or expression with a broader scope. How far the value of an expression can leave the confines of its declaration scope is called "escape scope". Sometimes the escape scope is the same as the declaration scope. The compiler verifies compatible escape scopes during assignment. For example:

void M(ref int ra)
{
    int number = 0;
    ref int rl = ref number;
    if (ra == 0)
    {
        int x = number;
        rl = ref x;
    }
}

x is local to the if body, it is bound to the stack, and its escape scope is narrower than that of ref rl because ref rl is declared in the outer scope. Since ref rl is an alias to another variable, it cannot reference a variable bound to a resource that will go out of scope before it does. rl = ref x results in a compiler error. If rl were not a reference to a value type, the assignment would be okay because x would be bound to the heap and have a broader escape scope. The compiler also verifies compatible escape scopes when returning values. For example:

ref int M(ref int ra)
{
    int number = 0;
    ref int rl = ref number;
    if (ra == 0)
    {
        ref int x = ref number;
        return ref x;
    }
    return ref ra;
}

return ref x results in a compiler error because the escape scope of x is local to the method. The error message here may not be as clear as the first because it doesn't mention the narrower escape scope.
There are three basic escape scopes: a calling method scope, a current method scope, and a return-only scope. The calling method scope is a scope outside of the containing method/lambda. References can reach this scope via either a ref parameter or a return. The current method scope is a scope within a containing method/lambda. The return-only scope is a special case for ref struct types that can only leave the method scope via a return and not through a ref or out parameter.
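To contrast with the two error cases above, here is a hedged sketch (my own example, with names of my choosing, not from the post) of a ref return the compiler accepts, because the referent lives on the heap rather than in the method's stack frame:

```csharp
// A ref return that is legal: the returned reference points into a
// heap-bound array, so its escape scope reaches the calling method.
class RefReturnDemo
{
    static ref int Largest(int[] numbers)
    {
        ref int best = ref numbers[0];
        for (int i = 1; i < numbers.Length; i++)
        {
            if (numbers[i] > best)
            {
                best = ref numbers[i]; // ref reassignment (C# 7.3+)
            }
        }
        return ref best; // legal: refers into heap-allocated storage
    }

    static void Main()
    {
        int[] data = { 3, 9, 4 };
        Largest(data) = 0;                 // writes through the returned reference
        System.Console.WriteLine(data[1]); // prints 0: the 9 was overwritten
    }
}
```

The same method returning a ref to a local int would fail exactly like the second example above; the array parameter is what widens the escape scope to the caller.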
{"url":"https://blog.peterritchie.com/posts/By-Reference-in-csharp","timestamp":"2024-11-10T18:15:45Z","content_type":"text/html","content_length":"15565","record_id":"<urn:uuid:8e08c20b-460c-4e16-908f-74271837f12b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00001.warc.gz"}
Implicit Representation of Sparse Hereditary Families

For a hereditary family of graphs F, let F_n denote the set of all members of F on n vertices. The speed of F is the function f(n) = |F_n|. An implicit representation of size ℓ(n) for F_n is a function assigning a label of ℓ(n) bits to each vertex of any given graph G ∈ F_n, so that the adjacency between any pair of vertices can be determined by their labels. Bonamy, Esperet, Groenland, and Scott proved that the minimum possible size of an implicit representation of F_n for any hereditary family F with speed 2^{Ω(n²)} is (1+o(1)) log₂|F_n|/n (= Θ(n)). A recent result of Hatami and Hatami shows that the situation is very different for very sparse hereditary families. They showed that for every δ > 0 there are hereditary families of graphs with speed 2^{O(n log n)} that do not admit implicit representations of size smaller than n^{1/2−δ}. In this note we show that even a mild speed bound ensures an implicit representation of size O(n^c) for some c < 1. Specifically, we prove that for every ε > 0 there is an integer d ≥ 1 so that if F is a hereditary family with speed f(n) ≤ 2^{(1/4−ε)n²} then F_n admits an implicit representation of size O(n^{1−1/d} log n). Moreover, for every integer d > 1 there is a hereditary family for which this is tight up to the logarithmic factor.

All Science Journal Classification (ASJC) codes
• Theoretical Computer Science
• Geometry and Topology
• Discrete Mathematics and Combinatorics
• Computational Theory and Mathematics
• (Induced)-universal graphs
• 05C62
• 05C78
• 52C10
• Hereditary properties
• Implicit representation
• Shatter function
• VC-dimension
{"url":"https://collaborate.princeton.edu/en/publications/implicit-representation-of-sparse-hereditary-families","timestamp":"2024-11-05T22:47:16Z","content_type":"text/html","content_length":"50079","record_id":"<urn:uuid:effddfcd-e434-44b6-a656-27d04278da99>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00628.warc.gz"}
Maths Assignment Class VIII | Quadrilateral Ch-11

Math Assignment Class VIII: Understanding Quadrilateral. Important extra questions on Quadrilateral, class 8, strictly according to the DAV Board and CBSE Board, necessary for board examinations.

Understanding Quadrilateral, chapter 11, class 8

Question 1
(i) Two adjacent angles of a parallelogram are (5x + 65)° and (85 – 2x)°. Find the measure of each angle of the parallelogram.
(ii) Two adjacent angles of a parallelogram are of measure (4x + 2)° and (3x - 4)°. Find the measure of each angle of the parallelogram.
(iii) Two adjacent angles of a parallelogram are in the ratio 1 : 2. Find all the angles of the parallelogram.

Question 2
A pair of adjacent sides of a rectangle is in the ratio 5 : 12. If the length of the diagonal is 26 cm, find the lengths of the sides and the perimeter of the rectangle.

Question 3
ABCD is a rectangle in which DP and BQ are perpendiculars from D and B respectively on diagonal AC. Show that:
(i) △ADP ≅ △CBQ
(ii) ∠ADP = ∠CBQ
(iii) DP = BQ

Question 4
In the given figure ABCD is a parallelogram. If ∠DAB = 75° and ∠DBC = 60°, calculate ∠CDB and ∠ADB.

Question 5

Question 6
Two adjacent angles of a rhombus are in the ratio 2 : 3. Find all the angles of the rhombus.

Question 7
In the given figure both RIsK and ClUE are parallelograms. Find the value of x.

Question 8
PQRS is a rhombus. If ∠PSQ = 55°, find all the angles of the rhombus.

Question 9
In the given parallelogram PQRS, O is the mid-point of SQ. Find ∠S, ∠R, PQ, QR and diagonal PR.

Question 10
In the figure, l || m and t is the transversal. The angle bisectors of the interior angles intersect at P and Q. Find the values of angles x and y.
Question 11
Find the measure of ∠x in the given figure.

Question 12
The lengths of a pair of adjacent sides of a rectangle are in the ratio 3 : 4. If its diagonal is 50 cm, find the lengths of the sides and hence the perimeter of the rectangle.

Question 13
An exterior angle of a parallelogram is 100°. Find the angles of the parallelogram.

Question 15
The diagonals of a rectangle ABCD intersect at O. If ∠BOC = 68°, find ∠ODA.

Question 16
ABCD is a rhombus and its diagonals intersect at O.
(i) Is △BOC ≌ △DOC? State the congruence condition used.
(ii) Also state if ∠BCO = ∠DCO.

Question 17
In the given figure the exterior angle ABX of parallelogram ABCD is 70° and O is the mid-point of diagonal BD. Find ∠BAD and ∠ADC. Also find the length of diagonal AC when OC = 3 cm.

Question 18
In the given figure, ABCD is a rectangle and its diagonals intersect at O. If ∠AOB = 108°, find (i) ∠ABO (ii) ∠ADO (iii) ∠OCB.

Question 19
In the given figure, PQRS is a rectangle. If ∠PRQ = 30°, find the value of ∠PQS.

Question 20
The lengths of the diagonals of a rhombus are in the ratio 6 : 8. If its perimeter is 40 cm, find the length of the shorter diagonal.

Question 21
Find the measure of x in the given figure.

Question 23
A diagonal and a side of a rhombus are of equal length. Find the measure of the angles of the rhombus.
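A few of these answers can be sanity-checked numerically. The sketch below (Python, plain arithmetic only) works Questions 1(i) and 2; the values follow directly from the angle-sum and Pythagorean facts stated in the problems:

```python
# Q1(i): adjacent angles of a parallelogram are supplementary.
# (5x + 65) + (85 - 2x) = 180  ->  3x + 150 = 180  ->  x = 10
x = (180 - 150) / 3
a, b = 5 * x + 65, 85 - 2 * x
assert a + b == 180
print(x, a, b)            # 10.0 115.0 65.0 -> the angles are 115, 65, 115, 65

# Q2: sides in ratio 5 : 12 with diagonal 26 form a 5-12-13 right triangle,
# so 13k = 26 -> k = 2, and the sides are 10 cm and 24 cm.
k = 26 / 13
sides = (5 * k, 12 * k)
perimeter = 2 * sum(sides)
print(sides, perimeter)   # (10.0, 24.0) 68.0
```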
{"url":"https://www.cbsemathematics.com/2024/03/maths-assignment-class-viii-quadrilateral.html","timestamp":"2024-11-03T02:51:35Z","content_type":"application/xhtml+xml","content_length":"171918","record_id":"<urn:uuid:482ecaaa-ba57-4e53-bf2a-62c0a7b09c53>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00678.warc.gz"}
Weierstrass zeta function values at half periods: Introduction to the Weierstrass utility functions (subsection WeierstrassUtilities/05) The Weierstrass invariants have the following values at infinities: The Weierstrass function values at half-periods can be evaluated at closed forms for some values of arguments , : The Weierstrass zeta function values at half-periods can also be evaluated at closed forms for some values of arguments , : The Weierstrass half‐periods , the Weierstrass function values at half-periods , and the Weierstrass zeta function values at half-periods are vector‐valued functions of and that are analytic in each vector component, and they are defined over . The Weierstrass invariants is a vector‐valued function of and that is analytic in each vector component, and it is defined over (for ). The Weierstrass invariants with is a periodic function with period : The other Weierstrass utility functions , , and are not periodic functions. The Weierstrass half‐periods and Weierstrass zeta function values at half-periods have mirror symmetry: The Weierstrass invariants and the Weierstrass function values at half-periods have standard mirror symmetry: The Weierstrass invariants have permutation symmetry and are homogeneous: The Weierstrass invariants are the invariants under the change of variables and with integers , , , and , satisfying the restriction (modular transformations): This property leads to similar properties of the Weierstrass function values at half-periods and the Weierstrass zeta function values at half-periods : The Weierstrass half‐periods and invariants have the following double series expansions: where is a Klein invariant modular function. The last double series can be rewritten in the following forms: The Weierstrass invariants , the Weierstrass function values at half-periods , and the Weierstrass zeta function values at half-periods have numerous q‐series representations, for example: where . 
The following rational function of and is a modular function if considered as a function of : The Weierstrass utilities have some other forms of series expansions, for example: where is the divisor sigma function. The Weierstrass half‐periods and invariants have the following integral representations: The Weierstrass utilities can have product representations. For example, the Weierstrass function values at half-periods can be expressed through the following products: where . The Weierstrass utilities satisfy numerous identities, for example: The first derivatives of Weierstrass half‐periods and the Weierstrass and zeta function values at half-periods and with respect to variable and have the following representations: where are the values of the derivative of the Weierstrass elliptic function at half-period points . The first derivatives of Weierstrass invariants with respect to the variables and can be represented in different forms: The -order derivatives of Weierstrass invariants with respect to the variables and have the following representations: The indefinite integrals of Weierstrass invariants with respect to the variable have the following representations: The Weierstrass half‐periods satisfy the following differential equations: The Weierstrass invariants satisfy the following differential equations: The Weierstrass zeta function values at half-periods satisfy the following differential equations:
{"url":"https://functions.wolfram.com/EllipticFunctions/WeierstrassZetaHalfPeriodValues/introductions/WeierstrassUtilities/05/","timestamp":"2024-11-05T06:41:59Z","content_type":"text/html","content_length":"84970","record_id":"<urn:uuid:b83fac4a-f7ba-4c9a-a719-8ca3f32f68bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00181.warc.gz"}
Diastolic Indices of Non-Sedated Healthy Cats - WSAVA2005 - VIN

World Small Animal Veterinary Association World Congress Proceedings, 2005
E.C. Soares^1; M.H.M.A. Larsson^2; A.G.T. Daniel^3; M.M. Fantazzini^4; F.L. Yamaki^1; Roberto Carvalho e Pereira^1

Doppler echocardiography is a very useful and popular method of evaluating left ventricular systolic and diastolic function. In cats, the latter is of major concern, as the type of cardiomyopathy may be defined according to the mitral flow pattern. The purpose of this study is to determine values such as E and A wave velocities, E/A ratio, E wave deceleration time, and IVRT in healthy non-sedated cats, as well as to correlate these indices with heart rate.

The study group consisted of 40 healthy adult cats housed in the cattery of the School of Veterinary Medicine of São Paulo University. Doppler examination was performed with the cats restrained in left lateral recumbency. The Anderson-Darling test was used to test variables for normality. Mean, standard deviation, and median were calculated for E wave velocity, A wave velocity, E wave deceleration time, and IVRT. Pearson correlation was used to verify the influence of heart rate on E wave velocity, A wave velocity, and IVRT, while Spearman correlation was used for E wave deceleration time x heart rate and E/A ratio x heart rate.

The mean value of peak early diastolic flow was 0.65 m/s, with a standard deviation of 0.13 m/s and a median of 0.64 m/s (n=24); the mean value of the E wave deceleration time was 72.8 ms, with a standard deviation of 9.4 ms and a median of 75 ms (n=24); the mean value of the A wave was 0.45 m/s, with a standard deviation of 0.09 m/s and a median of 0.46 m/s (n=24); the mean value of the E/A ratio was 1.46, with a standard deviation of 0.39 and a median of 1.42 (n=24); and the mean value of the IVRT was 72.8 ms, with a standard deviation of 9.4 ms and a median of 75 ms (n=40).
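As a quick plausibility check on the reported means (a reader's check, not part of the original study): the ratio of the mean velocities, 0.65/0.45 ≈ 1.44, is close to the reported mean E/A of 1.46. The two need not coincide exactly, since the mean of per-cat ratios is not the ratio of the means.

```python
# Plausibility check on the reported means (not part of the original study).
mean_E, mean_A, reported_mean_EA = 0.65, 0.45, 1.46
ratio_of_means = mean_E / mean_A
print(round(ratio_of_means, 2))  # -> 1.44
# Mean of per-cat E/A ratios need not equal the ratio of the means,
# but the two should be close for internally consistent data.
assert abs(ratio_of_means - reported_mean_EA) < 0.1
```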
Good correlation between heart rate and E wave velocity, E/A ratio, and IVRT was observed, whereas E wave deceleration time and A wave velocity were not affected by heart rate. The establishment of these values will allow the pattern of diastolic abnormality, usually related to cardiomyopathies in cats, to be defined.

1. Bright, J.M.; Herrtage, M.E.; Schneider, J.F. Pulsed Doppler assessment of left ventricular diastolic function in normal and cardiomyopathic cats. J. Am. Anim. Hosp. Assoc., v. 35, p. 285-291.
2. Harrison, M.R.; Clifton, G.D.; Pennell, A.T.; Demaria, A.N.; Cater, A. Effect of heart rate on left ventricular diastolic transmitral flow velocity patterns assessed by Doppler echocardiography in normal subjects. Am. J. Cardiol., v. 67, p. 622-627, 1991.
3. Nishimura, R.A.; Abel, M.D.; Hatle, L.K.; Tajik, A.J. Assessment of diastolic function of the heart: background and current applications of Doppler echocardiography. Part II: Clinical studies. Mayo Clin Proc., v. 64, p. 181-204, 1989.
{"url":"https://www.vin.com/apputil/content/defaultadv1.aspx?pId=11196&meta=Generic&catId=30767&id=3854353&ind=82&objTypeID=17","timestamp":"2024-11-11T20:41:28Z","content_type":"text/html","content_length":"156043","record_id":"<urn:uuid:bbaf053f-2566-48bd-90ad-0f7cbec96378>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00250.warc.gz"}
The Elastica Model for Image Restoration: An Operator-Splitting Approach

Applied and Computational Mathematics Seminar
Friday, February 21, 2020 - 2:00pm for 1 hour (actually 50 minutes)
Roland Glowinski – University of Houston, Hong Kong Baptist University – roland@math.uh.edu – https://www.math.uh.edu/~roland/

The most popular model for image denoising is without any doubt the ROF (for Rudin-Osher-Fatemi) model. However, since the ROF approach has some drawbacks (the staircase effect being one of them), practitioners have been looking for alternatives. One of them is the elastica model, relying on the minimization, in an appropriate functional space, of the energy functional $J$ defined by $$ J(v)=\varepsilon \int_{\Omega} \left[ a+b\left| \nabla\cdot \frac{\nabla v}{|\nabla v|}\right|^2 \right]|\nabla v| d\mathbf{x} + \frac{1}{2}\int_{\Omega} |f-v|^2d\mathbf{x} $$ where in $J(v)$: (i) $\Omega$ is typically a rectangular region of $R^2$ and $d\mathbf{x}=dx_1dx_2$; (ii) $\varepsilon, a$ and $b$ are positive parameters; (iii) the function $f$ represents the image one intends to denoise. Minimizing the functional $J$ is a non-smooth, non-convex bi-harmonic problem from the calculus of variations. Its numerical solution is a relatively complicated issue. However, one can achieve this task rather easily by combining operator-splitting and finite element approximations. The main goal of this lecture is to describe such a methodology and to present the results of numerical experiments which validate it.
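For readers who want a feel for the functional $J$, here is a crude finite-difference evaluation of it (Python/NumPy). This is only an illustrative sketch, not the operator-splitting scheme of the talk; the unit grid spacing, the regularization parameter `delta` (which avoids dividing by a zero gradient), and the test image are all my own assumptions:

```python
import numpy as np

def elastica_energy(v, f, eps=1.0, a=1.0, b=1.0, delta=1e-8):
    """Crude finite-difference evaluation of J(v) on a unit-spaced grid."""
    vy, vx = np.gradient(v)                          # components of grad v
    norm = np.sqrt(vx ** 2 + vy ** 2 + delta)        # |grad v| (regularized)
    nx, ny = vx / norm, vy / norm                    # grad v / |grad v|
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # div of unit field
    curvature = eps * np.sum((a + b * kappa ** 2) * norm)
    fidelity = 0.5 * np.sum((f - v) ** 2)
    return curvature + fidelity

# Smoke test: a noisy step image has higher energy than the clean step,
# since noise adds gradient mass and a nonzero fidelity term.
rng = np.random.default_rng(0)
f = np.zeros((32, 32))
f[:, 16:] = 1.0
noisy = f + 0.1 * rng.standard_normal(f.shape)
assert elastica_energy(noisy, f) > elastica_energy(f, f) >= 0.0
```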
{"url":"https://math.gatech.edu/seminars-colloquia/series/applied-and-computational-mathematics-seminar/roland-glowinski-20200221","timestamp":"2024-11-06T19:07:51Z","content_type":"text/html","content_length":"32202","record_id":"<urn:uuid:fb245cd2-90bc-4d83-9fd7-537215788368>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00424.warc.gz"}
Part C: Folding Paper (40 minutes) - Annenberg Learner

Session 1, Part C

In This Part:
• Constructions
• Constructing Triangles
• Concurrencies in Triangles
• More Constructions

Geometers distinguish between a drawing and a construction. Drawings are intended to aid memory, thinking, or communication, and they needn't be much more than rough sketches to serve this purpose quite well. The essential element of a construction is that it is a kind of guaranteed recipe. It shows how a figure can be accurately drawn with a specified set of tools. A construction is a method, while a picture merely illustrates the method.

The most common tools for constructions in geometry are a straightedge (a ruler without any markings on it) and a compass (used for drawing circles). In the problems below, your tools will be a straightedge and patty paper. You can fold the patty paper to create creases. Since you can see through the paper, you can use the folds to create geometric objects. Though your "straightedge" might actually be a ruler, don't measure! Use it only to draw straight segments. See Note 4 below.

Throughout this part of the session, use just a pen or pencil, your straightedge, and patty paper to complete the constructions described in the problems. Here is a sample construction with patty paper to get you started: to construct a perpendicular line, consider that a straight line is a 180° angle. Can you cut that angle in half (since perpendicular lines form right angles, or 90° angles)? To construct a parallel line, you may need to construct another line before the parallel to help you.

Problem C1
Draw a line segment. Then construct a line that is
a. perpendicular to it
b. parallel to it
c.
the perpendicular bisector of the segment. (A perpendicular bisector is perpendicular to the segment and bisects it; that is, it goes through the midpoint of the segment, creating two equal segments.)

Problem C2
Draw an angle on your paper. Construct its bisector. (An angle bisector is a ray that cuts the angle exactly in half, making two equal angles.)

Constructing Triangles

Problem C3
Illustrate each of these definitions with a sketch using four different triangles. Try to draw four triangles that are different in significant ways: different side lengths, angles, and types of triangles. The first one in definition (a) is done as an example.
a. A triangle has three altitudes, one from each vertex. (An altitude of a triangle is a line segment connecting a vertex to the line containing the opposite side and perpendicular to that side.)
b. A triangle has three medians. (A median is a segment connecting any vertex to the midpoint of the opposite side.)
c. A triangle has three midlines. (A midline connects two consecutive midpoints.)

Problem C4
Draw five triangles, each on its own piece of patty paper. Use one triangle for each construction below.
a. Carefully construct the three altitudes of the first triangle.
b. Carefully construct the three medians of the second triangle.
c. Carefully construct the three midlines of the third triangle.
d. Carefully construct the three perpendicular bisectors of the fourth triangle.
e. Carefully construct the three angle bisectors of the fifth triangle.

In this video segment, participants construct the altitudes, medians, and midlines of their triangles. Compare your solutions to Problem C4 with those in this video segment. What are the similarities and differences in your results? What conjectures can you make about the constructions you've just completed? You can find this segment on the session video approximately 15 minutes and 40 seconds after the Annenberg Media logo.
Problems C3 and C4 and the Video Segment problems taken from Connected Geometry, developed by Educational Development Center, Inc., p. 32. © 2000 Glencoe/McGraw-Hill. Used with permission.

Concurrencies in Triangles

When three or more lines meet at a single point, they are said to be concurrent. The following surprising facts are true for every triangle: Triangles are the only figures where these concurrencies always hold. (They may hold for special polygons, but not for just any polygon of more than three sides.) We'll revisit these points in a later session and look at some explanations for why some of these lines are concurrent. You'll explore the derivation of such terms as incenter and circumcenter later in Session 5 of this course.

Problem C5
For each construction in parts a-d, start with a freshly drawn segment on a clean piece of patty paper. Then construct the following shapes:
a. an isosceles triangle with your segment as one of the two equal sides
b. an isosceles triangle whose base is your segment
c. a square based on your segment
d. an equilateral triangle based on your segment

More Constructions

Problem C6
Start with a square sheet of paper.
a. Construct a square with exactly one-fourth the area of your original square. How do you know that the new square has one-fourth the area of the original square?
b. Construct a square with exactly one-half the area of your original square. How do you know that the new square has one-half the area of the original square?
c. Construct a square with exactly three-fourths the area of your original square.

Problem C7
Recall that the centroid is the center of mass of a geometric figure. How could you construct the centroid of a square?

Take It Further
Problem C8
When you noticed concurrencies in the folds, were you sure that the segments were concurrent? What would convince you that, for example, the medians of every triangle really are concurrent?
Note 4
If you are working in a group, you may choose to do all of the construction problems as a group activity. Watch for someone with appropriate solutions (for example, folding the two endpoints to each other, rather than measuring). Ask that person to share the solution and explain why it will always work on any segment. That's the goal for these problems: to come up with general methods that will always work and that don't rely on measurement. At the end, leave at least 10 minutes to share methods, even if not everyone is done. Then make a list of conjectures that come from that problem.

Problem C1
Start by drawing a line segment. Then do the following:
a. Fold the paper so that one of the endpoints of the line segment lies somewhere on the line segment. The crease created defines a line perpendicular to the original line segment.
b. Use the process above to construct a perpendicular line. Then use the same process to construct a line perpendicular to the new line, making sure that this second perpendicular is a different line from the original. Since this third line and the original are each perpendicular to the second line, they are parallel.
c. Fold the paper so that the endpoints of the line segment overlap. Draw a line segment along the crease, intersecting the original line segment. This new line segment is perpendicular to the original one and bisects it, because we used the same process that we used to construct the midpoint in the sample construction.

Problem C2
Draw an angle on a piece of paper. Next, fold the paper so that the two sides of the angle overlap. The crease created defines a bisector of the angle.

Problem C3
For parts (a)-(c), draw several triangles, at least one of which has an obtuse angle (to see that the definitions make sense in general). Then draw in the altitudes. Repeat with medians. Repeat with midlines.
a. altitudes:
b. medians:
c. midlines:

Problem C4
Draw five triangles on separate pieces of patty paper, and then do the following:
a.
Pick a side. Fold the paper so that the crease is perpendicular to the side [see Problem C1(a)] and so that it goes through the vertex opposite the side. You may have to extend the line segments of your triangle if the triangle has an angle larger than 90°. (See illustration for an example of what this looks like.) Connect the side with the vertex along the crease. The line segment drawn is the altitude corresponding to the side chosen. Now repeat with the other two sides.
b. Pick a side. Fold the paper so that the endpoints of the chosen side overlap. The midpoint of the side is the point where the side intersects the crease. Using a straightedge, connect the midpoint of the side with the vertex opposite it. Repeat with the other two sides.
c. Pick a side. Find the midpoint of the side by following the construction of question (b). Repeat this construction with the other two sides. Using a straightedge, connect the consecutive midpoints.
d. Pick a side. Construct a perpendicular bisector of the chosen side using the construction from Problem C1(c). Repeat with the other two sides.
e. Pick an angle. Fold the paper so that the two sides of this angle overlap. The crease defines a ray that bisects the chosen angle. Repeat with the other two angles.

Problem C5
Draw a line segment, and then do the following:
a. Make a crease that goes through one of the endpoints of the original line segment. The crease will extend to the edges of the paper.
b. Construct a perpendicular bisector of the line segment. Choose any point on the perpendicular bisector and connect it with the endpoints of the original line segment. The resulting triangle is isosceles and has the original line segment as its base.
c. Extend the line segment to form a line, being sure to mark the original endpoints of the line segment. Use this line to construct a perpendicular line through the endpoints of the original line segment [see Problem C1(a)].
Mark the point on the perpendicular line where the second endpoint falls on this line. (This defines one of the equal, perpendicular sides.) Perform the same process on the second perpendicular line to define the third side of the square.
d. Construct the perpendicular bisector of the segment. Flip your patty paper, and then mark that spot on the bisector. Connect the marked spot with the two endpoints of the original line segment.

Problem C6
a. Fold the paper in half to make a rectangle. Fold it in half again by bisecting the longer sides of the rectangle. The resulting square has one-fourth the area of the original one. There are exactly four squares that fit exactly on top of each other, so they must have the same area. Since together they completely make up the original square, each must be one-fourth of the original square.
b. Find the midpoints of all four sides. Connect the consecutive midpoints. The resulting square has one-half the area of the original square. To see this, connect the diagonals of the new square. You will see four triangles inside the square and four triangles outside, all of which have the same area. Half the area of the original square is inside the new square.
c. In order to obtain a square with exactly three-fourths the area of the original square (with sides = 1), we need to calculate the side length a of the new square: a · a = 3/4, so a² = 3/4 and a = √3/2. So we are looking to construct a square whose sides are equal to √3/2.

Problem C7
One way to do this is to use a straightedge to draw the two diagonals of the square. The centroid is the point of their intersection. Another is to draw the perpendicular bisectors of two consecutive sides of the square [the same construction as in Problem C6(a)]. The intersection of these bisectors is the same centroid.

Problem C8
Noticing what appear as concurrencies in the folds may lead one to conjecture that concurrencies occur in general. Keep in mind that any one construction that suggests this is a special case.
Therefore, in order to convince ourselves that they do occur in general, we need to construct a formal proof.
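The area claims in the Problem C6 solutions can be confirmed numerically. This short check (Python, standard library only, taking the original square to have side 1) verifies parts (a), (b), and (c):

```python
import math

# Numeric checks for Problem C6 on a unit square (side 1):
assert (1 / 2) ** 2 == 1 / 4                          # (a) halving the side quarters the area
mid_side = math.hypot(1 / 2, 1 / 2)                   # (b) side of the midpoint square
assert abs(mid_side ** 2 - 1 / 2) < 1e-12             #     its area is one-half
assert abs((math.sqrt(3) / 2) ** 2 - 3 / 4) < 1e-12   # (c) side sqrt(3)/2 gives area 3/4
print("all C6 area checks pass")
```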
{"url":"https://www.learner.org/series/learning-math-geometry/what-is-geometry/part-c-folding-paper-40-minutes/","timestamp":"2024-11-06T04:37:09Z","content_type":"text/html","content_length":"123731","record_id":"<urn:uuid:62bac407-e348-4f4b-9e63-fc7c6d906b8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00853.warc.gz"}
Infinite ergodic theory for heterogeneous diffusion processes

We show the relation between processes which are modeled by a Langevin equation with multiplicative noise and infinite ergodic theory. We concentrate on a spatially dependent diffusion coefficient that behaves as D(x) ∼ |x − x₀|^{2−2/α} in the vicinity of a point x₀, where α can be either positive or negative. We find that a non-normalized state, also called an infinite density, describes the statistical properties of the system. For the processes under investigation, the time averages of a wide class of observables are obtained using an ensemble average with respect to the non-normalized density. A Langevin equation which involves multiplicative noise may take different interpretations (Itô, Stratonovich, or Hänggi-Klimontovich), so the existence of an infinite density and the density's shape are both related to the considered interpretation and the structure of D(x).

Bibliographical note
Publisher Copyright: © 2019 American Physical Society. The support of Israel Science Foundation's Grant No. 1898/17 is acknowledged. We thank Guenter Radons, Jakub Ślęzak, and Takuma Akimoto for the discussion and comments.
{"url":"https://cris.biu.ac.il/en/publications/infinite-ergodic-theory-for-heterogeneous-diffusion-processes-3","timestamp":"2024-11-11T16:35:13Z","content_type":"text/html","content_length":"55547","record_id":"<urn:uuid:a0d4f97e-0380-426b-bd03-0c032a9f8cea>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00665.warc.gz"}
Some tiles associated with the 6th unit cubic Pisot number I. Compact Tiles If the modifications of the grouped element technique are applied to one of the order 5 asymmetric metasymmetric tiles, even though it has no symmetries, a set of 32 attractors, including the original one, is found, all of which have a similarity dimension of 2. 8 of these generate 6 partial postcomposition derivatives of the symmetric and demisymmetric tiles, and one is disconnected, leaving 23 novel tiles. Eight of these tiles have a fairly compact appearance, but 4 of them are partial postcomposition derivatives of the demisymmetric tiles. The remaining 4 are shown below. Four copies of these tiles combine to make a symmetric tile, so tilings of these tiles have a multiple of four copies in the unit cell. (Dissecting one of the four copies replaces one copy by five, so dissection doesn't result in exceptions to this rule.) © 2015, 2016 Stewart R. Hinsley
{"url":"http://www.stewart.hinsley.me.uk/Fractals/IFS/Tiles/Cubic/6thcubic/some5b.php","timestamp":"2024-11-08T08:57:47Z","content_type":"text/html","content_length":"2338","record_id":"<urn:uuid:e82b28ad-59c5-4a77-bd64-c04b1c80dd21>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00178.warc.gz"}
lambda Function Builder John Mount The CRAN version of the R package wrapr package now includes a concise anonymous function constructor: l(). To use it please do the following: attach wrapr and ask it to place a definition for l() in your environment: ## [1] "LEFT_NAME" "OTHER_SYMBOL" "X" "Y" ## [5] "angle" "d" "d2" "df" ## [9] "f" "inputs" "l" "plotb" ## [13] "variable" "variable_name" "variable_string" "x" Note: throughout this document we are using the letter “l” as a stand-in for the Greek letter lambda, as this non-ASCII character can cause formatting problems in some situations. You can use l() to define functions. The syntax is: l(arg [, arg]*, body [, env=env]). That is we write a l()-call (which you can do by cutting and pasting) and list the desired function arguments and then the function body. For example the function that squares numbers is: ## function (x) ## x^2 We can use such a function to square the first four positive integers as follows: ## [1] 1 4 9 16 Dot-pipe style notation does not need the l() factory as it treats pipe stages as expressions parameterized over the variable “.”: ## [1] 1 4 9 16 And we can also build functions that take more than one argument as follows: ## function (x, y) ## x + 3 * y
{"url":"http://cran.stat.auckland.ac.nz/web/packages/wrapr/vignettes/lambda.html","timestamp":"2024-11-14T07:59:41Z","content_type":"text/html","content_length":"12600","record_id":"<urn:uuid:789507d7-42c8-46f6-adc1-8e2cff1e6493>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00131.warc.gz"}
mp_arc 04-18

Adami R., Figari R., Finco D., Teta A.
On the asymptotic behaviour of a quantum two-body system in the small mass ratio limit (49K, LaTeX 2e) Jan 23, 04

Abstract. We consider a quantum system of two particles in dimension three interacting via a smooth potential. We characterize the asymptotic dynamics in the limit of small mass ratio for an initial state given in product form, with an explicit control of the error. An application to the decoherence effect produced on the heavy particle is also discussed.

Files: 04-18.tex
{"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=04-18","timestamp":"2024-11-12T09:15:17Z","content_type":"text/html","content_length":"1454","record_id":"<urn:uuid:cffafb9d-eb8e-42fc-9de4-0b74935d523a>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00549.warc.gz"}
Why are cans shaped the way they are? The purpose of a food can is to store food. It costs money to manufacture, store, and ship these containers. One would imagine, therefore, that over time a lot of thought has gone into their design and production. One would hope that, as billions of food and beverage cans are manufactured every year (yes, that's a “B”), the current designs are optimal for their purpose. If not, as a planet, we're wasting a lot of energy manufacturing sub-optimal designs. So then why are cans cylindrical tubes? And why do they have the aspect ratios they do? (Ratio of height to diameter) Why cylindrical tubes? If the goal were purely to maximize the volume of food that could be stored in a container, the result would be a spherical can. A sphere is the shape with the minimum surface area to volume ratio. It could contain the most food for the least can material. However, it would be totally impractical! It would not stay still on a shelf, making display and storage hard. How would you hold it? How would you open it? How could you manufacture and fill it? When stored in packing boxes, even with hexagonal close packing there would be unused gaps in the storage boxes. Clearly, optimizing purely to minimize the material needed for an individual can is not optimal. If we wanted to use a shape that packed perfectly efficiently, we'd use some kind of cuboid. These would sit and stack nicely on shelves too. They'd be easier to manufacture than spheres, but the edges would be stress points. You occasionally see cuboid-like containers (corned beef, spam, and sardines are the first that come to mind). Rather than sharp edges, these have filleted (rounded) edges to reduce stress concentrations and to make them easier to manufacture. Taken to the limit, a cube would be the most efficient of the square-cornered containers. I'll not prove it here, but a cube is the shape that minimizes the surface area to volume ratio of a cuboid.
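That claim is easy to check numerically. Here's a quick Python sketch (my addition, not from the original article) that scans box dimensions at a fixed volume:

```python
# Scan cuboids a x b x c with a * b * c = 1000 and find the one with the
# smallest surface area 2(ab + bc + ca); the winner is the 10 x 10 x 10 cube.
best = min(
    ((a, b, 1000 / (a * b)) for a in range(1, 51) for b in range(1, 51)),
    key=lambda d: 2 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2]),
)
# best == (10, 10, 10.0), with surface area 600
```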
But we don't see many cubes on shelves. Let's look at cylinders now … Cylinders are relatively easy to manufacture. In the past they were manufactured by cutting a rectangle of material, wrapping this around with a single seam weld, and attaching two circular end caps. (Modern aluminum cans are made in two pieces; the first is a combined bottom/side punched and extruded into a cup shape, and the second is a circular cap attached to the end.) Being round in section, they have no corners in the hoop-plane and so minimize the stress concentrations when the can is under pressure (storing carbonated beverages, or during the cooking of the food in the cans). See Why do pipes burst for more details on this calculation. Cylindrical cans stay put when you put them on a shelf, and they are relatively easy to open with a simple can opener (and when opened still stay put and the contents do not spill out). When packed in boxes, even though not as efficient as cuboids, they pack more efficiently than spheres (approx. 79% vs. 74%). But what aspect ratio (height to diameter ratio) is the best to use, and why? Let's pause for a second whilst I audit my pantry. This is not an exclusive list of sizes, nor is it intended to be a complete list. It just happens to be a selection of random cans that were there when I looked. Here is a table of their diameters and heights from left to right:

Name               Diameter  Height
Chicken Broth      75 mm     105 mm
Condensed Soup     65 mm     98 mm
Tuna               85 mm     40 mm
Condensed Milk #1  75 mm     94 mm
Condensed Milk #2  74 mm     77 mm
Chunky Soup        85 mm     107 mm
Almonds            84 mm     57 mm

Now for a little math refresher. The volume of a cylinder is the area of the cap multiplied by the height:

V = πr²h

The surface area of a cylinder is the area of the two circular end caps, plus the rectangle that wraps around the edge forming the sides:

A = 2πr² + 2πrh
If we divide the volume of each can by the surface area of each can, this will tell us the ratio of these values, but it will not tell us what the optimal shape is for the volume of food held. To do this, we need our old friend Calculus. Optimal can size To work out the optimal size, we keep the volume of the cylinder fixed, then find the relationship between the radius and the height. Recall the simple formula for the volume; we can re-arrange this to get an equation for the height:

V = πr²h, so h = V/(πr²)

We're trying to minimize the surface area, and here is the equation for this, into which we substitute the equation for the height:

A = 2πr² + 2πrh = 2πr² + 2V/r

Next we find the first derivative of this and set it to zero to find the turning points in the function:

dA/dr = 4πr - 2V/r² = 0, which gives V = 2πr³

(The second derivative is positive, confirming that this turning point is a minimum.) We now have a relationship between the volume and the radius for the minimum surface area. Finally, substituting back into the equation for h reveals the result:

h = V/(πr²) = 2πr³/(πr²) = 2r

The most efficient can is one where the height is twice the radius (which of course is the diameter). Plotting it out Below is a graph of how much material (surface area) cans of variable aspect ratio require. On the x-axis is plotted the ratio h/r (the height to radius ratio). As we can see, the graph has a minimum at h=2r (when the height equals the diameter). This is what we calculated with our Calculus. Tall skinny "Pringle"-like cans are on the right, and short flat "pancake"-like cans are on the left. On the y-axis I've plotted the efficiency of the can by showing a normalized value of the excess material needed (as a percentage of the minimum surface area). Cans that have a higher, or lower, aspect ratio than the optimal require more surface area to contain the same volume (and thus more material). Plotting out my cans Here is my table of cans with additional columns added.
For each row I've added the h/r ratio, and also a column to show how inefficient that can is compared to the optimal can that would hold the same volume of goods:

Name               Diameter  Height  h/r   Waste
Chicken Broth      75 mm     105 mm  2.80  1.21%
Condensed Soup     65 mm     98 mm   3.02  1.80%
Tuna               85 mm     40 mm   0.94  6.95%
Condensed Milk #1  75 mm     94 mm   2.51  0.55%
Condensed Milk #2  74 mm     77 mm   2.08  0.02%
Chunky Soup        85 mm     107 mm  2.52  0.57%
Almonds            84 mm     57 mm   1.36  1.75%

Here's the data plotted on the curve: Out of all the cans in my pantry, the tuna is the most inefficient, and the condensed milk #2, Eagle Brand, is pretty close to the perfect ratio. Well done guys! Your product is sold in the most efficient can possible. So why not everyone? The math above isn't hard. I'm sure more people than the good folks at Eagle Brand have done the calculations for optimal efficiency. Why don't others make their cans the same aspect ratio and save all that inefficient use of material? I'm not an engineer at any of their plants, but here is some speculation on my part. Believe as many as you like (some are more convincing than others!): • For the almond tin, we can give some leniency. This container is designed to allow fingers to dip in and pick up nuts to eat. If the diameter were too narrow (or the container too deep), then fingers would not be able to get inside. • Aesthetically, a slightly taller can looks nicer. The golden ratio is approx. 1.6, so a can with a height of approx. 1.6x its diameter (3.2x the radius) would be very appealing. This corresponds very closely to the condensed soup can ratio. • Cans (especially drinks ones) are designed to be held in the hand. If the diameter is too large, it's hard to hold them (either to drink or open them). There's an obvious maximum diameter that an adult hand can hold; after that, if you want to increase the volume you have to make the container taller. • To open a can, a cutting device has to revolve around the rim.
The larger the diameter of a can, the more effort and work is required to open it. • Our simple calculations above make the very basic assumption that all parts of the can are equal. This, clearly, is not the case. The end caps of the can could be made of material of a different thickness that costs a different amount of money per unit area. There is no reason why it should cost the same. • Related to the above, the end caps are circular, whilst the sides are cut from rectangular stock. The sides can be cut with no wastage. Circular caps need punching out of stock, leaving material behind. This, obviously, will be recycled, but the manufacturer has had to pay for processing the raw material into the sheets from which the circular blanks will be cut. This is a waste of energy. Even if they were the same material stock as the side, they would cost more to manufacture. • There is more to manufacturing than just the cost of the raw material. There is the cost of welding (both the seam of the can, and the two end caps). Depending on the relative price of generating a weld per unit length (and/or the cost of circular welds against straight welds), compared to the cost of raw material, a different shape could be more efficient. These days most cans are punched from aluminium blanks, not welded, but a similar argument can be applied: there are more costs in manufacturing than just the costs of the raw material. • Familiarity. The original cans may have been manufactured with an arbitrary aspect ratio, but now we've been programmed to search for items of this shape when we shop. A soup can of the wrong shape might not attract our attention (or, bizarrely, might be less attractive to shoppers). Changing a can shape at this stage (without accompanying large-budget media awareness campaigns) could have a negative impact on sales. • Cans are not manufactured or sold singly, but are packed in boxes/crates.
Depending on can size and crate size, and the configuration of packing, some configurations might use different quantities of packing material (though typically, this will be cheaper than the can material).
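If you want to reproduce the numbers above, here's a short Python sketch (my reconstruction, not the article's own code) that verifies the h = 2r optimum by brute force and recomputes the waste column:

```python
import math

def surface_area(volume, r):
    """Total surface area of a closed cylinder with the given volume and radius."""
    h = volume / (math.pi * r ** 2)              # height forced by the fixed volume
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

def waste_pct(diameter, height):
    """Extra surface area (%) versus the optimal h = 2r can of the same volume."""
    r = diameter / 2
    volume = math.pi * r ** 2 * height
    area = 2 * math.pi * r ** 2 + 2 * math.pi * r * height
    r_opt = (volume / (2 * math.pi)) ** (1 / 3)  # from dA/dr = 0: V = 2*pi*r^3
    area_opt = 6 * math.pi * r_opt ** 2          # 2*pi*r^2 + 2*pi*r*(2r)
    return 100 * (area - area_opt) / area_opt

# Sanity check of the calculus: at any fixed volume, a brute-force scan over
# radii lands on the closed-form optimum, where the height is twice the radius.
V = 330_000  # mm^3; any fixed volume gives the same h/r ratio
r_opt = (V / (2 * math.pi)) ** (1 / 3)
best_r = min((r / 100 for r in range(1000, 10001)),  # 10 mm .. 100 mm
             key=lambda r: surface_area(V, r))

# Recomputing the table: waste_pct(85, 40) comes out near the 6.95% shown for
# the tuna, while waste_pct(74, 77) (condensed milk #2) is a few hundredths
# of a percent, i.e. almost perfectly proportioned.
```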
{"url":"https://datagenetics.com/blog/august12014/index.html","timestamp":"2024-11-08T11:35:28Z","content_type":"application/xhtml+xml","content_length":"17702","record_id":"<urn:uuid:30f1a840-43ea-44dc-adcf-7f544d982d7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00236.warc.gz"}
Benutzer:Dirk Huenniger/wb2pdf/parser – Wikibooks, the collection of free textbooks The task is to parse pages written by humans in MediaWiki's markup language. In particular this language allows for the use of HTML, and as it is written by humans, tags are often closed in the wrong order (for example, an italic tag closed before a bold tag that was opened inside it). Before writing a parser it is very interesting to know whether there is a Backus-Naur form (BNF) for this language. It is well known that the language has to be context-free for a BNF to exist. We will prove that a language allowing this kind of improper bracketing is not context-free. We start with the pumping lemma as given in Wikipedia. If a language L is context-free, then there exists some integer p ≥ 1 such that any string s in L with |s| ≥ p (where p is a pumping length) can be written as s = uvxyz with substrings u, v, x, y and z, such that 1. |vxy| ≤ p, 2. |vy| ≥ 1, and 3. uv^nxy^nz is in L for every integer n ≥ 0. We define the language L by ${\displaystyle h\in L{\text{ iff }}\exists n\in \mathbb {N} :h=(^{n}[^{n})^{n}]^{n}}$ where by ${\displaystyle (^{n}}$ we mean the string that consists of the character '(' repeated n times; thus we write string concatenation as multiplication. We show that assuming L is context-free causes a contradiction. So let L be context-free with pumping length p ≥ 1. We take the string ${\displaystyle g:=(^{p}[^{p})^{p}]^{p}}$. Since |vxy| ≤ p, the string vxy consists of at most two kinds of different brackets; in particular there are only three possibilities: '([' or '[)' or ')]'. Either v is nonempty or y is nonempty. In either case ${\displaystyle uv^{2}xy^{2}z\notin L}$, since for a string to be part of L it must have the same number of opening and closing brackets of each kind, which cannot be fulfilled in this case. Thus we have a contradiction, so the language is not context-free, and therefore a BNF doesn't exist. For further processing it is very useful to have correctly bracketed expressions.
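To make the language L concrete, here is a quick membership test in Python (my sketch, not part of the wikibook):

```python
def in_L(h):
    """True iff h = '('*n + '['*n + ')'*n + ']'*n for some n >= 0."""
    if len(h) % 4 != 0:
        return False
    n = len(h) // 4
    return h == "(" * n + "[" * n + ")" * n + "]" * n
```

Note that "([)]" is in L even though it is improperly nested, while the properly nested "()[]" is not; L was chosen exactly to capture the wrongly-ordered closings that make the markup non-context-free.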
So we need to correct the bracketing in some way. The idea is as follows: we keep a stack of the brackets currently open. If we meet a closing bracket whose matching opening bracket is on the stack but not at its top, we close all brackets above it on the stack, close the bracket in question, remove it from the stack, and open all the other brackets we just closed again. Once we have a token stream which is properly bracketed, it is very easy to transform it into a tree of nested environments. Up to now we looked at brackets consisting only of a single character. In the case of HTML tags we also have to consider attributes, that is, a map of key-value pairs associated with each opening bracket. Still, there are other possibilities for data that can be associated with an opening bracket, which we won't explain in detail. data StartData = TagAttr String (Map String String) | ... In our parse tree we want to have nested environments, which brings us to a definition like: data EnvType = Italic | Bold | Tag | ... Next we need to take into account very generic brackets. Here we make use of the Parsec parser combinator library. Within this library, a parser is basically something that tries to match the beginning of an input stream: if it fits, it consumes some of the stream and returns a result for further processing; otherwise it just reports that it does not fit the beginning of that particular input stream. So we can try many parsers until one succeeds; that is, we try to find out whether the current input stream begins with an opening bracket, and to which environment that bracket belongs. As we go further we will certainly also have to consider closing brackets. And we will also have to deal with the case that the current stream does not start with any opening or closing bracket. In this case we will always insert a single character into the output token stream.
This way we ensure that we will always produce a result, even if the input file does not conform to any particular grammar. But first of all let's define our first version of a generic bracket: data MyParser tok = MyParser{start :: GenParser tok () StartData, end :: GenParser tok () (), allowed :: [EnvType], self :: EnvType} So we have start for opening brackets. It is a parser for a stream of type tok, where usually tok = Char; in case it matches the beginning of the stream it returns a result of type StartData (as defined above), consuming some of the input stream. The parser end is the closing bracket. It does not return a result, or more precisely it returns a result of the unit type (). And of course each bracket has an environment self associated with it. Unfortunately, environments are not allowed to be arbitrarily nested. In particular, a string that should be considered an opening bracket in one environment should be considered plain characters in another environment. So we have to define in which environments that particular instance of MyParser is allowed to match. This brings us to the definition data MyParser tok = MyParser{start :: GenParser tok () StartData, end :: GenParser tok () (), allowed :: [EnvType], self :: EnvType, bad :: GenParser tok () ()} where bad is the parser explained below. The central algorithm of the parser is mything. mything may either return RRight (s,i) or BBad (s,i), where s is some kind of stack we will learn about later and i is the resulting parse tree. If mything is called from top level in a reasonable way, it can hopefully be proved that it will terminate within a reasonable time and return RRight (s,i). If it is called on an inner level it might also return BBad (s,i). Our description of the grammar is based on environments. The parser tries to generate a list consisting of tokens and subenvironments, where each subenvironment contains a list of the same type, namely Anything.
Our grammar definition lets us define a parser that has to match at the beginning of an environment as well as one that has to match at the end. It also allows us to define a bad parser for each environment. If the bad parser matches within an environment, the environment is disregarded and we backtrack to the beginning of the environment and try other parsers from that position. So in mything we first of all have to check whether bad matches (mtnotbad); if this is the case we return BBad (s,i) and we are done. The environments currently open are kept on the stack s, so mtnotbad just has to take the bad parser off the top of s, try to match it, and if it matches return BBad, otherwise RRight. If the bad parser didn't match, we try closing brackets. We do so by calling the mtcl function. We call iom2 to get the index of the match: we first reduce the list of currently open environments to the list of parsers for closing those environments, and apply indexofmatch2 to that list. Either we get pzero, in which case there was no matching closing bracket, or we get the index of the match on the stack, that is, the index of the environment just about to close. If it is on top of the stack we are done; otherwise we have to ensure that neither the current environment nor the closing environment is a preserving environment (that is, something like verbatim in LaTeX: if somebody writes \end{itemize} within verbatim, LaTeX shall not close the environment but rather pass the string out unchanged). If this is not the case, mtcl returns pzero, signaling that no closing bracket was found. If all that is OK we call mything from mtcl and thus cause a recursion. We pass mything the new stack with the just-closed environment removed, and a new current parse tree with the proper tokens inserted to signal the necessary closing and opening of brackets.
If we close an environment that is not on top of the stack, we first have to close all brackets higher on the stack, then we can close the bracket we wanted to close, and then we have to reopen all brackets we just had to close, except the one we wanted to close. The opening brackets are handled by obrk, the closing ones by cbrk. We have to consider that we are constructing the parse tree in reverse order. Usually one would add any new token to the end of the list of already parsed tokens, but this is not efficient in Haskell: we can only add to the front of a list in an efficient manner. So we build the list in reverse order, which means we first have to add closing brackets to the front and then add opening brackets to the front. After parsing the whole text, mything will finally return; its result will be returned by the initial call to mything, and that will be the final result. But we also have to consider the case that mtcl does not match in the initial call to mything; in this case we will try an opening bracket (mtps), and if this also does not match we match a single character and call mything recursively. We are left to consider mtps in detail. Let v be the current length of the stack. mtps just tries to match the starting parser, adds an opening bracket to the result, and calls mything recursively. If mything returns BBad and the current stack index v is in the stack returned with BBad, we know that the environment we are trying to open will cause its bad parser to match later, so this opening bracket should be disregarded, and we return pzero. In case the current stack index is not in the stack returned by the call to mything with BBad, we know that our current bracket was not the problem, and we pass on the error by returning BBad ourselves. In case of RRight we can just return that.
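The bracket-correction step described earlier (on a mismatched closing bracket: close everything above it on the stack, close it, then reopen the rest) can be sketched in a few lines. This is my simplified Python illustration, not the wikibook's Haskell mything; it handles only single-character brackets, with no attributes, allowed-lists, or bad parsers:

```python
# Repair an improperly bracketed token stream so every bracket nests properly.
PAIRS = {")": "(", "]": "["}
OPENERS = {v: k for k, v in PAIRS.items()}

def repair(tokens):
    out, stack = [], []
    for t in tokens:
        if t in OPENERS:                     # opening bracket: push and emit
            stack.append(t)
            out.append(t)
        elif t in PAIRS:                     # closing bracket
            opener = PAIRS[t]
            if opener not in stack:
                continue                     # stray closer: drop it
            reopen = []
            while stack[-1] != opener:       # close everything above it
                top = stack.pop()
                out.append(OPENERS[top])
                reopen.append(top)
            stack.pop()
            out.append(t)                    # close the bracket in question
            for top in reversed(reopen):     # reopen what we had to close
                stack.append(top)
                out.append(top)
        else:
            out.append(t)                    # plain character
    while stack:                             # close anything left open at the end
        out.append(OPENERS[stack.pop()])
    return "".join(out)
```

For example, repair("([)]") yields "([])[]", the properly bracketed stream the tree-building step needs.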
{"url":"https://de.m.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf/parser","timestamp":"2024-11-05T19:25:35Z","content_type":"text/html","content_length":"46868","record_id":"<urn:uuid:3ad318d3-4866-4eb0-bcab-bfceadb1ccec>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00801.warc.gz"}
What Is the Rule of 72? An Introduction For Investors

When it comes to saving for retirement, the power of compounding interest should never be underestimated. And as a responsible investor, it can be helpful to know how long it would take to double your investment at a fixed rate of return. The Rule of 72 can be used as a quick rule of thumb to help determine this answer. Consider this a back-of-a-napkin tool that can be used easily, anywhere, at any time.

What Is the Rule of 72?

The Rule of 72 is a formula that estimates the amount of time it will take for an investment to double in value when earning a fixed annual rate of return.

72 / interest rate = years to double

Divide 72 by the annual rate of return. This should give you an idea of how many years you can expect it to take for your investment to double in value. It's important to note that this is not an exact science, and there are scenarios in which a different formula may provide a more accurate answer.

How Does the Rule of 72 Work?

As an example, say someone invests $50,000 in a mutual fund with an estimated annual six percent rate of return. If we used the Rule of 72, the formula would appear as:

72 / 6 = 12

Based on this formula, the investor may expect their original investment to be worth $100,000 in around 12 years. Use this estimation method to better understand the effects of compound interest on your investment dollars.

Determine Compound Interest

The Rule of 72 can also be used to estimate how much compound interest your investment has already earned. For example, say you invested $25,000 and it took 10 years to grow to $50,000. You can rearrange the formula to determine your average rate of return throughout those 10 years. In this case, the formula would appear as:

72 / 10 = 7.2

In this example, your average rate of return was 7.2 percent.

Considerations for the Rule of 72

Before using this formula in the real world, there are a few important considerations to keep in mind.
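First, though, the two calculations above can be checked against the exact compound-interest formulas they approximate; this is a quick sketch of my own, not from the article:

```python
import math

def years_to_double_rule(rate_pct):
    """Rule of 72: 72 / interest rate = years to double."""
    return 72 / rate_pct

def years_to_double_exact(rate_pct):
    """Exact doubling time: solve (1 + r)^t = 2 for t."""
    return math.log(2) / math.log(1 + rate_pct / 100)

def implied_rate_exact(years):
    """Exact average annual rate that doubles money in the given number of years."""
    return (2 ** (1 / years) - 1) * 100

# The $50,000 at 6% example: the rule says 12 years; the exact answer is ~11.9.
# The $25,000 doubling over 10 years: the rule says 7.2%; the exact rate is ~7.18%.
```

At a six percent return, the rule's 12 years versus the exact 11.9 is well within napkin tolerance.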
Remember, this is a back-of-a-napkin tool and is not as accurate as one might need.

It's an Estimation Only

The Rule of 72 can help provide a general estimation, but it is not precise or perfect. Past performance of the market does not guarantee future returns. Therefore, while you can guess an average rate of return based on market performance or other benchmarks, there is no guarantee.

Precision Is Limited

Additionally, studies have found that the Rule of 72 tends to work best for average rates of return between six percent and 10 percent.^1 Outside of this window, a more precise formula may be needed.

Best for Long-Term Investors

If you're nearing retirement, you'll likely want a very precise picture of what your income and savings will look like. This is crucial to identifying potential income gaps and developing a tax-efficient withdrawal plan. Because of this, broad estimations like the Rule of 72 may not be suitable for your needs. Additionally, shorter periods of time before retirement leave less room to recover from market corrections should a downturn occur.

The Rule of 72 is a simple, helpful tool that investors can use to estimate how long an investment with a fixed rate of return may take to double. Following this formula can allow you to quickly gauge the potential future value of your investment - although performance is never guaranteed. While you can quickly get an estimate using the Rule of 72, work with a trusted financial professional when making decisions that can affect your portfolio.

This content is developed from sources believed to be providing accurate information, and provided by Twenty Over Ten. It may not be used for the purpose of avoiding any federal tax penalties. Please consult legal or tax professionals for specific information regarding your individual situation. The opinions expressed and material provided are for general information, and should not be considered a solicitation for the purchase or sale of any security.
{"url":"https://thehgg.com/blog/what-is-the-rule-of-72-an-introduction-for-investors","timestamp":"2024-11-10T07:41:08Z","content_type":"text/html","content_length":"34274","record_id":"<urn:uuid:e4a33943-0d52-454b-82f3-7bb4add87077>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00417.warc.gz"}
Plot receiver operating characteristic (ROC) curves and other performance curves Since R2022b plot(rocObj) creates a receiver operating characteristic (ROC) curve, which is a plot of the true positive rate (TPR) versus the false positive rate (FPR), for each class in the ClassNames property of the rocmetrics object rocObj. The function marks the model operating point for each curve, and displays the value of the area under the ROC curve (AUC) and the class name for the curve in the legend. plot(ax,rocObj) creates the plot on the axes specified by ax instead of the current axes. plot(___,Name=Value) specifies additional options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, AverageCurveType="macro",ClassNames=[] computes the average performance metrics using the macro-averaging method and plots the average ROC curve only. curveObj = plot(___) returns a ROCCurve object for each performance curve. [curveObj,graphicsObjs] = plot(___) also returns graphics objects for the model operating points and diagonal line. Plot ROC Curve Load a sample of predicted classification scores and true labels for a classification problem. trueLabels is the true labels for an image classification problem and scores is the softmax prediction scores. scores is an N-by-K array where N is the number of observations and K is the number of classes. trueLabels = flowersData.trueLabels; scores = flowersData.scores; Load the class names. The column order of scores follows the class order stored in classNames. classNames = flowersData.classNames; Create a rocmetrics object by using the true labels in trueLabels and the classification scores in scores. Specify the column order of scores using classNames. rocObj = rocmetrics(trueLabels,scores,classNames); rocObj is a rocmetrics object that stores the performance metrics for each class in the Metrics property. Compute the AUC for all the model classes by calling auc on the object.
a = 1x5 single row vector 0.9781 0.9889 0.9728 0.9809 0.9732 Plot the ROC curve for each class. The plot function also returns the AUC values for the classes. The filled circle markers indicate the model operating points. The legend displays the class name and AUC value for each curve. Plot the macro average ROC curve. Plot Precision-Recall Curve and Detection Error Tradeoff (DET) Graph Create a rocmetrics object and plot performance curves by using the plot function. Specify the XAxisMetric and YAxisMetric name-value arguments of the plot function to plot different types of performance curves other than the ROC curve. If you specify new metrics when you call the plot function, the function computes the new metrics and then uses them to plot the curve. Load a sample of true labels and the prediction scores for a classification problem. For this example, there are five classes: daisy, dandelion, roses, sunflowers, and tulips. The class names are stored in classNames. The scores are the softmax prediction scores generated using the predict function. scores is an N-by-K array where N is the number of observations and K is the number of classes. The column order of scores follows the class order stored in classNames. scores = flowersData.scores; trueLabels = flowersData.trueLabels; classNames = flowersData.classNames; Create a rocmetrics object. The rocmetrics function computes the FPR and TPR at different thresholds. rocObj = rocmetrics(trueLabels,scores,classNames); Plot the precision-recall curve for the first class. Specify the y-axis metric as precision (or positive predictive value) and the x-axis metric as recall (or true positive rate). The plot function computes the new metric values and plots the curve. curveObj = plot(rocObj,ClassNames=classNames(1), ... Plot the detection error tradeoff (DET) graph for the first class. Specify the y-axis metric as the false negative rate and the x-axis metric as the false positive rate. 
Use a log scale for the x-axis and y-axis. f = figure; plot(rocObj,ClassNames=classNames(1), ... f.CurrentAxes.XScale = "log"; f.CurrentAxes.YScale = "log"; title("DET Graph") Plot Confidence Intervals Compute the confidence intervals for FPR and TPR for fixed threshold values by using bootstrap samples, and plot the confidence intervals for TPR on the ROC curve by using the plot function. This example requires Statistics and Machine Learning Toolbox™. Load a sample of true labels and the prediction scores for a classification problem. For this example, there are five classes: daisy, dandelion, roses, sunflowers, and tulips. The class names are stored in classNames. The scores are the softmax prediction scores generated using the predict function. scores is an N-by-K array where N is the number of observations and K is the number of classes. The column order of scores follows the class order stored in classNames. scores = flowersData.scores; trueLabels = flowersData.trueLabels; classNames = flowersData.classNames; Create a rocmetrics object by using the true labels in trueLabels and the classification scores in scores. Specify the column order of scores using classNames. Specify NumBootstraps as 100 to use 100 bootstrap samples to compute the confidence intervals. rocObj = rocmetrics(trueLabels,scores,classNames,NumBootstraps=100); Plot the ROC curve and the confidence intervals for TPR. Specify ShowConfidenceIntervals=true to show the confidence intervals. The shaded area around each curve indicates the confidence intervals. rocmetrics computes the ROC curves using the scores. The confidence intervals represent the estimates of uncertainty for the TPR. Specify one class to plot by using the ClassNames name-value argument.
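As a language-agnostic illustration of the quantities a ROC curve plots, here is a short Python sketch (not MATLAB, and not the rocmetrics implementation): sweep a threshold down through the scores and record an (FPR, TPR) point at each step.

```python
# Minimal ROC-point computation. Assumes distinct scores; tied scores would
# need to share a single threshold step in a full implementation.
def roc_points(labels, scores):
    """labels: 1 for the positive class, 0 otherwise; higher score = more positive."""
    pos = sum(labels)
    neg = len(labels) - pos
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for i in order:                    # lower the threshold one observation at a time
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points                      # (FPR, TPR) pairs from (0, 0) to (1, 1)
```

For the toy data labels [1, 1, 0, 0] with scores [0.9, 0.8, 0.4, 0.3], the points run (0, 0), (0, 0.5), (0, 1), (0.5, 1), (1, 1): a perfect separation.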
For more information on creating an Axes object, see axes and gca. Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Example: plot(rocObj,YAxisMetric="PositivePredictiveValue",XAxisMetric="TruePositiveRate") plots the precision (positive predictive value) versus the recall (true positive rate), which represents a precision-recall curve. AverageCurveType — Method for averaging ROC or other performance curves "none" (default) | "micro" | "macro" | "weighted" | string array | cell array of character vectors Since R2024b Method for averaging ROC or other performance curves, specified as "none", "micro", "macro", "weighted", a string array of method names, or a cell array of method names. • If you specify "none" (default), the plot function does not create the average performance curve. • If you specify multiple methods as a string array or a cell array of character vectors, then the plot function plots multiple average performance curves using the specified methods. • If you specify one or more averaging methods and specify ClassNames=[], then the plot function plots only the average performance curves. plot computes the averages of performance metrics for a multiclass classification problem, and plots the average performance curves using these methods: • "micro" (micro-averaging) — plot finds the average performance metrics by treating all one-versus-all binary classification problems as one binary classification problem. The function computes the confusion matrix components for the combined binary classification problem, and then computes the average metrics (as specified by the XAxisMetric and YAxisMetric name-value arguments) using the values of the confusion matrix. 
• "macro" (macro-averaging) — plot computes the average values for the metrics by averaging the values of all one-versus-all binary classification problems. • "weighted" (weighted macro-averaging) — plot computes the weighted average values for the metrics using the macro-averaging method and using the prior class probabilities (the Prior property of rocObj) as weights. The algorithm type determines the length of the vectors in the XData, YData, and Thresholds properties of a ROCCurve object, returned by plot, for the average performance curve. For more details, see Average of Performance Metrics. Example: AverageCurveType="macro" Example: AverageCurveType=["micro","macro"] Data Types: char | string | cell ClassNames — Class labels to plot rocObj.ClassNames (default) | categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors Class labels to plot, specified as a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. The values and data types in ClassNames must match those of the class names in the ClassNames property of rocObj. (The software treats character or string arrays as cell arrays of character vectors.) • If you specify multiple class labels, the plot function plots a ROC curve for each class. • If you specify ClassNames=[] and specify one or more averaging methods using AverageCurveType, then the plot function plots only the average ROC curves. Example: ClassNames=["red","blue"] Data Types: single | double | logical | char | string | cell | categorical ShowConfidenceIntervals — Flag to show confidence intervals of y-axis metric false or 0 (default) | true or 1 Flag to show the confidence intervals of the y-axis metric (YAxisMetric), specified as a numeric or logical 0 (false) or 1 (true). The ShowConfidenceIntervals value can be true only if the Metrics property of rocObj contains the confidence intervals for the y-axis metric. 
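The three averaging strategies are easier to see with a small numeric sketch. The following Python snippet is an illustration only (the per-class confusion counts and priors are made up, and this is not the toolbox's implementation); it applies each definition to a single metric, TPR:

```python
# Sketch of the three averaging methods described above, applied to one
# scalar metric (TPR) for a 3-class one-versus-all setup.
# The per-class confusion counts and priors are made-up illustration data.

counts = {  # class -> (TP, FN, FP, TN)
    "daisy":     (30, 10,  5, 55),
    "dandelion": (20,  5, 10, 65),
    "roses":     (40, 20, 15, 25),
}
priors = {"daisy": 0.4, "dandelion": 0.25, "roses": 0.35}

def tpr(tp, fn, fp, tn):
    return tp / (tp + fn)

# "micro": pool the confusion counts over all one-versus-all problems,
# then compute the metric once from the pooled counts.
pooled = [sum(c[i] for c in counts.values()) for i in range(4)]
micro = tpr(*pooled)

# "macro": compute the metric per class, then take the plain mean.
per_class = {k: tpr(*v) for k, v in counts.items()}
macro = sum(per_class.values()) / len(per_class)

# "weighted": macro-average weighted by the prior class probabilities.
weighted = sum(priors[k] * per_class[k] for k in counts)

print(micro, macro, weighted)
```

Micro-averaging lets large classes dominate, macro-averaging treats classes equally, and the weighted variant sits in between by following the priors.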
Example: ShowConfidenceIntervals=true Using confidence intervals requires Statistics and Machine Learning Toolbox™. Data Types: single | double | logical ShowDiagonalLine — Flag to show diagonal line true or 1 | false or 0 Flag to show the diagonal line that extends from [0,0] to [1,1], specified as a numeric or logical 1 (true) or 0 (false). The default value is true if you plot a ROC curve or an average ROC curve, and false otherwise. In the ROC curve plot, the diagonal line represents a random classifier, and the line passing through [0,0], [0,1], and [1,1] represents a perfect classifier. Example: ShowDiagonalLine=false Data Types: single | double | logical ShowModelOperatingPoint — Flag to show model operating point true or 1 | false or 0 Flag to show the model operating point, specified as a numeric or logical 1 (true) or 0 (false). The default value is true for a ROC curve, and false otherwise. Example: ShowModelOperatingPoint=false Data Types: single | double | logical XAxisMetric — Metric for x-axis "FalsePositiveRate" (default) | name of performance metric | function handle Metric for the x-axis, specified as a character vector or string scalar of the built-in metric name or a custom metric name, or a function handle (@metricName). • Built-in metrics — Specify one of the following built-in metric names by using a character vector or string scalar. 
Name — Description

"TruePositives" or "tp" — Number of true positives (TP)
"FalseNegatives" or "fn" — Number of false negatives (FN)
"FalsePositives" or "fp" — Number of false positives (FP)
"TrueNegatives" or "tn" — Number of true negatives (TN)
"SumOfTrueAndFalsePositives" or "tp+fp" — Sum of TP and FP
"RateOfPositivePredictions" — Rate of positive predictions (RPP), (TP+FP)/(TP+FN+FP+TN)
"RateOfNegativePredictions" — Rate of negative predictions (RNP), (TN+FN)/(TP+FN+FP+TN)
"Accuracy" or "accu" — Accuracy, (TP+TN)/(TP+FN+FP+TN)
"TruePositiveRate" or "tpr" — True positive rate (TPR), also known as recall or sensitivity, TP/(TP+FN)
"FalseNegativeRate" or "fnr" — False negative rate (FNR), or miss rate, FN/(TP+FN)
"FalsePositiveRate" or "fpr" — False positive rate (FPR), also known as fallout or 1-specificity, FP/(TN+FP)
"TrueNegativeRate" or "tnr" — True negative rate (TNR), or specificity, TN/(TN+FP)
"PositivePredictiveValue", "ppv", "prec", or "precision" — Positive predictive value (PPV), or precision, TP/(TP+FP)
"NegativePredictiveValue" — Negative predictive value (NPV), TN/(TN+FN)
"f1score" — F1 score, 2*TP/(2*TP+FP+FN)
"ExpectedCost" or "ecost" — Expected cost, (TP*cost(P|P)+FN*cost(N|P)+FP*cost(P|N)+TN*cost(N|N))/(TP+FN+FP+TN), where cost is a 2-by-2 misclassification cost matrix containing [0,cost(N|P); cost(P|N),0]. cost(N|P) is the cost of misclassifying a positive class (P) as a negative class (N), and cost(P|N) is the cost of misclassifying a negative class as a positive class. The software converts the K-by-K matrix specified by the Cost name-value argument of rocmetrics to a 2-by-2 matrix for each one-versus-all binary problem. For details, see Misclassification Cost Matrix.

The software computes the scale vector using the prior class probabilities (Prior) and the number of classes in Labels, and then scales the performance metrics according to this scale vector. For details, see Performance Metrics.
• Custom metric stored in the Metrics property — Specify the name of a custom metric stored in the Metrics property of the input object rocObj. The rocmetrics function names a custom metric "CustomMetricN", where N is the number that refers to the custom metric. For example, specify XAxisMetric="CustomMetric1" to use the first custom metric in Metrics as a metric for the x-axis. • Custom metric — Specify a new custom metric by using a function handle. A custom function that returns a performance metric must have this form: metric = customMetric(C,scale,cost) □ The output argument metric is a scalar value. □ A custom metric is a function of the confusion matrix (C), scale vector (scale), and cost matrix (cost). The software finds these input values for each one-versus-all binary problem. For details, see Performance Metrics. ☆ C is a 2-by-2 confusion matrix consisting of [TP,FN;FP,TN]. ☆ scale is a 2-by-1 scale vector. ☆ cost is a 2-by-2 misclassification cost matrix. The plot function names a custom metric "Custom Metric" for the axis label. The software does not support cross-validation for a custom metric. Instead, you can specify to use bootstrap when you create a rocmetrics object. If you specify a new metric instead of one in the Metrics property of the input object rocObj, the plot function computes and plots the metric values. If you compute confidence intervals when you create rocObj, the plot function also computes confidence intervals for the new metric. The plot function ignores NaNs in the performance metric values. Note that the positive predictive value (PPV) is NaN for the reject-all threshold for which TP = FP = 0, and the negative predictive value (NPV) is NaN for the accept-all threshold for which TN = FN = 0. For more details, see Thresholds, Fixed Metric, and Fixed Metric Values. 
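For reference, most of the built-in metrics in the table above are one-line functions of the one-versus-all confusion counts. Here is an illustrative sketch in Python (the function name and example counts are ours, not part of the toolbox):

```python
# A few of the built-in metrics above, written out as functions of the
# one-versus-all confusion counts (TP, FN, FP, TN). Illustration only;
# this is not how the toolbox computes them internally.

def metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    return {
        "accu":    (tp + tn) / n,
        "tpr":     tp / (tp + fn),          # recall / sensitivity
        "fnr":     fn / (tp + fn),          # miss rate; tpr + fnr == 1
        "fpr":     fp / (tn + fp),          # fallout / 1 - specificity
        "tnr":     tn / (tn + fp),          # specificity
        "prec":    tp / (tp + fp),          # PPV; undefined when TP + FP == 0
        "f1score": 2 * tp / (2 * tp + fp + fn),
    }

m = metrics(tp=40, fn=10, fp=20, tn=30)
print(m)
```

The NaN caveats in the note above correspond to the zero denominators here: precision breaks at the reject-all threshold (TP = FP = 0), and NPV breaks at the accept-all threshold (TN = FN = 0).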
Example: XAxisMetric="FalseNegativeRate" Data Types: char | string | function_handle YAxisMetric — Metric for y-axis "TruePositiveRate" (default) | name of performance metric | function handle Metric for the y-axis, specified as a character vector or string scalar of the built-in metric name or custom metric name, or a function handle (@metricName). For details, see XAxisMetric. Example: YAxisMetric="FalseNegativeRate" Data Types: char | string | function_handle Output Arguments curveObj — Object for performance curve ROCCurve object | array of ROCCurve objects Object for the performance curve, returned as a ROCCurve object or an array of ROCCurve objects. plot returns a ROCCurve object for each performance curve. Use curveObj to query and modify properties of the plot after creating it. For a list of properties, see ROCCurve Properties. graphicsObjs — Graphics objects graphics array Graphics objects for the model operating points and diagonal line, returned as a graphics array containing Scatter and Line objects. graphicsObjs contains a Scatter object for each model operating point (if ShowModelOperatingPoint=true) and a Line object for the diagonal line (if ShowDiagonalLine=true). Use graphicsObjs to query and modify properties of the model operating points and diagonal line after creating the plot. For a list of properties, see Scatter Properties and Line Properties. More About Receiver Operating Characteristic (ROC) Curve Area Under ROC Curve (AUC) One-Versus-All (OVA) Coding Design Adjusted Scores for Multiclass Classification Problem Version History Introduced in R2022b R2024b: Plot the operating point for all curves plot(rocobj,ShowModelOperatingPoint=true) plots the operating point for all curves in the plot, including averaged curves and non-ROC curves. Previously, plot indicated the operating point only for ROC curves, and not for averaged curves.
Grade 9 Factorization Of Polynomials Worksheet

CBSE Class 9 mathematics worksheets on polynomials have become an integral part of the education system. Topics covered include: multiplying monomials, multiplying and dividing monomials, adding and subtracting polynomials, multiplying monomials with polynomials, multiplying binomials, multiplying polynomials, simplifying polynomials, like terms, and factoring trinomials.

Printable worksheets and tests are available. Some of the worksheets displayed are: factoring trinomials (a = 1), factoring polynomials GCF and quadratic expressions, factoring polynomials, factoring practice, factoring quadratic expressions, and Algebra 1 factoring polynomials. Please click the following links to get printable math worksheets for grade 9.

These worksheets for Grade 9 polynomials, class assignments, and practice tests have been carefully prepared. Enrich your practice with these division of polynomials worksheets, involving division of monomials by monomials, polynomials by monomials, and polynomials by polynomials, using methods like factorization, synthetic division, long division, and the box method. These worksheets focus on the topics typically covered in Algebra I.

Download free printable worksheets for CBSE Class 9 polynomials with important topic-wise questions. Students must practice the NCERT Class 9 polynomials worksheets, question banks, workbooks, and exercises with solutions, which will help them in revision of important concepts. You will also find all the answers and solutions.
Students, teachers, and parents can download all CBSE educational material and extremely well-prepared worksheets from this website. Worksheets are very critical for every student to practice his or her concepts.

Grade 9 national curriculum: factorization of polynomials. Factoring is a process of splitting algebraic expressions into factors that can be multiplied. Included here are factoring worksheets to factorize linear expressions, quadratic expressions, monomials, binomials, and polynomials using a variety of methods like grouping, synthetic division, and the box method. Related worksheet categories include division of polynomials, adding and subtracting polynomials, and solving quadratic equations by factoring. Also find exercises in Word format. Printable worksheets and online practice tests on factorization of polynomials for Class 9 are available, and in this lesson you will go through 15 different exercises related to factoring polynomials.
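The monic-trinomial drills these worksheets cover all reduce to finding two integers whose sum is the x-coefficient and whose product is the constant term. A throwaway Python sketch (the function name and brute-force search are ours):

```python
# Factor x^2 + b*x + c over the integers, if possible, by searching for a
# pair (p, q) with p + q == b and p * q == c, so that
# x^2 + b*x + c == (x + p)(x + q). Returns None when no integer pair exists.

def factor_monic_trinomial(b, c):
    for p in range(-abs(c) - 1, abs(c) + 2):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return tuple(sorted((p, q)))
    return None

print(factor_monic_trinomial(5, 6))    # x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_monic_trinomial(-1, -6))  # x^2 - x - 6 = (x - 3)(x + 2)
print(factor_monic_trinomial(1, 1))    # irreducible over the integers: None
```

This is exactly the "product and sum" method the worksheets drill by hand, just automated as a brute-force search over the divisors of c.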
You Are On Multi Choice Question Bank SET 628

31401. A reinforced concrete beam is designed for the limit states of collapse in flexure and shear. Which of the following limit states of serviceability have to be checked ? 1. Deflection 2. Cracking 3. Durability Select the correct answer using the codes given below :

31402. Two men, one stronger than the other, have to lift a load of 1200 N which is suspended from a light rod of length 3 m. The load is suspended between the two persons positioned at the two ends of the rod. The weaker of the two persons can carry a load up to 400 N only. The distance of the load to be suspended from the stronger person such that the weaker person has the full share of 400 N

31403. In a closed theodolite traverse, the sum of the latitudes is +5.080 m and the sum of the departures is -51.406 m. The sum of the traverse legs is 20.525 km. The accuracy of traverse is nearly equal to

31404. Consider the following statements : 1. Pumps in series operation allow the head to increase. 2. Pumps in series operation increase the flow rate. 3. Pumps in parallel operation increase the flow rate. 4. Pumps in parallel operation allow the head to increase. Which of these statements are correct ?

31406. What is the wave velocity for a uniform train of waves beyond the storm centre for a wave length of 20 m in 14 m deep water ?

31407. Maximum shear stress developed in a beam of rectangular section bears a constant ratio to its average shear stress and this ratio is equal to

31408. A circular pipe of radius R carries a laminar flow of a fluid. The average velocity is indicated as the local velocity at what radial distance, measured from the centre ?

31409. Radial splits in timber originating from 'Bark' and narrowing towards the 'Pith' are known as

31410.
What is the condition for maximum transmission of power through a nozzle at the end of a long pipe ? (where H = total head at the inlet of the nozzle, hf = head loss due to friction)

31411. A simply supported beam AB of span L is subjected to a concentrated load W at the centre C of the span. According to Mohr's moment area method, which one of the following gives the deflection under the load ?

31412. A bar of circular cross-section varies uniformly from a cross-section 2D to D. If extension of the bar is calculated treating it as a bar of average diameter, then the percentage error will be

31414. A 600 mm long and 50 mm diameter rod of steel (E = 200 GPa, α = 12 x 10^-6/°C) is attached at the ends to unyielding supports. When the temperature is 30°C there is no stress in the rod. After the temperature of the rod drops to -20°C, the axial stress in the rod will be

31415. The correct sequence, in the direction of the flow of water, for installations in the hydro-power plant is

31422. If a circular shaft is subjected to a torque T and a bending moment M, the ratio of the maximum shear stress to the maximum bending stress is given by :

31425. The method of plane tabling commonly used for establishing the instrument station is a method of

31426. A fixed beam of uniform section is carrying a point load at its mid-span. If the moment of inertia of the middle half length is now reduced to half its previous value, then the fixed end moments will

31427. The moisture content of a clayey soil is gradually decreased from a large value. What will be the correct sequence of the occurrence of the following limits ? 1. Shrinkage limit 2. Plastic limit 3. Liquid limit Select the correct answer from the codes given below :

31428. For the movement of vehicles at an intersection of two roads without any interference, which type of grade separation is generally preferred ?

31429.
Which one of the following stresses is independent of yield stress as a permissible stress for steel members ?

31431. If the line of sight between two stations A and B, on sea, and 80 km apart, makes tangent at point A, then the minimum elevation of the signal required at B (considering the coefficient of refraction "m" = 0.08 and the mean radius of earth as 6400 km) will be

31432. A simply supported beam of uniform cross-section is subjected to a maximum bending moment of 2.25 t.m. If it has rectangular cross-section with width 15 cm and depth 30 cm, then the maximum bending stress induced in the beam will be

31433. Consider the following statements regarding underreamed piles : 1. They are used in expansive soils. 2. They are of precast reinforced concrete. 3. The ratio of bulb to shaft diameters is usually 2 to 3. 4. Minimum spacing between the piles should not be less than 1.5 times the bulb diameter. Of these statements :

31435. Consider the following statements regarding a beam of uniform cross-section simply supported at its ends and carrying a concentrated load at one of its third points : 1. Its deflection under the load will be maximum. 2. The bending moment under the load will be maximum. 3. The deflection at the mid-point of the span will be maximum. 4. The slope at the nearer support will be maximum. Of these statements :

31437. Which of the following statements is/are true in relation to the term 'detention period' in a settling tank ? 1. It may be determined by introducing a dye in the inlet and timing its appearance at the outlet. 2. Greater the detention period, greater the efficiency of removal of settleable matter. 3. It is the time taken for any unit of water to pass through the settling basin. 4. It is usually more than the flow-through period. Select the correct answer using the codes given below :

31438. If a soil sample of weight 0.18 kg having a volume of 10^-4 m^3 and dry unit wt.
of 1600 kg/m^3 is mixed with 0.02 kg of water, then the water content in the sample will be

31439. With 'n' variables and 'm' fundamental dimensions in a system, which one of the following statements relating to the application of the Buckingham's Pi Theorem is incorrect ?

31440. Two sources generate noise levels of 90 dB and 94 dB respectively. The cumulative effect of these two noise levels on the human ear is

31443. A steel cable of 2 cm diameter is used to lift a load of 500π kg. Given that E = 2 x 10^6 kg/cm^2 and the length of the cable is 10 m, the elongation of the cable due to the load will be

31444. Consider the following statements regarding tensile test diagrams for carbon steel with varying carbon contents : As the carbon content increases 1. the ultimate strength of steel decreases 2. the elongation before fracture increases 3. the ductility of the metal decreases 4. the ultimate strength increases. Of these statements :

31446. The defect which develops due to uncontrolled and non-uniform loss of moisture from wood is known as which one of the following ?

31447. Consider the following statements in respect of the critical depth of flow in a prismatic rectangular channel : 1. For known specific energy, discharge is minimum. 2. For known discharge, the specific energy is minimum. Which of the statements given above is/are correct ?

31448. On the basis of the data given in the following figures, the bending moment under the load would work out to

31449. What is the innermost portion of the approach zone, which is the most critical portion from the obstruction viewpoint, called ?

31450. A circular plate 100 mm diameter is welded to another plate by means of 6 mm fillet weld. If the permissible shearing stress in the weld equals 10 kg/mm^2, then the greatest twisting moment that can be resisted by the weld will be
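Two of the numerical items above reduce to one-line calculations, which makes them handy sanity checks. The sketch below works question 31402 (the lever) by taking moments, and question 31414 (thermal stress) via sigma = E * alpha * delta_T, using only the values given in the question statements:

```python
# Question 31402: a 1200 N load hangs from a 3 m rod carried at its ends.
# Taking moments about the stronger person's end: the weaker person's
# share (400 N) times the rod length equals the load times its distance
# x from the stronger person.
load, length, weaker_share = 1200.0, 3.0, 400.0
x = weaker_share * length / load          # distance from the stronger person
print(x)                                  # 1.0 (metres)

# Question 31414: a steel rod fixed at both ends cools from 30 C to -20 C.
# The thermally induced axial (tensile) stress is sigma = E * alpha * delta_T.
E, alpha, delta_T = 200e9, 12e-6, 30 - (-20)
sigma = E * alpha * delta_T               # Pa
print(sigma / 1e6)                        # ~120 MPa
```

The lever answer (1 m from the stronger person) and the thermal stress (about 120 MPa, tensile) follow directly; the rod's length and diameter in 31414 are distractors, since constrained thermal stress is independent of both.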
Engine displacement calculator
Water-Cooled VW Performance Handbook

With the bore, stroke, and number of cylinders in your engine, you can calculate the displacement. If your bore and stroke are in inches (that is, the value for “Bore” is less than 40), the calculator will return the results in cubic inches. If the bore and stroke are in millimeters (“Bore” is 40 or more), the results will be in cubic centimeters.

For example, a four-cylinder engine with a bore of 84 mm and a stroke of 81 mm will have a displacement of 1,795.5 cc. An eight-cylinder engine with a bore of 4 inches and a stroke of 3.48 inches will have a displacement of 349.8 ci.

The formula for engine displacement in cubic inches is:

\[Displacement = \pi \times \left(\frac{Bore}{2}\right)^2 \times Stroke \times Cylinders\]

… where each of your measurements is in inches. The formula for engine displacement in cubic centimeters is:

\[Displacement = \frac{\pi \times \left(\frac{Bore}{2}\right)^2 \times Stroke \times Cylinders}{1,000}\]

… where each of your measurements is in millimeters.
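The two formulas above translate directly into code. The following Python sketch (the function name is ours) reproduces both of the page's worked examples:

```python
import math

# Engine displacement from bore, stroke, and cylinder count, per the
# formulas above. Units follow the inputs: inches in -> cubic inches out;
# millimeters in -> cubic centimeters out (the /1000 converts mm^3 to cc).

def displacement(bore, stroke, cylinders, metric=False):
    swept = math.pi * (bore / 2) ** 2 * stroke * cylinders
    return swept / 1000 if metric else swept

print(round(displacement(84, 81, 4, metric=True), 1))  # 1795.5 cc
print(round(displacement(4, 3.48, 8), 1))              # 349.8 ci
```

Note that the single /1000 handles the unit change because a cubic centimeter is exactly 1000 cubic millimeters; the cubic-inch formula needs no conversion factor at all.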
Weight for a Few Minutes – Round and Round We Go

by EndPoliticians | May 10, 2023 | Blog, Writing

Extra, Extra? Read All About It!

Wikipedia lists the weight of the Roman pound (libra) at 328.9 grams, and the Roman ounce (uncia) at 27.4 grams. Encyclopedia Britannica also tabulates the Roman pound as weighing in at 328.9 grams. Academic checked calculations then, with zero room for error? Heck, the top search engine results that The Machine (system) goes out of its way to make people form their reality around say it is true in doubly fashion, so it has to be, right!? Besides, way more than 20 media outlets (the system) all claimed simultaneously that Saddam Hussein harboured “weapons of mass destruction,” so the butchering and ultimate destruction of Iraq and its innocent citizenry was akin to baptizing a newborn baby. Bring on the holy-water boarding!

That wacky academic consensus. Could it possibly be non-sensus? Well, CO2 is the only input driving climate science in this loss-of-all-sight age we currently live within, but definitely not on top of. Officially year negative 3. Oh, those “professionals.” Textbook examples of being educated to the infinite point of nothingness? Fourword: welcome to the system.

Well, to be fair, Wikipedia does explicitly state, “Modern estimates of the libra range from 322 to 329 g (11.4 to 11.6 oz) with 5076 grains or 328.9 g (11.60 oz) an accepted figure.” Fair enough, chaps and lasses. A 7 gram range then, an oh so small, yet simultaneously large margin of discrepancy. Certainly people right until yesterday have lost their lives for much less than a 7 gram discrepancy, regardless of what drug was being peddled. One must assume that Britannica’s encyclopedists are too busy counting “royal” gerbil habitrail volumes to make an attempt at a mathematical inquiry that does not finish up Chuck’s alley of liking.
Hey, I just want to know how many grams were in an ancient antiquity uncia and libra of liberating liberty cap mushrooms? Nobody likes to be shorted, after all. Though if the academics are correct, those librae were possibly 4.9 grams heavy.

First, let this Virgo regurgitate the fact that the Roman libra of yore is the reason the pound of today is abbreviated as lb. Here is the origin of the word pound, which derives from the Latin word pondō. Speaking of regurgitating: 324.00 grams. I had read elsewhere that 324.00 grams is what a Roman libra weighed. Not only so, but that it was divided into 12.00 unciae. Simple math (324 ÷ 12 = 27) would dictate that the Roman uncia weighed 27.00 grams. Clearly the equation is much too complex for academic publication, so they butchered it like an Iraqi baby in need of “democracy?”

Wikipedia lists its source for the classification of a Roman pound as: Smith, Sir William; Charles Anthon (1851) A new classical dictionary of Greek and Roman biography, mythology, and geography partly based upon the Dictionary of Greek and Roman biography and mythology. Though in reality, I know not how “Sir” Smith derived his weight structure of the Roman libra & accompanying uncia. If one were to harbour a guess, it would be from tabulating surviving ancient Roman weights and references in Roman writings, perhaps? And in reality, as this paper states, ancient weight systems are highly unreliable, with a coefficient of variation around ∼5 to 6%. As well, the weight scales did change at different times of the Roman Empire. It sure does seem, though, according to early (7th century BC) bronze ingot castings, that the measurement and weight systems the Romans were using tried very hard to produce items that were evenly divisible in length, height, and weight.
Below, I will skim the surface of the depths that could be gone into, strictly out of an acquired interest, and possibly striking a spark within others. Below is the Wikipedia legend of the divisions of the libra: uncia – 1/12, sescuncia – 1/8, sextans – 1/6, quadrans – 1/4, triens – 1/3, quincunx – 5/12, semis – 1/2, septunx – 7/12, bes – 2/3, dodrans – 3/4, dextans – 5/6, deunx – 11/12.

Owning a calculator and possessing curiosity into the contrarian reason for the 324.00 gram libra, the 27.00 gram uncia, and the subsequent divisional characteristics of the sextans, quincunx, septunx… as being possibly based off the even number of 324.00, might the numbers on the Wikipedia chart below have a chance of being inaccurate to history and to the relics left behind to prove such in detail? The simple fractional libra calculation that I will display below the chart basically alleviated all of the trailing decimal points when adding a thousandth, ten-thousandth, or hundred-thousandth to the computation.

The 2013 book Italian Cast Coinage, by Italo Vecchi, clearly comes up with the same conclusions, though many years earlier. And no doubt other people have reached the same conclusion. Though nobody really elaborates as to why 324.00 grams becomes the possible benchmark for the Roman libra. I suspect that 324 and the number of degrees in a circle, 360, along with the accompanying divisional fractional equations and sums, might have had an integral part in it.

As an example: a derivative of 360 such as 36, if subtracted from 360, comes to the round number of 324 that is posited by contrarian historians to be the precise number of grams in a Roman libra. Subtract 36 from 324 again and one gets 288 (the scripulum being a 288th of a libra). Minus 36 from 288 a few more times and one gets 216 (a bessis – 2/3rds of a libra), which repeats down to another libra fraction, 108 (triens – 1/3rd of a libra).
As well, 36 is a derivative of 12, which is the exact number of unciae purported to be in a Roman libra. Again, the circle probably had a major role in ancient units of measure, as did multiples of ten: 36 & 360, 27 & 270, 18 & 180…

Now if one divides 360 by 288 it comes to 1.25, or a derivative of 3.75, 7.5, 11.25, 15, 22.5… which very often divide the numeric values (size and weight) of Roman pre-coin ingots (aes formatum) into numbers lacking any decimal points and that are derivatives of 12. Example from the aes formatum pictured below: 180 mm ÷ 1.25 = 144, or half a scripula (a 288th of a libra), or 2700 g ÷ 3.75 = 720, the number of degrees in a circle times two. The 1.25 fractional division works on many cast Roman ingots, large and small, including aes signatum like such. (Sometimes it is necessary to adjust the dimensions by a mm or three due to an uneven casting pour.) Many of the complete ingots in the Italian Cast Coinage book are divisible by size and weight into the 1.25 fractional measurements.

Mess around on a calculator until your eyes cross and one comes up with things like 180 ÷ 1.25 = 144, which is the same answer as 12 x 12 = 144. Also, disputing the Wikipedia & Britannica numerical fractions for the number of grams in an uncia comes into play when the contrarian uncia of 27 grams is divided into a scrupulum (1/24 of an uncia): the revised number reads out as 1.125 (not 1.14 as per Wikipedia). Then when one divides 324 (Roman libra) by 288 (Roman scripula) one gets 1.125 as well, which in reality is just multiples of 12.

And all this after reading about a 324 gram libra and 27 gram uncia that differed from academic consensus, then getting drunk one night and stumbling upon the weight and dimensions of the cast Roman ingot pictured below, which piqued my curiosity after having come across a Spanish article about Roman pre-coins and the number 3.75.
This non-mathematician has input so many numbers into my calculator in the past little while that I tire of it greatly. Heck, and the ancients did this all without calculators or the modern numeral system we have today. Quite interestingly, 8.6 x 3.14 = 27.004. The number of grams in a Roman uncia? 27 x 10 = 3/4’s of a circle in degree form. 8.6 (pi as 8.6 years in days – 3139), it will add to calculator sickness. Trust me.

Uncial divisions of the libra (Roman unit – equal to – metric equivalent – imperial equivalent – description):

uncia (Roman ounce) – 1/12 libra – 27.4 g – 0.967 oz – lit. “a twelfth”
sescuncia or sescunx – 1/8 libra – 41.1 g – 1.45 oz – lit. “one and one-half twelfths”
sextans – 1/6 libra – 54.8 g – 1.93 oz – lit. “a sixth”
quadrans (also teruncius, lit. “triple twelfth”) – 1/4 libra – 82.2 g – 2.90 oz – lit. “a fourth”
triens – 1/3 libra – 109.6 g – 3.87 oz – lit. “a third”
quincunx – 5/12 libra – 137.0 g – 4.83 oz – lit. “five-twelfths”
semis or semissis – 1/2 libra – 164.5 g – 5.80 oz – lit. “a half”
septunx – 7/12 libra – 191.9 g – 6.77 oz – lit. “seven-twelfths”
bes or bessis – 2/3 libra – 219.3 g – 7.74 oz – lit. “two [parts] of an as”
dodrans – 3/4 libra – 246.7 g – 8.70 oz – lit. “less a fourth”
dextans – 5/6 libra – 274.1 g – 9.67 oz – lit. “less a sixth”
deunx – 11/12 libra – 301.5 g – 10.64 oz – lit. “less a twelfth”
libra (Roman pound) – 328.9 g (0.725 lb) – 11.60 oz – lit. “balance”

Except where noted, based on Smith (1851). Metric equivalents are approximate, converted at 1 libra = 328.9 g.

Contrarian Revision of the Roman Libra to a 27 Gram Uncia

Uncia – 1/12 of a Libra
1 ÷ 12 = .083333333333. .0833333333 x 324 = 26.9999999892. Close enough to call 27 g?

Sescuncia – 1/8 of a Libra
1 ÷ 8 = .125. .125 x 324 = 40.5. The only fraction of a libra to have a decimal point while still being a derivative of 27: 27 + 13.5 = 40.5 g.

Sextans – 1/6 of a Libra
1 ÷ 6 = .1666666666. .1666666666 x 324 = 53.9999999784. Close enough to call 54 g? 27 x 2 = 54.

Quadrans – 1/4 of a Libra
1 ÷ 4 = .25. .25 x 324 = 81 g. 27 x 3 = 81.

Triens – 1/3 of a Libra
1 ÷ 3 = .3333333333.
.3333333333 x 324 = 107.9999999892. Close enough to call 108g? 27 x 4 = 108.
Quincunx – 5/12 of a Libra: 5 ÷ 12 = .4166666666. .4166666666 x 324 = 134.9999999784. Close enough to call 135g? 5 x 27 = 135.
Semis – 1/2 of a Libra: 1 ÷ 2 = .5. .5 x 324 = 162g. 6 x 27 = 162.
Septunx – 7/12 of a Libra: 7 ÷ 12 = .5833333333. .5833333333 x 324 = 188.9999999892g. Close enough to call 189g? 27 x 7 = 189.
Bes – 2/3 of a Libra: 2 ÷ 3 = .6666666666. .6666666666 x 324 = 215.9999999784g. Close enough to call 216g? 27 x 8 = 216.
Dodrans – 3/4 of a Libra: 3 ÷ 4 = .75. .75 x 324 = 243g. 27 x 9 = 243g.
Dextans – 5/6 of a Libra: 5 ÷ 6 = .8333333333. .8333333333 x 324 = 269.9999999892g. Close enough to call 270g? 27 x 10 = 270.
Deunx – 11/12 of a Libra: 11 ÷ 12 = .9166666666. .9166666666 x 324 = 296.9999999784g. Close enough to call 297g? 27 x 11 = 297g.
Libra – 12/12: 1 ÷ 1 = 1. 1 x 324 = 324. Close enough to call 324g? 27 x 12 = 324.

Rounding it Out

If you made it this far, congratulations! I am beginning to think that the term, number of the beast – 666, was coined by an observer watching ancient mathematicians dissect the numbers of the circle and come up with a method to arrive at units of weight and the accompanying monetary system. 333333… and 666666… sure seem to appear a lot when dissecting 360 and 324, along with other derivatives along the way. My neck and back muscles are wound tight after molesting a calculator in my recent spare time, and I have been dreaming about numbers along the way. Perhaps one day I will be able to go into more detail about Roman cast ingots of antiquity and other measurements not determined by others.
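The contrarian arithmetic above is easy to check mechanically. A small sketch (the 324 g libra and 27 g uncia are this post's contrarian figures, not the consensus values):

```python
from fractions import Fraction

LIBRA_G = 324  # the contrarian libra, in grams

# Uncial fractions from the table above (septunx is 7/12).
units = {
    "uncia": Fraction(1, 12), "sescuncia": Fraction(1, 8),
    "sextans": Fraction(1, 6), "quadrans": Fraction(1, 4),
    "triens": Fraction(1, 3), "quincunx": Fraction(5, 12),
    "semis": Fraction(1, 2), "septunx": Fraction(7, 12),
    "bes": Fraction(2, 3), "dodrans": Fraction(3, 4),
    "dextans": Fraction(5, 6), "deunx": Fraction(11, 12),
    "libra": Fraction(1),
}

for name, frac in units.items():
    grams = frac * LIBRA_G
    # Exact fractions avoid the long strings of 9s from decimal rounding.
    print(f"{name:10s} {str(grams):>6s} g  ({float(grams) / 27:g} x 27)")
```

With exact fractions, every unit except the sescuncia (40.5 g = 1.5 x 27) lands on a whole multiple of 27 g, which is the whole point of the 324 g revision.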
Due Wednesday March 27: You flip a fair coin 10,000 times. What can be said about the length of your longest run, i.e., the largest k such that k heads in a row or k tails in a row appear in your outcome sequence? You may use math or do a simulation.

Due Friday March 29: On average, how many rolls of a die does it take to see all six numbers? You may do the math, or do simulations and report the results.

Due Monday April 1: 64 evenly matched teams are randomly bracketed and engaged in an elimination tournament. What is the probability that two particular teams (say, Dartmouth and Davidson) meet? You may do the math, or do simulations and report the results.

Due Wednesday April 3: On average, how many rolls of a die does it take to get a 6, given that you don't roll any odd numbers en route? You may do the math, or do simulations (by rejection sampling!) and report the results.

Due Friday April 5: Domino City occupies 6 islands in the Domino River, arranged like the pips on a die (or a domino). Islands A, B, and C, near the west bank, are connected by bridges to the west bank; D, E, and F to the east bank. Additional bridges connect the islands in a grid pattern: A to B, B to C, D to E, E to F, A to D, B to E, and C to F; 13 bridges in all. An earthquake is anticipated and the seismologists and engineers agree that each bridge will independently(!) collapse with probability 50%. What is the probability that, after the earthquake, it will still be possible to cross the river on the remaining bridges? (You may write a program to compute this, or to estimate it by random sampling; or, of course, you may try to solve it with mathematical reasoning.)

Reminder: Homeworks are due on paper at the beginning of each class. If you submit by email it must be before class, and should include the reason why you can't be in class. And don't forget the names of your collaborators!

Due Monday April 8: Consider the following two-part experiment.
First, choose a number p uniformly at random from the unit interval [0,1], and manufacture a coin whose probability of flipping "heads" is exactly p. Second, flip your coin 100 times. What is the probability that this experiment results in precisely 50 heads? (Note that your answer will not depend on p, since choosing p is part of the experiment.) You may use math or run your own computer experiments.

Due Wednesday April 10: Prove that if p < 1/3, then the probability of bond percolation on the plane grid is zero. You may want to use the fact that if the open cluster containing the origin is infinite, then for every n there is an open path of length n starting at the origin. Note that if the probability that the open cluster containing 0 is infinite is zero, then so is the probability that the open cluster at any other point (x,y) is infinite; and thus the probability of percolation is bounded by the sum of 0 over all (x,y), which is zero.

Due Friday April 12: Let T[k,n] be the k-branching tree of depth n (which has a root r and k^n leaves). We will do bond percolation on T[k,n], meaning that each edge (bond) is open independently with some fixed probability p. Let P[k,n] be the probability that there is an open path from r to some leaf (any leaf). Express P[k,n] in terms of p and P[k,n-1].

Due Wednesday April 17: What fraction of a large box in the plane can be covered by disjoint unit-radius disks?

Due Friday April 19: Use the last homework to help you prove that any 10 points on the plane can be covered by disjoint unit disks.

Due Monday April 22: Compute the expected number of monochromatic sets (cliques or independent sets) of size k in G[n,1/2]. What can you conclude from your computation?

Due Wednesday April 24: We know that the expected number of monotone subsequences of length k in a uniformly random permutation of order n is (n choose k) times 2/k!. For given large n, approximately what value of k will result in this expectation being equal to 1?
What can you conclude from this about the expected length of the longest monotone subsequence in a random permutation of order n?

Due Friday April 26: Fix a positive integer k and a probability p strictly between 0 and 1. Show that as n approaches infinity, the probability that G[n,p] satisfies the kth Alice's Restaurant axiom φ[k] approaches 1. The kth Alice's Restaurant axiom φ[k] says that for any vertices u[1], u[2], . . . , u[k], and v[1], v[2], . . . , v[k], where no u[i] is equal to any v[j], there's a vertex z that is adjacent to all the u[i]'s and none of the v[j]'s.

Due Monday April 29: Express as a first-order logic sentence the property that a graph is regular of degree 3, i.e., every vertex is adjacent to exactly three other vertices.

Due Wednesday May 1: Find a graph property P (that is, a set of graphs that is closed under isomorphism) such that the limit as n approaches infinity of the probability that G[n,1/2] satisfies P is a number strictly between 0 and 1.

Due Friday May 3: Prove or argue or demonstrate experimentally that if V and W are exponential random variables with means a and b, respectively, then the probability that V > W is a/(a+b). In other words, an "exponential" light bulb with expected lifetime a will outlast one of expected lifetime b with probability a/(a+b).

Due Wednesday May 8: Find two probability distributions such that if X has the first distribution and Y the second, and X and Y are independent, then X > Y with probability greater than 1/2, even though the expectation of X is less than the expectation of Y.

No homework due Friday May 10: But you might want to refresh (or charge) your memory by glancing at Chapter 11 of Grinstead and Snell. Markov chains!
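For the May 3 problem, the claimed probability is easy to sanity-check by simulation before proving it (a sketch; the means 3 and 1 are arbitrary choices):

```python
import random

def prob_V_outlasts_W(a, b, trials=200_000, seed=1):
    """Estimate P(V > W) for independent exponentials with means a and b."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        v = rng.expovariate(1 / a)  # exponential with mean a
        w = rng.expovariate(1 / b)  # exponential with mean b
        if v > w:
            wins += 1
    return wins / trials

# The claim predicts a / (a + b) = 3/4 for a = 3, b = 1.
print(prob_V_outlasts_W(3.0, 1.0))  # close to 0.75
```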
Due Monday May 13: Estimate: (a) the expected number of pairs of multiple edges that will appear, for large n, with 3 stubs per vertex, when we pair up stubs uniformly at random; (b) the probability that the resulting graph G will have no multiple edges; and (c) the probability that G will have neither multiple edges nor loops. Your estimate can be derived either from math or from simulations.

Due Wednesday May 15: Let P be (the transition matrix of) the Markov chain whose state space is the set of permutations of the numbers from 1 to 52, and whose transitions are as follows. Let (n[1], n[2], . . . , n[52]) be the current permutation. Choose i uniformly at random between 1 and 51, and switch n[i] with n[i+1]. (This is called "random adjacent transposition.") Answer, with reasoning: Is P irreducible? Is P ergodic?

Due Friday May 17: Let the graph G be a path of length n, on vertices 0, 1, 2, . . ., n. What is the maximum over all starting vertices k of the expected time for a simple random walk on G, starting at k, to hit every vertex of G?

Due Monday May 20: Prove that our Markov chain on Δ+2-colorings of a graph of maximum degree Δ is irreducible; i.e., show that for any two proper q-colorings of a finite graph G, where q=Δ+2, you can get from one coloring to the other by changing colors one vertex at a time. Note that every vertex always has at least two colors available to it, that is, unused by any neighbor.

Due Wednesday May 22: A stream n[1], . . . , n[1,000,000] of names is input to a computer that has only 1,000 registers to store names in. The following algorithm is executed: d ← 0, L ← emptyset. For i = 1, . . . , 1,000,000 do: L ← L \ {n[i]}, then add n[i] to the set L with probability 1/2^d. If |L| = 1000 do: Flip a coin for each element of L and remove it if "tails"; then, set d ← d+1. Output 2^d |L|. Question 1: What is this algorithm trying to approximate? Question 2: Why does it (probably) succeed?
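The May 22 algorithm can be transcribed almost line for line. A sketch with a synthetic stream (meant to suggest the answer to Question 1 by experiment, not to prove anything):

```python
import random

def estimate_distinct(stream, capacity=1000, seed=7):
    """The streaming algorithm above: d <- 0, L <- emptyset, etc.
    Uses at most `capacity` registers; outputs 2^d |L|."""
    rng = random.Random(seed)
    d = 0
    L = set()
    for name in stream:
        L.discard(name)                     # L <- L \ {n_i}
        if rng.random() < 0.5 ** d:         # add with probability 1/2^d
            L.add(name)
        if len(L) == capacity:              # thin the sample, raise d
            L = {x for x in L if rng.random() < 0.5}
            d += 1
    return (2 ** d) * len(L)

# A stream of 1,000,000 names drawn from 50,000 distinct values:
src = random.Random(0)
stream = (f"name{src.randrange(50_000)}" for _ in range(1_000_000))
print(estimate_distinct(stream))  # typically lands near 50,000
```

The experiment suggests the output approximates the number of *distinct* names in the stream, which is what Question 2 then asks you to justify.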
Due Friday May 24: Write a program to compute the entropy of the binomial distribution B(n,1/2) and run it for various n, comparing your answer to (1/2)log[2] n. Or: Let k be a divisor of n, and calculate the entropy of the probability distribution on n values, the first n/k of which have probability (k-1)/n each, the rest having probability 1/((k-1)n) each.

Due Wednesday May 29: Calculate the homomorphism density of the path of length 2 (three vertices) in G[n,p].
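For the May 24 problem, the first option is only a few lines (a sketch; exact summation is fast at these sizes):

```python
from math import comb, log2

def binomial_entropy(n):
    """Entropy (in bits) of the Binomial(n, 1/2) distribution."""
    h = 0.0
    for k in range(n + 1):
        p = comb(n, k) / 2 ** n
        h -= p * log2(p)
    return h

# Compare against (1/2) log2 n for various n.
for n in (10, 100, 1000):
    print(n, binomial_entropy(n), 0.5 * log2(n))
```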
samplesize.bin {Hmisc} R Documentation

Sample Size for 2-sample Binomial

Computes sample size(s) for the 2-sample binomial problem given vector or scalar probabilities in the two groups.

Usage

samplesize.bin(alpha, beta, pit, pic, rho=0.5)

Arguments

alpha: scalar ONE-SIDED test size, or two-sided size/2
beta: scalar or vector of powers
pit: hypothesized treatment probability of success
pic: hypothesized control probability of success
rho: proportion of the sample devoted to treated group (0 < rho < 1)

Value

TOTAL sample size(s)

Author

Rick Chappell, Dept. of Statistics and Human Oncology, University of Wisconsin at Madison

Examples

alpha <- .05
beta <- c(.70,.80,.90,.95)
# N1 is a matrix of total sample sizes whose
# rows vary by hypothesized treatment success probability and
# columns vary by power
# See Meinert's book for formulae.
N1 <- samplesize.bin(alpha, beta, pit=.55, pic=.5)
N1 <- rbind(N1, samplesize.bin(alpha, beta, pit=.60, pic=.5))
N1 <- rbind(N1, samplesize.bin(alpha, beta, pit=.65, pic=.5))
N1 <- rbind(N1, samplesize.bin(alpha, beta, pit=.70, pic=.5))
attr(N1,"dimnames") <- NULL
# Accounting for 5% noncompliance in the treated group
inflation <- (1/.95)^2

version 5.1-3
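The help page cites Meinert for the formulae. A rough Python analogue of the standard two-proportion sample-size calculation with unequal allocation (an illustration only; it is not guaranteed to match samplesize.bin's output digit for digit):

```python
from statistics import NormalDist

def samplesize_bin(alpha, beta, pit, pic, rho=0.5):
    """Approximate TOTAL sample size for a 2-sample binomial test.
    alpha: one-sided test size; beta: POWER (as in the R doc);
    rho: fraction of the sample allocated to the treated group."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(beta)
    pbar = rho * pit + (1 - rho) * pic          # pooled success probability
    num = (za * (pbar * (1 - pbar) * (1 / rho + 1 / (1 - rho))) ** 0.5
           + zb * (pit * (1 - pit) / rho + pic * (1 - pic) / (1 - rho)) ** 0.5) ** 2
    return num / (pit - pic) ** 2

print(round(samplesize_bin(0.05, 0.80, pit=0.60, pic=0.50)))  # total N, ~610 here
```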
Separating Internal Variability from the Externally Forced Climate Response 1. Introduction Internally generated natural variability is an important part of the climate system. Although the longest-term, largest-scale climate trends are dominated by external forcing, internal variability plays a vital role at shorter time scales and at smaller spatial scales. An example is the recent slowdown in global surface warming, which has led to heightened scrutiny of the role played by both forced and internal climate variability on decadal to multidecadal time scales. Among the outstanding underlying issues is how best to separate internal variability from the forced climate signal. For the actual climate, we have only one realization of the internal variability and it is nontrivial to extract it from the available data. Schurer et al. (2013) used proxy reconstructions and model simulations to estimate the contributions of internal variability and external forcing over the last millennium. Estimating the forced signal during the historical era is complicated by the short length of the observational record and the challenge this creates in isolating low-frequency, multidecadal, and longer-term internal variability (Frankcombe et al. 2015). In addition, the dominant influence on climate in the most recent period is anthropogenic forcing, including greenhouse gases (GHGs), tropospheric aerosols, and ozone-depleting substances, each of which must separately be taken into account. One recent body of research, for example, has sought to ascertain how much of the mid-twentieth-century temperature variability is due to anthropogenic aerosols and how much is due to internal variability (Booth et al. 2012; Zhang et al. 2013). Mann et al. (2014) used observations to investigate the effect of biases caused by the incorrect partition of observed Northern Hemisphere temperatures into forced and internal components. Steinman et al. 
(2015) extended that work to study the relative contributions of the North Atlantic and North Pacific to the observed internal variability of the Northern Hemisphere. In this paper we compare various methods for separating the forced signal from the background of internal variability and examine the biases that may result from the different methods. We focus on the specific example of multidecadal North Atlantic sea surface temperature (SST) variability, but the results have broader implications for the problem of separating forced and internal climate variability.

Enhanced variability on multidecadal time scales centered in the North Atlantic has been found in modern observational climate data (Folland et al. 1984, 1986; Kushnir 1994; Mann and Park 1994; Delworth and Mann 2000) and in long-term climate proxy data (e.g., Mann et al. 1995; Delworth and Mann 2000). Such variability is also generated in a range of models from idealized ocean models to full GCMs (Delworth et al. 1993, 1997; Huck et al. 1999; Knight et al. 2005; Parker et al. 2007; Ting et al. 2011; Zhang and Wang 2013). The variability has been named the Atlantic multidecadal oscillation (AMO; Kerr 2000) or, alternatively, Atlantic multidecadal variability (AMV), since it is unclear whether it truly constitutes a narrowband oscillatory climate signal. In this study, we do not attempt to address the mechanisms causing the variability; we instead focus on North Atlantic SST variability as a case study in the application of competing statistical approaches to separating internal and external variability. The rest of this paper is divided as follows: We first describe the data used in the study (section 2) and then describe the various competing methods for separating forced and internal variability (section 3). The methods are tested on synthetic data, where the true internal and external signals are known (section 4), and then applied to CMIP5 historical simulations (section 5) and observational data (section 6).
We then discuss the results of our analyses (section 7) and finally summarize with our conclusions (section 8).

2. Data

One often-used measure of AMV is the smoothed and linearly detrended average of North Atlantic SSTs (e.g., Sutton and Hodson 2003). We calculate an index of North Atlantic variability by averaging SST over the region 0°–60°N, 5°–75°W but do not detrend the series, for reasons that will become clear later in the discussion. We will call this raw index the North Atlantic SST index (NASSTI). Estimates of the internal variability obtained from the NASSTI using the methods tested here are referred to as Atlantic multidecadal oscillation indices (AMOI), since they are approximations of AMO/AMV variability. We use the historical runs from phase 5 of the Coupled Model Intercomparison Project (CMIP5; Taylor et al. 2012), employing the 145-yr (1861–2005) interval spanned by nearly all ensemble members. Simulations that do not span the full interval are excluded, as are models in which the raw NASSTI time series does not display significant multidecadal variability. Two idealized historical scenarios—Hist[GHG] (in which only well-mixed greenhouse gas forcing is applied) and Hist[Nat] (natural forcings only, including solar variability and volcanoes)—are also used. The CMIP5 models used are listed in Table 1. For comparison to observations we use SST from HadISST (Rayner et al. 2003) between 1870 and 2005. Smoothed time series are calculated using a 40-yr adaptive low-pass filter (Mann 2008).

Table 1. CMIP5 models used. Some models list more than one control run length, indicating that various sections of control runs were available. (Expansions of acronyms are available at http://www.ametsoc.org/.)

3. Methods

Of the many methods used to separate the forced signal and the internal variability, the most common is the "detrended" approach, where a linear trend is subtracted from the signal (e.g., Zhang and Wang 2013).
This method has the advantage of being extremely simple and, in the absence of any better estimates of the forced signal, may also be useful as a first approximation. The external forcing is not linear in time, however. For this reason, the detrending procedure has been shown to bias the amplitude and phase of the estimated internal variability (Mann and Emanuel 2006; Mann et al. 2014). Biases in the estimated phase will in turn bias estimates of AMO periodicity.

An alternative method, referred to as the "differenced" method, employs a large ensemble of climate simulations. Each individual ensemble member responds to the external forcing applied to the model, but it also contains a realization of internal variability. If the ensemble members are initialized so as to be independent of each other, then they will each contain a different realization of the internal variability. Averaging over a large number of these ensemble members will average out the internal variability so that the signal remaining is the model response to the external forcing. Subtracting this model-mean response from each ensemble member gives the internal variability. This method has the advantage that it does not make prior assumptions about the model response to external forcing. The method does, however, rely on each member of the ensemble having the same response to the external forcing, which is not necessarily the case. The strength of a model's response to external forcing is represented by the equilibrium climate sensitivity (ECS), which is the equilibrium change in annual global-mean surface temperature after a doubling of the atmospheric CO[2] concentration relative to preindustrial levels. The CMIP5 models have equilibrium climate sensitivities between 2.1° and 4.7°C (Flato et al. 2013).
Even for an ensemble of realizations from a single climate model, the estimates of climate sensitivity derived from a single ensemble member may differ from the true model sensitivity because of the noise introduced by internal variability (Huber et al. 2014). Furthermore, in the case of a multimodel ensemble, each individual model will have a different climate sensitivity altogether. The multimodel mean (MMM) represents an average across models that both overestimate (i.e., high-sensitivity models) and underestimate (i.e., low-sensitivity models) the forced response. The MMM will therefore overestimate the magnitude of the forced response for models with low sensitivity and underestimate it for models with high sensitivity. The differenced method thus potentially introduces a bias when used to estimate the internal variability of the various models. Although small during the earlier part of the historical record when the amplitude of the forced signal is modest, the bias becomes significant toward the end of the historical period and increasingly dominates over the signal of internal variability in any future projections.

One method to mitigate this bias, the "scaling" method, is described by Steinman et al. (2015). In this method the multimodel mean of the CMIP5 historical all-forcing ensemble is taken to be the best estimate of the climate response to external forcing and is then scaled to match the climate sensitivity of each individual ensemble member. For the test case described here the multimodel mean of the NASSTI is linearly regressed onto the NASSTI time series of each ensemble member from the CMIP5 historical all-forcing ensemble to obtain an estimate of the forced signal:

F[est](t) = a + β MMM(t),

where a is a constant, β is the scaling factor, and MMM is the multimodel mean of the NASSTI from the CMIP5 all-forcing ensemble. The regression coefficient β is a measure of the relative climate sensitivity of each ensemble member compared to MMM and is thus model dependent.
The component of the time series of each ensemble member not explained by the scaled multimodel mean is taken as an estimate of the internal variability in the North Atlantic and is recovered by subtracting F[est], the estimate of each ensemble member's forced response, from T, the time series of each ensemble member from the historical simulation:

N[est](t) = T(t) − F[est](t).

This method, which we term the "single factor scaling" method, results in much better estimation of phase and amplitude of low-frequency variability than the detrending and differencing methods (Steinman et al. 2015). It, too, however, is not completely free of potential biases. Consider that external forcing during the historical period has contributions from both greenhouse gases and aerosols (both anthropogenic and volcanic) and that different models may have different amplitude responses to the different types of forcing. Indeed, different models may have different specifications and implementations of the various forcing components. The single factor scaling method, however, uses a single regression coefficient to account for all external forcing. While the method performs well over the historical period (Steinman et al. 2015), application of the method to future projections, which contain an increasingly large contribution from one particular forcing (anthropogenic greenhouse gases), could result in biases at the ends of the time series.

In addition to the single factor scaling method we test two modified scaling methods where two or three scaling factors are used. While in the single factor scaling method the (single) scaling factor represents the combined model response to all external forcings, in the modified scaling approaches different scaling factors are used to represent the model responses to different types of external forcing—in effect the different efficacies of the different forcings.
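Either way, each scaling variant reduces to a least squares fit followed by a subtraction. A sketch of the single factor case with synthetic stand-ins (the series and numbers below are illustrative, not the paper's data):

```python
import numpy as np

def single_factor_scaling(T, MMM):
    """OLS fit T(t) ~ a + b*MMM(t); return (forced estimate, residual)."""
    A = np.column_stack([np.ones_like(MMM), MMM])
    (a, b), *_ = np.linalg.lstsq(A, T, rcond=None)
    F_est = a + b * MMM   # scaled multimodel mean = forced-response estimate
    return F_est, T - F_est

# Toy member: 1.5x the "MMM" sensitivity plus noise standing in for
# internal variability.
rng = np.random.default_rng(0)
t = np.arange(145.0)
MMM = 0.005 * t
noise = rng.normal(0.0, 0.1, t.size)
T = 0.2 + 1.5 * MMM + noise
F_est, N_est = single_factor_scaling(T, MMM)
print(np.corrcoef(N_est, noise)[0, 1])  # close to 1: residual recovers the noise
```

The fitted slope plays the role of the scaling factor, and the residual is the internal-variability estimate.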
For the modified scaling method using two scaling factors, estimates of the two factors for each time series are calculated by multilinear regression on the NASSTI time series T of each ensemble member:

F[est](t) = a + γ[GHG] MMM[GHG](t) + γ[Nat] MMM[Nat](t),

where a is a constant and γ[GHG] and γ[Nat] are the estimated scaling factors. The first scaling factor represents the model response to GHG forcing, while the second represents the model response to natural forcing, such as volcanic aerosols and solar variability. The estimates of the GHG and natural responses are obtained from the multimodel means of the Hist[GHG] and Hist[Nat] simulations of CMIP5 (MMM[GHG] and MMM[Nat], respectively). The resulting estimate of the forced response is used to recover an estimate of the internal variability as follows:

N[est](t) = T(t) − F[est](t).

In addition to GHG and natural forcings there are also other forcings included in the all-forcing experiments that should be taken into account (anthropogenic aerosols and ozone being the most important in the context of North Atlantic multidecadal variability), but these cannot be robustly included because of the limited number of ensemble members that performed these individual forcing experiments. If sufficient simulations of the various other forcings were available, then scaling factors representing them could be included, in addition to the scaling factors representing GHG and natural forcings. As an estimate of these unrepresented forcings we include a third regressor, MMM[rest], which is the multimodel mean of the variability that remains unexplained after regressing MMM[GHG] and MMM[Nat] on the all-forcing MMM. The three factor scaling method is calculated as follows:

F[est](t) = a + δ[GHG] MMM[GHG](t) + δ[Nat] MMM[Nat](t) + δ[rest] MMM[rest](t),

where a is a constant and δ[GHG], δ[Nat], and δ[rest] are the estimated scaling factors for GHG forcing, natural forcing, and residual forcing, respectively. The various MMMs are shown in Fig. 1.
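The two and three factor variants extend the same least squares fit with one design-matrix column per forcing series. A sketch with synthetic stand-ins for the MMMs (all shapes and coefficients below are invented for illustration):

```python
import numpy as np

def multi_factor_scaling(T, regressors):
    """OLS fit of T on an intercept plus each regressor; return
    (scaling factors, residual = internal-variability estimate)."""
    A = np.column_stack([np.ones(len(T))] + list(regressors))
    coef, *_ = np.linalg.lstsq(A, T, rcond=None)
    return coef[1:], T - A @ coef   # drop the intercept from the factors

# Toy stand-ins for MMM_GHG, MMM_Nat, MMM_rest:
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 145)
mmm_ghg, mmm_nat, mmm_rest = t ** 2, np.sin(12 * t), np.sin(3 * t)
T = 1.2 * mmm_ghg + 0.6 * mmm_nat + 0.9 * mmm_rest + rng.normal(0, 0.05, t.size)
factors, resid = multi_factor_scaling(T, [mmm_ghg, mmm_nat, mmm_rest])
print(np.round(factors, 2))  # near [1.2, 0.6, 0.9]
```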
Note that forcings included in the all-forcing historical simulations but not in Hist[GHG] or Hist[Nat] may have, in addition to the forced signal represented by MMM[rest], additional projections onto MMM[GHG] and MMM[Nat], such that δ[GHG] and δ[Nat] (and indeed γ[GHG] and γ[Nat] in the two scaling factor method) represent sensitivities to combinations of forcings.

Fig. 1. Multimodel means of the NASSTI in the all-forcing ensemble (black), the natural forcings ensemble (green), the GHG forcing ensemble (red), and the remainder after natural forcing and GHG forcing are removed from the all-forcing ensemble (magenta). Annual data are shown by the dashed lines while data smoothed with a 40-yr low-pass filter are shown by the solid lines. Upward (downward) pointing triangles on the x axis indicate the position of maxima (minima) of the four smoothed time series. Citation: Journal of Climate 28, 20; 10.1175/JCLI-D-15-0069.1

These scaling methods are analogous to the methods of optimal fingerprinting used in detection and attribution studies (Allen and Tett 1999; Allen and Stott 2003). The difference here is that we use a single time series rather than spatial patterns and focus on extracting the natural variability rather than the forced signal. The three scaling methods were tested with both ordinary least squares regression (as used by Steinman et al. 2015) and total least squares regression (Allen and Stott 2003); no significant differences were found between the two regression methods.

The multiensemble, multimodel mean of the CMIP5 historical runs is used as the estimate of the forced signal for the differenced and single factor scaling approaches. Each ensemble member from each model is given equal weight in the mean, which can lead to biasing toward models that contribute a large ensemble to the CMIP5 archive.
However, averaging the ensemble of each model to get a model mean and then averaging all the model means to get a multimodel mean, as is sometimes done to account for differing ensemble sizes, results in the internal variability of the members of large ensembles being averaged out before they can contribute to the multimodel mean. This method implicitly assumes that internal variability is negligible and, in the presence of the nonnegligible internal variability that is of interest in this study, results in a bias toward the models that contribute fewer ensemble members to the archive (since each of the few ensemble members effectively receives a larger weight in the multimodel averaging process). In choosing to calculate the forced signal as a multiensemble mean we are implicitly assuming that all the ensemble members, from all the models, are drawn from the same distribution (i.e., that all the models perform equally). The limitations of this assumption will be investigated later.

4. Analysis of the various methods using synthetic data

To test the various methods in an idealized situation where the true internal variability is known, we construct synthetic AMOI time series using the null hypothesis that the variability is due to red noise. Each synthetic time series of internal variability is a 145-yr-long time series of red noise (the same length as the CMIP5 historical runs), scaled by the average autocorrelation and amplitude of the CMIP5 historical runs. Three independent, random scaling factors (drawn from the uniform distribution between 0.2 and 2) are used—the first representing the response to GHG forcing (α[GHG]), the second the response to natural forcing (α[Nat]), and the third the response to any other forcings (α[rest]). The independence of the scaling factors is shown in section 5 to be valid for the CMIP5 models; thus we use that assumption for the synthetic time series here.
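Red noise of this kind can be generated as, e.g., an AR(1) process. A sketch (the lag-1 coefficient and amplitude below are placeholders for the CMIP5-derived values, which the text does not quote):

```python
import numpy as np

def red_noise(n=145, rho=0.6, sigma=0.1, seed=None):
    """AR(1) 'red noise': x[t] = rho*x[t-1] + eps_t, rescaled so the
    series has standard deviation sigma. rho and sigma stand in for the
    average CMIP5 autocorrelation and amplitude (illustrative values)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal()
    return sigma * x / x.std()

# An ensemble of 5000 synthetic internal-variability series:
ensemble = np.array([red_noise(seed=s) for s in range(5000)])
print(ensemble.shape)  # (5000, 145)
```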
The synthetic historical time series were constructed by adding forced variability to the natural variability as follows:

T[synth](t) = N(t) + α[GHG] MMM[GHG](t) + α[Nat] MMM[Nat](t) + α[rest] MMM[rest](t),

where T[synth] is the synthetic historical time series; N is the synthetic time series of internal variability; and MMM[GHG], MMM[Nat], and MMM[rest] are the multimodel means of the NASSTI time series representing GHG, natural, and residual forcings from CMIP5, respectively (as shown in Fig. 1). An ensemble of 5000 such time series was constructed. The five methods to remove the forced signal are then applied to the synthetic data to find N[est], the estimated internal variability. The accuracy of the methods can be judged by comparing the estimated internal variability N[est] to the true time series N using a variety of metrics:

1. comparing the estimated scaling factors (β, γ, and δ) to the known ones (α),
2. calculating error as a function of time,
3. finding minima and maxima of the estimated time series compared to the known ones (to find the bias in phase introduced by each method), and
4. calculating the amplitudes of the estimated time series compared to the known ones (to find the bias in amplitude introduced by each method).

This gives us a basis for comparison for the CMIP5 models, for which the true time series of internal variability are not known. Figure 2 shows scatterplots of the estimated scaling factors compared to the known scaling factors for the three factor scaling method. In Fig. 2a we can see that the true GHG scaling factor α[GHG] is well estimated by δ[GHG], the GHG scaling factor from the three factor method. This is also the case for the two factor scaling method, with α[GHG] and γ[GHG] being highly correlated. For the single factor scaling method the scaling factor β also correlates very well with α[GHG], while the correlation of β with α[Nat] is small, although not negligible, with higher α[Nat] on average corresponding to larger β for the same value of α[GHG].
This indicates that it is the model sensitivity to GHG forcing that dominates over the sensitivity to natural forcing in the single scaling method. Figure 2b shows the accuracy with which α[Nat] is estimated using the three factor scaling method. The accuracy is very similar for the two factor scaling method. The accuracy of estimation of α[rest] is shown in Fig. 2c. This factor is the most difficult to estimate because MMM[rest] varies on similar time scales to the internal variability, so the two may easily be mistaken for each other. The error in estimating α[Nat] is smaller but arises from the same source, since the natural forcing also contains variability on multidecadal time scales. The error in estimating α[GHG] is the smallest of the three; therefore, sensitivity to GHG forcing should be the most robustly estimated parameter.

Fig. 2. Scatterplots of the known scaling coefficients compared to the estimates made using the three scale factor method for (a) α[GHG] vs δ[GHG], (b) α[Nat] vs δ[Nat], and (c) α[rest] vs δ[rest] for 5000 synthetic time series.

The error in each estimation can be calculated as a function of time:

E(t) = |N[est](t) − N(t)|.

Figure 3a shows the mean error as a function of time for the synthetic time series for the five different methods. The raw NASSTI time series (gray lines) has errors that increase with time as the external forcing becomes increasingly dominant. The detrending method (blue lines) has large errors through the whole time series, particularly at the beginning and end, owing to the assumption that the trend is linear. Errors in the differencing method (green lines) increase toward the end of the time series as a result of the increasing influence of different models' climate sensitivities.
The single factor scaling method (red lines) gives smaller errors than the detrending and differencing methods, especially toward the end of the time series, because the MMM is matched to the model climate sensitivity by the scaling. Errors at the beginning of the time series, however, are comparable to those of the differencing method because GHG forcing is small and differing climate sensitivities of the models thus have a minor impact. Errors using the single scaling method increase during volcanic eruptions because the single scaling factor is more sensitive to the GHG response than the naturally forced response. Using two scale factors (light blue lines) reduces the error in the 1940s (when there was a peak in MMM[Nat]; see Fig. 1) but not elsewhere, while the three factor scaling method (magenta lines) results in a general improvement over the other methods. Fig. 3. Time series of (a) mean error as a function of time (dashed: annual mean; solid: after 40-yr smoothing; note the log scale on the y axis) and (b) mean (solid) and one standard deviation on either side of the mean (dashed) as a function of time of the 5000 synthetic time series for the five methods. (c) Distribution of turning points as a function of time (solid lines indicating maxima and dashed lines indicating minima), with triangles on the x axis indicating minima and maxima of the MMMs as in Fig. 1. (d) Distribution of standard deviations of the estimated variability for each method for the synthetic time series. Dashed vertical lines in (d) indicate the means of the distributions. The means (solid curves) and standard deviations (dashed curves) of the time series of estimated internal variability are plotted in Fig. 3b as a function of time. By construction, as the number of time series increases, the mean of the true time series of internal variability approaches zero and the standard deviation approaches a constant.
The accuracy of the various methods is assessed in comparison to this. This metric shows similar results to Fig. 3a and is included for comparison with the CMIP5 models, where the error cannot be directly calculated since the true time series of internal variability are not known. The raw forced signal (gray) shows increasing deviation from the true time series (in black). The mean of the detrended time series (blue) shows anomalous behavior particularly at the beginning and end. The mean for the differenced case (green) is always zero by construction (since we are subtracting the mean, the sum of the remainders will be zero), while the standard deviation shows a large increase at the end of the run. The single factor scaling method (red) shows a slightly larger spread of amplitudes around the times of volcanic eruptions. The mean for the two factor scaling case (light blue) shows larger departures from zero than the other two scaling cases during several periods (associated with turning points of MMM[rest]; see Fig. 1), indicating that the forced signal has not been completely removed. The three factor scaling method (magenta) generally shows the least spread, at times even having a lower standard deviation than the true time series. The reason for this reduction in amplitude will be discussed later. The discrepancies between the various estimates relative to the true time series all correspond to periods where the errors (in Fig. 3a) are the largest. To show the bias in the phase of the internal variability estimated using the various methods, the turning points of the 40-yr smoothed time series are plotted in Fig. 3c. Unbiased time series should show a uniform distribution of both maxima (solid lines) and minima (dashed lines) throughout the historical period.
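The turning-point counting behind Fig. 3c reduces to locating interior local extrema of a smoothed series; a minimal sketch on a toy series (a hypothetical helper, not the authors' code):

```python
import numpy as np

def turning_points(x):
    """Return indices of interior local maxima and minima of a 1-D series."""
    x = np.asarray(x, dtype=float)
    maxima = [i for i in range(1, x.size - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, x.size - 1) if x[i - 1] > x[i] < x[i + 1]]
    return maxima, minima

# A sine over two full periods has two interior maxima and two interior minima.
t = np.linspace(0.0, 4.0 * np.pi, 200)
maxima, minima = turning_points(np.sin(t))
```

Counting how often each year hosts a maximum or minimum across the 5000 synthetic series then gives the distributions plotted in Fig. 3c.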
The true time series (black), however, shows a decreasing number of both maxima and minima about 20 years from the beginning and end of the time series as a result of the edge effects of the 40-yr smoothing (which should therefore be common to all five methods). Both the raw forced time series (gray) and the detrended time series (blue) have a bias toward minima in the 1890s and 1970s with maxima in between, corresponding to turning points in MMM[Nat] (marked on the x axis in Fig. 3c). Both methods also show very few maxima after the 1960s because of the increasing dominance of the anthropogenic warming signal, which is not correctly removed. For the same reason, the differencing method (green) also shows a decrease in the number of turning points toward the end of the time series, which is larger than the filtering-induced decrease. The single scaling method (red) does a much better job of finding the maxima and minima, while the two factor scaling method (light blue) results in large numbers of maxima around 1880 and 1940 and minima in the 1910s and 1970s, coinciding with turning points of MMM[rest]. The additional external forcing represented by MMM[rest] is already implicitly included in MMM[all], which is used in the single scaling method, but it is not represented by either MMM[GHG] or MMM[Nat] used in the two factor scaling method, which explains why the single scale factor method outperforms the two scale factor method when estimating the phase of the internal variability. Of the five methods, the three factor scaling method (magenta) comes the closest to reproducing the true distribution of phases. The distribution of amplitudes of the 40-yr smoothed time series of the estimated internal variability is shown in Fig. 3d. Both detrending and differencing result in a large overestimation of the amplitude.
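The amplitude metric itself, the standard deviation of the 40-yr smoothed series, can be sketched with a simple running mean standing in for whatever low-pass filter is used:

```python
import numpy as np

def smooth(x, window=40):
    """Running mean over `window` points, valid region only (stand-in filter)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(1)
raw = rng.normal(0.0, 1.0, 145)   # toy 145-yr annual series with unit variance
smoothed = smooth(raw)
amplitude = smoothed.std()

# Averaging white noise over 40 yr strongly suppresses year-to-year variance,
# so `amplitude` is far below the raw standard deviation.
```

Any residual forced signal left in the series inflates this statistic, which is why incomplete removal shows up as an overestimated amplitude in Fig. 3d.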
The scaling methods all do a better job of estimating the amplitude, although the single factor scaling method overestimates the amplitude while the three factor scaling method underestimates it. In the single scaling method this is due to the sometimes incomplete removal of the natural forcing signal, which will then be mistaken for internal variability. In the three factor scaling method the underestimation is due to the opposite effect; when the phase of the internal variability lines up with the variability in MMM[Nat] or MMM[rest], some of the internal variability will be removed. The two factor scaling method would appear to be the most accurate at estimating the standard deviation of the internal variability, although all the distributions are significantly different from the true distribution using a two-sided Kolmogorov–Smirnov test. This issue is explored further in Fig. 4. Fig. 4. Real vs estimated standard deviation of the synthetic time series for the (a) detrending, (b) differencing, (c),(d) single scaling, (e) two factor scaling, and (f) three factor scaling methods. In (a)–(c) color represents the (known) GHG scaling factor α[GHG], in (d) color represents the (known) natural scaling factor α[Nat], and in (e),(f) it represents the (known) residual scaling factor α[rest]. In the detrending method the degree of overestimation correlates with the magnitude of the sensitivity to GHG (given by α[GHG]), with large sensitivities leading to large estimates of natural variability (Fig. 4a). This is because large climate sensitivity results in highly nonlinear time series, for which a linear trend is a very poor approximation. In the differencing method it is the cases with either large or small values of α[GHG] (dark blue and red crosses in Fig. 4b) that have the largest overestimation of amplitude because these are the cases for which the MMM is the poorest approximation of the forced signal.
A similar, although less pronounced, bias occurs in the single scaling case (Figs. 4c,d); here it is the cases with a large value of α[GHG] and a small value of α[Nat] (or conversely, a small value of α[GHG] and a large value of α[Nat]) that are overestimated. These are the cases for which the single scaling method will be the worst fit to the data because the single scaling method combines the sensitivity to GHG α[GHG] and the sensitivity to natural forcings α[Nat] in one parameter; it is thus a better approximation for cases where α[GHG] and α[Nat] are of similar magnitude. For two factor scaling (Fig. 4e) the amplitude in cases with large values of α[rest] is overestimated, which is due to the misattribution of forced variability as internal variability as mentioned earlier. Although it would appear from the distributions of standard deviations in Fig. 3d that the two factor scaling method may give a better estimate of the amplitude than the three factor scaling method, Fig. 4e shows that this apparent improvement is due to the fact that the two factor scaling method sometimes overestimates the real amplitude (because of neglecting α[rest], causing misattribution of the forced signal as internal variability) and sometimes underestimates the real amplitude (because of misattribution of the internal variability as the forced signal). In contrast, the three factor scaling method (Fig. 4f) gives a tighter estimate of the amplitudes, with a bias toward underestimation resulting from misattribution of internal variability as the forced signal. In summary, detrending and differencing, which are the simplest and most commonly used methods of removing the forced signal, both give large biases in the estimated amplitude of the variability, with detrending also causing large biases in the estimated phase.
Differencing gives a better estimate of the phase during the earlier part of the time series, when GHG forcing is less important, but biases increase as GHG forcing becomes dominant. The scaling methods give more accurate estimates of the amplitude, although with one scaling factor there is a small overestimation of the amplitude because of the inability of the method to account for different models having different sensitivities to natural forcing. The two factor scaling method appears to accurately estimate the amplitude, but there are errors in the estimated phase resulting from not removing the portion of the signal due to forcings other than GHG and natural forcing (e.g., aerosols and ozone). Including this missing forcing as a third scaling parameter improves the estimate of the phase but leads to an underestimation of the amplitude resulting from misattribution of the internal variability as naturally forced variability (since they occur on the same time scales). We note that our results provide what are presumably generous estimates of the accuracy of the scaling methods since the forced time series were constructed with the same MMMs that were then used to estimate the scaling factors. When applying these methods to more complex data we must be aware that the MMMs themselves are only estimates of the underlying structure of the time series. The difference between the MMM calculated from the model ensemble and the true forced signal of each model will likely introduce additional errors. 5. Application to CMIP5 simulations We now apply the five different methods to the CMIP5 simulation results. In this case we do not know the underlying internal variability; however, we can compare the results of the five methods to the CMIP5 control runs, where external forcing is constant.
We also do not know the underlying shape of the model response to the external forcing; we estimate it by the MMM from the GHG and natural forcing runs (whereas in the synthetic cases it was the MMMs by construction). We are thus implicitly assuming that the timing and relative amplitudes of the model responses are constant across the models (which is not necessarily true—e.g., some models may have a larger response to one type of natural forcing than another). The mean and standard deviation of the CMIP5 NASSTI are shown in Fig. 5a in gray, along with the mean and standard deviations of the AMO indices after the various methods to remove the forced signal have been applied. The results are very similar to the synthetic data. Figure 5b shows the distribution of turning points for the CMIP5 data, and once again the results correspond closely to the synthetic data. In the raw time series the maxima and minima line up with the maxima and minima of the MMM (shown by the black triangles on the x axis). This bias is not improved by detrending (dark blue). The differencing method and single scaling methods both result in a reasonably even distribution of turning points, apart from the edge effects of the filter. The two factor scaling method, however, shows preferences for maxima around 1880 and 1940 and for minima around 1920 and 1970. The first and last of these peaks may be partially influenced by edge effects, but in the middle of the time series there is still clearly some bias in the phase related to turning points of MMM[rest] (magenta triangles). The three factor scaling method, which does attempt to take the residual external forcing into account, also shows a uniform distribution of turning points. In reality the distribution of turning points of the AMOI may be nonuniform as a result of excitation of the variability by external forcings (Otterå et al. 2010; Zanchettin et al. 2012; Iwi et al. 2012; Menary and Scaife 2014). 
However, we see no evidence for that here; the lack of a common response across models may simply be due to the different amplitudes, periods, and even mechanisms underlying North Atlantic climate variability in each model. Fig. 5. (a) Time series of mean (solid) and one standard deviation on either side of the mean (dashed) for the NASSTI time series (gray) and AMOI time series (colors). (b) Distribution of maxima (solid) and minima (dashed) as a function of time, with triangles on the x axis indicating minima and maxima of the MMMs as in Fig. 1. (c) Distribution of standard deviations of the amplitude of the NASSTI and AMOI. Dashed vertical lines indicate the means of the distributions, while solid vertical lines indicate the standard deviation of the observed NASSTI (gray) and AMOI (colors). These may be compared to 145-yr-long sections of the control runs (black). The distribution of amplitudes estimated by the various methods also follows the results found for the synthetic time series. In this case we also compare the amplitudes of internal variability estimated from the historical simulations to the amplitudes found in 145-yr-long sections of the control runs, where there is no variability in the external forcing (although note that control runs were not available for all models and that the amplitudes of variability from the control runs may be biased slightly high by slow drifts that can remain as a result of incomplete model spinup). The detrended, differenced, and, albeit to a lesser extent, single scale factor methods overestimate the amplitude of the internal variability while the three scale factor method underestimates it. The two scale factor method appears to give the best estimates of amplitude, as in the case of the synthetic data.
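The two-sided Kolmogorov–Smirnov test used to compare such amplitude distributions is based on the maximum distance between empirical CDFs; a numpy-only sketch on toy data (the sample values are illustrative, not the paper's):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max |ECDF_a - ECDF_b|."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(2)
control = rng.normal(0.20, 0.05, 5000)  # toy control-run amplitudes
biased = rng.normal(0.30, 0.05, 5000)   # toy biased-high method estimates
same = rng.normal(0.20, 0.05, 5000)     # second draw from the control distribution

# A two-sigma shift in the mean gives a large statistic; samples from the
# same distribution give a small one.
```

In practice the statistic is compared against the critical value for the two sample sizes to decide significance, as done at the 99% level in the text.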
Testing using a two-sided Kolmogorov–Smirnov test shows that the distribution of standard deviations from the control runs is not significantly different at the 99% level from the distributions calculated using the single scale factor and two scale factor methods. Next we examine the scaling factors that are obtained from the regression of the CMIP5 NASSTI onto the various MMMs. These scaling factors indicate the sensitivity of each model to the various external forcings relative to the ensemble mean. For comparison we also calculate scaling factors for the observed NASSTI. Figure 6 shows the scaling factors for the single factor scaling method (Fig. 6a) and the three factor scaling method (Fig. 6b), along with the corresponding values for observations (dashed lines). We can see that there is a correlation between the scaling from the single factor scaling method and the GHG scaling factor from the three factor scaling method (red and blue asterisks in Fig. 6b), indicating that GHG sensitivity dominates the single factor scaling, as was the case with the synthetic data. Another estimate of the natural and GHG sensitivities can be made by regressing MMM[Nat] and MMM[GHG] onto each model’s natural-only and GHG-only forcing runs. These estimates are shown in Fig. 6a. There is, however, little or no correlation between the GHG scaling factor from the three factor scaling method and the GHG scaling factor obtained directly from the GHG-only forcing runs (cf. red asterisks in Figs. 6a,b; also Fig. 8a). Note that the GHG scaling factor from the three factor scaling method is less than unity for most ensemble members, indicating that the estimates of GHG sensitivity obtained from the all-forcing runs are generally lower than the estimates obtained from the GHG-only runs.
This systematic difference is due to the all-forcing scenarios containing forcings, such as anthropogenic aerosols, that are not included in the GHG-only runs but that have time series with a significant projection onto MMM[GHG] (Andreae et al. 2005). Anthropogenic aerosols act to partially offset GHG-induced warming, and thus the runs that include aerosol forcing will have a lower sensitivity since δ[GHG] now represents sensitivity to GHGs combined with other forcings rather than to GHGs alone. Fig. 6. (a) Regression (scaling) factors for the single factor scaling method for the all-forcing runs (blue) from CMIP5. Also included are the scaling factors obtained when scaling the natural forcing runs with MMM[Nat] (green) and the GHG forcing runs with MMM[GHG] (red), where those runs are available. (b) Scaling factors obtained using the three factor scaling method. Individual runs are plotted with shapes, and means for each model ensemble are shown with the asterisks. Horizontal dashed lines indicate the values obtained when applying the same scaling methods to the observed NASSTI. The blue asterisks from (a) are repeated in (b) for comparison. As an illustration of the impact of the missing forcings on the estimates of the sensitivity parameters, Fig. 7 compares different estimates of the forced signal for one particular ensemble member from the GFDL CM3 model (Griffies et al. 2011). This model shows large sensitivities to both GHG and natural forcing when those sensitivities are estimated from the GHG-only and natural-forcing-only runs; however, an estimate for the forced time series made using those individual independent forcing sensitivities (magenta line in Fig. 7) is not a good fit for the modeled NASSTI (blue line).
The three factor scaling method (in black) using the sensitivities from the all-forcing run provides a much closer fit using a lower estimate of the GHG sensitivity since that sensitivity is now no longer to GHG alone but includes other forcings that project significantly onto MMM[GHG]. Other models show similar results (Fig. 8a). Fig. 7. Estimate of the forced signal from one ensemble member from the GFDL CM3 model, showing the impact of missing forcing factors. The modeled NASSTI is shown in blue. Also shown are two estimates of the forced signal, one using the three factor scaling method (black) and the other using scaling from the GHG-only and natural-forcing-only runs (magenta). Fig. 8. Scatterplots of (a) scaling factors for GHG obtained from the GHG-only run compared to those obtained from the all-forcing run using the three factor scaling method and (b) scaling factors for natural forcings obtained from the natural-forcing-only run compared to those obtained from the all-forcing run using the three factor scaling method. The scaling factors are averaged over the ensemble for models where more than one ensemble member was available. The natural forcing scaling factor agrees better with the value obtained from the natural forcing runs (green asterisks in Figs. 6a,b; also Fig. 8b). There is a wider spread in the estimated values of the natural scaling factor compared to the estimates of the GHG scaling factor, with some models even having negative values (i.e., the opposite response to the forcing compared to the MMM). Part of this spread is due to the inaccuracy of the method since we know from the synthetic data that there can be larger errors in estimating the natural scaling factor than the GHG scaling factor (see Fig. 2).
Similarly, the scaling factors for the forced variability unaccounted for by natural and GHG forcing (magenta stars in Fig. 6) show a wide range, with some models giving negative values. There is no correlation between the models’ estimated sensitivity to GHG and their estimated sensitivity to natural forcings (Fig. 9), which justifies the choice of independent scaling parameters for the synthetic time series in section 4. This lack of correlation also highlights the limitations of the single scaling method, which uses the same scaling factor to account for both GHG and naturally forced responses. The fact that the single scaling method still provides good estimates of both phase and amplitude is due to the dominance of the GHG forcing over the natural forcing. Fig. 9. Scatterplots of scaling factors (a) for GHG compared to scaling factors for natural forcings obtained from the all-forcing runs using the three factor scaling method and (b) from the GHG-only and natural-forcing-only runs (for those models where data are available). The scaling factors are averaged over the ensemble for models where more than one ensemble member was available. The scaling factors for observations are plotted in (a) as a black square. 6. Application to observations We have also applied the five different methods to the observed NASSTI, from 1870 to 2005, as shown in Fig. 10 (with the scaling factors used shown in Fig. 6). The largest differences between the various AMOI estimates occur toward the end of the record, with a spread of 11 years in the estimated timing of the most recent minimum (1976 for the raw NASSTI time series, 1978 for the detrended time series, and 1987 for the three factor scaling time series, with the others in between). This in turn affects the estimated time of the predicted future maximum.
The timing of the AMO (and other low-frequency modes of variability to which these methods may be applied) is important in ascertaining the role that the various modes of internal variability may be playing in the current and near-term future climate—for example, their relative contributions to the recent hiatus in the global-mean surface temperature increase. Fig. 10. Observed AMO indices calculated using the five different methods (colors) as well as the raw NASSTI (gray). Dotted (solid) lines indicate the raw (40-yr smoothed) time series. Upward (downward) pointing triangles along the x axis mark the position of maxima (minima) of the smoothed time series. Note that when applying the scaling methods to the observations we still use the MMMs from the models. Since the CMIP5 all-forcing, GHG, and natural forcing runs extend only until 2005 it is not possible to extend the time series in Fig. 10 without making further assumptions (e.g., persistence of the mean or trend; see Steinman et al. 2015). Also note that the phase is estimated using smoothed time series, so edge effects may be important. This means that estimates near the end of the time series may change as additional data become available. Comparing the scaling factors obtained from observations (dashed lines in Fig. 6 and black square in Fig. 9) to those from the models, we can see that the observed GHG, natural, and residual scaling factors are all within the range simulated by the CMIP5 models. Comparing the estimated amplitudes of the observations to the estimates from the models, we can see in Fig. 5 that for the three factor scaling method the observations have an amplitude greater than about 95% of the model ensemble members, suggesting that many of the models may not be simulating multidecadal variability of large enough amplitude in the North Atlantic.
There is also the possibility of underestimation of the amplitude (in both the CMIP5 results and observations) using this method. It is already known, however, that models tend to underestimate decadal variability in the Pacific (e.g., England et al. 2014); perhaps this is a problem that also applies to decadal modes of variability in other ocean basins. 7. Main sources of error in the methods to estimate the forced signal a. MMM shape One major difference between the synthetic time series analyzed in section 4 and the CMIP5 models analyzed in section 5 is that for the synthetic time series the MMMs were known exactly because they were used in the construction of the time series. For the CMIP5 models, each different model will have a slightly different ensemble mean, and while the differences between these ensemble means and the MMM (constructed using all the models) are minimized using the scaling, they are not completely removed. Figure 11 shows ensemble means from the natural-forcing-only runs for five models (each of which has five or more ensemble members). Comparing the ensemble means to the MMM (in black) and taking into account the noise of each ensemble mean resulting from the smaller size of the ensembles compared to the multimodel mean, we can see that each of the models has a slightly different forced response. Part of this may be due to differing sensitivity of the models to different components of the natural forcing or to different timing of the response in different models. In addition, different models may include different forcings, or even the same forcings implemented in different ways. For example, models with interactive atmospheric chemistry may simulate a volcanic eruption by directly adding aerosols to their atmospheres, whereas another model with a simpler atmosphere might simulate the same eruption by varying incoming radiation. These model differences will have an effect on the model responses.
In addition, there are fewer ensemble members available for the GHG and natural forcing runs than for the all-forcing runs, making MMM[GHG] and MMM[Nat] less robust estimates than MMM[all]. Fig. 11. MMM (black) and single model means (colors) for five different models forced with natural forcing only (chosen because each model had five or more ensemble members). The number of ensemble members included in each mean is shown in parentheses in the legend. The same problems apply when using the MMMs to estimate the internal variability from observations (as in section 6), since we are assuming that the model MMMs adequately represent the true forced climate signal. b. Missing forcing factors Another factor worth considering more closely is the missing forcing types. Since we have GHG-only and natural-forcing-only runs available we have been able to attempt to account for these forcing types, but there are other forcings that may be important as well. Anthropogenic aerosols and ozone are of particular interest because they vary on time scales of the same order as the internal variability in which we are interested. Not taking a forcing into account leads to a spread in the estimated amplitude of the internal variability since, depending on the timing of the missing forcing, it may be either amplifying or canceling out the internal variability. This can be seen in Fig. 4e, where the two scaling factors cannot account for the influence of the residual forcing given by α[rest]. Including many different types of forcing leads to other problems, however, since time series may end up being overfitted, such that the true internal variability is mistaken for the forced signal (as in Fig. 4f, where there is no problem with overestimation when α[rest] is included but there is some underestimation).
Missing forcing factors are also responsible for the difference in estimating GHG scaling factors from the GHG-only runs and the all-forcing runs (which contain forcings that project onto the GHG forcing time series). c. Assumption of linearity Given enough computing power, both the above problems can be tackled by having more ensemble members and simulating more types and combinations of external forcings. However, a fundamental issue with all the methods described here is that we have assumed that the various forced signals and the internal variability can simply be combined linearly. Linearity was ensured by construction for the synthetic time series discussed in section 4. For the CMIP5 models this is not expected to introduce large errors since Schurer et al. (2013) found that the assumption of linearity held over the last millennium. In addition, external forcing may have the ability to excite internal variability. However, we have not seen any evidence of this in our results (i.e., a bias toward a particular phase that cannot be explained by the limitations of the various methods). d. Possibilities for improvement As mentioned above, some of the challenges that arise in using the scaling method can be reduced using greater computing power. Having more ensemble members would provide more robust estimates of the various MMMs, and performing simulations for various forcings separately would allow more forcings to be included, although it would also increase the possibility of misattribution. Having more ensemble members for individual models would also allow individual model ensemble means to be used instead of MMMs, removing one potential source of error. Comparing to observations remains error prone, however, because of the necessary but imperfect assumption that the MMMs are applicable to the real world.
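For concreteness, the multifactor scaling at the core of these methods amounts to an ordinary least squares regression of a historical series onto the MMM time series; a self-contained sketch with synthetic stand-ins (the MMM shapes and factor values below are illustrative, not CMIP5 data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 145                                   # years in the historical period
t = np.arange(n)

# Stand-in multimodel-mean forced signals (illustrative shapes only).
mmm_ghg = 0.004 * t
mmm_nat = 0.10 * np.sin(2 * np.pi * t / 60)
mmm_rest = -0.05 * np.sin(2 * np.pi * t / 70)
X = np.column_stack([mmm_ghg, mmm_nat, mmm_rest])

# Synthetic "model" series built from known scaling factors plus noise.
alpha = np.array([1.2, 0.8, 1.1])         # true GHG, natural, residual factors
internal = rng.normal(0.0, 0.05, n)
series = X @ alpha + internal

# Three factor scaling: least squares fit of the series onto the MMMs,
# then subtract the fitted forced signal to estimate internal variability.
delta, *_ = np.linalg.lstsq(X, series, rcond=None)
estimated_internal = series - X @ delta
```

In this idealized setting the fitted factors recover the known ones to within the noise; with real model output the regression also absorbs whatever internal variability happens to project onto the MMMs, which is the source of the amplitude underestimation discussed above.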
As for extending the methods into future projections, the different climate sensitivities mean that the different model trajectories diverge rather quickly as GHG forcing increases. Small errors in the estimated sensitivities at the end of the historical run quickly become overwhelming and make the estimates of internal variability in model forecasts increasingly unreliable. In addition, while the differencing and single scaling methods can be extended using RCP simulations, the two and three factor scaling methods rely on the Hist[GHG] and Hist[Nat] runs, which are available only until 2005. 8. Conclusions The aim of this study was to assess the performance of methods for separating internal and forced variability of the climate, with application to North Atlantic sea surface temperatures. We have tried five methods: detrending, differencing, and three different scaling methods. Detrending, which is very commonly used in an attempt to remove the anthropogenic signal, leads to large overestimations of the amplitude of internal variability as well as large biases in the estimated phase of the variability, which can in turn bias the estimated period. Similarly, differencing (i.e., taking the difference between the observed climate and an estimate of the forced signal given by the multimodel mean from CMIP5) is not an ideal method. It gives a less biased estimate of the phase than simply detrending but still overestimates the amplitude of the variability because of different climate sensitivities of the different models. Scaling the MMM responses to various types of forcing improves the estimates of the forced signal; however, care must be taken to include all the relevant forcings.
Assuming that the models will have the same sensitivity to GHG and natural forcing (by using the MMM[all] as in the single scaling method) improves the estimates of the phase and amplitude of the internal variability, although there can still be errors for models that have large sensitivity to GHG forcing and low sensitivity to natural forcing, or vice versa (Figs. 4c,d). The single scaling method does, however, represent a significant improvement over the detrending or differencing methods. When GHG and natural forcings are scaled separately but the residual forcing is not included (as in the two factor scaling method) there can be either under- or overestimation of the amplitude of internal variability as well as a bias of the estimated phases toward the phase of the residual forced signal. Including the residual forcing (as in the three factor scaling method) improves the estimate of the phase but leads to a tendency toward underestimation of the amplitude. All the scaling methods suffer to varying extents from misattribution of the internal variability as the forced signal, which leads to underestimation of the amplitude when the phases of internal variability line up with the phases of the forced signal. The underestimation increases as more factors are included in the scaling. In addition, the scaling methods are subject to limitations, such as those due to the imperfect estimations of the various MMMs, variability due to missing forcings, and the assumption that the various forcings combine linearly. Despite these limitations, however, the scaling methods perform significantly better than detrending or differencing the time series. It is recommended that such scaling methods be used in preference to detrending or differencing in studies of low-frequency internal variability of the climate system. 
Applying the five methods to observations suggests that many models may underestimate the amplitude of internal variability in the North Atlantic (with the caveat that the methods applied to both models and observations are prone to underestimation). The different methods lead to different results for the timing of the last minimum in the observed AMO index and thus different predictions for the recent/future maximum. These disparate predictions highlight the importance of being able to correctly distinguish between the externally forced signal and internal variability.

This work was supported by the Australian Research Council (ARC), including the ARC Centre of Excellence in Climate System Science. The authors acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and thank the climate modeling groups for producing and making available their model output. HadISST data were provided by the Met Office Hadley Centre (www.metoffice.gov.uk/hadobs).
Andrew John Wiles
Date of birth: 11 April 1953
Place of birth: Cambridge, Great Britain
Citizenship: Great Britain

Wiles is a British mathematician, Professor of Mathematics at Princeton University and head of its mathematics department, and a member of the research council of the Clay Mathematics Institute. He received his bachelor's degree in 1974 from Merton College, Oxford. He began his research career in the summer of 1975 under Professor John Coates at Clare College, Cambridge, where he earned his doctorate. From 1977 to 1980, Wiles was a junior research fellow at Clare College and an assistant professor at Harvard University. In collaboration with John Coates he worked on the arithmetic of elliptic curves with complex multiplication, using methods from Iwasawa theory. In 1982 Wiles left Great Britain for the USA. Among the most important events of his career were his 1993 announcement of a proof of Fermat's Last Theorem and his discovery of an elegant argument that completed the proof in 1994. Wiles began his professional work on Fermat's Last Theorem in the summer of 1986, after Kenneth Ribet proved the conjectured link between semistable elliptic curves (a special case of the Taniyama-Shimura conjecture) and Fermat's theorem.

History of the Proof

Wiles first encountered Fermat's Last Theorem at the age of 10, when he tried to prove it using methods from his school textbook. Later he studied the work of mathematicians who had attempted the theorem. On entering university he set aside his attempts at Fermat's Last Theorem and devoted his time to the study of elliptic curves under the guidance of John Coates. In the 1950s and 1960s the Japanese mathematician Goro Shimura suggested that there is a connection between elliptic curves and modular forms. Shimura built on ideas expressed by another Japanese mathematician, Yutaka Taniyama. The hypothesis became known in Western academic circles thanks to the work of André Weil, whose thorough analysis uncovered a great deal of fundamental evidence in its favor; for this reason it is often called the "Shimura-Taniyama-Weil conjecture." It states that every elliptic curve over the field of rational numbers is modular. The conjecture was proved in full in 2001 by Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor, who used the methods published by Andrew Wiles in 1995. The connection between the Taniyama-Shimura conjecture and Fermat's theorem was established by Kenneth Ribet, building on work of Barry Mazur and Jean-Pierre Serre. Ribet proved that the Frey curve could not be modular, which meant that a proof of the semistable case of the Taniyama-Shimura conjecture would establish Fermat's Last Theorem. When Wiles learned of Ribet's result in 1986, he decided to devote all his attention to proving the Taniyama-Shimura conjecture. While many mathematicians were deeply skeptical that such a proof could be found, Wiles believed the conjecture could be proved with twentieth-century methods. At the very start of his work, Wiles mentioned Fermat's Last Theorem casually in conversation with colleagues, and they became greatly interested in it. But Wiles wanted to concentrate on the problem as fully as possible, and excess attention could only disturb him. To avoid it, he decided to keep the true subject of his research secret, confiding only in Nicholas Katz. During that period Wiles continued teaching at Princeton University but carried out no research unconnected with the Taniyama-Shimura conjecture.
Multiplying a Percent by a Decimal

Question Video: Multiplying a Percent by a Decimal
Mathematics • First Year of Preparatory School

Calculate 25% × 0.2. Give your answer as a decimal number.

Video Transcript

Calculate 25 percent times 0.2. Give your answer as a decimal number. Let's consider two different ways to find this product. First, we'll take 25 percent times 0.2, and we'll rewrite 25 percent as a decimal. We know that 25 percent is 25 out of 100. Another way to say that would be twenty-five hundredths. You could also imagine that we have moved the decimal point two places to the left. Either way, we have a decimal value of 0.25, which we need to multiply by 0.2. We could rewrite this vertically and say that two times five is 10, two times two is four plus one is five, two times zero is zero. And then our second row would be all zeros: zero times five, zero times two, and zero times zero. Then, we'll add the partial products. There are three decimal places in the multiplication, and that means the solution will have three decimal places, which will be 0.050. Now, there was a zero behind the five, but 0.050 can be simplified to 0.05. And that means in simplest form, we would say 0.05. This means 25 percent times 0.2 is equal to 0.05. Let's look at a second option. What if we said 25 percent, which we knew was equal to one-fourth. And 0.2 written as a fraction is two over 10. From there, we could multiply one-fourth by two-tenths. And then we could simplify. Both two and four are divisible by two. We multiply the numerators together to get one times one and the denominators together to get two times 10, which is 20. This is telling us that 25 percent of 0.2 is one twentieth. But since we want this final answer as a decimal number, we could rewrite this fraction with a denominator of 100. 20 times five is 100. If we multiply the denominator by five, we need to multiply the numerator by five. One twentieth equals five hundredths. And that means to write it as a decimal, we put a five in the hundredths place. Five hundredths written as a decimal is 0.05. I know we said we're gonna look at two examples, but let's just see one more. Let's say you got to this point: one-fourth times 0.2. Multiplying 0.2 by one-fourth would be equal to 0.2 divided by four. And if we divide 0.2 by four, we bring up the decimal. Four goes into two zero times, but four goes into 20 five times with no remainder. And that means 0.2 divided by four is 0.05. There are lots of ways to calculate this. Just remember to follow the instructions and give your final answer as a decimal number, as indicated here.
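Both routes in the transcript, decimal multiplication and exact fractions, can be checked in a few lines of Python (a hedged sketch; the variable names are illustrative, not from the video):

```python
from fractions import Fraction

# Route 1: rewrite 25% as the decimal 0.25 and multiply.
decimal_route = 0.25 * 0.2

# Route 2: work exactly, with 25% = 1/4 and 0.2 = 2/10.
fraction_route = Fraction(1, 4) * Fraction(2, 10)

print(decimal_route)          # 0.05
print(fraction_route)         # 1/20
print(float(fraction_route))  # 0.05
```

The fraction route shows why the answer simplifies to one twentieth before it is converted back to a decimal.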
The Stacks project

Lemma 52.25.1 (tag 0F23). For any coherent triple $(\mathcal{F}, \mathcal{F}_0, \alpha)$ there exists a coherent $\mathcal{O}_X$-module $\mathcal{F}'$ such that $f : \mathcal{F}' \to \mathcal{F}'$ is injective, an isomorphism $\alpha' : \mathcal{F}'|_U \to \mathcal{F}$, and a map $\alpha'_0 : \mathcal{F}'/f\mathcal{F}' \to \mathcal{F}_0$ such that $\alpha \circ (\alpha' \bmod f) = \alpha'_0|\dots$
Logics for Practical Reasoning
3-4 May 2016
Aula Enzo Paci, Department of Philosophy, University of Milan

The purpose of this workshop is to foster cross-disciplinary research into applied logics, and in particular logics applied to capturing interesting aspects of practical reasoning. To this end we have asked our speakers to provide accessible presentations of their material, and the schedule allows plenty of time for discussion. Attendance is free, but registration is mandatory.

Travel Grants

A limited number of small travel grants are available for PhD students and postdocs who have no access to funding. Please write to hykel.hosni AT unimi.it to apply.

Schedule for Tuesday 3 May
• 14:00 – 14:30 | Arrival and Opening
• 14:30 – 15:30 | Walter Carnielli
• 15:30 – 16:30 | Dov Gabbay

Schedule for Wednesday 4 May
• 10:30 – 11:30 | Marcello D'Agostino
• 11:30 – 12:30 | Jeff Paris

Titles and Abstracts

Walter Carnielli: "Probability, Consistency and Evidence"

I intend to discuss the first steps towards a unifying theory of probability based on logic, regarding probability as a branch of logic in a generalized way. According to this program, one can define, within a single set of meta-axioms, probability measures that are classical, paraconsistent, intuitionistic, or simultaneously intuitionistic and paraconsistent, just by parameterizing on the underlying logic. In particular, I discuss theories of probability built upon the paraconsistent Logic of Formal Inconsistency Ci, and upon the paraconsistent and paracomplete Logic of Evidence and Truth LETj, a Logic of Formal Inconsistency (LFI) and Undeterminateness (LFU). I argue that LFIs very naturally encode an extension of the notion of probability able to express probabilistic reasoning under excess of information (contradictions), while LFUs encode extensions of the notion of probability able to express probabilistic reasoning under lack of information (incompleteness), and thus better connected to evidence than to truth.
LETj is designed to express the notions of conclusive and non-conclusive evidence, as well as preservation of evidence; it is also able to recover classical logic for propositions whose truth-values have been conclusively established. In this way, it can in particular also express the notion of preservation of truth. By means of defining appropriate versions of conditional probability, notions of potential evidence versus veridical evidence can be defined, in contrast to the proposals by Peter Achinstein (The Book of Evidence, 2001).

Marcello D'Agostino: "An Introduction to Depth-Bounded Logics"

Logic is informationally trivial and, yet, computationally hard. This is one of the most baffling paradoxes arising from the traditional account of logical consequence. Triviality stems from the widely accepted characterization of deductive inference as non-ampliative: the information carried by the conclusion is (in some sense) contained in the information carried by the premises. Computational hardness stems from the well-known results showing that most interesting logics are undecidable or, even when decidable, very likely to be intractable. This situation leads to the so-called "scandal of deduction" and to the related "problem of logical omniscience". To address this problem, we present a unifying semantic and proof-theoretical framework for investigating depth-bounded approximations to Boolean logic. These approximations provide a hierarchy of tractable logical systems that indefinitely converge to classical propositional logic and can be usefully employed to model the inferential activity of real-world, resource-bounded agents.

Dov Gabbay: "Talmudic Norms Approach to the Paradox of the Heap: A Position Paper"

This paper offers a Talmudic norms solution to the paradox of the heap.
The claim is that the paradox arises because philosophers use the wrong language to discuss it; the appropriate language is that of an extended blocks-world language, together with the Talmudic normative theory of mixing (a Talmudic calculus of Sorites) and the principle that a property of any mixture (or indeed any object) is also how it was constructed. We seek a correlation between Talmudic positions on mixtures and philosophical positions on Sorites. The Talmud is very practical and cannot allow any theoretically unresolved paradox to get in the way, and so it has a lot to offer to philosophy.

Jeff Paris: "A Model of Belief, and Truth"

The ENT model of belief was introduced by Alena Vencovská and me in a paper in AI in 1993 as a way for an agent to form beliefs from direct experience while avoiding the feasibility problems which dogged the alternative probability-constraints approaches in vogue at that time. Since then two other papers have appeared which provide some further possible support for this ENT model. In my talk I shall sketch the original construction and these subsequent observations.

The workshop will take place in "Aula Enzo Paci" at the Direction of the Department of Philosophy, which is located in the University Main Building in Via Festa del Perdono 7.
Data Science: Theories, Models, Algorithms, and Analytics - Free Computer, Programming, Mathematics, Technical Books, Lecture Notes and Tutorials

Data Science: Theories, Models, Algorithms, and Analytics
• Title: Data Science: Theories, Models, Algorithms, and Analytics
• Author(s): Sanjiv Ranjan Das
• Publisher: Self-publishing via GitHub; eBook (Apache Licensed)
• License(s): Apache License, Version 2.0
• Hardcover/Paperback: 288 pages
• eBook: HTML and PDF
• Language: English
• ISBN-10/ASIN: N/A
• ISBN-13: N/A

Book Description

The goal of data science is to improve decision making through the analysis of data. Today data science determines the ads we see online, the books and movies that are recommended to us online, which emails are filtered into our spam folders, and even how much we pay for health insurance. This book covers a wide range of data science topics, from theories and algorithms to tools and analytics; its contents run from open-source modelling in R to Bayes' theorem. It offers a concise introduction to the emerging field of data science, explaining its evolution, current uses, data infrastructure issues, and ethical challenges. You'll explore the right approach to data science project management, along with useful tips and best practices to guide you along the way.

• Learn the basics of data science and explore its possibilities and limitations
• Manage data science projects and assemble teams effectively even in the most challenging situations
• Understand management principles and approaches for data science projects to streamline the innovation process
How do you calculate the weight of an object given its mass and gravitational acceleration? in context of weight to mass

31 Aug 2024

Title: The Relationship Between Weight, Mass, and Gravitational Acceleration: A Mathematical Exploration

Abstract: This article delves into the fundamental principles governing the relationship between an object's weight, mass, and gravitational acceleration. We derive a mathematical formula to calculate the weight of an object given its mass and gravitational acceleration, providing insight into the underlying physics.

Weight (W) is a measure of the force exerted on an object by gravity, while mass (m) is a measure of the amount of matter in an object. Gravitational acceleration (g), which varies depending on location, is the acceleration due to gravity at a given point on Earth's surface. Understanding the interplay between these three physical quantities is crucial for various scientific and engineering applications.

Theoretical Background:

According to Newton's second law of motion, force (F) is equal to mass times acceleration (F = ma). In the context of weight, this equation can be rewritten as:

W = mg

where W is the weight of the object, m is its mass, and g is the gravitational acceleration.

Mathematical Derivation:

To derive the formula for weight, we start with the fundamental equation:

F = ma

For an object acted on by gravity alone, the force F is the weight W and the acceleration a is the gravitational acceleration g. Substituting these gives:

W = mg

In conclusion, the weight of an object can be calculated from its mass and the local gravitational acceleration. The formula derived in this article provides a straightforward method for determining weight given these two physical quantities.

Weight (W) = Mass (m) × Gravitational Acceleration (g)

W = mg

This equation serves as a fundamental principle in physics, allowing us to calculate the weight of an object based on its mass and the gravitational acceleration at a given location.
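The formula W = mg translates directly into code. Here is a minimal sketch; the function name and the sample values of g are assumptions for the example, not from the article:

```python
def weight(mass_kg: float, g: float = 9.81) -> float:
    """Weight in newtons: W = m * g (Newton's second law with a = g)."""
    return mass_kg * g

# A 10 kg object near Earth's surface (g ≈ 9.81 m/s^2):
print(weight(10.0))          # ≈ 98.1 N
# The same mass where g ≈ 1.62 m/s^2 (roughly lunar gravity):
print(weight(10.0, g=1.62))  # ≈ 16.2 N
```

Passing g explicitly captures the point that weight, unlike mass, depends on location.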
Area and Its Boundary – CBSE Notes for Class 5 Maths

Students can refer to the Area and Its Boundary – CBSE Notes for Class 5 Maths PDF here: https://www.cbselabs.com/area-boundary-cbse-notes-class-5-maths/. They can also access the CBSE Class 5 Area and Its Boundary Notes while gearing up for their Board exams.

• Rectangle
Area of rectangle = Length x Breadth
Perimeter = 2 (length + breadth)

• Square
Area of square = Side x Side
Perimeter = 4 x Side

You can find the area of a rectangle or a square by splitting it into small boxes of 1 square cm. The area of an irregular shape can be estimated by counting the number of complete squares, half squares, and more-than-half squares it covers.
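The formulas above can be written as small functions for practice (a sketch; the function names are invented for this example):

```python
def rectangle_area(length, breadth):
    # Area of rectangle = Length x Breadth
    return length * breadth

def rectangle_perimeter(length, breadth):
    # Perimeter = 2 (length + breadth)
    return 2 * (length + breadth)

def square_area(side):
    # Area of square = Side x Side
    return side * side

def square_perimeter(side):
    # Perimeter = 4 x Side
    return 4 * side

# A 5 cm by 3 cm rectangle:
print(rectangle_area(5, 3))       # 15 (square cm)
print(rectangle_perimeter(5, 3))  # 16 (cm)
```

Note that area is measured in square units while the perimeter is a length, so the two results carry different units.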
RD Sharma Class 10 Solutions Chapter 8 Circles Ex 8.1

These solutions are part of RD Sharma Class 10 Solutions. Here we have given RD Sharma Class 10 Solutions Chapter 8 Circles Ex 8.1.

Question 1.
Fill in the blanks:
(i) The common point of a tangent and the circle is called ……….
(ii) A circle may have ………. parallel tangents.
(iii) A tangent to a circle intersects it in ……….. point(s).
(iv) A line intersecting a circle in two points is called a …………
(v) The angle between the tangent at a point on a circle and the radius through the point is ………..

(i) The common point of a tangent and the circle is called the point of contact.
(ii) A circle may have two parallel tangents.
(iii) A tangent to a circle intersects it in one point.
(iv) A line intersecting a circle in two points is called a secant.
(v) The angle between the tangent at a point on a circle and the radius through the point is 90°.

Question 2.
How many tangents can a circle have?
A circle can have infinitely many tangents.

Question 3.
O is the centre of a circle of radius 8 cm. The tangent at a point A on the circle cuts a line through O at B such that AB = 15 cm. Find OB.
Radius OA = 8 cm, ST is the tangent to the circle at A, and AB = 15 cm.
OA ⊥ tangent ST.
In right ∆OAB,
OB² = OA² + AB² (Pythagoras theorem)
= (8)² + (15)² = 64 + 225 = 289 = (17)²
OB = 17 cm

Question 4.
If the tangent at a point P to a circle with centre O cuts a line through O at Q such that PQ = 24 cm and OQ = 25 cm, find the radius of the circle.
OP is the radius and TS is the tangent to the circle at P; OQ is a line through O.
OP ⊥ tangent TS.
In right ∆OPQ,
OQ² = OP² + PQ² (Pythagoras theorem)
=> (25)² = OP² + (24)²
=> 625 = OP² + 576
=> OP² = 625 - 576 = 49 = (7)²
=> OP = 7 cm
Hence the radius of the circle is 7 cm.

We hope the RD Sharma Class 10 Solutions Chapter 8 Circles Ex 8.1 are helpful for completing your math homework.
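Questions 3 and 4 both rest on the same fact: the radius is perpendicular to the tangent at the point of contact, so Pythagoras' theorem applies in the resulting right triangle. A short numeric check (an illustrative sketch; the function names are invented here):

```python
import math

def distance_to_external_point(radius, tangent_length):
    """OB from OB^2 = OA^2 + AB^2 (radius perpendicular to tangent)."""
    return math.hypot(radius, tangent_length)

def radius_from_tangent(distance, tangent_length):
    """OP from OQ^2 = OP^2 + PQ^2, rearranged for the radius."""
    return math.sqrt(distance**2 - tangent_length**2)

print(distance_to_external_point(8, 15))  # 17.0  (Question 3)
print(radius_from_tangent(25, 24))        # 7.0   (Question 4)
```

Both examples use Pythagorean triples (8, 15, 17 and 7, 24, 25), which is why the answers come out as whole numbers.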
We consider the problem of computing the outer-radii of point sets. In this problem, we are given integers $n, d, k$ where $k \le d$, and a set $P$ of $n$ points in $R^d$. The goal is to compute the {\em outer $k$-radius} of $P$, denoted by $\kflatr(P)$, which is the minimum, over all $(d-k)$-dimensional …

Theory of Semidefinite Programming for Sensor Network Localization
We analyze the semidefinite programming (SDP) based model and method for the position estimation problem in sensor network localization and other Euclidean distance geometry applications. We use SDP duality and interior-point algorithm theories to prove that the SDP localizes any network or graph that has unique sensor positions fitting the given distance measures. Therefore, we …

Semidefinite Programming Based Algorithms for Sensor Network Localization
An SDP relaxation based method is developed to solve the localization problem in sensor networks using incomplete and inaccurate distance information. The problem is set up to find a set of sensor positions such that the given distance constraints are satisfied. The nonconvex constraints in the formulation are then relaxed in order to yield a semidefinite …

On complexity of the Shmoys-Swamy class of two-stage linear stochastic programming problems
We consider a class of two-stage linear stochastic programming problems, introduced by Shmoys and Swamy (2004), motivated by a relaxation of a stochastic set cover problem. We show that the sample size required to solve this problem by the sample average approximation (SAA) method with a relative accuracy $\kappa>0$ and confidence $1-\alpha$ is polynomial in …

A Note on Exchange Market Equilibria with Leontief's Utility: Freedom of Pricing Leads to Rationality
We extend the analysis of [27] to handle more general utility functions: piecewise linear functions, which include Leontief's utility. We show that the problem reduces to the general analytic center model discussed in [27]. Thus, the same linear programming complexity bound applies to approximating the Fisher equilibrium problem with these utilities. More importantly, we show …

A Path to the Arrow-Debreu Competitive Market Equilibrium
We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and $n$ players. Both have an arithmetic operation complexity bound of $O(n^4\log(1/\epsilon))$ for computing an $\epsilon$-equilibrium solution. If the problem data are rational numbers and their bit-length is $L$, then the bound to generate an …

Alternating projections on manifolds
We prove that if two smooth manifolds intersect transversally, then the method of alternating projections converges locally at a linear rate. We bound the speed of convergence in terms of the angle between the manifolds, which in turn we relate to the modulus of metric regularity for the intersection problem, a natural measure of conditioning. …

An efficient method to compute traffic assignment problems with elastic demands
The traffic assignment problem with elastic demands can be formulated as an optimization problem whose objective is the sum of a congestion function and a disutility function. We propose to use a variant of the Analytic Center Cutting Plane Method to solve this problem. We test the method on instances with different congestion functions (linear with …

Stability and Sensitivity Analysis for Optimal Control Problems with a First-order State Constraint having (nonessential) Touch Points
The paper deals with an optimal control problem with a scalar first-order state constraint and a scalar control. In the presence of (nonessential) touch points, the arc structure of the trajectory is not stable. We show how to perform a sensitivity analysis that predicts which touch points will, under a small perturbation, become inactive, remain touch …

A Proximal Point Algorithm with phi-Divergence to Quasiconvex Programming
We use the proximal point method with the phi-divergence given by phi(t) = t - log t - 1 for the minimization of quasiconvex functions subject to nonnegativity constraints. We establish that the sequence generated by our algorithm is well-defined in the sense that it exists and is not cyclical. Without any assumption of …
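The "Alternating projections on manifolds" abstract above concerns a classical scheme whose linear rate is governed by the angle between the two sets. A minimal sketch (my own illustration, not code from the paper): alternating projections between two lines through the origin in R^2, where each full sweep contracts the distance to the intersection by cos^2(theta).

```python
import math

def project_onto_line(x, u):
    # Orthogonal projection of x onto the line spanned by the unit vector u.
    dot = x[0] * u[0] + x[1] * u[1]
    return (dot * u[0], dot * u[1])

# Two lines through the origin at a 30-degree angle; they meet only at 0.
theta = math.radians(30)
u = (1.0, 0.0)
v = (math.cos(theta), math.sin(theta))

x = (2.0, 1.0)
norms = []
for _ in range(5):
    x = project_onto_line(x, u)   # project onto the first line
    x = project_onto_line(x, v)   # then onto the second
    norms.append(math.hypot(*x))  # distance to the intersection point, the origin

# Each full sweep shrinks the norm by exactly cos^2(theta) = 0.75:
# linear convergence at a rate set by the angle between the "manifolds".
ratios = [norms[i + 1] / norms[i] for i in range(len(norms) - 1)]
```

Lines are of course the simplest possible manifolds; the paper's point is that the same angle-controlled linear rate survives locally for smooth manifolds meeting transversally.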
Discrete and Continuous Models and Applied Computational Science (ISSN 2658-4670 / 2658-7149), Peoples' Friendship University of Russia named after Patrice Lumumba (RUDN University), vol. 30, no. 1 (2022), pp. 52-61. DOI: 10.22363/2658-4670-2022-30-1-52-61

Research Article: On the many-body problem with short-range interaction

Mark M. Gambaryan (PhD student, Department of Applied Probability and Informatics, RUDN University; ORCID 0000-0002-4650-4648) and Mikhail D. Malykh (Doctor of Physical and Mathematical Sciences, Department of Applied Probability and Informatics, RUDN University; Meshcheryakov Laboratory of Information Technologies, Joint Institute for Nuclear Research; ORCID 0000-0001-6541-6603)

Abstract: The classical problem of the interaction of charged particles is considered in the framework of the concept of short-range interaction. Difficulties in the mathematical description of short-range interaction are discussed: one must couple two models, a nonlinear dynamical system describing the motion of particles in a field, and a boundary value problem for a hyperbolic equation or Maxwell's equations describing the field. Attention is paid to the averaging procedure, that is, the transition from the positions of particles and their velocities to the charge and current densities. The problem is shown to contain several parameters; when they tend to zero in a strictly defined order, the model turns into the classical many-body problem. Following the Galerkin method, the problem is reduced to a dynamical system in which the equations describing the dynamics of the particles are supplemented with equations describing the oscillations of the field in a box. This simplification differs from the one leading to classical mechanics, and it is proposed as the simplest mathematical model of the many-body problem with short-range interaction: the equations of motion for the particles together with equations for the natural oscillations of the field in the box. The results of the first computer experiments with this short-range interaction model are presented. It is shown that the model is rich in conservation laws.

Keywords: many-body problem, Galerkin method, short-range interaction
How to conduct process time and process cost analysis? - PRIME BPM

One of the methods of cost reduction is to analyse the time associated with performing an activity or a complete process. There are two categories of time we must take note of: execution time per activity and delay time.

Execution time per activity: the time required to complete a single activity in a process. For example, recording customer data takes 2 minutes.

Delay time: the time lag associated with a task in the process. There are two kinds of delay:
• A delay in the process due to the unavailability of a third party, e.g. a system or person not being available.
• A delay in the process due to not prioritizing the activity, e.g. I received an order but did not act upon it immediately.

There are two informative measurement opportunities: the time per activity and the time of the complete process. Measuring time per activity answers the question of how much time we are spending on CVA, BVA or NVA activities. For example, suppose the process has 3 NVA activities:
NVA activity one = 2 minutes
NVA activity two = 1 minute
NVA activity three = 4 minutes
That means that in one process alone, 7 minutes of time is wasted.

The time measurement for the complete process is called Process Cycle Time. It is easily calculated by adding the time taken for all activities in the process to the delay time. Generally, process cycle time is expressed annually. To achieve this, the frequency of the process must be considered. For example, if the process cycle time is 30 minutes and its frequency is twice per week, then on an annual basis the organisation is spending 52 hours on this process. If 7 minutes of the 30 minutes are dedicated to NVA activities, on an annual basis the organisation is wasting 728 minutes, or over 12 hours, on non-value-adding activities! Process Efficiency, also known as Process Cycle Efficiency, signifies a level of performance that describes a process.
In short, an efficient process uses the lowest amount of inputs to create the greatest amount of outputs. Process Cycle Efficiency is a measurement, expressed as a number, that represents the proportion of value-adding time in a process. The higher the number, the more efficient the process. Process Cycle Efficiency is calculated by totalling the time of the value-adding activities (BVA and CVA) in the process and dividing it by the total Process Cycle Time, where process cycle time = process execution time + delay time. Process Cycle Efficiency is improved by decreasing the cycle time through the reduction of delay time. Let us look at a scenario: Process Cycle Time = 50 minutes; Execution time = 10 minutes. Hence, there is a delay time of 40 minutes in the process. Divide execution time by process cycle time: 10/50 = 0.2. To represent the decimal as a percentage, multiply it by 100. Therefore, the efficiency of this process is 20%. Hence, if we want the process to be 100% efficient, we need to remove the 40 minutes of delay time. As a reference, for transactional processes, that is anything non-manufacturing, an efficiency of 25% or above is acceptable. The blog series covered some of the many approaches to improve business processes within your organisation. It is important that before executing any of these, risks and costs are properly calculated to avoid failures and instead get excellent results from the improvement initiatives. Activity and Process Cost Analysis. Cost reduction is a critically important goal of every organisation. A decrease in operational costs signifies higher profit margins and a better organisational budget. It is also one of the key measures of success for Business Improvement programs. According to the PEX Network Biennial State of the Industry Report 2015, around 22.2% of professionals consider cost savings as the primary measure of success for their Process Improvement Programs.
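The efficiency and annualisation arithmetic above can be sketched in a few lines of Python (an illustration; the function names are mine, not from PRIME BPM):

```python
def process_cycle_efficiency(execution_minutes, delay_minutes):
    # Efficiency = execution (value-adding) time / total cycle time, as a percentage.
    cycle_time = execution_minutes + delay_minutes
    return execution_minutes / cycle_time * 100

def annual_cycle_hours(cycle_minutes, runs_per_week, weeks_per_year=52):
    # Annualised process cycle time in hours.
    return cycle_minutes * runs_per_week * weeks_per_year / 60

# Scenario from the text: 10 min execution + 40 min delay -> 20% efficient.
efficiency = process_cycle_efficiency(10, 40)

# A 30-minute process run twice a week -> 52 hours per year.
hours = annual_cycle_hours(30, 2)
```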
Analysis of cost is essential for crafting a reduction strategy, and process analysis is the foundation of it. A process can be analysed in either a quantitative or a qualitative way. The quantitative aspects of a business process analysis are usually classified as the following:
• Process Cost Analysis
• Process Time Analysis
• Process Efficiency Analysis
Through our blog series, we have already shared how you can do value analysis. In today's blog we are going to focus on Process Cost Analysis. Analysing the cost of a process is another way to identify cost reduction opportunities: what the cost saving could be if we were to slightly redesign the process. Role cost and overhead cost need to be considered to calculate process cost and identify cost reduction opportunities. A process model indicates the activities performed by respective roles. Identify the role(s) and source the annual cost from the Human Resources department. For example, role XYZ has an annual cost of $100,000. Secondly, the business overhead cost must be calculated. An overhead or overhead expense refers to an ongoing expense related to operating a business. Overhead costs include rent, accounting fees, taxes, telephone bills etc., but do not include labour costs, material costs and direct expenses. An overhead cost is usually expressed as a ratio, and this can be sourced from the finance department. For example, an overhead rate of 0.31 or 31 percent means that $0.31 in overhead costs is incurred for every $1 in direct labour costs. Hence, if the role cost is $100,000 per annum, the overhead cost is $31,000 per annum. Translate the overhead cost and the role cost into a per-minute rate. This can then be multiplied by the minutes it takes to do the activity; the result is a per-activity cost. Allocating a different role to an activity can significantly change the cost of the activity.
Following the Value Analysis described above, calculate the annual process execution cost of your process for:
• Customer Value Adding activities
• Business Value Adding activities
• Non-Value Adding activities
Non-value-adding activities represent waste in the organisation; hence these costs can be immediately removed to deliver cost savings. Process cost analysis is one of the approaches to analysing a business process. However, it is important that before executing any of the shared analysis techniques, risks and costs are properly calculated to avoid failures and instead get excellent results from the improvement initiatives. Additionally, the technique should be chosen based on the objective of the analysis, so choose carefully.
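The per-activity cost calculation described above can be sketched as follows. The working-time figure of 220 days x 7.5 hours is my illustrative assumption, not a number from the blog; substitute your organisation's own figure.

```python
def activity_cost(annual_role_cost, overhead_rate, activity_minutes,
                  working_minutes_per_year=220 * 7.5 * 60):
    # Loaded cost = role cost plus overhead; translate it to a per-minute
    # rate, then multiply by the minutes the activity takes.
    loaded_annual_cost = annual_role_cost * (1 + overhead_rate)
    cost_per_minute = loaded_annual_cost / working_minutes_per_year
    return cost_per_minute * activity_minutes

# Example from the text: a $100,000 role cost with a 0.31 overhead rate gives
# a loaded cost of $131,000 per annum; a 2-minute activity then costs ~$2.65
# under the assumed working year.
cost = activity_cost(100_000, 0.31, 2)
```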
Winbuzz | A Beginner Guide to Decimal Fractional, & American Odds Understanding Betting Odds: A Beginner’s Guide to Decimal, Fractional, and American Odds Betting odds are a fundamental aspect of sports betting, and understanding how they work is crucial for any bettor. Odds represent the likelihood of an event occurring and determine the potential payout of a bet. At Winbuzz, we provide a comprehensive guide to help beginners understand decimal, fractional, and American odds, enhancing their sports betting experience. What Are Betting Odds? Betting odds indicate the probability of an event occurring and help bettors determine how much they can win from a wager. Different regions use different formats to display odds, including decimal, fractional, and American odds. Each format represents the same probability but in a different way. 1. Decimal Odds: Commonly used in Europe, Canada, and Australia, decimal odds are straightforward and easy to understand. They represent the total payout, including the initial stake. 2. Fractional Odds: Popular in the UK and Ireland, fractional odds show the potential profit relative to the stake. They are often used in horse racing and other traditional betting markets. 3. American Odds: Predominantly used in the United States, American odds can be either positive or negative, indicating how much profit a bettor can make on a $100 stake or how much needs to be wagered to win $100. Example: If a football team has decimal odds of 2.50, fractional odds of 3/2, and American odds of +150, all these formats represent the same probability and potential payout. Understanding Decimal Odds Decimal odds are the most straightforward format to understand and calculate. They represent the total payout, including the initial stake. 1. Calculation: To calculate the potential payout, multiply the stake by the decimal odds. The result includes both the profit and the initial stake. 2. 
Example: If you bet ₹100 on a team with decimal odds of 2.50, the potential payout is ₹100 * 2.50 = ₹250. This means you will receive ₹250 if the bet is successful, including your ₹100 stake and ₹150 profit. 3. Advantages: Decimal odds are easy to understand and calculate, making them popular among beginners and experienced bettors alike. Example: A bet of ₹200 at decimal odds of 1.75 would result in a total payout of ₹200 * 1.75 = ₹350. Understanding Fractional Odds Fractional odds are traditionally used in the UK and Ireland and are commonly seen in horse racing. They represent the potential profit relative to the stake. 1. Calculation: Fractional odds are expressed as a fraction (e.g., 3/2). To calculate the potential profit, multiply the stake by the fraction. Add the initial stake to determine the total payout. 2. Example: If you bet ₹100 on a team with fractional odds of 3/2, the potential profit is ₹100 * (3/2) = ₹150. The total payout, including the initial stake, is ₹150 + ₹100 = ₹250. 3. Advantages: Fractional odds clearly show the profit relative to the stake, making it easy to compare different betting options. Example: A bet of ₹200 at fractional odds of 5/4 would result in a profit of ₹200 * (5/4) = ₹250 and a total payout of ₹450. Understanding American Odds American odds, also known as moneyline odds, are predominantly used in the United States. They can be either positive or negative, indicating different types of bets. 1. Positive Odds: Positive odds (e.g., +150) indicate the profit on a $100 stake. If the odds are +150, a $100 bet would yield $150 profit. 2. Negative Odds: Negative odds (e.g., -150) indicate the amount that needs to be wagered to win $100. If the odds are -150, a bet of $150 would yield a $100 profit. 3. Calculation: For positive odds, divide the odds by 100 and multiply by the stake. For negative odds, divide 100 by the odds and multiply by the stake. 
Example: A bet of ₹100 at +200 odds would result in a profit of ₹100 * (200/100) = ₹200 and a total payout of ₹300. A bet of ₹150 at -150 odds would result in a profit of ₹150 * (100/150) = ₹100 and a total payout of ₹250. Converting Odds Formats Understanding how to convert between different odds formats can be helpful, especially when comparing betting options across different regions. 1. Decimal to Fractional: To convert decimal odds to fractional, subtract 1 and convert the result to a fraction. For example, 2.50 decimal odds convert to 2.50 – 1 = 1.50 or 3/2 fractional odds. 2. Fractional to Decimal: To convert fractional odds to decimal, divide the fraction and add 1. For example, 3/2 fractional odds convert to (3/2) + 1 = 2.50 decimal odds. 3. Decimal to American: For decimal odds greater than 2.00, use (decimal odds – 1) * 100. For example, 2.50 decimal odds convert to (2.50 – 1) * 100 = +150 American odds. For decimal odds less than 2.00, use -100 / (decimal odds – 1). For example, 1.75 decimal odds convert to -100 / (1.75 – 1) = -133 American odds. Example: Converting 2.50 decimal odds to fractional results in 3/2, and to American results in +150. Practical Application Here’s a practical example of using different odds formats: 1. Decimal Odds: You bet ₹100 on a team with 2.00 odds. The potential payout is ₹100 * 2.00 = ₹200. 2. Fractional Odds: You bet ₹100 on a team with 1/1 odds. The potential profit is ₹100 * (1/1) = ₹100, and the total payout is ₹200. 3. American Odds: You bet ₹100 on a team with +100 odds. The potential profit is ₹100 * (100/100) = ₹100, and the total payout is ₹200. Example: For a bet with -200 American odds, you need to wager ₹200 to win ₹100, resulting in a total payout of ₹300. Understanding betting odds is crucial for any bettor looking to enhance their sports betting experience. By familiarizing yourself with decimal, fractional, and American odds, you can make more informed decisions and improve your chances of success. 
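The payout and conversion rules above translate into a few small functions (a hedged sketch; the function names are mine, using Python's `fractions` module; a bookmaker's own rounding conventions may differ):

```python
from fractions import Fraction

def decimal_payout(stake, decimal_odds):
    # Decimal odds: total payout including the stake.
    return stake * decimal_odds

def fractional_payout(stake, numerator, denominator):
    # Fractional odds: profit = stake * (num/den); the payout adds the stake back.
    return stake + stake * numerator / denominator

def decimal_to_fractional(decimal_odds):
    # Subtract 1, then express the profit per unit stake as a reduced fraction.
    return Fraction(decimal_odds - 1).limit_denominator(100)

def fractional_to_decimal(numerator, denominator):
    # Divide the fraction and add 1 to recover the total-payout multiplier.
    return numerator / denominator + 1

def decimal_to_american(decimal_odds):
    # d >= 2.00 maps to +(d - 1) * 100; d < 2.00 maps to -100 / (d - 1).
    if decimal_odds >= 2.0:
        return round((decimal_odds - 1) * 100)
    return round(-100 / (decimal_odds - 1))

# Worked examples from the text:
payout = decimal_payout(100, 2.50)           # Rs 250 total
profit_total = fractional_payout(100, 3, 2)  # Rs 250 total (Rs 150 profit)
frac = decimal_to_fractional(2.50)           # 3/2
amer_plus = decimal_to_american(2.50)        # +150
amer_minus = decimal_to_american(1.75)       # -133 (after rounding)
```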
Sign up for a Winbuzz ID and explore our wide range of betting options and resources. At Winbuzz Sports Betting, we prioritize your enjoyment and safety, providing a secure and user-friendly platform for all your betting needs. Enhance your betting knowledge and experience with Winbuzz, and make the most of your wagers by mastering betting odds.
Mathematics - Course Description - Faculty of Physical Sciences, University of Nigeria, Nsukka

MTH 111 Elementary Mathematics I 3 units
Elementary set theory: subsets, union, intersection, complements, Venn diagrams. Real numbers, integers, rational and irrational numbers, mathematical induction, real sequences and series, theory of quadratic equations, binomial theorem. Circular measure, trigonometric functions of angles of any magnitude, addition and factor formulae. Complex numbers, algebra of complex numbers, the Argand diagram, De Moivre's theorem, nth roots of unity.

MTH 121 Elementary Mathematics II 3 units
Functions of a real variable, graphs, limits and continuity. The derivative as a limit of rate of change. Techniques of differentiation. Curve sketching. Integration as the inverse of differentiation. Methods of integration, definite integrals. Application of integration to areas and volumes.

MTH 122 Elementary Mathematics III 3 units
Geometric representation of vectors in 1-3 dimensions. Components, direction cosines. Addition of vectors and multiplication of vectors by a scalar, linear independence. Scalar and vector products of two vectors. Differentiation and integration of vectors with respect to a scalar variable. Two-dimensional coordinate geometry: straight lines, circles, parabolas, ellipses, hyperbolas; tangents and normals. Kinematics of a particle. Components of velocity and acceleration of a particle moving in a plane. Force and momentum. Newton's laws of motion; motion under gravity, projectile motion, resisted vertical motion of a particle, elastic strings, motion of a simple pendulum, impulse and change of momentum. Impact of two smooth elastic spheres; direct and oblique impacts.
MTH 132 Elementary Mechanics I 3 units
Vectors: algebra of vectors; coplanar forces; their resolution into components, equilibrium conditions, moments and couples, parallel forces; friction; centroids and centres of gravity of particles and rigid bodies; equivalence of sets of coplanar forces. Kinematics and rectilinear motion of a particle, vertical motion under gravity, projection, relative motion. Dynamics of a particle. Newton's laws of motion; motion of connected particles.

MTH 201 Advanced Mathematics I 3 units
Mathematics and symbolic logic: inductive and deductive systems. Concepts of sets; mappings and transformations. Introduction to complex numbers. Introduction to vectors, matrices and determinants.

MTH 202 Advanced Mathematics II 3 units
Discrete and continuous variables. The equation of a straight line in various forms. The circle. Trigonometric functions; logarithmic functions; exponential functions. Maxima, minima and points of inflexion. Integral calculus: integration by substitution and by parts. Expansion of algebraic functions. Simple sequences and series.

MTH 203 Advanced Mathematics III 3 units
Matrices and determinants, introduction to linear programming and integer programming, sequences and series. Taylor's and Maclaurin's series. Vector calculus, line integrals and surface integrals. Gauss' (divergence), Green's and Stokes' theorems. Complex numbers and functions of a complex variable; conformal mapping; infinite series in the complex plane.

MTH 204 Advanced Mathematics IV 3 units
Translation and rotation of axes, space curves; applications of vector calculus to space curves; the Gaussian and mean curvatures, the geodesic and geodesic curvature. Differential equations: second-order ordinary differential equations and methods of solution. Partial differential equations: second-order partial differential equations and methods of solution.
MTH 205 Advanced Mathematics V 3 units
Translation and rotation of axes, plane geometry of lines, circles and other simple curves; lines in space; equations of the plane, space curves. The Gaussian and mean curvatures; the geodesic and geodesic curvature.

MTH 206 Advanced Mathematics VI 2 units
Complex analysis: elements of the algebra of complex variables; trigonometric, exponential and logarithmic functions. The number system; sequences and series. Vector differentiation and integration.

MTH 207 Advanced Mathematics VII 2 units
Elements of linear algebra. Calculus: elementary differentiation and relevant theorems. Differential equations: exact equations, methods of solution of second-order ordinary differential equations; partial differential equations, with applications.

MTH 208 Advanced Mathematics VIII 2 units
Numerical analysis: linear equations, non-linear equations; finite difference operators. Introduction to linear programming.

MTH 211 Sets, Logic and Algebra 3 units
Introduction to the language and concepts of modern mathematics. Topics include: basic set theory, mappings, relations, equivalence and other relations, Cartesian products. Binary logic, methods of proof. Binary operations, algebraic structures, semi-groups, rings, integral domains, fields. Homomorphisms. Number systems; properties of integers, rationals, real and complex numbers.

MTH 215 Linear Algebra I 2 units
Systems of linear equations. Matrices and algebra of matrices. Vector spaces over the real field. Subspaces, linear independence, bases and dimensions. Gram-Schmidt orthogonalization procedure. Linear transformations: range, null space and rank. Singular and non-singular transformations.

MTH 218 Three-Dimensional Analytic Geometry 2 units
Plane curves, parametric representations, length of a plane arc, lines in three-space, surfaces, cylinders, cylindrical and spherical coordinates, quadratic forms, quadrics and central quadrics.
MTH 216 Linear Algebra II 2 units
Representation of linear transformations by matrices, change of bases, equivalence and similarity. Determinants. Eigenvalues and eigenvectors. Minimum and characteristic polynomials of a linear transformation. Cayley-Hamilton theorem, bilinear and quadratic forms, orthogonal diagonalization. Canonical forms.

MTH 221 Real Analysis I 3 units
Bounds of real numbers, convergence of sequences of numbers. Monotone convergence of series. Absolute and conditional convergence of series, and rearrangements. Completeness of the reals and incompleteness of the rationals. Continuity and differentiability of functions. Rolle's and mean-value theorems for differentiable functions. Taylor series.

MTH 222 Elementary Differential Equations I 3 units
First-order ordinary differential equations. Existence and uniqueness of solutions. Second-order ordinary differential equations with constant coefficients. General theory of nth-order linear ordinary differential equations. The Laplace transform. Solution of initial- and boundary-value problems by the Laplace transform method. Simple treatment of partial differential equations in two independent variables. Applications of ordinary and partial differential equations to the physical, life and social sciences.

MTH 224 Introduction to Numerical Analysis 3 units
Solution of algebraic and transcendental equations. Curve fitting, error analysis. Interpolation, approximation, zeros of non-linear equations of one variable. Systems of linear equations. Numerical differentiation and integration. Numerical solution of initial-value problems for ordinary differential equations.

MTH 231 Elementary Mechanics II 2 units
Impulse and momentum, conservation of momentum; work, power and energy; the work-energy principle, conservation of mechanical energy. Direct and oblique impact of elastic bodies.
General motion of a particle in two dimensions, central orbits, motion in horizontal and vertical circles, simple harmonic motion, motion of a particle attached to a light inelastic spring or string. Motion of a rigid body about a fixed axis; moment of inertia calculations; perpendicular and parallel axes theorems, principal axes of inertia and directions. Conservation of energy. Compound pendulum. Conservation of angular momentum.

MTH 242 Mathematical Methods I 3 units
Real-valued functions of a real variable. Review of differentiation and integration and their applications. Mean-value theorem. Taylor series. Real-valued functions of two or three variables. Partial derivatives. Chain rule, extrema, Lagrange multipliers, increments, differentials and linear approximations. Evaluation of line integrals. Multiple integrals.

MTH 311 Abstract Algebra I 3 units
Groups: definition; examples, including permutation groups. Subgroups and cosets. Lagrange's theorem and applications. Cyclic groups. Normal subgroups and quotient groups. Homomorphisms, isomorphism theorems. Cayley's theorem. Direct products. Groups of small order. Groups acting on sets. Sylow theorems.

MTH 312 Abstract Algebra II 3 units
Rings: definition; examples, including Z, Z[n]; rings of polynomials and matrices, integral domains, fields, polynomial rings, factorization. Euclidean algorithm for polynomials, H.C.F. and L.C.M. of polynomials. Ideals and quotient rings, P.I.D.s, U.F.D.s, Euclidean rings. Irreducibility. Field extensions, degree of an extension, minimum polynomial. Algebraic and transcendental extensions. Straight-edge and compass constructions.

MTH 313 Geometry I 2 units
Coordinates in R^3. Polar coordinates; distance between points, surfaces and curves in space. The plane and straight line.

MTH 314 Geometry II 2 units
Introductory projective geometry. Affine and Euclidean geometries.
MTH 316 Differential Geometry 3 units
Concept of a curve; regular, differentiable and smooth curves; osculating, rectifying and normal planes; tangent lines, curvature, torsion, Frenet-Serret formulae; fundamental existence and uniqueness theorem; involutes, evolutes, spherical indicatrix, developable surfaces, ruled surfaces, curves on a surface, first and second fundamental forms, lines of curvature, umbilics, asymptotic curves, geodesics. Topological properties of simple surfaces.

MTH 321 Metric Space Topology 3 units
Sets, metrics and examples. Open spheres or balls. Open sets and neighbourhoods. Closed sets. Interior, exterior, frontier, limit points and closure of a set. Dense subsets and separable spaces. Convergence in metric spaces, homeomorphism. Continuity and compactness, connectedness.

MTH 327 Elements of Differential Equations II 3 units
Series solution of second-order differential equations. Sturm-Liouville problems. Orthogonal polynomials and functions. Fourier series, Fourier-Bessel and Fourier-Legendre series. Fourier transformation; solution of the Laplace, wave and heat equations by the Fourier method (separation of variables). Special functions: Gamma, Beta, Bessel, Legendre and hypergeometric.

MTH 323 Complex Analysis I 3 units
Functions of a complex variable: limits and continuity of functions of a complex variable. Derivation of the Cauchy-Riemann equations; bilinear transformations, conformal mapping, contour integrals. Cauchy's theorem and its main consequences. Convergence of sequences and series of functions of a complex variable. Power series. Taylor series.

MTH 324 Vector and Tensor Analysis 3 units
Vector algebra. The dot and cross products. Equations of curves and surfaces. Vector differentiation and applications. Gradient, divergence and curl. Vector integrals: line, surface and volume integrals. Green's, Stokes' and divergence theorems. Tensor products of vector spaces. Tensor algebra. Symmetry. Cartesian tensors and applications.
MTH 326 Real Analysis II 3 units
Riemann integral of a real function of a real variable; continuous monotone functions. Functions of bounded variation. The Riemann-Stieltjes integral. Pointwise and uniform convergence of sequences and series of functions from ℝ to ℝ. Effects on limits (sums) when the functions are continuously differentiable or Riemann integrable. Power series.

MTH 328 Complex Analysis II 3 units
Laurent expansions, isolated singularities and residues. The residue theorem, calculus of residues and application to the evaluation of integrals and to summation of series. Maximum modulus principle. Argument principle. Rouché's theorem. The fundamental theorem of algebra. Principle of analytic continuation. Multiple-valued functions and Riemann surfaces.

MTH 331 Introduction to Mathematical Modelling 3 units
Methodology of model building; identification, formulation and solution of problems; cause-effect diagrams. Equation types: algebraic, ordinary differential, partial differential, difference, integral and functional equations. Applications of mathematical models to physical, biological, social and behavioural sciences.

MTH 334 Special Theory of Relativity 4 units
Classical mechanics and principles of relativity; Einstein's postulates; interval between events; Lorentz transformation and its consequences; four-dimensional space-time; relativistic mechanics of a particle; Maxwell's theory in relativistic form. Optical phenomena.

MTH 335 Introduction to Operations Research 3 units
Phases of an operations research study. Classification of operations research models; linear, dynamic and integer programming. Decision theory. Inventory models. Critical path analysis and project controls.

MTH 336 Dynamics of a Rigid Body 3 units
General motion of a rigid body as a translation plus a rotation. Moments of inertia and products of inertia in three dimensions. Parallel and perpendicular axes theorems. Principal axes, angular momentum, kinetic energy of a rigid body.
Impulsive motion. Examples involving one- and two-dimensional motion of simple systems. Moving frames of reference; rotating and translating frames of reference. Coriolis force. Motion near the earth's surface. Foucault's pendulum. Euler's dynamical equations of motion of a rigid body with one point fixed. The symmetric top. Precessional motion.

MTH 337 Optimization Theory I 2 units
Linear programming models. The simplex method: formulation and theory, duality, integer programming; transportation problem. Two-person zero-sum games. Nonlinear programming. Quadratic programming.

MTH 338 Optimization Theory II 2 units
Kuhn-Tucker methods. Optimality criteria. Single-variable optimization. Multivariable techniques. Gradient methods.

MTH 339 Analytic Dynamics 3 units
Degrees of freedom. Holonomic and non-holonomic constraints. Generalized coordinates. Lagrange's equations of motion for holonomic systems; force dependent on coordinates only; force obtainable from a potential. Impulsive forces.

MTH 341 Discrete Mathematics I 2 units
Groups and subgroups, group axioms, permutation groups, cosets; graphs, directed and undirected graphs, subgraphs, cycles, connectivity. Applications (flow charts) and state-transition graphs.

MTH 342 Discrete Mathematics II 2 units
Lattices and Boolean algebra. Finite fields: minimal polynomials, irreducible polynomials, polynomial roots. Applications (error-correcting codes).

MTH 344 Numerical Analysis I 3 units
Polynomial and spline approximation. Orthogonal polynomials and Chebyshev approximation. Direct and iterative methods for the solution of systems of linear equations. Eigenvalue problem: power method, inverse power method, pivoting strategies.

MTH 412 Abstract Algebra III 3 units
Splitting fields. Separability. Algebraic closure. Solvable groups. Fundamental theorem of Galois theory. Solution by radicals. Definition and examples of modules, submodules and quotient modules. Isomorphism theorems. Theory of group representations.
MTH 421 Ordinary Differential Equations 3 units
Existence and uniqueness of solutions; dependence on initial conditions and on parameters; general theory for linear differential equations with constant coefficients. The two-point Sturm-Liouville boundary value problem; self-adjointness; Sturm theory; stability of solutions of nonlinear equations; phase-plane analysis. Floquet theory. Classification of integral equations: Volterra and Fredholm types. Reduction of ordinary differential equations to integral equations.

MTH 424 General Topology 3 units
Topological spaces: definition, open and closed sets, neighbourhoods. Coarser and finer topologies. Bases and sub-bases. Separation axioms, compactness, local compactness, connectedness. Construction of new topological spaces from given ones. Subspaces, quotient spaces, continuous functions, homeomorphisms, topological invariants, spaces of continuous functions. Pointwise and uniform convergence.

MTH 425 Lebesgue Measure and Integration 3 units
Lebesgue measure: measurable and non-measurable sets. Measurable functions. Lebesgue integral: integration of non-negative functions, the general integral, convergence theorems.

MTH 426 Measure Theory 4 units
Abstract L[p] spaces.

MTH 427 Field Theory in Mathematical Physics 3 units
Gradient, divergence and curl. Further treatment and application of the definitions of the differential operators. The integral definitions of gradient, divergence and curl. Line, surface and volume integrals. Green's, Gauss's and Stokes' theorems. Curvilinear coordinates. Simple notion of tensors. The use of tensor notation.

MTH 428 Partial Differential Equations 3 units
Partial differential equations in two independent variables with constant coefficients: the Cauchy problem for quasi-linear first-order partial differential equations in two independent variables; existence and uniqueness of solutions.
The Cauchy problem for the linear, second-order partial differential equation in two independent variables; existence and uniqueness of solutions; normal forms. Boundary- and initial-value problems for hyperbolic, elliptic and parabolic partial differential equations.

MTH 429 Functional Analysis 3 units
A survey of the classical theory of metric spaces, including Baire's category theorem, compactness, separability, isometries and completion; elements of Banach and Hilbert spaces; parallelogram law and polar identity in a Hilbert space H; the natural embeddings of normed linear spaces into the second dual, and of H onto H; properties of operators, including the open mapping and closed graph theorems; the spaces C(X), the sequence (Banach) spaces l[p]^n, l[p] and c (the space of convergent sequences).

MTH 432 General Theory of Relativity 3 units
Particles in a gravitational field: curvilinear coordinates, intervals. Covariant differentiation: Christoffel symbols and the metric tensor. The constant gravitational field. Rotation. The curvature tensor. The action function for the gravitational field. The energy-momentum tensor. Newton's laws. Motion in a centrally symmetric gravitational field. The energy-momentum pseudo-tensor. Gravitational waves. Gravitational fields at large distances from bodies. Isotropic space. Space-time metric in the closed and open isotropic models.

MTH 434 Elasticity 3 units
Stress and strain analysis, constitutive relations, equilibrium and compatibility equations, principles of minimum potential and complementary energy, principle of virtual work, variational formulation; extension, bending and torsion of beams; elastic waves.

MTH 436 Fluid Dynamics 3 units
Real and ideal fluids; differentiation following the motion of fluid particles. Equations of motion and continuity for incompressible inviscid fluids. Velocity potentials and Stokes' stream function. Bernoulli's equation with applications to flows along curved paths. Kinetic energy.
Sources, sinks and doublets in two- and three-dimensional flows; limiting streamlines. Images in rigid planes; streaming motion past bodies, including aerofoils.

MTH 437 Systems Theory 4 units
Existence, boundedness and periodicity of solutions of linear systems of differential equations with constant coefficients. Lyapunov theorems. Solution of the Lyapunov stability equation A^T P + P A = Q. Controllability and observability. Theorems on existence of solutions of linear systems of differential equations with constant coefficients.

MTH 438 Electromagnetism 3 units
Maxwell's field equations. Electromagnetic waves and the electromagnetic theory of light; plane electromagnetic waves in non-conducting media; reflection and refraction at a plane boundary. Waveguides and resonant cavities. Simple radiating systems. The Lorentz-Einstein transformation. Energy and momentum. Electromagnetic 4-vectors. Transformation of (E, H) fields. The Lorentz force.

MTH 439 Analytical Dynamics II 3 units
Lagrange's equations for non-holonomic systems. Lagrange's multipliers. Variational principles. Calculus of variations. Hamilton's principle. Lagrange's equations of motion from Hamilton's principle. Contact or canonical transformations. Normal modes of vibration. Hamilton-Jacobi equations for a dynamical system.

MTH 441 Mathematical Methods II 3 units
Calculus of variations: Lagrange's functional and associated density. Necessary condition for a weak relative extremum. Hamilton's principle. Lagrange's equations and geodesic problems. The Du Bois-Reymond equation and corner conditions. Variable end points and related theorems. Sufficient conditions for a minimum; isoperimetric problems. Variational integral transforms. Laplace, Fourier and Hankel transforms. Complex variable methods; convolution theorems; applications to solutions of differential equations with initial/boundary conditions.
CFrame Help (Rainbow Effect)

Hello! How can I create a cool effect like this: Notice how the Parts spawn, and then they go in a rainbow motion through the air until it reaches a specific location? (I know how to make them spawn in a random area, but for the purpose of this let's just say that it spawns in a specific area, and THEN does the rest.) One more thing… Those are PARTS. It isn't a beam, but individual parts.

Use a beam, this isn't really anything to do with CFrame

Those are parts… I have gone under them and such. They are actual parts

You can use bezier curves to draw something like this, however in your case the midpoint at the top of the arc should go "up" relative to the world and not "right" relative to the CFrame.lookAt from current position to target position.

Details on the method: so here is how it goes, I believe:

--percent1,percent2 = 0.25, 0.25 --1/4 of the length of the positionToLookAt vector
local function bezierCurvePoints(position, lookAt, upLength, percent1, percent2)
    local upVector = Vector3.new(0, 1, 0)
    local positionToLookAt = lookAt - position -- vector from position to lookAt/target point
    local unitDirection = positionToLookAt.Unit
    -- find the right vector relative to the world
    local rightVector = positionToLookAt:Cross(upVector).Unit
    -- find the new up vector
    local upVector = rightVector:Cross(unitDirection).Unit
    local midpoint = position + positionToLookAt/2
    -- upLength influences the height the arc goes "up"
    local midpointToTheSky = midpoint + upVector*upLength
    local p2 = midpointToTheSky - positionToLookAt*percent1
    local p3 = midpointToTheSky + positionToLookAt*percent2
    return p2, p3
end

Then apply the cubic bezier curve equation with p2 and p3:

function cubicBezier(t, p1, p2, p3, p4)
    return (1 - t)^3*p1 + 3*(1 - t)^2*t*p2 + 3*(1 - t)*t^2*p3 + t^3*p4
end

Another option is to use the quadratic projectile motion equation to model the arc; it should also be more physics-like if you are going for a more realistic approach.

Hey, I'm very confused, haha.
I would prefer CFrame

You could try tweening it. Try looking at this TweenService | Documentation - Roblox Creator Hub and check out some YouTube videos on tweening.

Use my (ok i guess you can't credit math but I kinda discovered it for myself when messing around with the vertex formula) formula for getting arc motion within specific parameters. Then have h be a random value with t being the distance between the start and the end. Alternatively you could also just use my Math++ library

I wonder what he's making this time

Ask me 2 months ago. Perhaps you'll find an answer
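For readers outside Roblox, the control-point construction from the thread can be sketched in plain Python, using tuples in place of Vector3 (the function names here are illustrative, not part of any Roblox API; the inner control points are lifted straight up in world space, as the thread suggests):

```python
def cubic_bezier(t, p0, p1, p2, p3):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u**2*t*b + 3*u*t**2*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def arc_control_points(start, target, up_length, percent=0.25):
    """Two inner control points above the midpoint of the start->target segment."""
    direction = tuple(b - a for a, b in zip(start, target))
    mid = tuple(a + d / 2 for a, d in zip(start, direction))
    sky = (mid[0], mid[1] + up_length, mid[2])   # +Y is "up" in world space
    inner1 = tuple(s - d * percent for s, d in zip(sky, direction))
    inner2 = tuple(s + d * percent for s, d in zip(sky, direction))
    return inner1, inner2

start, target = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
c1, c2 = arc_control_points(start, target, up_length=5.0)
# Sample the curve; in Roblox, each part would be CFramed to a sample point.
path = [cubic_bezier(i / 20, start, c1, c2, target) for i in range(21)]
```

Stepping `t` from 0 to 1 and placing one part per sample reproduces the arched trail of parts described in the question: the curve starts at `start`, peaks above the midpoint, and ends exactly at `target`.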
A CFA Level 1 Discussion About Change In Basis Points Clarification...

• Author
□ :: What's the rule to find the change in basis points in decimals for the duration calculation? D = (P if yields down – P if yields up) / (2 * initial price * change in yield in decimals). In one question, the change in basis points was 100 and the change in yield in decimals was said to be 0.001. In another, the basis point change was 60, and the answer in the formula used 0.006 as the change in yield in decimals. As far as I'm concerned, they didn't do the same conversion there… yet the answers work (I checked their calculations)… so what's the deal? How do I turn a basis point change into a decimal yield change in order to use that formula?
□ :: Current price of bond 108.00. A 10 bp increase makes the price 106.50; a 10 bp decrease makes the price 110.00. Portfolio value = $2 million. The expected change in the market value of this holding for a 100 basis point change in interest rates will be closest to: a. $124,000; b. $322,600; c. $645,200. B is the answer.
Second exercise: price now 92.733. If the price becomes 94.474 for a 60 bp drop and 91.041 for a 60 bp increase, the effective duration of the bond is: a. 1.85; b. 3.09; c. 6.17. B is the answer.
□ :: can you see the images? @graemea
□ :: oh … maybe they did 0.001 in the first exercise because the situation said TEN bp?? I thought they were using the HUNDRED bp that was in the actual question portion of the exercise…
□ :: I guess I'll type. First exercise calculation is: (110 – 106.5) / (2 * 108.5 * 0.001) = 16.13 = duration. Second exercise calculation is: (94.474 – 91.041) / (2 * 92.733 * 0.006) = 3.09. I did the exercises another way (not using this precise formula) and got the same answers, but if I were to use their formula, then I'm not sure what's happening.
□ :: it would mean that if it's 10 bp then you divide 10 by 10,000, or whichever bp — divide by 10,000 to get the decimal yield change??
□ :: Hey Lulu123, the first question seems to be erroneous because 100 bps is 1% or 0.01… so you're correct in that they didn't do the same conversion. Could you post the question so I can see and attempt to interpret what was done?
□ :: Yes that is correct Lulu123… your duration calculation is also correct: 16.13% * 1.0 * 2,000,000 = 322,600 … since the example didn't specify the direction of the rate change and all choices are negative… the answer is B … the 10 bps change had to be used in the denominator of your duration calculation since that was the change used to compute the prices in the numerator… the 100 bps change came into play when you were estimating the change in market value. 10 bps (or 0.1%) was the hypothetical change used to calculate the duration; 100 bps (or 1.0%) was the actual change used to estimate the change in the value of the portfolio.
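The resolution above can be checked numerically. This is a plain-Python sketch (not from any CFA material) of the effective-duration formula, using the numbers quoted in the two exercises:

```python
def effective_duration(p_down, p_up, p0, dy):
    """Effective duration; dy is the rate shock *in decimal* that produced
    p_down and p_up (10 bp -> 0.001, 60 bp -> 0.006)."""
    return (p_down - p_up) / (2.0 * p0 * dy)

# First exercise: prices came from a 10 bp shock; the resulting duration is
# then applied to the 100 bp move the question actually asks about.
d1 = effective_duration(110.0, 106.5, 108.5, 10 / 10_000)   # ~16.13
value_change = d1 * (100 / 10_000) * 2_000_000              # ~$322,600

# Second exercise: prices came from a 60 bp shock.
d2 = effective_duration(94.474, 91.041, 92.733, 60 / 10_000)  # ~3.09
```

The shock that produced the two prices always sets the denominator; the shock you are asked about only scales the final value estimate.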
Thermal Equilibrium and Nonequilibrium Heat Transfer in Porous Media

Porous materials have a growing range of applications thanks to their availability, cost, and special thermal properties. For example, foam materials are increasingly used in different aeronautical applications due to their excellent mechanical and thermal properties. Porous structures are also found in the batteries used in electric vehicles. We even find countless porous materials in nature, such as soils, rocks, and wood; and we take advantage of their thermal properties when we make use of them. The many industrial applications of porous materials require them to have optimized thermal properties.

Heat Transfer on the Microscopic Scale

Let's take a closer look at the heat transport in a porous structure on the microscopic level. As discussed in our previous blog post, we use these findings to verify and understand the flow equations at the macroscopic level. In that example, the flow is isothermal, so we do not investigate the implication of the pore geometry in the heat transport. Since the thermal properties of fluids might differ significantly from the properties of solids, the interaction of these is essential for understanding how heat transport in porous media works.

Temperature evolution of a cooled porous structure. The initial local nonequilibrium reaches thermal equilibrium over time.

Let's use the same example as shown in the previous blog post, and inject a fluid that is much warmer than the porous matrix. We observe that the temperatures of the porous matrix T_\textrm{s} and fluid T_\textrm{f} initially differ, and balance gradually over time. Of course, this depends on the boundary conditions and thermal properties of the fluid and solid. In many applications, the assumption T_\textrm{s}=T_\textrm{f} is valid and we speak of (local) thermal equilibrium, whereas in other applications, T_\textrm{s}\neq T_\textrm{f} applies and we speak of (local) thermal nonequilibrium.
The addition "local" refers to the pointwise comparison of the temperatures T_\textrm{f} and T_\textrm{s}.

Heat Transfer Under Thermal Equilibrium

We only need one equation to describe the average temperature of the whole (solid and fluid) porous structure under the local thermal equilibrium assumption. Based on the conservation of energy and by applying mixing rules, the following heat transport equation is obtained:

\left(\rho C_p\right)_\textrm{eff}\frac{\partial T}{\partial t}+\rho_\textrm{f} C_{p,\textrm{f}} \mathbf{u}\cdot\nabla T+\nabla\cdot(-k_\textrm{eff}\nabla T)=Q

It is no surprise that this is very similar to the well-known heat transfer equation. The thermal properties of the fluid and porous matrix are combined as effective properties: the effective volumetric heat capacity

\left(\rho C_p\right)_\textrm{eff}=\theta_\textrm{s}\rho_\textrm{s}C_{p,\textrm{s}}+\theta_\textrm{f}\rho_\textrm{f}C_{p,\textrm{f}}

and the effective thermal conductivity k_\textrm{eff}. The indices \textrm{f} and \textrm{s} stand for the fluid and solid, respectively; \rho is the density, C_p the heat capacity at constant pressure, and \theta_\textrm{s} the solid volume fraction. We assume a fully saturated porous medium, so the porosity corresponds to the fluid volume fraction, \theta_\textrm{f} = 1-\theta_\textrm{s}. For thermal conduction, the effective thermal conductivity k_\textrm{eff} depends on the structure of the porous medium as well as on the thermal conductivities of the solid and fluid. Three options to calculate the effective thermal conductivity k_\textrm{eff} are available in the software:

1. Volume average, which symbolically represents solid and fluid stripes in parallel to the heat flux, with k_\textrm{eff}=\theta_\textrm{s} k_\textrm{s} + \theta_\textrm{f} k_\textrm{f}

2.
Reciprocal average, for solid and fluid stripes perpendicular to the heat flux, with \frac{1}{k_\textrm{eff}}=\frac{\theta_\textrm{s}}{k_\textrm{s}} + \frac{\theta_\textrm{f}}{k_\textrm{f}}

3. Power law, for a random geometry with similar thermal conductivities for the solid and fluid, with k_\textrm{eff}=k_\textrm{s}^{\theta_\textrm{s}}\cdot k_\textrm{f}^{\theta_\textrm{f}}

We illustrate these three averaging techniques by using an artificial example of a porous material, and compare the results for the different options with those of the calculated value.

Comparison of the average temperature for different effective thermal conductivity options. From left to right: solid (gray) and fluid (blue) materials arranged in parallel, in series, and in a check pattern.

The heat flux is produced by a prescribed temperature difference between the upper and lower boundaries. The finer the structure, the better the approximation for the reciprocal average and power law. The true effective thermal conductivity ranges between the volume average and the reciprocal average, which are the upper and lower bounds according to the rule of mixtures. If convection is the dominant effect, the effect of the mixing rule for the thermal conductivity is less important.

A porous material can also consist of several solids and immobile fluids; e.g., a rock consisting of different minerals or trapped fluids. This can also be considered in the model, and the effective material properties are then calculated accordingly. For example, the volume average thermal conductivity of a porous matrix consisting of several different materials is calculated as the volume-fraction-weighted sum k_\textrm{eff}=\sum_i \theta_i k_i over all constituents i.

Thermal Dispersion

Thermal dispersion is another important effect related to the porous microstructure. Typically, for convection-dominated regimes, the fluid follows a swirled path at the pore scale, which enhances the heat exchange between solid and fluid phases.
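The three effective thermal conductivity options described above can be compared numerically. This short sketch uses arbitrary illustrative values (not those of the article's example) and shows that the three rules order themselves as the text describes, with the volume and reciprocal averages as upper and lower bounds:

```python
def k_volume_avg(k_s, k_f, theta_s):
    """Stripes parallel to the heat flux (upper bound)."""
    return theta_s * k_s + (1 - theta_s) * k_f

def k_reciprocal_avg(k_s, k_f, theta_s):
    """Stripes perpendicular to the heat flux (lower bound)."""
    return 1.0 / (theta_s / k_s + (1 - theta_s) / k_f)

def k_power_law(k_s, k_f, theta_s):
    """Geometric-mean rule for a random microstructure."""
    return k_s**theta_s * k_f**(1 - theta_s)

# Illustrative values: k_f ~ water (0.6 W/(m K)), k_s a moderately
# conductive matrix (2 W/(m K)), solid volume fraction 0.7.
k_s, k_f, theta_s = 2.0, 0.6, 0.7
k_upper = k_volume_avg(k_s, k_f, theta_s)       # 1.58 W/(m K)
k_lower = k_reciprocal_avg(k_s, k_f, theta_s)   # ~1.18 W/(m K)
k_mid = k_power_law(k_s, k_f, theta_s)          # lies between the bounds
```

The ordering k_lower ≤ k_mid ≤ k_upper is the harmonic-geometric-arithmetic mean inequality, which is why the true effective conductivity always sits between the reciprocal and volume averages.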
This is macroscopically described by an additional thermal conductivity contribution to the heat transfer equation (Eq. 1), k_\textrm{disp}=\rho_\textrm{f} C_{p,\textrm{f}} D_{ij}, where D_{ij} is the dispersion tensor due to the fast velocity field. We compare the results of the average temperature for the example shown in the previous blog post. The graph below shows the average temperature computed from the microscale approach, and the values obtained from the averaged macroscopic equations with and without thermal dispersion.

Comparison of the average temperature for the microscopic and macroscopic approaches. There is a better match when thermal dispersion is included.

Heat Transfer Under Local Thermal Nonequilibrium

As stated at the beginning of this blog post, local thermal equilibrium is not always attained. In particular, the difference between solid and fluid temperatures can be substantial for fast nonisothermal flows, short time scales, or when strong dependencies on additional effects (for instance, phase transitions) occur. Then, Eq. 1 is not sufficient: the energy balance for each phase must be considered separately, and the heat exchange between the two phases must be accounted for in an explicit way. This is accomplished with a two-temperature model. The Local Thermal Nonequilibrium approach (Ref.
1) solves for two temperature fields and couples them through a heat source/sink:

\theta_\textrm{s}\rho_\textrm{s} C_{p,\textrm{s}} \frac{\partial T_\textrm{s}}{\partial t} + \nabla\cdot(-\theta_\textrm{s} k_\textrm{s} \nabla T_\textrm{s}) = q_\textrm{sf}(T_\textrm{f}-T_\textrm{s})

\theta_\textrm{f}\rho_\textrm{f} C_{p,\textrm{f}} \frac{\partial T_\textrm{f}}{\partial t}+\rho_\textrm{f} C_{p,\textrm{f}}\mathbf{u}\cdot\nabla T_\textrm{f} + \nabla\cdot(-\theta_\textrm{f} k_\textrm{f} \nabla T_\textrm{f}) = q_\textrm{sf}(T_\textrm{s}-T_\textrm{f})

The heat exchange between the solid and fluid is accounted for by the term on the right-hand side, where q_\textrm{sf} (W/(m^3 K)) is the interstitial heat transfer coefficient that depends on the thermal properties of the phases as well as on the structure of the porous medium; more precisely, the specific surface area of contact.

An excellent example where nonequilibrium heat transfer is present is a thermal energy storage (TES) unit. This device works as follows: Water is heated up by a solar collector, and it circulates through a tank containing paraffin-filled capsules. During the charging period, the paraffin inside the capsules is heated above its melting temperature. As a result, the solar energy is stored in the form of sensible and latent heat, which allows more energy storage over longer periods.

Operating principle of the thermal storage unit and the Local Thermal Nonequilibrium multiphysics coupling node for the packed bed.

Paraffin, water, and average temperature in the tank during the charging period. The encapsulated paraffin takes a longer time to heat up as compared to water, so the time until the tank is fully charged cannot be estimated by the water temperature alone. The Local Thermal Nonequilibrium multiphysics interface provides the means to couple the heat transfer in water and paraffin.
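Stripping out conduction and convection leaves only the exchange term, and the relaxation toward local equilibrium can then be sketched with explicit time stepping in a zero-dimensional (lumped) version of the two-temperature system. All parameter values below are arbitrary illustrations, not those of the TES model:

```python
# Lumped two-temperature model: only the q_sf*(T_other - T) exchange acts.
theta_s, rho_s, cp_s = 0.6, 2000.0, 800.0    # solid fraction, density, heat capacity
theta_f, rho_f, cp_f = 0.4, 1000.0, 4200.0   # fluid fraction, density, heat capacity
q_sf = 5.0e4         # interstitial heat transfer coefficient, W/(m^3 K)

C_s = theta_s * rho_s * cp_s   # volumetric heat capacities, J/(m^3 K)
C_f = theta_f * rho_f * cp_f

T_s, T_f = 20.0, 80.0          # initial nonequilibrium: cold matrix, hot fluid
dt = 1.0                       # explicit Euler step, s (well below stability limit)
for _ in range(20_000):
    exchange = q_sf * (T_f - T_s)        # W/m^3 flowing from fluid to solid
    T_s, T_f = T_s + exchange / C_s * dt, T_f - exchange / C_f * dt

# Both fields relax to the capacity-weighted average, as in the first figure.
T_eq = (C_s * 20.0 + C_f * 80.0) / (C_s + C_f)
```

Each step moves the same amount of energy out of the fluid and into the solid, so total energy is conserved and the temperature difference decays exponentially, exactly the "initial local nonequilibrium reaches thermal equilibrium over time" behavior described at the start of the post.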
Final Remarks on Modeling Heat Transfer in Porous Media We have taken a closer look at the mechanism of heat transfer in porous media at microscopic and macroscopic levels. The macroscopic equations with effective thermal properties provide a very good approximation for modeling homogenized heat transfer in porous media. Complex calculations of pore-scale flow can also be included in the COMSOL Multiphysics® software. For example, you can compute heat transfer equations for the simulation of representative elementary volumes (REV) in order to obtain averaged values for large-scale, real-life applications. 1. D. A. Nield and A. Bejan, Convection in Porous Media, 4^th ed. Springer, 2013. Try It Yourself Try the packed bed latent heat storage model featured in this blog post by clicking the button below. Doing so will take you to the Application Gallery, where you can download the MPH-file. Comments (2) Shubhadeep Nag May 22, 2021 I would like to ask how to achieve a local hot zone in a nano porous material experimentally? and of what width and how high the temperature can be such that it does not destroy the crystal structure of the material? Achilles Ji November 30, 2021 I would like to ask how the temperature is extracted from?
An annualized rate of return is, essentially, the average return an investor receives over a given period, scaled down to a period of one year. We calculate the return over the period since inception and then perform a calculation to figure out the annualised figure, i.e. 100 x ((1 + R)^(1/N) – 1) gives the annualized percentage. Converting daily returns to annual returns simplifies with a basic equation, AR = ((DR + 1)^365 – 1) x 100. The same formula applies to various return periods.

What is a Rate of Return? · (($15 + $1 – $10) / $10) x 100 = 60% · 10 shares x ($1 annual dividend x 2) = $20 in dividends from 10 shares · 10 shares x $25 = $250. Multiply by 100 to find the percentage. For example, if the beginning value of your portfolio was $, Annualized returns are calculated by adding up "distributable cash" both from cash flow during the operation of your investment property and "equity" realized. For the second calculation, the average return is the total return of the entire period (for all returns involved) divided by the number of periods. The time. The annualized rate of return calculates the rate of return on investments by averaging returns on an annual basis. For investors with diverse portfolios, the. For a quarterly investment, the formula to calculate the annual rate of return is: Annual Rate of Return = [(1 + Quarterly Rate of Return)^4] – 1. The number 4. The basic idea is to compound the returns to an annual period. So, if we have monthly returns, we know that there are 12 months in the year; similarly there are. Average Return: If you add up all those returns and divide by the number of years, that's your average return. It's like finding the middle. The Annualized Return Calculator computes the annualized return of an investment held for a specified number of years. Do not enter $ in any field. If the calculation has a negative ROI percentage, that means the business -- or metric being measured -- owes more money than what is being earned. In short, if.
Annualised return can be calculated with the following formula: (End Value – Beginning Value)/Beginning Value * 100 * (1/holding period of the investment). Formula to calculate the annualised returns · You need to calculate the total return for the investment period. This is done by taking the investment's end. 5. Calculate the annualized return: Raise the growth factor to the power of (1 / holding period) and subtract 1 from the result. This will give you the. The most basic way to calculate rate of return is to measure the percentage change in an investment's value for a time period. The equation to derive this can. You can calculate your rate of return by month and then multiply the result by 12 to get your annual rate of return. Numerous calculators are available. Free return on investment (ROI) calculator that returns total ROI rate and annualized ROI using either actual dates of investment or simply investment. Future value of current investment · Enter a dollar value of an investment at the outset. · Input a starting year and an end year. · Enter an annual interest rate. The formula structure is =(PRODUCT(1+range_of_monthly_returns)^(12/n_months)) This method directly calculates the compounded return over a year. Large Table. At its simplest you need a starting value, an ending value, and the time elapsed in years. Then end value / start value is the return. [ Annual Return = (ending value / beginning value)^(1 / number of years) – 1 ] When we know the annual return but not the total return, we can calculate total. The annualized rate of return, also called the compound annual growth rate (CAGR), is how much your investments would have to grow each year to have the. The actual rate of return is largely dependent on the types of investments you select.
The Standard & Poor's® (S&P®) for the 10 years ending December 31st had an annual compounded rate of return of %, including reinvestment of dividends. Calculating the annualized rate of return needs only two variables: the returns for a given period and the time the investment was held. The formula for calculating rate of return is R = [(Ve – Vb) / Vb] x 100, where Ve is the end-of-period value and Vb is the beginning-of-period value. First, there is such a thing as an annual return, which is simply the portfolio return over 1 year. Then, we have the annualized return. Now this is calculated.
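The total-return and CAGR formulas quoted above can be checked in a few lines of Python; the dollar amounts here are illustrative, not taken from the snippets:

```python
def total_return_pct(begin, end):
    """Simple rate of return over the whole holding period, in percent:
    R = (Ve - Vb) / Vb * 100."""
    return (end - begin) / begin * 100

def annualized_return_pct(begin, end, years):
    """Compound annual growth rate (CAGR), in percent:
    ((end / begin)^(1 / years) - 1) * 100."""
    return ((end / begin) ** (1 / years) - 1) * 100

# $10,000 growing to $14,641 over 4 years is 46.41% in total,
# but exactly 10% compounded per year (since 1.1^4 = 1.4641).
r_total = total_return_pct(10_000, 14_641)
cagr = annualized_return_pct(10_000, 14_641, 4)
```

Note that the two figures answer different questions: naively averaging 46.41% over 4 years would suggest about 11.6% per year, while compounding correctly gives 10%, which is exactly the distinction between "average return" and "annualized return" drawn above.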
How do you factor \[{x^2} + 4\]?

Hint: We need to factorize \[{x^2} + 4\]. To factorize a number, the number is written as a product of its factors. For example, the number \[12\] is factored as \[12 = 2 \times 2 \times 3\]. The factorization of a polynomial is done in the same manner. This is done by first determining which terms are multiplied to obtain the given polynomial and then factorizing each term. The process is continued until no further simplification is possible.

Complete step by step solution:
The given term is \[{x^2} + 4\]. Here, \[{x^2} + 4\] is a binomial as there are only \[2\] terms. It is clear that in the binomial \[{x^2} + 4\] there are no common factors other than \[1\]. Since no factor with more than one term can be factored further, there are no real number factors. This concludes that the factors are prime over the reals.

The roots of a quadratic function depend on the discriminant \[{b^2} - 4ac\]: if the discriminant is \[0\], the roots are real and equal; if it is positive, the roots are real and distinct (and rational when the discriminant is a perfect square); if it is negative, the roots are complex.

Calculate the discriminant of \[{x^2} + 4\] as,
\[ \Rightarrow {b^2} - 4ac = {0^2} - 4\left( 1 \right)\left( 4 \right)\]
\[ \Rightarrow {b^2} - 4ac = - 16\]
The discriminant is negative, which means that the roots are complex. Then, the factorization of \[{x^2} + 4\] is done as,
\[ \Rightarrow {x^2} + 4 = {x^2} - \left( { - 4} \right)\]
\[ \Rightarrow {x^2} + 4 = {x^2} - {\left( {2i} \right)^2}\]
Here, \[i\] is the imaginary unit and its value is \[i = \sqrt { - 1} \].
Consider the algebraic identity \[\left( {a + b} \right)\left( {a - b} \right) = {a^2} - {b^2}\].
Then, the binomial is,
\[ \Rightarrow {x^2} + 4 = \left( {x - 2i} \right)\left( {x + 2i} \right)\]
Thus, the factored form is \[\left( {x - 2i} \right)\left( {x + 2i} \right)\].
The quadratic equation is the equation that is of the standard form \[a{x^2} + bx + c\].
Here, a and b are the coefficients and c are the constant. In the general equation, the highest power of the x is \[2\] so the equation is called quadratic.
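The factorization \[\left( {x - 2i} \right)\left( {x + 2i} \right)\] can be sanity-checked numerically. The sketch below uses only Python's built-in complex type (no symbolic algebra library) to confirm the discriminant, the roots ±2i, and that the product of the factors agrees with the original binomial:

```python
# Numerical check of x^2 + 4 = (x - 2i)(x + 2i) using Python's complex type.
a, b, c = 1, 0, 4                 # coefficients of x^2 + 0x + 4
disc = b**2 - 4*a*c               # -16: negative, so the roots are complex

roots = (2j, -2j)                 # x = ±(sqrt(-disc)/2)i = ±2i
for r in roots:
    assert abs(r**2 + 4) < 1e-12  # each root satisfies x^2 + 4 = 0

# The product (x - 2i)(x + 2i) agrees with x^2 + 4 at sample points
for x in (-3.0, 0.5, 2.0):
    assert abs((x - 2j) * (x + 2j) - (x**2 + 4)) < 1e-12
```

A symbolic check with a computer algebra system would reach the same conclusion; the numerical version is enough to catch a sign error in the factors.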
How long is a semi-Olympic pool?

This type of swimming pool is used in the Olympic Games, where the race course is 50 metres (164.0 ft) in length, typically referred to as "long course", distinguishing it from "short course", which applies to competitions in pools that are 25 metres (82.0 ft) in length.

What is a semi-Olympic swimming pool?
The term semi-Olympic swimming pool identifies a pool that is 25 meters (27.34 yd) in length. The term is also often included in meet names when the meet is conducted in a short course pool.

What are the dimensions of an Olympic-size swimming pool?
They are 50 meters long, 25 meters wide, and 2 meters deep. In terms of volume, when full, these pools hold 2.5 million liters of water, or about 660,000 gallons. If you used a normal garden hose to fill one of these pools, it would take roughly 19 days to get it full.

How long is a 25 yard pool?
A 25 yard pool should be built 75 feet 1 and 3/16 inches long. This measurement has zero tolerance for being shorter and should not be more than a fraction of an inch longer. Long course is 50 meters.

How big is an Olympic-size swimming pool?
An Olympic swimming pool measures 50 m × 25 m (164 ft × 82 ft), built to the FR3 minimum. Expressed as a unit of volume, 1 Olympic-size swimming pool equals 2,500,000 liters (1 cubic meter = 1,000 liters).

What's the difference between a semi-Olympic pool and an Olympic pool?
I wasn't familiar with the term "semi-Olympic" pool. Apparently, the main difference is the size: a semi-Olympic pool is 25 m by 12.5 m, while an Olympic pool is 50 m by 25 m, so a semi-Olympic pool is 1/4 the size.

How big is a 25 meter swimming pool?
To differentiate between pool sizes for swimming times, the following descriptions and abbreviations are used: long course meters (LCM) for 50 meter pools, short course yards (SCY) for 25 yard pools, and short course meters (SCM) for 25 meter pools.

How tall is the starting platform?
Long course and short course meters: the front edge of the starting platform shall be no less than 0.50 meters (1 foot 8 inches) nor more than 0.75 meters (2 feet 5 and ½ inches) above the surface of the water.
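The volume figures quoted above can be reproduced with a few lines of arithmetic. The garden-hose flow rate of 25 US gallons per minute below is an assumption chosen for illustration; it lands close to the quoted "roughly 19 days" fill time:

```python
# Volume of a full-size Olympic pool (50 m x 25 m x 2 m minimum depth).
length_m, width_m, depth_m = 50, 25, 2

volume_m3 = length_m * width_m * depth_m        # 2500 cubic metres
volume_liters = volume_m3 * 1000                # 2,500,000 L
volume_gallons = volume_liters / 3.785411784    # ~660,000 US gallons

# Fill time with a garden hose; 25 gal/min is an assumed typical rate.
hose_gpm = 25
fill_days = volume_gallons / hose_gpm / 60 / 24  # roughly 18-19 days
```

Note the semi-Olympic pool (25 m × 12.5 m) has a quarter of the surface area, so at equal depth it holds a quarter of the water.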
Mathematics Seminar - 11/12/21
Nov 12, 3:00 pm
Seongjai Kim, Professor, Department of Mathematics & Statistics, Mississippi State University
Mathematics Seminar Series

Mathematics of Digital Image Interpolation
Location: Digital

Digital zooming is a method of magnifying the size of digital photographic or video images. It is usually accomplished using interpolation methods, with no adjustment of the camera's optics. However, the resulting images hardly gain optical resolution and may involve interpolation artifacts such as ringing (aliasing), blurring, and image halo. Various interpolation methods have been proposed to minimize interpolation artifacts, particularly by avoiding interpolation evaluation across the edges.

This talk will begin with basic principles of interpolation. We will then consider mathematical image interpolation methods such as the curvature interpolation methods (CIMs) and the sharp edge-enhancing diffeomorphism (SEED), which outperform state-of-the-art interpolation methods. These mathematical image interpolation methods minimize interpolation artifacts and also enhance the optical resolution (super-resolution), by avoiding interpolation evaluation across the edges and sharpening the image in the normal direction of the edges. Various numerical examples will be shown to verify these claims.
The class Triangulation_conformer_2 is an auxiliary class of Delaunay_mesher_2<CDT>. It refines a constrained Delaunay triangulation into a conforming Delaunay or conforming Gabriel triangulation. For standard needs, consider using the global functions make_conforming_Gabriel_2() and make_conforming_Delaunay_2().

Template Parameters
CDT must be a 2D constrained Delaunay triangulation, and its geometric traits class must be a model of the concept ConformingDelaunayTriangulationTraits_2.

Using This Class
The constructor of the class Triangulation_conformer_2 takes a reference to a CDT as an argument. A call to the method make_conforming_Delaunay() or make_conforming_Gabriel() will refine this constrained Delaunay triangulation into a conforming Delaunay or conforming Gabriel triangulation. Note that if, during the lifetime of the Triangulation_conformer_2 object, the triangulation is externally modified, any further call to its member methods may lead to undefined behavior. Consider constructing a new Triangulation_conformer_2 object if the triangulation has been modified.

The conforming methods insert points into constrained edges, thereby splitting them into several sub-constraints. You have access to the initially inserted constraints if you instantiate the template parameter with a Constrained_triangulation_plus_2<CDT>.

For debugging or demos, the Triangulation_conformer_2 class also allows the conforming algorithm to be played step by step, using the methods below. They exist in two versions, depending on whether you want the triangulation to be conforming Delaunay or conforming Gabriel, respectively. Any call to a step_by_step_conforming_XX function requires a previous call to the corresponding function init_XX, and Gabriel and Delaunay methods cannot be mixed between two calls of init_XX.

void init_Delaunay ()
Must be called after all points and constrained segments are inserted and before any call to the following methods.

bool step_by_step_conforming_Delaunay ()
Applies one step of the algorithm, by inserting one point, if the algorithm is not done.

void init_Gabriel ()
Analog of init_Delaunay() for Gabriel conforming.

bool step_by_step_conforming_Gabriel ()
Analog of step_by_step_conforming_Delaunay() for Gabriel conforming.

bool is_conforming_done ()
Tests whether the step-by-step conforming algorithm is done.