sustainability – earfluff and eyecandy
N.B. I updated this page on 2023 04 05 based on new information from our suppliers…
We have two cars. One is a fully-electric car, and the other is a diesel.
Originally, the plan we had with our electricity supplier for the electric car was a flat fee per month, and an “all you can eat” plan. This made the choice of which car to drive a no-brainer: take
the electric car whenever possible.
However, due to the rising price of energy, our supplier is changing their plan to a new pricing structure. The new price will be
799 DKK per month flat fee + kWh * (average electrical price – 0.89)
The reasoning behind this pricing is explained on their website – I won’t bother getting into that.
Note that they define the “average electrical price” as the average monthly price for both DK1 and DK2 (Denmark is split into two regions for electricity prices). The calculation is done on a
charge-by-charge basis, where the month that’s chosen for the calculation is the month when you unplug the cable at the end of charging your car.
Our problem is that it made the decision of which car to drive (looking at it from a purely economic point of view) complicated. If we park the electric car, it still costs us 799 DKK / month + the
price of diesel in the other car. On the other hand, if we drive the electric car, it costs us something that’s difficult to calculate when you’re heading out to the car in the morning with only one
cup of coffee in you…
One thing that makes it even more complicated is the fact that, if we charge the electric car at home, we first pay our normal electricity supplier for the power we used, and we then get reimbursed
by the electricity supplier for the electric car by some amount per kWh.
The way the electricity supplier for the electric car calculates this reimbursement is also complicated: They use the average monthly electricity price between 11:00 p.m. and 6:00 a.m. including
charges. That number changes but it’s currently defaulting to 1.33 DKK / kWh on this page – look for the “Tilbagebetalingssats” amount in the sidebar on the right called “Tilbagebetaling”. (Note
that this value is difficult if not impossible to determine using the NordPool information. The webpage linked above calculates it from the “forventet indkøbspris” that you can change yourself on
their calculator.)
It turned out that figuring out this problem was the most interesting math that I did this week. I ran the calculations first in Matlab, and then duplicated them in Excel (for compatibility’s sake)
to find out how to deal with this.
The variables are:
• Electrical supplier for the electric car:
□ Flat monthly rate for our subscription
□ The amount that they subtract from the average Danish price, per kWh for charging the car (currently 0.89 DKK)
□ The amount that they pay us back to cover a portion of the electrical costs when we charge the car at home
• The price we pay for electricity for the house
• Average electricity price in DKK / MWh
(available from this page. Select the DK1 and DK2 prices for the month of interest. The Excel spreadsheet finds the average of those two values and adds 25% tax; the result is shown at the bottom, in cell B17, in DKK/kWh.)
• Fossil fuel Price in DKK/litre (in my case, that’s diesel)
• Consumption of the two cars
□ Average consumption of the electric car in kWh/100 km
□ Average consumption of the fossil-fueled car in litres/100 km
• Total number of km driven per month
The result is two plots:
• The one on the left shows the price of driving each car individually, based on the total number of km driven in the month, as a function of how many of those km are driven in the electric car.
□ The green line shows the cost of driving the electric car if we charge it at a station away from home
□ The red line shows the cost of driving the electric car if we charge it at home
□ The black line shows the cost of driving the fossil-fuel car
• The one on the right shows our total price, as a function of how many of the total number of km driven are driven in the electric car.
So, as you can see in the plots above, at the current prices, and using the average consumption values for our two cars, the more we drive the electric car, the more money we save, and we’ll save a
lot more money if we don’t charge at home.
Looking at the plot on the right, if we park the electric car (0 km on the X-axis) we’ll spend about 2700 DKK per month. If we only drive the electric car (2000 km on the X-axis) and charge away from
home at charging stations, then we’ll spend less than 1000 DKK (green line on the right-hand plot). Quite a savings! If we charge at home, we’ll spend about 2200 DKK (red line on the right-hand plot)
– still cheaper than the diesel, but more than double the price of NOT charging at home.
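For anyone who wants to replicate the comparison without Matlab or Excel, the arithmetic described above can be sketched in a few lines of Python. Every numeric value below (consumption figures, prices, the function name itself) is a placeholder assumption for illustration, not an actual rate.

```python
# Sketch of the monthly-cost arithmetic described above.
# Every numeric value here is a placeholder assumption, not an actual rate.

FLAT_FEE_DKK = 799.0               # EV subscription, per month
DISCOUNT_DKK_PER_KWH = 0.89        # subtracted from the average Danish price
HOME_REFUND_DKK_PER_KWH = 1.33     # "Tilbagebetalingssats" (varies)

def monthly_cost(km_ev, km_total, avg_price_kwh, house_price_kwh,
                 diesel_price_l, ev_kwh_per_100km=18.0,
                 diesel_l_per_100km=5.5, charge_at_home=True):
    """Total monthly cost in DKK when km_ev of km_total are driven electric."""
    kwh = km_ev / 100.0 * ev_kwh_per_100km
    ev_cost = FLAT_FEE_DKK + kwh * (avg_price_kwh - DISCOUNT_DKK_PER_KWH)
    if charge_at_home:
        # pay the house supplier first, then get partially refunded per kWh
        ev_cost += kwh * (house_price_kwh - HOME_REFUND_DKK_PER_KWH)
    diesel_km = km_total - km_ev
    diesel_cost = diesel_km / 100.0 * diesel_l_per_100km * diesel_price_l
    return ev_cost + diesel_cost

# Parking the EV entirely still costs the flat fee plus all-diesel driving:
print(monthly_cost(0, 2000, avg_price_kwh=2.0, house_price_kwh=2.5,
                   diesel_price_l=12.0))  # 2119.0
```

Plug in your own consumption and price numbers; the defaults here are only there to make the function runnable.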
In case you are in the same position as we are, and the little Excel calculator I made might be useful, you can download it here. However, I make no promises about its reliability. Don’t send me an
email because I screwed up the math – fix it yourself. :-)
2023 05 19 update: We switched to “spot pricing” for the house electricity. So, this calculation has become dependent on the time of day when we charge the car. As a result, I’ve given up trying to
understand it…
Advantages of Polyphase System | Electrical Academia
The two-phase alternator in Figure 1(a) has two identical loops mounted on the same rotor. Since both loops have the same number of turns and rotate at the same angular velocity, the voltages induced
in them have the same magnitude and frequency.
Loop A is mounted on the rotor 90° ahead (in the direction of rotation) of loop B. Consequently, the voltage in loop A always leads the voltage in loop B by 90°. By contrast, a single-phase circuit has only one source of alternating current.
Figure 1 Simple two-phase alternator
In Figure 2, identical loads are connected to the two windings of the two-phase alternator. It is customary to represent each alternator winding with a coil rather than using a generator symbol. In
Figure 2, the coils are at right angles to each other to indicate that their voltages are 90° out of phase.
Although each winding can be connected independently to its load as in Figure 2(a), two-phase circuits usually have a common or neutral lead as in Figure 2(b). This arrangement reduces the number of
conductors needed from four to three.
The neutral current is the phasor sum of the two load currents. If the two loads are identical, the neutral current is only $\sqrt{2}$, or 1.414, times the current in each of the other conductors, as shown in the phasor diagram of Figure 3. With two identical 120-V loads, Line A and Line B each carry a current of 1 A and the neutral carries a current of 1.414 A. The total of these three currents is 3.414 A. The equivalent single-phase system supplying the two 120-V loads in parallel has two conductors that each carry 2 A, making a total of 4 A.
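The 1.414 A figure is simply the phasor sum of two equal currents 90° apart, which is easy to verify numerically. This is an illustrative recomputation, not part of the original article:

```python
# Verifying the neutral-current claim: two 1 A load currents 90° apart
# sum, as phasors, to sqrt(2) ~= 1.414 A in the neutral conductor.
import cmath
import math

i_a = cmath.rect(1.0, 0.0)            # 1 A at 0 degrees (Line A)
i_b = cmath.rect(1.0, -math.pi / 2)   # 1 A at -90 degrees (Line B lags A)
i_neutral = i_a + i_b                 # phasor sum in the common return

print(abs(i_neutral))  # ≈ 1.414
```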
Figure 2 Simple two-phase system
A polyphase system needs less copper than a single-phase system to supply a given power at a given voltage.
Figure 3 Phasor diagram for the neutral current in a simple two-phase system
Figure 4 shows the instantaneous power to the two identical loads in Figure 2. The total instantaneous power supplied by the alternator at any instant is the sum of the instantaneous power to the two
loads. Since the two voltages are 90° out of phase, the instantaneous power to one load is greatest when the power to other load is zero.
If we check carefully, we find that the sum of the two instantaneous powers is the same at every instant. A constant power is an important advantage for large machines since it allows a steady
conversion of mechanical energy into electric energy.
Figure 4 Instantaneous power in a balanced two-phase system
The total instantaneous power of a polyphase source is constant if the load on each phase of it is identical.
When the two-phase alternator is connected to a set of perpendicular coils as shown in Figure 5(a), each coil passes a sine wave of current, but the current in the B coils is 90° out of phase with
the current in the A coils. The magnetic flux produced by the two currents at any instant depends on the magnitude of the two instantaneous currents.
At 0 rad, the current in the B coils is zero and the current in the A coils is at its maximum in the positive direction. The direction of the resulting magnetic field is indicated by the arrow in
Figure 5(b).
At π/4 rad, the coils all pass a current of 0.707 I_m. Therefore, the total flux is the phasor sum of two perpendicular components with equal magnitudes. This total flux has the same density as at 0 rad, but its direction has changed as shown in Figure 5(c).
At π/2 rad, i_A is zero and i_B is at its positive maximum. The magnitude of the flux density is still the same, but the direction of the magnetic field, as shown in Figure 5(d), is now at right angles to its original direction at the start of the cycle.
Calculating the magnetic flux phasors through the rest of the cycle shows that the coil arrangement of Figure 5(a) produces a rotating magnetic field with a constant magnitude.
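The flux calculation described above can be checked at a few instants of the cycle. This is an illustrative recomputation with the peak flux normalized to 1, not taken from the article:

```python
# Two perpendicular coil sets carrying currents 90 degrees out of phase
# produce flux components proportional to cos(wt) and sin(wt); their
# vector sum has constant magnitude and rotates with the supply.
import math

Im = 1.0  # peak current / peak flux, normalized
for wt_deg in (0, 45, 90, 135, 180):
    wt = math.radians(wt_deg)
    bx = Im * math.cos(wt)            # contribution of the A coils
    by = Im * math.sin(wt)            # contribution of the B coils
    magnitude = math.hypot(bx, by)    # stays equal to Im at every instant
    angle = math.degrees(math.atan2(by, bx))
    print(f"wt = {wt_deg:3d} deg: |B| = {magnitude:.3f}, angle = {angle:.0f} deg")
```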
Figure 5 Producing a rotating magnetic field in a two-phase system
A polyphase source can develop a magnetic field that has a constant flux density and rotates at the frequency of the applied sine wave.
A compass needle placed in the center of the coils in Figure 5(a) will rotate with the magnetic field at a synchronous speed. Rotating magnetic fields greatly simplify AC motor construction.
A single-phase system produces a magnetic field that does not rotate. Instead, this magnetic field has a varying magnitude and reverses its direction each 180°.
• A two-phase alternator with two coils rotating 90° apart in a magnetic field produces two voltage sine waves, one of which leads the other by 90°.
• The instantaneous power output of a two-phase alternator with identical loads in each phase is constant.
• A set of perpendicular coils connected to a two-phase alternator produces a magnetic field with a constant flux density rotating at the frequency of the applied voltage.
The Information Community of the Arctic in Russia: Evaluation of the Expenses for the IT Projects Development, Characteristics of the Labor Costs Calculating
Ivan V. Evdokimov, Alexander S. Khaluimov, Nikita V. Sokolov & Sergey E. Golokhvastov
In the Arctic conditions of northern Siberia, the IT-industry represents an important platform for providing globally competitive employment. Hence, evaluation of the expenses related to
IT-development is a highly important question for the information community of the Arctic. Nowadays, software solutions provided by the 1C company are leading in the fields of public administration,
municipal board and business in the aforementioned region. Adequate assessment of cost and development time plays an important role in software development. In the field of information technology (IT), specialists often use different metrics based on software functionality – function-oriented metrics. The models used for evaluation contain a number of parameters. Each of these
parameters has a special coefficient, which is based on the company standard. Their values have a direct impact on the software developing cost calculation. Among all of the functionally-oriented
assessing methods we can give a special credit to the Function Points (FP) method. The basis of its use is the correlation of parameters of future programming with tables which include special
coefficients. To calculate the number of function points, the cost, and the time of IT project development we use special formulas which are based on varieties of the COCOMO model and FP-tables. A
special feature of the FP method is a table including coefficients of the empirical complexity for each programming language and IDE, based on the number of operators for one function point.
Consequently, this method allows us to estimate the value of the product development not only in terms of its functionality, but also in terms of applied tools. Thus, the subject of this research
will be the definition of the value factors which are used to calculate the FP-evaluations on the 1C v.8.3 platform. It will be based on statistical analysis of several regional IT projects. To
improve the adequacy of FP-models, we will consider stakeholders of the 1C-based IT-projects as objects of our research. Recent software engineering developments allow us to move away from clichés
about the High North, which has been considered only as a supplier of natural resources for many years.
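As a rough illustration of the kind of estimate the FP/COCOMO approach in the abstract produces: the coefficients below are the textbook basic-COCOMO "organic mode" values, and the statements-per-function-point figure is a placeholder assumption, not a value calibrated for the 1C v.8.3 platform in this paper.

```python
# Illustrative basic-COCOMO effort estimate driven by a function-point count.
# A = 2.4, B = 1.05 are the published "organic mode" coefficients;
# LOC_PER_FP is an assumed statements-per-function-point figure, not a
# calibrated value for the 1C v.8.3 platform.

A, B = 2.4, 1.05
LOC_PER_FP = 40

def effort_person_months(function_points):
    kloc = function_points * LOC_PER_FP / 1000.0  # thousands of statements
    return A * kloc ** B

print(round(effort_person_months(250), 1))  # effort estimate for a 250-FP project
```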
6.4.17. Impredicative polymorphism
Implies: RankNTypes
Since: 9.2.1 (unreliable in 6.10 - 9.0)
Allow impredicative polymorphic types.
In general, GHC will only instantiate a polymorphic function at a monomorphic type (one with no foralls). For example,
runST :: (forall s. ST s a) -> a
id :: forall b. b -> b
foo = id runST -- Rejected
The definition of foo is rejected because one would have to instantiate id’s type with b := (forall s. ST s a) -> a, and that is not allowed. Instantiating polymorphic type variables with polymorphic
types is called impredicative polymorphism.
GHC has robust support for impredicative polymorphism, enabled with ImpredicativeTypes, using the so-called Quick Look inference algorithm. It is described in the paper A quick look at
impredicativity (Serrano et al, ICFP 2020).
Switching on ImpredicativeTypes
• Switches on RankNTypes
• Allows user-written types to have foralls under type constructors, not just under arrows. For example f :: Maybe (forall a. [a] -> [a]) is a legal type signature.
• Allows polymorphic types in Visible Type Application (when TypeApplications is enabled). For example, you can write reverse @(forall b. b->b) xs. Using VTA with a polymorphic type argument is
useful in cases when Quick Look cannot infer the correct instantiation.
• Switches on the Quick Look type inference algorithm, as described in the paper. This allows the compiler to infer impredicative instantiations of polymorphic functions in many cases. For example,
reverse xs will typecheck even if xs :: [forall a. a->a], by instantiating reverse at type forall a. a->a.
Note that the treatment of type-class constraints and implicit parameters remains entirely monomorphic, even with ImpredicativeTypes. Specifically:
• You cannot apply a type class to a polymorphic type. This is illegal: f :: C (forall a. a->a) => [a] -> [a]
• You cannot give an instance declaration with a polymorphic argument. This is illegal: instance C (forall a. a->a)
• An implicit parameter cannot have a polymorphic type: g :: (?x :: forall a. a->a) => [a] -> [a]
For many years GHC has had a special case for the function ($) that allows it to typecheck an application like runST $ (do { ... }), even though that instantiation may be impredicative. This special case remains: even without ImpredicativeTypes, GHC switches on Quick Look for applications of ($).
This flag was available in earlier versions of GHC (6.10.1 - 9.0), but the behavior was unpredictable and not officially supported.
What is Light? - Fact / Myth
Understanding the Nature of Light (And Thus Photons and Electromagnetic Energy)
We explain “light,” both as electromagnetic radiation within a visible portion of the electromagnetic spectrum, and as electromagnetic energy carried by photons.
In other words, on this page we are using light as a placeholder for all things electromagnetic from the photon, to the virtual photon, to electromagnetic energy, to electromagnetic radiation, etc.
That is, the photon in any form and understood any way.^[1]
TIP: Below we define “light” according to quantum field theory (where it is a particle-wave vibrating in the electromagnetic field), the standard model (where it is, in simple terms, the force
carrier for the electromagnetic force, the photon; with this being true even when static via virtual photons), and the Copenhagen interpretation (where it is a particle-wave in quantum
superposition). That is, we are defining light as a photon and its effects, through the lens of different models. These aren’t the only theories (models) for understanding light/photons/
electromagnetism or quantum physics in general, but these are widely accepted theories that work well to introduce the fundamental property of the universe we here are calling “light.”
What is Light?
Light, understood broadly as all electromagnetic energy, is a core property of the universe. As such it is not so easily explained.
When we say light in common language, we simply mean the visible portion of the electromagnetic spectrum that our eyes can detect and process as visuals.
However, if we understand light more broadly, we can say any portion of the spectrum, visible or not, is “light.” This means radio waves, wi-fi, radiation, heat, all of it, is all “light.”
When we take into account that photons are the carrier particle for electromagnetic energy, we can say: photons (the carrier of electromagnetic energy), electromagnetic force (one of four fundamental
forces in the universe), electromagnetic radiation or energy in any form, and in simple terms “light” are all different words describing the same fundamental phenomena.
With that in mind, we can offer a more complex definition of light as a wave-particle in a quantum field where we can say: “Light” is a particle (photon), that acts like a wave, and is understood as
a localized vibration in the electromagnetic field. Or, we can describe light as a wave function of probable locations of excited states in the electromagnetic field in superposition that can be
measured as “particle-like or a wave-like” and travel in spacetime as electric and magnetic polarized transverse waves (unless they are behaving as virtual photons) moving in a straight line at a
constant speed when unimpeded but manifesting as localized vibrating wave-packets in the electromagnetic field (something like that).^[2]
All the above definitions of light are essentially correct, even if each only hints at the true dualistic and quantum nature of that electro-magnetic force that exists as a wave-particle that is so
closely related to mass-energy.
From here we could go into what is almost a poem about light (which remember we are considering as a synonym for electromagnetic energy and the photon), where we describe many aspects of it rather
than trying to give it a single definition:
• Light has a dualistic particle-wave like nature (as do all “quanta” in the standard model).
• Light is pure electromagnetic energy (and is also thus the carrier particle the “photon”).
• Light is one of four forces (AKA quantum interactions) in the physical universe.
• Light exists as a charged energy field, and it exists in a quantum state (it is a “quantum particle,” with quantum behavior, just like the other quanta studied in quantum physics).
• Light has a charge, but has no mass, yet it can interact with other objects and add to their relative mass by adding to an object's total energy.
• Light is both electricity and magnetism in constant oscillation.
• Light exists in a state of superposition. It can’t be localized in both time and space.
• Different wave lengths of light can be seen as different colors in the “electromagnetic spectrum,” this spectrum can also carry information as radio waves, or carry information as wifi. Different
colors and wave types differ by frequency (waves vibrating more frequently are higher energy).
• Light can cook an egg on the sidewalk by speeding up the molecules in an egg (light is also heat).
• Light reflects and refracts based off the laws of quantum probability.
• Light can be trapped in a crystal, and when anything emits heat or light (like fire), it is emitting photons AKA electromagnetic energy AKA light we call “electromagnetic radiation.”
• Light behaves like a transverse wave (with electric and magnetic waves oscillating in spacetime at 90 degrees to each other, perpendicular to the wave direction, keeping a straight-line trajectory despite this transverse oscillation and polarization).
• Light waves crest, cancel other light waves, amplify other light waves, and charged fields can gain or lose energy based on interactions (emitting and absorbing photons).
• Light is subject to what was called “spooky action at a distance,” but is now called quantum entanglement. That is, what happens to one photon can affect another paired, but distant, photon
despite being separated in space.
• Light seems to react differently when being observed (for example in the double slit experiment).
• Light can power a car or a factory.
• When things burn they emit light, one can tell their energy content from how they burn, because mass is in ways just a measure of energy content.
• Light will travel in a single direction, at the constant speed of “light speed,” forever, if unimpeded.
• Many photons can exist in the same exact spot creating "wave packets," or just one photon can exist alone in its lowest energy state (still exhibiting a quantum wave-particle nature). The lowest energy state possible for a photon can be considered a single photon (although one could argue that state can be zero or infinitely small). Meanwhile, there is no limit to how many photons can be in one space (lasers are created from fitting many photons in a small space; this increases the energy content of the wave packet and can be used to make precision "cuts" via the "heat").
Light, it has lots of properties, and is in many respects the basis of our universe. We call it by many names, but it is always the same core thing, "photons," a charged part of the electromagnetic field.
These simple(-ish) sentences pertaining to the dual wave-particle quantum nature of light have many deep meanings beyond what we could express above, and we’ve only just scratched the surface.
We explain more about light and its mind-bending properties below.
What Is Light?
Or, a less complicated question, “what isn’t light?”
The Photon, the Carrier Particle of Electromagnetic Energy
Light waves, visible and invisible – Lucianne Walkowicz
Light Is Waves: Crash Course Physics #39
Light Speed and the Momentum of a Photon
• The wave-like pattern it travels in aside, the photon only travels in a single direction only. It has “unidirectional” linear momentum.
• The photon travels at a single and constant speed called light speed (in a perfect vacuum). Light speed is one of the universal constants and is one of the only non-relative things in the
universe. As Einstein noted, because light speed is constant time and space are relative (but spacetime, a composite measure of space and time, isn’t).^[3]^[4]
• The photon can travel over an infinite distance unless impeded.
• Light doesn’t “slow down” under normal conditions (it doesn’t change linear momentum in free space), but it can change energy content (slowing its frequency of vibration and cooling down, or
increasing frequency and heating up), bounce off objects, and be generally manipulated in strange ways within these criteria (see here and here).
• The photon also has spin (angular momentum). It, like other force carrier particles, has a spin of “1”. Matter particles (fermions) have a spin of “+ or – 1/2”.
• Since a photon has a spin of “1” it doesn’t have to adhere to the Pauli Exclusion principle and thus more than one photon can be in the same place and the same time. Lasers are made from
condensing many photons into one spot, this increases the energy content of the packet of photons, and results in a really “hot” laser.
FACT: Photons can’t interact with each other directly (outside of forming wave packets and absorbing and emitting other photons) and have no rest-mass when traveling through empty space, yet in special circumstances they can exist as photonic matter. Photonic matter is a phenomenon where photons interacting with a gas develop apparent mass and can interact with each other, even forming photonic molecules.
The Speed of Light is NOT About Light | Space Time | PBS Digital Studios
Electromagnetic Radiation and the Electromagnetic Spectrum
• A photon can emit other photons and can absorb other photons (within limits), this changes the photon’s energy content and frequency.
• Other quantum particles can also absorb and emit photons; this is what happens when electrons change energy states.
• Electromagnetic radiation describes emitted photons, like the ones that we call visible light (although not every emitted photon is visible).
• Visible light is a specific range of frequencies in the electromagnetic spectrum (precise frequencies of electromagnetic radiation that the human eye can see).
• The electromagnetic spectrum describes all the possible frequencies of vibration in the electromagnetic field, the visible spectrum describes the wavelengths we can see with the naked eye.
• Different “size” waves have different properties, and each color of light has a unique range of vibrations.
• White light describes multiple colors of light and can be separated into different colors of light (like Newton did with a prism and wrote about in his book Opticks).
• The energy content of a photon is determined by its frequency times Max Planck’s constant (which represents the universe’s smallest unit). The equation is simply E=hf.
• Despite the Planck constant representing the smallest unit, the minimum wavelength of a photon is zero and there is no maximum. Despite the quantizing nature of quanta, there is, in one sense, no minimum or maximum energy content of a photon. The rule is simply that a photon must have a frequency greater than zero.
• Frequency is a way to measure waves and vibrations by examining their patterns. The wavelength is the distance between successive crests of a wave. The higher the frequency, the shorter the
distance between crests.
• Only certain frequencies of electromagnetic radiation produce visible wavelengths. Short waves are toward blue tones, including ultraviolet, which bees can see, and X-rays. Long waves are toward
red and include infrared, which the pit viper can see, and radio waves.
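The E = hf relation above is easy to check numerically. The constants below are the exact SI-defined values, and 540 THz is a representative green-light frequency chosen for illustration:

```python
# Energy of a visible-light photon from E = h * f.
PLANCK_H = 6.62607015e-34  # J*s (exact by SI definition)
EV = 1.602176634e-19       # joules per electron-volt (exact)

def photon_energy_ev(frequency_hz):
    """Photon energy in electron-volts for a given frequency in hertz."""
    return PLANCK_H * frequency_hz / EV

print(round(photon_energy_ev(5.4e14), 2))  # green light (~540 THz) ≈ 2.23 eV
```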
NASA – Tour of the Electromagnetic Spectrum
Light, Both a Particle and a Wave
• The photon’s wave and particle qualities are two observable aspects of a single phenomenon. In 2015 the long-standing theory that light is both a particle and a wave was confirmed and published
in Nature Communications.
• The photon doesn’t just travel in “a wave,” it travels as a transverse wave specifically. Transverse describes how the wave vibrates (at right angles to the direction of its propagation).
• Electromagnetic energy is a quantum wave, and it doesn’t need to travel through a medium. It is not a mechanical wave like sound (which travels through a medium).
• All waves transfer energy; electromagnetic waves are no different.
• Electromagnetic energy oscillates in a “near field” of electricity and magnetism. The term “far field” describes the potentially infinite field in which the photon travels via linear momentum
(forward movement).
The Quantum Experiment that Broke Reality (a video about the double slit and wave-particle duality) | Space Time | PBS Digital Studios
Light, it really is a particle and a wave… but that isn’t the weird part. It is the quantizing, elusive, and probable nature of light that is really fascinating. See our page on the observer effect to dive into one of the unknown aspects of light.
The Quantum Nature of Light, Quantum Fields, and the Photon
• As noted above, the photon is a quantum particle or a quanta. “Quantum” describes the fact that photons jump to discrete states in the electromagnetic field. Instead of moving in a continuous
wave, it “quantizes” to probable locations.
• All quantum particles, including the photon, exist as localized vibrating states in their respective field and move as quantized transverse waves. This behavior is explained by Quantum Field
theory (QFT) and this behavior is where quantum physics gets its name.
• The electromagnetic field is a single field that covers all of space and time, like a container for electromagnetic energy. Each particle of the standard model of particle physics has its own field.
• When the electromagnetic field contains a localized vibration, we call it a photon.
• Each particle has a separate field. Particle interactions occur when charged excited states in fields overlap.
Quantum Field Theory
by Fermilab.
Probability, Uncertainty, and Reflection
• The quantum behavior of light is described by laws of probability, uncertainty (of position and speed simultaneously), and quantum mechanics.
• When photons reflect off a surface, they reflect based on probability. For example, instead of one out of every four photons bouncing off a surface, 25% of all photons that hit a surface bounce off. There is no way to tell what a single photon will do. Reflection is a matter of odds, not a certainty. Most quantum behavior works this way.
• We can’t “perfectly” localize a photon as it has no mass (and only momentum), and thus it can’t exist without moving (one reason why it’s better to generally describe light as a wave function of probabilities). We can predict a range of places in which a photon will travel, but we can’t determine it with certainty or localize it perfectly. Or rather, the uncertainty principle says “the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa,” and this is due to its quantum nature. It only gets more complex when we consider the bizarre effects of the double slit experiment and quantum entanglement.
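The “reflection is a matter of odds” point can be illustrated with a toy Monte Carlo. The 25% reflectance is an arbitrary example value, not a measured property of any surface:

```python
# Toy Monte Carlo: each photon independently reflects with probability p.
# We can predict the overall fraction very well, but never the fate of
# any single photon.
import random

random.seed(0)            # fixed seed for reproducibility
p_reflect = 0.25          # example reflectance, not a measured value
n = 100_000
reflected = sum(random.random() < p_reflect for _ in range(n))

print(reflected / n)  # close to 0.25; any single photon is unpredictable
```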
What is the Heisenberg Uncertainty Principle? – Chad Orzel
by Fermilab.
Heisenberg, Feynman, Maxwell, Planck, and Einstein
• Heisenberg’s uncertainty principle describes the uncertainty of quantum mechanics noted above.
• Maxwell’s equations describe light using mathematics. Einstein showed us that energy is equivalent to a body’s mass multiplied by the square of the constant speed of light (E = mc²). We can use Max Planck’s work to see that a photon’s energy is also equal to Planck’s constant multiplied by frequency.
• Later, Richard Feynman broadened the theory of Quantum ElectroDynamics (QED), which describes the nature of light, sometimes using shockingly simple Feynman diagrams.
QED: Photons — Corpuscles of Light — Richard Feynman (1/4)
. An amazing talk on light.
The Big Picture, Light and the Other Forces
• Electromagnetic energy describes all energy that isn’t dark energy, gravitational energy, or (weak or strong) nuclear force. This includes all kinetic energy such as heat and even the energy
stored in calories.
• We can consider most of the energy bound to particles as charge or motion to be photons. When particles interact, they exchange virtual bosons (the name for the carrier particles of the four fundamental forces).
• When we think of all four forces together – electromagnetic, gravity, and both nuclear forces, and their respective “bosons” – then we can say energy is simply the kinetic and bound
potential energy of a system in any form. We can describe this in one word, mass-energy.^[5]
• We can sum this up by saying, “All elementary particles, including light, exhibit properties of mass-energy, and are understood as wave-like quantum fields, that interact with other quantum
fields, in excited quantized states called particles.”
• You are starting to understand quantum physics if you can picture the physical universe as composed of massless, vibrating energy fields whose interactions give rise to everything, relative to the nature of those fields. Constants derived from the behavior of these fields provide our only reliable measure of what is real.
Quantum Theory – Full Documentary HD
PHY251 Modern Physics
Home Teaching PHY251: Announcement/Update AcademicCalendar
PHY251 Modern Physics Spring 2017
Lecture time: 2:30-3:50pm Tues Thurs, Location: Physics Room P118
Instructor: Tzu-Chieh Wei <tzu-chieh.wei[at]stonybrook[dot]edu>
Office hour: Wed 4-5pm, Math 6-101
Recitation Instructors:
Prof. Dmitri Tsybychev <dmitri.tsybychev[at]stonybrook[dot]edu>
Recitation time: Tu 10:00AM - 10:53AM, Location: Physics Room P112
Office hour: 1-2pm Friday, Physics D-135
Prof. Navid Vafaei-Najafabadi, <navid.vafaei-najafabadi[at]stonybrook[dot]edu>
Recitation time: Th 10:00AM - 10:53AM, Location: Physics Room P112
Office hour: 11am-12pm Monday, Physics D-101
Grader: Charles Shugert <charles.shugert[at]stonybrook[dot]edu>
Office hour: 5-6pm Monday, Physics B-130 (For consultation regarding grading and homework solutions after grading)
[For office hours, it would be useful to email the instructor informing him that you will be coming.]
Course description:
A survey of the major physics theories of the 20th century (relativity and quantum mechanics) and their impact on most areas of physics. It introduces the special theory of relativity, the concepts
of quantum and wave-particle duality, Schroedinger's wave equation, and other fundamentals of quantum theory as they apply to nuclei, atoms, molecules, and solids. The Laboratory component, PHY 252
(Modern Physics Laboratory), must be taken concurrently; a common grade for both courses will be assigned. Three hours lecture and one hour recitation per week, as well as laboratory work.
Prerequisite: PHY 122/124, or PHY 126 and 127, or PHY 132 or PHY 142; and PHY 134; C or higher in MAT 126 or 132 or 142 or 171 or AMS 161 Pre- or Corequisite: MAT 203 or MAT 205 or AMS 261 or MAT 307
Corequisite: PHY 252
We will cover quantum mechanics to the extent that we need for other parts of this course. PHY307 Physical and Mathematical Foundations of Quantum Mechanics is recommended after you finish PHY251.
Quantum mechanics will be treated more rigorously and extensively in PHY308 Quantum Physics. Other more specialized courses you may want to consider in the future after finishing this course: PHY408
Relativity, PHY431 Nuclear and Particle Physics, PHY451 Quantum Electronics, PHY452 Lasers, PHY472 Solid State Physics, and AST347 Cosmology.
PHY252 Modern Physics Laboratory (must be taken concurrently) is administered by Prof. Matthew Dawber and has a website here.
Required Textbook :
There are many textbooks on Modern Physics. The one that I shall use as the main one is
Modern Physics for Scientists and Engineers by John Taylor, Chris Zafiratos, and Michael A. Dubson, (2nd edition) published by University Science Books or previously by Addison-Wesley (2nd edition)
[Science/Engineering Library has the latter copy: use PHY251 as course name in library reserve search at http://library.stonybrook.edu/services/course-reserves/; the reserved copy is a two-hour loan
and would be held behind the main desk in the North Reading Room]
Recommended Textbooks:
There are a few textbooks that will complement the above Taylor, Zafiratos and Dubson in styles and materials, including
1. Modern Physics for Scientists and Engineers, 2nd Edition, by John Morrison (which covers less material, but is slightly more advanced, than Taylor et al.)
This book has a website that contains applets, which can be downloaded or used online. Morrison also discusses simulations from PhET developed at the University of Colorado; see here for simulations
of quantum phenomena. (Morrison's book is entirely optional.)
2. The Feynman Lectures on Physics, Vol. 3 [optional] (which can be read online here): the classic Feynman Lectures are highly recommended regardless.
There are other Modern Physics textbooks similar in style and material selection to Taylor et al., including Tipler and Llewellyn (which nicely includes Astrophysics and Cosmology), Thornton and Rex (which also includes Astrophysics and Cosmology), Serway and Moses (Cosmology is Web only), Eisberg and Resnick (classic but a bit outdated), etc.
3. Special Relativity. If you are interested in reading more about relativity, there is a recent book by Dr. David Morin (Harvard University): Special Relativity - For the Enthusiastic Beginner
(David Morin) [a free chapter 1 is provided for viewing]
Recommended Documentary (not much science background assumed):
I. The Fabric of the Cosmos which includes:
Episode (1) What is space?
Episode (2) The illusion of time
Episode (3) Quantum leap
Episode (4) Universe or multiverse?
II. The Mystery of Matter: Search for the Elements (which was the 37th Annual News & Documentary Emmy® Awards Winner - Outstanding Lighting Direction and Scenic Design):
Episode (1) Out of thin air
Episode (2) Unruly elements
Episode (3) Into the atom
Learning outcomes: After this course, you will have a good understanding of modern physics, be able to do simple estimates and calculations about atoms, nuclei, and light, and have acquired basic knowledge about atomic physics, statistical physics, solid state physics, nuclear and elementary particles, and the universe. You will also have a better appreciation of the many scientific details behind important discoveries (such as those in the above documentaries).
Grades: (tentative) Note that (1) PHY252 (Modern Physics Laboratory) must be taken concurrently and it will be included as part of the grade for PHY251 and (2) Recitations are an integral part of
this course and must be taken as well
Final Grading is based on:
Homework (+class participation, quizzes, etc): 10%
Recitations (participation and quizzes): 15%
Midterms: 30% (15% each)
Final Exam: 20%
Lab (PHY252): 25%
For example, on a scale of 0 to 100, the letter grade is assigned approximately, A: 90-100, A-: 86-89.99, B+: 80-85.99, B: 76-79.99, B-: 70-75.99, C+: 66-69.99, C: 60-65.99, C-: 56-59.99, and so on.
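As a purely illustrative sketch (the syllabus says the weights are tentative and the cutoffs approximate), the weighted score and letter grade could be computed as follows; the function name and example inputs are hypothetical:

```python
def final_grade(hw, recitation, mid1, mid2, final_exam, lab):
    # Weighted score on a 0-100 scale using the tentative weights:
    # 10% HW, 15% recitation, 15% each midterm, 20% final, 25% lab.
    score = (0.10 * hw + 0.15 * recitation + 0.15 * mid1
             + 0.15 * mid2 + 0.20 * final_exam + 0.25 * lab)
    # Approximate letter-grade cutoffs from the scale above (partial list).
    for cutoff, letter in [(90, "A"), (86, "A-"), (80, "B+"), (76, "B"),
                           (70, "B-"), (66, "C+"), (60, "C"), (56, "C-")]:
        if score >= cutoff:
            return score, letter
    return score, "below C-"
```

For instance, scores of 90 (HW), 90 (recitation), 85 on each midterm, 80 on the final, and 95 in lab combine to 87.75, an A- under the approximate scale.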
Homework problems may involve use of a computer; you can use any program you prefer, such as C/C++, Fortran, Matlab, Mathematica, etc. Intel offers Free Software Tools for students. The University also has licenses for Matlab and Mathematica, and this software is available at SINC sites.
Homework policy: no late homework (it must be turned in on the due day in class; exceptions must be requested two or more days before the deadline; if you cannot bring homework to class, you can scan it and email it to the instructor). Grading of homework is based on overall effort and 3 selected problems (e.g., if a homework set is worth 10 points: 3 problems are graded, each worth 2 points max; the effort on the remaining problems is checked, and up to 4 points can be awarded for it).
Recitations: homework problems will be discussed in recitations; there is also a quiz from time to time (see below) based on homework problems
Exams: a formula sheet of one letter-size page is allowed (only formulas, not solutions to any problems); your solutions should present clear logic, not simply copy formulas. Since the final exam is cumulative, failing to take it without a valid excuse will automatically result in failing the course. A make-up exam must be scheduled with the instructor within two days of missing an exam.
Laboratory schedule is posted here
Topics to be covered and tentative syllabus
(This is a tentative syllabus. Exam dates and due dates may change. Check later for update.)
The syllabus will evolve as classes move on. Reading of sections by Taylor, Zafiratos and Dubson will be listed.
Notes can be downloaded from Blackboard (clickable links are provided).
1. Overview and special theory of relativity:
<reading: 1.1-1.14, 2.1-2.10>
(week 1)[1/24,1/26] (Homework 01 posted on Blackboard, due on 2/2); notes on "The space time of relativity"
(week 2) [1/31,2/2] (Homework 02 posted on Blackboard, due on 2/9); notes on "Relativistic mechanics"
(week 3) [2/7, ]
It is highly recommended that you watch episodes 1 and 2 of the Fabric of the Cosmos: (1) What is space? and (2) The illusion of time
2. Experiments and ideas (wave-particle duality, uncertainty principle, quantization, etc.) leading to quantum theory:
<reading: 3.10-3.12, 4.1-4.7,6.1-6.9>
(week 3) [ ,2/9 canceled]* (Homework 03 posted on Blackboard, due 2/21)
(week 4) [2/14,2/16 A B]* notes on this part "Electron, Rutherford's nuclear atom, quantization of light"
[change of syllabus: we will postpone detailed discussions on Bohr's model of hydrogen until after we learn quantum mechanics in 3D and compare the two approaches]
(week 5) [2/21,2/23]* Homework 04 due 2/28; notes on "Matter Waves"
3. Quantum mechanics in 1D:
<reading: 7.1-7.11>
(week 5) [ ,2/23 A B]* Homework 05 due 3/9
(week 6) [2/28, 3/2] In-class midterm exam I: 3/2 (closed book but a formula sheet of letter size paper, front and back, is allowed)
(week 7) [3/7, 3/9] Homework 06 (requires watching the documentary: The Fabric of the Cosmos) due 3/23
(week 8) Spring recess
(week 9) [3/21, ]* Notes on Quantum Mechanics in 1D
4. Quantum mechanics in 2& 3D and atomic energy levels:
<reading: 8.1-8.10, 5.7-5.9>
(week 9) [ ,3/23]* Homework 07 due 3/30
(week 10) [3/28,3/30]* Homework 08 due 4/6; notes on Quantum mechanics in 2& 3D and hydrogen atom
5. Electron spin, multi-electron atoms, periodic table:
<reading: 9.1-9.7, 10.1-10.8>
(week 11) [4/4,4/6]* Notes on Electron spin
(week 12) [4/11,4/13] In-class midterm exam II: 4/13 Homework 09 due 4/20; notes on Multi-electron atoms, Pauli exclusion principle and the periodic table
6. Statistical physics:
<reading: 15.3, 15.7-15.8, (supplementary materials: three kinds of distribution, and black-body radiation) >
(week 13) [4/18,4/20] Homework 10 due 4/27; Notes on Statistical Physics (selection of topics)
Review sessions:
(week 14) [4/25,4/27]* Practice problems set 1, set 2
(week 15) [5/2,5/4] * Practice problems set 3, set 4 review/overview
Final exam [coverpage] ==> 11:15am-1:45pm Monday May 15, 2017 at Javits 105
[The remaining topics will not be covered in this semester. But for each, there is a separate course you can take.]
7. Atomic transitions and radiation:
<reading: 11.3-11.9>
8. Solid-state physics:
<reading: 13.5-13.12, 14.1-14.4, 14.8>
9. Structure of atomic nuclei and radioactivity, particle physics:
<reading: 16.1-16.8, 17.1-17.5, 18.1-18.10>
Additional topics such as cosmology and quantum information and computation might be discussed if time permits.
* indicates the suggested week to have a quiz during recitation (the instructor may choose to give quiz at different times), the quiz will be based on lecture examples and previous homeworks (i.e.
similar problems), so you should understand all the homework problems (even after you turn them in) and review examples done in lectures.
Final exam: at Javits 105 from 11:15am to 1:45pm Monday May 15th, 2017; see Registrar
Recommended additional reading and viewing:
1. Special Relativity in a Nutshell
2. Einstein's Big Idea
3. A Trip Through Spacetime
4. Putting Relativity to the Test
5. Inside Einstein's Mind
6. The Amazing Atomic Clock
7. The Fabric of the Cosmos
8. Hunting the Elements
9. The Mystery of Matter: Search for the Elements
10. Does Antimatter Fall Up or Down?
11. Origins: Back to the Beginning
12. Big Bang Machine
13. Relativity and the Cosmos
14. How Big Is the Universe?
15. A Quantum Leap in Computing
Announcement, Update and Additional Information
More will be posted to Blackboard.stonybrook.edu
For your information:
A brief guide to 'Student Success Resources' that are available on our campus:
Americans with Disabilities Act:
If you have a physical, psychological, medical or learning disability that may impact your course work, please contact Disability Support Services (631) 632-6748. They will determine with you what
accommodations are necessary and appropriate. All information and documentation is confidential.
Students requiring emergency evacuation are encouraged to discuss their needs with their professors and Disability Support Services. For procedures and information, go to the following web site http:
Academic Integrity:
Each student must pursue his or her academic goals honestly and be personally accountable for all submitted work. Representing another person's work as your own is always wrong. Faculty are required
to report any suspected instances of academic dishonesty to the Academic Judiciary. Faculty in the Health Sciences Center (School of Health Technology & Management, Nursing, Social Welfare, Dental
Medicine) and School of Medicine are required to follow their school-specific procedures. For more comprehensive information on academic integrity, including categories of academic dishonesty, please
refer to the academic judiciary website at http://www.stonybrook.edu/uaa/academicjudiciary/
Critical Incident Management:
Stony Brook University expects students to respect the rights, privileges, and property of other people. Faculty are required to report to the Office of Judicial Affairs any disruptive behavior that
interrupts their ability to teach, compromises the safety of the learning environment, or inhibits students' ability to learn. Faculty in the HSC Schools and the School of Medicine are required to
follow their school-specific procedures.
Electronic Communication:
Email to your University email account is an important way of communicating with you for this course. For most students the email address is ‘firstname.lastname@stonybrook.edu’, and the account can
be accessed here: http://www.stonybrook.edu/mycloud. *It is your responsibility to read your email received at this account.*
For instructions about how to verify your University email address see this:
http://it.stonybrook.edu/help/kb/checking-or-changing-your-mail-forwarding-address-in-the-epo . You can set up email forwarding
using instructions here: http://it.stonybrook.edu/help/kb/setting-up-mail-forwarding-in-google-mail . If you choose to forward your University email to another account, we are not responsible for any
undeliverable messages.
Religious Observances:
See the policy statement regarding religious holidays at http://www.stonybrook.edu/registrar/forms/RelHolPol%20081612%20cr.pdf Students are expected to notify the course professors by email of their
intention to take time out for religious observance. This should be done as soon as possible but definitely before the end of the ‘add/drop’ period. At that time they can discuss with the instructor
(s) how they will be able to make up the work covered.
Instructional/Student Responsibilities: the University Senate’s Undergraduate Council updated The University’s statement of Minimal Instruction and Student Responsibilities in Fall 2008. Also listed
are the Minimal Undergraduate Student Responsibilities. Both statements may be found in the Academic Policies and Regulations section of the on-line Undergraduate Bulletin: http://
The Stacks project
Lemma 39.8.1. Let $k$ be a field. Let $G$ be a locally algebraic group scheme over $k$. Then $G$ is equidimensional and $\dim(G) = \dim_g(G)$ for all $g \in G$. For any closed point $g \in G$ we have $\dim(G) = \dim(\mathcal{O}_{G, g})$.
Proof. Let us first prove that $\dim_g(G) = \dim_{g'}(G)$ for any pair of points $g, g' \in G$. By Morphisms, Lemma 29.28.3 we may extend the ground field at will. Hence we may assume that both $g$ and $g'$ are defined over $k$. Hence there exists an automorphism of $G$ mapping $g$ to $g'$, whence the equality. By Morphisms, Lemma 29.28.1 we have $\dim_g(G) = \dim(\mathcal{O}_{G, g}) + \text{trdeg}_k(\kappa(g))$. On the other hand, the dimension of $G$ (or any open subset of $G$) is the supremum of the dimensions of the local rings of $G$, see Properties, Lemma 28.10.3. Clearly this is maximal for closed points $g$, in which case $\text{trdeg}_k(\kappa(g)) = 0$ (by the Hilbert Nullstellensatz, see Morphisms, Section 29.16). Hence the lemma follows. $\square$
Current Research | Ryan T. White, PhD
Current Research and Opportunities
My NEural TransmissionS (NETS) research group consists of 3 Ph.D. students, 2 M.S. students, and 6 undergrad students, although we collaborate with many others.
I have some projects with opportunities for students to study, so I have summarized the state of my open projects below (as of May 2022)
Machine Learning
This research area has consumed most of my interest in the past couple of years, through teaching, research, nonprofit, and industrial work.
• Aerospace Engineering: Deep learning for computer vision and in-orbit satellite component detection to support in-orbit servicing and space debris-capture missions.
We have implemented state-of-the-art object detection algorithms to automatically classify and localize objects like solar panels, antennas, cubesats, and satellite bodies and run tests in
Florida Tech's ORION Research Lab.
Currently, we are tuning the models, developing best practices to deploy models on heavily limited computational resources characteristic of what can be implemented on small chaser satellites.
[Slides from a recent research presentation]
[Project supported by the U.S. Space Force and Air Force Research Lab.]
T. Mahendrakar, A. Ekblad, N. Fischer, R. T. White, M. Wilde, B. Kish, and I. Silver (2022). Performance study of YOLOv5 and Faster R-CNN for autonomous navigation around non-cooperative targets.
IEEE AeroConf 2022.
T. Mahendrakar, J. Cutler, N. Fischer, A. Rivkin, A. Ekblad, K. Watkins, R. T. White, M. Wilde, B. Kish, and I. Silver (2021). Use of artificial intelligence for feature recognition and
flightpath planning around non-cooperative resident space objects. AIAA ASCEND 2021.
T. Mahendrakar, R. T. White, and M. Wilde (2021). Real-time satellite component recognition YOLO V5. 35th Annual Small Satellite Conference.
• Novel Object Tracking Approach: To support the satellite project and beyond, we are developing a novel object tracking algorithm that can run on edge hardware.
• 3D Pose Estimation: Another group is working on computer vision-based estimation of the 6-DOF pose (position and orientation) of an object in three dimensions based on a camera feed. See some initial work on spacecraft pose estimation.
• Glaciology: Another group is developing an approach to track the evolution of glaciers over time using time-series satellite imagery. After testing on some manually-gathered data, we developed a
data pipeline to construct large datasets of images along with various variables associated with each glacier at each time. We are currently tuning neural image segmentation algorithms to
automatically segment the glaciers in multispectral satellite imagery. See some recent developments by the team.
[Related work supported by the National Science Foundation in the SMAG REU Program]
[Current satellite data access provided by the European Space Agency]
• Global Development: I serve as Senior Advisor on Data Sciences to non-profit organization Engage-AI on a research program to leverage machine learning and artificial intelligence to find best
practices in pursuing development goals, such as the UNDP's Sustainable Development Goals (SDGs). The work involves developing an AI-enhanced data platform drawing from disparate data sources to
facilitate data analysis, finding useful patterns in the relevant data with state-of-the-art machine learning methods, and working with governments and NGOs to shed light on pressing development challenges.
• Intrusion Detection: I am interested in pushing the boundary of the sorts of intrusions we can detect by taking known attack signatures and attempting to learn novel vulnerabilities by generating variations of those signatures via convolutional autoencoders and generative adversarial networks.
• I am also open to supervising reasonable machine learning projects proposed by students under certain circumstances, especially those involving neural methods. Here are some examples.
Stochastic Processes
Most of my published academic work is in the area of stochastic analysis and probability theory. I have many projects ongoing in this area, which I have broken into several categories, although the
categories have some overlapping content.
• Reliability of Stochastic Networks: I'm studying a random process taking place on large weighted graphs (networks) where, at random times, random batches of nodes are incapacitated, each with a random number of edges with random weights. I'm interested in finding how long it takes for the sum of nodes, edges, or weights lost to surpass some given thresholds.
I have mathematical results, but the next step is to simplify the results to more practical situations, which involves finding the inverses of certain operators, probably via numerical
approximations, to confirm they are easy enough to compute to be useful. Then, simulations are needed to confirm the formulas match empirical results.
R. T. White (2015). Random Walks on Random Lattices and Their Applications. PhD thesis, Florida Institute of Technology. [slides] [fulltext]
J. H. Dshalalow and R. T. White (2014). On Strategic Defense in Stochastic Networks. Stochastic Analysis and Applications, 32:3, 365-396. [arXiv preprint]
R. T. White (2013). Stochastic Analysis of Strategic Networks. 38th Annual SIAM Southeastern Atlantic Section Conference. Melbourne, FL. [slides]
J. H. Dshalalow and R. T. White (2013). On Reliability of Stochastic Networks. Neural, Parallel, and Scientific Computations, 21, 141-160 [arXiv preprint]
• Random walks: these are points moving around in n-dimensional space by making random jumps at random times. I study the dynamics of these processes when they exit from a fixed set in the space.
□ Existing models have trouble being applied to fully empirical distributions, although it is, in principle, almost certainly possible, so this is a top priority here.​​
□ ​Mathematical: we need to generalize existing results to higher dimensions and derive similar results for non-monotone processes.
J. H. Dshalalow, K. M. Nandyose, and R. T. White (2021). Time sensitive analysis of antagonistic stochastic processes with applications to finance, and queueing. Mathematics and Statistics. 9
(4): 481-500.
J. H. Dshalalow and R. T. White (2021). Current trends in random walks on random lattices. Mathematics, 9(10): 1148.
G. Neustel (2021). Last exits of 2D random walks. (Senior capstone project)
R. T. White and J. H. Dshalalow (2020). Characterizations of random walks on random lattices and their ramifications. Stochastic Analysis and Applications, 38:2, 307-342.
R. T. White (2018). On Exits of Oscillating Random Walks Under Delayed Observation. AMS/MAA Joint Mathematical Meetings. San Diego, CA. [slides]
R. T. White (2017). Time Sensitive Analysis of d-dimensional Independent and Stationary Increment Processes. AMS Fall Southeastern Sectional Meeting. University of Central Florida [slides]
J. H. Dshalalow and R. T. White (2016). Time Sensitive Analysis of Independent and Stationary Increment Processes. Journal of Mathematical Analysis and Applications. 443:2. [arXiv preprint]
R. T. White (2015). Time Sensitive Analysis of Multivariate Marked Random Walks. SIAM Conference on Computational Science and Engineering. Salt Lake City, UT. [slides]
□ ​Applied: the models are useful in modeling queuing systems that efficiently order tasks for a processor to do (well-established area) and possibly intrusion detection systems for networks
(some new ideas).
J. H. Dshalalow and R. T. White (2021). Random Walk Analysis in a Reliability System under Constant Degradation and Random Shocks. Axioms. 10(3): 199.
J. H. Dshalalow, A. Merie, and R. T. White (2020). Fluctuation Analysis in Parallel Queues with Hysteretic Control. Methodology and Computing in Applied Probability, 22: 295–327.
R. T. White (2019). Fluctuation Analysis in Parallel Queues with Hysteretic Control. AMS Fall Southeastern Sectional Meeting. University of Florida [slides]
J. H. Dshalalow and A. Merie (2018). Fluctuation Analysis in Queues with Several Operational Modes and Priority Customers, TOP, 26: 309-333. (I did not participate in this paper, but it is
closely related to the topic.)
J. H. Dshalalow, K. Iwezulu, and R. T. White (2016). Discrete Operational Calculus in Delayed Stochastic Games. Neural, Parallel, and Scientific Computations, 24: 55-64. [arXiv preprint]
□ Numerical/Complex Analysis: Current capabilities for computing the inverse Laplace transforms we need are not sufficient for higher-dimensional problems. There are many algorithms in use,
none of which are especially good at doing more than two transforms sequentially. There is room for improvement and I have some ideas.
• Minimal Sufficient-Probability Sets: I am studying the evolution of small high-probability sets for the location of a stochastic process. We study how these sets change over time by watching how
probability density flows through the boundary of the sets from the previous moment in time. I aim to continuously deform the sets over time by analyzing the flux across their
boundaries. Much mathematical and applied work is needed.
□ Mathematical: expand beyond toy examples to determine more general conditions under which it can be done and to find closed-form solutions in these settings, if possible. (It should heavily
relate to PDEs, but I have not investigated the link well.)
□ Applied: implement mathematical results in practical problems, write efficient algorithms to compute similar results for simulated processes where closed-form solutions are elusive, and
ensure acceptable error bounds.
R. T. White (2020). On the Evolution of Minimal-Volume, Sufficient-Probability Sets for Stochastic Paths. 14th International Conference in Monte Carlo & Quasi-Monte Carlo Methods in Scientific
Computing, Oxford University. [slides]
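As an illustrative toy version of the random-walk exit problems discussed in this section (the cited papers treat far more general marked, multidimensional walks analytically), here is a small Monte Carlo sketch of a one-dimensional walk's mean first exit time; all parameter values are arbitrary assumptions:

```python
import random

def mean_exit_time(threshold, jump_scale=1.0, trials=2000, seed=7):
    # Monte Carlo estimate of the mean number of jumps until a 1-D
    # random walk with Gaussian jumps first exits [-threshold, threshold].
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(trials):
        x, steps = 0.0, 0
        while abs(x) < threshold:
            x += rng.gauss(0.0, jump_scale)  # one random jump
            steps += 1
        total_steps += steps
    return total_steps / trials
```

For unit-variance jumps, optional stopping suggests the mean exit time from (-a, a) is roughly a squared (plus a small overshoot correction), which the simulation reproduces.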
Statistical Models with Applications to Geoscience REU Program
​During the summers of 2021-2023, FIT hosted an NSF-supported research experience for undergraduates (REU) program involving projects in climate science and marine biology. It was a great
opportunity for aspiring scientists to participate in a funded program, get some training, and gain research experience. https://research.fit.edu/smag-reu/
Students: please feel free to get in touch!
6-Year-Old Rule?
The story begins recently, when I was indulging in quantum computing programming, whose foundation involves a lot of linear algebra. The direct consequence is that I want to solve almost every problem using complex numbers, vectors, and matrices. The Mars Rover problem got my attention.
The rover on the Mars surface only accepts L, R, and M instructions. L means turn left, R means turn right, and M means move one step forward. If the initial position is (0, 0) facing North, what will the location and facing direction be after receiving a series of instructions?
This is a perfect opportunity to practice my math, I thought. The direction and location are both complex numbers. I can manipulate numbers to make the code concise! I converted the L, R, and M instructions to complex numbers:
• turn left L is complex (0, 1) where the real part is 0 and the imaginary part is 1
• turn right R is complex (0, -1)
• move forward M is complex number (1, 0)
The new location and direction can be computed like the following:
• new location += op.Real * Direction // L and R's real part is 0, so they do not move the location
• new direction *= op // M is 1, so it has no effect on the direction
It looks cool and code is concise. I avoided those unpleasant switch statements.
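For concreteness, here is a minimal sketch of the complex-number approach described above. The post's snippets (`op.Real`) suggest it was written in another language, so this Python version and its names are my own:

```python
def simulate(instructions):
    # Map each instruction to a complex number, as described above.
    OPS = {"L": 1j, "R": -1j, "M": 1 + 0j}
    location = 0 + 0j  # start at (0, 0)
    direction = 1j     # facing North (the positive imaginary axis)
    for ch in instructions:
        op = OPS[ch]
        location += op.real * direction  # only M (real part 1) moves
        direction *= op                  # only L/R (unit imaginary) rotate
    return location, direction
```

For example, simulate("MML") ends at (0, 2) facing West (direction -1): two moves north, then a left turn.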
Everything looked satisfying until I realized those expensive multiply operations might be a performance problem. I wrote a simple solution using the switch statements. The simple solution doubled the code. However, its performance was 30+% faster than my "elegant" one!
I told my wife what was going on when I ran into her in the kitchen, with a bit of frustration. She smiled and pointed at my 6-year-old son. "If the solution can be understood by him, it must be faster," she said. "It requires a smaller brain, so it runs faster."
Well, both algorithms are O(n). Maybe next time, I should think on the big O level.
Attribute Agreement Analysis Confidence Interval
Attribute Agreement Analysis (AAA) is a statistical method used to evaluate the consistency of assessments made by different raters or evaluators. This method is often employed to assess the quality
of products and services, and it can help businesses identify areas where improvements are needed.
One important aspect of AAA is the Confidence Interval (CI), which is a range of values that provides an estimate of the precision of the AAA results. CI is determined by the sample size and the
level of confidence chosen by the user.
The confidence level indicates the probability that the true value of the measurement lies within the CI. For example, a 95% confidence level means that there is a 95% chance that the true value of
the measurement falls within the CI.
The size of the CI is affected by the sample size used for AAA. As the sample size increases, the CI narrows, indicating greater precision in the results. Conversely, increasing the confidence level
widens the CI, indicating lower precision.
To calculate the CI, a mathematical formula is used, which takes into account the sample size, the standard deviation, and the confidence level. The formula is as follows:
CI = X̄ ± (t* S/√n)
Where X̄ is the mean score, t is the t-value for the chosen confidence level (with n − 1 degrees of freedom), S is the standard deviation, and n is the sample size.
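As a sketch, the formula can be computed directly in Python (the t-value is assumed to come from a t-table or a statistics package for the chosen confidence level; the numbers below are illustrative, not real AAA data):

```python
from math import sqrt

def confidence_interval(mean, s, n, t):
    """CI = X̄ ± t*S/√n, per the formula above."""
    margin = t * s / sqrt(n)
    return mean - margin, mean + margin

# Example: 10 ratings, mean score 4.2, S = 0.5; t ≈ 2.262 for 95% and 9 df.
lo, hi = confidence_interval(4.2, 0.5, 10, 2.262)
print(round(lo, 3), round(hi, 3))  # → 3.842 4.558
```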
In summary, the confidence interval is an essential aspect of AAA. It provides an estimate of the precision of the results, indicating the range within which the true value of the measurement lies.
Understanding the CI can help businesses make informed decisions based on the quality of their products and services.
How to Measure Motor Torque and Why You Should
By Lauren Nagel
A question we get fairly often is: “Why do you need to measure torque when doing motor or propeller testing?” This is often followed by: “How can I measure torque?”
These are both important questions for drone designers who want to get the most out of their designs. It ultimately comes down to measuring your motor’s efficiency by comparing the input to the motor
with its output.
Table of Contents:
1. Torque and RPM
2. Motor Efficiency Equation
3. How to Measure Torque
4. How to Calculate Brushless Motor Mechanical Power
Note: Our small thrust stands and larger Flight Stands are both capable of measuring torque.
Torque and RPM
There are two key variables when it comes to the propeller: the first is the rotation speed and the second is torque. When you multiply rotation speed and torque together, you obtain mechanical power.
If we look at how a propeller and motor are connected, we see that the only connection or “information” sent from the motor to the propeller is RPM and torque.
Figure 1: Drone motor connected to three blade propeller
Motor Efficiency Equation
At the other end of the motor, electricity enters from the battery or power source. We can therefore consider the motor a machine that transforms electricity into RPM and torque or electrical power
into mechanical power.
This brings us to our key efficiency formula. When we measure torque, we’re able to obtain mechanical power, which we can divide by the electrical power to obtain efficiency: efficiency = mechanical power / electrical power = (torque × rotation speed) / (voltage × current).
There are design trade-offs that come with increasing torque and RPM, and testing multiple propellers is the best way to find the most efficient motor-propeller combination for the type of flight you
want to do.
Further reading: Brushless Motor Power and Efficiency Analysis
If we can keep the same propeller efficiency, increasing the ratio of mechanical power to electrical power means that air vehicles will be able to fly longer and carry more payload.
How to Measure Motor Torque
There are a few ways to measure torque, and in our test stands we use a steady-state solid system, which means that there are no moving parts. This is great because it reduces hysteresis.
In our Series 1780 test stand we have three load cells that each measure two forces, so there’s a total of six forces measured (figure 2). The image below shows our coaxial test stand, so there are
three load cells for each propulsion system, or six load cells total. After our calibration procedure, we’re able to measure the exact torque and thrust that is applied to the motor mounting plate by
the motor.
Further Reading: How to Calculate Motor Torque Using Formulas
Figure 2: The RCbenchmark Series 1780 test stand with three load cells
How to Calculate Brushless Motor Mechanical Power
Now that we have measured torque, we also need to measure the motor’s RPM / rotation speed to fill in our equation for mechanical power.
This is achieved by using a small infrared RPM sensor that can sense when a piece of reflective tape passes in front of the sensor. The accompanying electronics use a counter to determine how many
times the reflective tape passed the sensor, which allows it to calculate the rotation speed. The rotation speed can also be measured electrically, using the ESC signal. This method is simpler
mechanically, but it is more sensitive to the motor load and size.
Multiply the measured torque by the rotation speed to obtain mechanical power, then divide that product by the electrical power to get efficiency.
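The calculation can be sketched in a few lines of Python (the numbers are illustrative only, not measurements from any test stand):

```python
from math import pi

def mechanical_power(torque_nm, rpm):
    """P_mech = torque * angular speed, with RPM converted to rad/s."""
    omega = rpm * 2.0 * pi / 60.0
    return torque_nm * omega

def motor_efficiency(torque_nm, rpm, voltage, current):
    """efficiency = mechanical power / electrical power, with P_elec = V * I."""
    return mechanical_power(torque_nm, rpm) / (voltage * current)

# Illustrative numbers: 0.05 N·m at 9000 RPM, drawing 5 A at 11.1 V
p_mech = mechanical_power(0.05, 9000)          # ≈ 47.1 W
eta = motor_efficiency(0.05, 9000, 11.1, 5.0)  # ≈ 0.85, i.e. ~85% efficient
```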
Further reading: Drone Design Calculations and Assumptions
Measuring the system’s torque is essential when designing a propulsion system, as it allows you to measure the motor efficiency separately from the propeller efficiency. Here we covered how you can
measure torque and how to use this information to build a more efficient drone.
If you are interested in testing motor torque yourself, check out our range of test equipment:
3 Responses
Jon Berera
November 22, 2023
Is it possible to adapt your torque sensor system to, say, a blender motor and measure torque, thrust, and blade torque?
Charles Blouin
January 18, 2023
Propeller design is usually optimized for moving a lot of air quickly and with little to no pressure. We have not worked with PC case fans, but it would be a good test for the Series 1585 and an
interesting comparison!
Elvis P
January 16, 2023
PC case fans (120–160 CFM, 140 mm to 200 mm types) work in 3 categories: airflow, hybrid airflow and pressure, and pressure with low airflow.
How does that translate to the drone equivalent for motor and blade design?
PC case fans have the blade fixed to the motor bell; why not drone blades, as related to strength, torque, and even crashed / damaged blades?
Comments will be approved before showing up.
5.1: Free and Forced Oscillations
In Sec. \(3.2\) we briefly discussed oscillations in a keystone Hamiltonian system - a 1D harmonic oscillator described by a very simple Lagrangian \({ }^{1}\) \[L \equiv T(\dot{q})-U(q)=\frac{m}{2}
\dot{q}^{2}-\frac{\kappa}{2} q^{2},\] whose Lagrange equation of motion, \({ }^{2}\)
\[m \ddot{q}+\kappa q=0, \quad \text { i.e. } \ddot{q}+\omega_{0}^{2} q=0, \quad \text { with } \omega_{0}^{2} \equiv \frac{\kappa}{m} \geq 0,\]
is a linear homogeneous differential equation. Its general solution is given by (3.16), which is frequently recast into another, amplitude-phase form: \[q(t)=u \cos \omega_{0} t+v \sin \omega_{0} t=A
\cos \left(\omega_{0} t-\varphi\right),\] where \(A\) is the amplitude and \(\varphi\) the phase of the oscillations, which are determined by the initial conditions. Mathematically, it is frequently
easier to work with sinusoidal functions as complex exponents, by rewriting the last form of Eq. (3a) in one more form: \(^{3}\) \[q(t)=\operatorname{Re}\left[A e^{-i\left(\omega_{0} t-\varphi\
right)}\right]=\operatorname{Re}\left[a e^{-i \omega_{0} t}\right],\] \[a \equiv A e^{i \varphi}, \quad|a|=A, \quad \operatorname{Re} a=A \cos \varphi=u, \quad \operatorname{Im} a=A \sin \varphi=v .
\] For an autonomous, Hamiltonian oscillator, Eq. (3) gives the full classical description of its dynamics. However, it is important to understand that this free-oscillation solution, with a constant
amplitude \(A\), is due to the conservation of the energy \(E \equiv T+U=\kappa A^{2} / 2\) of the oscillator. If its energy changes for any reason, the description needs to be generalized.
First of all, if the energy leaks out of the oscillator to its environment (the effect usually called the energy dissipation), the free oscillations decay with time. The simplest model of this effect
is represented by an additional linear drag (or "kinematic friction") force, proportional to the generalized velocity and directed opposite to it: \[F_{v}=-\eta \dot{q},\] where constant \(\eta\) is
called the drag coefficient. \({ }^{4}\) The inclusion of this force modifies the equation of motion (2) to become \[m \ddot{q}+\eta \dot{q}+\kappa q=0 .\] This equation is frequently rewritten in
the form \[\ddot{q}+2 \delta \dot{q}+\omega_{0}^{2} q=0, \quad \text { with } \delta \equiv \frac{\eta}{2 m},\] where the parameter \(\delta\) is called the damping coefficient (or just "damping").
Note that Eq. (6) is still a linear homogeneous second-order differential equation, and its general solution still has the form of the sum (3.13) of two exponents of the type \(\exp \{\lambda t\}\),
with arbitrary pre-exponential coefficients. Plugging such an exponent into Eq. (6), we get the following algebraic characteristic equation for \(\lambda\) : \[\lambda^{2}+2 \delta \lambda+\omega_{0}
^{2}=0 .\] Solving this quadratic equation, we get \[\lambda_{\pm}=-\delta \pm i \omega_{0}^{\prime}, \quad \text { where } \omega_{0}^{\prime} \equiv\left(\omega_{0}^{2}-\delta^{2}\right)^{1 / 2},\]
so that for not very high damping \(\left(\delta<\omega_{0}\right)^{5}\) we get the following generalization of Eq. (3): \[q_{\text {free }}(t)=c_{+} e^{\lambda_{+} t}+c_{-} e^{\lambda_{-} t}=\left
(u_{0} \cos \omega_{0}^{\prime} t+v_{0} \sin \omega_{0}^{\prime} t\right) e^{-\delta t}=A_{0} e^{-\delta t} \cos \left(\omega_{0}^{\prime} t-\varphi_{0}\right) .\] The result shows that, besides a
certain correction to the free oscillation frequency (which is very small in the most interesting low damping limit, \(\delta<<\omega_{0}\) ), the energy dissipation leads to an exponential decay of
oscillation amplitude with the time constant \(\tau=1 / \delta\): \[A=A_{0} e^{-t / \tau}, \quad \text { where } \tau \equiv \frac{1}{\delta}=\frac{2 m}{\eta}\] A very popular dimensionless measure
of damping is the so-called quality factor \(Q\) (or just the \(Q\)-factor) that is defined as \(\omega_{0} / 2 \delta\), and may be rewritten in several other useful forms: \[Q \equiv \frac{\omega_{0}}{2 \delta}=\frac{m \omega_{0}}{\eta}=\frac{(m \kappa)^{1 / 2}}{\eta}=\pi \frac{\tau}{\mathcal{T}}=\frac{\omega_{0} \tau}{2},\] where \(\mathcal{T}=2 \pi / \omega_{0}\) is the oscillation period in the absence of damping - see Eq. (3.29). Since the oscillation energy \(E\) is proportional to \(A^{2}\), i.e. decays as \(\exp \{-2 t / \tau\}\), with the time constant \(\tau / 2\), the last form of Eq. (11)
may be used to rewrite the \(Q\)-factor in one more form: \[Q=\omega_{0} \frac{E}{(-\dot{E})} \equiv \omega_{0} \frac{E}{\mathscr{P}},\] where \(\mathscr{P}\) is the dissipation power. (Two other
practical ways to measure \(Q\) will be discussed below.) The range of \(Q\)-factors of important oscillators is very broad, all the way from \(Q \sim 10\) for a human leg (with relaxed muscles), to
\(Q \sim 10^{4}\) of the quartz crystals used in electronic clocks and watches, all the way up to \(Q \sim 10^{12}\) for carefully designed microwave cavities with superconducting walls.
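The equivalent forms of the \(Q\)-factor in Eq. (11) are easy to cross-check numerically; a quick Python sketch with arbitrary illustrative parameter values (here \(\mathcal{T}=2\pi/\omega_{0}\) denotes the undamped oscillation period):

```python
from math import sqrt, pi

# Illustrative SI values (an assumption for this check, not from the text)
m, kappa, eta = 0.2, 50.0, 0.05
w0 = sqrt(kappa / m)        # own frequency omega_0
delta = eta / (2.0 * m)     # damping coefficient, Eq. (6b)
tau = 1.0 / delta           # amplitude decay time constant, Eq. (10)
T = 2.0 * pi / w0           # undamped oscillation period

# All equivalent forms of the Q-factor must agree:
forms = [w0 / (2.0 * delta), m * w0 / eta, sqrt(m * kappa) / eta,
         pi * tau / T, w0 * tau / 2.0]
assert max(forms) - min(forms) < 1e-9 * forms[0]
print(round(forms[0], 1))  # → 63.2
```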
In contrast to the decaying free oscillations, the forced oscillations, induced by an external force \(F(t)\), may maintain their amplitude (and hence energy) infinitely, even at non-zero damping.
This process may be described using a still linear but now inhomogeneous differential equation \[m \ddot{q}+\eta \dot{q}+\kappa q=F(t),\] or, more conveniently for analysis, the following
generalization of Eq. (6b):
\[\ddot{q}+2 \delta \dot{q}+\omega_{0}^{2} q=f(t), \quad \text { where } f(t) \equiv F(t) / m .\]
For a mechanical linear, dissipative \(1 \mathrm{D}\) oscillator \((6)\), under the effect of an additional external force \(F(t)\), Eq. (13a) is just an expression of the \(2^{\text {nd }}\) Newton
law. However, according to Eq. (1.41), Eq. (13) is valid for any dissipative, linear\({ }^{6}\) 1D system whose Gibbs potential energy \((1.39)\) has the form \(U_{\mathrm{G}}(q, t)=\kappa q^{2} / 2-F(t) q\).
The forced-oscillation solutions may be analyzed by two mathematically equivalent methods whose relative convenience depends on the character of function \(f(t)\).
(i) Frequency domain. Representing the function \(f(t)\) as a Fourier sum of sinusoidal harmonics: \(^{7}\) \[f(t)=\sum_{\omega} f_{\omega} e^{-i \omega t},\] and using the linearity of Eq. (13), we
may represent its general solution as a sum of the decaying free oscillations (9) with the frequency \(\omega_{0}^{\prime}\), independent of the function \(f(t)\), and forced oscillations due to each
of the Fourier components of the force: \(^{8}\) \[q(t)=q_{\text {free }}(t)+q_{\text {forced }}(t), \quad q_{\text {forced }}(t)=\sum_{\omega} a_{\omega} e^{-i \omega t}\] Plugging Eq. (15) into Eq.
(13), and requiring the factors before each \(e^{-i \omega t}\) on both sides to be equal, we get \[a_{\omega}=f_{\omega} \chi(\omega),\] where the complex function \(\chi(\omega)\), in our
particular case equal to \[\chi(\omega)=\frac{1}{\left(\omega_{0}^{2}-\omega^{2}\right)-2 i \omega \delta},\] is called either the response function or (especially for non-mechanical oscillators) the
generalized susceptibility. From here, and Eq. (4), the amplitude of the oscillations under the effect of a sinusoidal force is \[A_{\omega} \equiv\left|a_{\omega}\right|=\left|f_{\omega}\right|\left|\chi(\omega)\right|, \quad \text { with }|\chi(\omega)|=\frac{1}{\left[\left(\omega_{0}^{2}-\omega^{2}\right)^{2}+(2 \omega \delta)^{2}\right]^{1 / 2}}\] This formula describes, in particular, an increase
of the oscillation amplitude \(A_{\omega}\) at \(\omega \rightarrow \omega_{0}-\) see the left panel in Figure 1. In particular, at the exact equality of these two frequencies, \[|\chi(\omega)|_{\
omega=\omega_{0}}=\frac{1}{2 \omega_{0} \delta},\] so that, according to Eq. (11), the ratio of the response magnitudes at \(\omega=\omega_{0}\) and \(\omega=0\left(|\chi(\omega)|_{\omega=0}=\right.
\) \(1 / \omega_{0}{ }^{2}\) ) is exactly equal to the \(Q\)-factor of the oscillator. Thus, the response increase is especially strong in the low damping limit \(\left(\delta<<\omega_{0}\right.\),
i.e. \(\left.Q>>1\right)\); moreover at \(Q \rightarrow \infty\) and \(\omega \rightarrow \omega_{0}\) the response diverges. (This fact is very useful for the methods to be discussed later in this
section.) This is the classical description of the famous phenomenon of resonance, so ubiquitous in physics.
Figure 5.1. Resonance in the linear oscillator, for several values of \(Q\).
Due to the increase of the resonance peak height, its width is inversely proportional to \(Q\). Quantitatively, in the most interesting low-damping limit, i.e. at \(Q>>1\), the reciprocal \(Q\)
-factor gives the normalized value of the so-called full-width at half-maximum (FWHM) of the resonance curve: \(^{9}\) \[\frac{\Delta \omega}{\omega_{0}}=\frac{1}{Q} .\] Indeed, this \(\Delta \omega
\) is defined as the difference \(\left(\omega_{+}-\omega_{-}\right)\)between the two values of \(\omega\) at that the square of the oscillator response function, \(|\chi(\omega)|^{2}\) (which is
proportional to the oscillation energy), equals a half of its resonance value (19). In the low damping limit, both these points are very close to \(\omega_{0}\), so that in the linear approximation
in \(\left|\omega-\omega_{0}\right| \ll \omega_{0}\), we may write \(\left(\omega_{0}^{2}-\omega^{2}\right) \equiv-\left(\omega+\omega_{0}\right)\left(\omega-\omega_{0}\right) \approx-2 \omega \xi \approx-2 \omega_{0} \xi\), where \[\xi \equiv \omega-\omega_{0}\] is a very convenient parameter called detuning, which will be repeatedly used later in this chapter. In this
approximation, the second of Eqs. (18) is reduced to \({ }^{10}\) \[|\chi(\omega)|^{2}=\frac{1}{4 \omega_{0}^{2}\left(\delta^{2}+\xi^{2}\right)} .\] As a result, the points \(\omega_{\pm}\)correspond
to \(\xi^{2}=\delta^{2}\), i.e. \(\omega_{\pm}=\omega_{0} \pm \delta=\omega_{0}(1 \pm 1 / 2 Q)\), so that \(\Delta \omega \equiv \omega_{+}-\omega_{-}=\) \(\omega_{0} / Q\), thus proving Eq. (20).
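As a numerical sanity check of the FWHM rule (20) against the exact response (18) — a sketch added here, not part of the original text:

```python
def chi_abs2(w, w0, delta):
    """|chi(omega)|^2 from Eq. (18)."""
    return 1.0 / ((w0**2 - w**2)**2 + (2.0 * w * delta)**2)

w0, Q = 1.0, 50.0
delta = w0 / (2.0 * Q)
half_max = chi_abs2(w0, w0, delta) / 2.0

# scan a narrow band around the resonance for the half-maximum crossings
n = 200000
ws = [w0 - 3.0 * delta + 6.0 * delta * i / n for i in range(n + 1)]
above = [w for w in ws if chi_abs2(w, w0, delta) >= half_max]
fwhm = above[-1] - above[0]
print(fwhm * Q / w0)  # ≈ 1, confirming Delta omega ≈ omega_0 / Q
```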
(ii) Time domain. Returning to arbitrary external force \(f(t)\), one may argue that Eqs. (9), (15)-(17) provide a full solution of the forced oscillation problem even in this general case. This is
formally correct, but this solution may be very inconvenient if the external force is far from a sinusoidal function of time, especially if it is not periodic at all. In this case, we should first
calculate the complex amplitudes \(f_{\omega}\) participating in the Fourier sum (14). In the general case of a non-periodic \(f(t)\), this is actually the Fourier integral, \({ }^{11}\) \[f(t)=\int_
{-\infty}^{+\infty} f_{\omega} e^{-i \omega t} d \omega,\] so that \(f_{\omega}\) should be calculated using the reciprocal Fourier transform, \[f_{\omega}=\frac{1}{2 \pi} \int_{-\infty}^{+\infty} f\left
(t^{\prime}\right) e^{i \omega t^{\prime}} d t^{\prime}\] Now we may use Eq. (16) for each Fourier component of the resulting forced oscillations, and rewrite the last of Eqs. (15) as \[\begin
{aligned} q_{\text {forced }}(t) &=\int_{-\infty}^{+\infty} a_{\omega} e^{-i \omega t} d \omega=\int_{-\infty}^{+\infty} \chi(\omega) f_{\omega} e^{-i \omega t} d \omega=\int_{-\infty}^{+\infty} d \
omega \chi(\omega) \frac{1}{2 \pi} \int_{-\infty}^{+\infty} d t^{\prime} f\left(t^{\prime}\right) e^{i \omega\left(t^{\prime}-t\right)} \\ &=\int_{-\infty}^{+\infty} d t^{\prime} f\left(t^{\prime}\
right)\left[\frac{1}{2 \pi} \int_{-\infty}^{+\infty} d \omega \chi(\omega) e^{i \omega\left(t^{\prime}-t\right)}\right] \end{aligned}\] with the response function \(\chi(\omega)\) given, in our case,
by Eq. (17). Besides requiring two integrations, Eq. (25) is conceptually uncomforting: it seems to indicate that the oscillator’s coordinate at time \(t\) depends not only on the external force
exerted at earlier times \(t\) ’ \(<t\), but also at future times. This would contradict one of the most fundamental principles of physics (and indeed, science as a whole), the causality: no effect
may precede its cause.
Fortunately, a straightforward calculation (left for the reader’s exercise) shows that the response function (17) satisfies the following rule: \({ }^{12}\) \[\int_{-\infty}^{+\infty} \chi(\omega) e^
{-i \omega \tau} d \omega=0, \quad \text { for } \tau<0 .\] This fact allows the last form of Eq. (25) to be rewritten in either of the following equivalent forms: \[q_{\text {forced }}(t)=\int_{-\
infty}^{t} f\left(t^{\prime}\right) G\left(t-t^{\prime}\right) d t^{\prime} \equiv \int_{0}^{\infty} f(t-\tau) G(\tau) d \tau,\] where \(G(\tau)\), defined as the Fourier transform of the response
function, \[G(\tau) \equiv \frac{1}{2 \pi} \int_{-\infty}^{+\infty} \chi(\omega) e^{-i \omega \tau} d \omega,\] is called the (temporal) Green’s function of the system. According to Eq. (26), \(G(\
tau)=0\) for all \(\tau<0\).
While the second form of Eq. (27) is frequently more convenient for calculations, its first form is more suitable for physical interpretation of the Green’s function. Indeed, let us consider the
particular case when the force is a delta function \[f(t)=\delta\left(t-t^{\prime}\right), \quad \text { with } t^{\prime}<t \text {, i.e. } \tau \equiv t-t^{\prime}>0 \text {, }\] representing an
ultimately short pulse at the moment \(t\) ’, with unit "area" \(\int f(t) d t\). Substituting Eq. (29a) into Eq. (27), \({ }^{13}\) we get \[q(t)=G\left(t-t^{\prime}\right) .\] Thus the Green’s
function \(G\left(t-t^{\prime}\right)\) is just the oscillator’s response, as measured at time \(t\), to a short force pulse of unit "area", exerted at time \(t\) ’. Hence Eq. (27) expresses the
linear superposition principle in the time domain: the full effect of the force \(f(t)\) on a linear system is a sum of effects of short pulses of duration \(d t\) ’ and magnitude \(f\left(t^{\prime}
\right)\), each with its own "weight" \(G\left(t-t^{\prime}\right)\) - see Figure 2 .
Figure 5.2. A schematic, finite-interval representation of a force \(f(t)\) as a sum of short pulses at all times \(t^{\prime}<t\), and their contributions to the linear system’s response \(q(t)\),
as given by Eq. (27).
This picture may be used for the calculation of Green’s function for our particular system. Indeed, Eqs. (29)-(30) mean that \(G(\tau)\) is just the solution of the differential equation of motion of
the system, in our case, Eq. (13), with the replacement \(t \rightarrow \tau\), and a \(\delta\)-functional right-hand side: \[\frac{d^{2} G(\tau)}{d \tau^{2}}+2 \delta \frac{d G(\tau)}{d \tau}+\
omega_{0}^{2} G(\tau)=\delta(\tau) .\] Since Eqs. (27) describes only the second term in Eq. (15), i.e. only the forced, rather than free oscillations, we have to exclude the latter by solving Eq.
(31) with zero initial conditions: \[G(-0)=\frac{d G}{d \tau}(-0)=0,\] where \(\tau=-0\) means the instant immediately preceding \(\tau=0\).
This calculation may be simplified even further. Let us integrate both sides of Eq. (31) over an infinitesimal interval including the origin, e.g. [- \(d \tau / 2,+d \tau / 2]\), and then follow the
limit \(d \tau \rightarrow 0\). Since the Green’s function has to be continuous because of its physical sense as the (generalized) coordinate, all terms on the left-hand side but the first one
vanish, while the first term yields \(d G /\left.d \tau\right|_{+0}-d G /\left.d \tau\right|_{-0}\). Due to the second of Eqs. (32), the last of these two derivatives equals zero, while the
right-hand side of Eq. (31) yields 1 upon the integration. Thus, the function \(G(\tau)\) may be calculated for \(\tau>0\) (i.e. for all times when it is different from zero) by solving the
homogeneous version of the system’s equation of motion for \(\tau>0\), with the following special initial conditions: \[G(0)=0, \quad \frac{d G}{d \tau}(0)=1 .\] This approach gives us a convenient
way for the calculation of Green’s functions of linear systems. In particular for the oscillator with not very high damping \(\left(\delta<\omega_{0}\right.\), i.e. \(\left.Q>1 / 2\right)\), imposing
the boundary conditions \((33)\) on the homogeneous equation’s solution \((9)\), we immediately get \[G(\tau)=\frac{1}{\omega_{0}{ }^{\prime}} e^{-\delta \tau} \sin \omega_{0}{ }^{\prime} \tau\] (The
same result may be obtained directly from Eq. (28) with the response function \(\chi(\omega)\) given by Eq. (17). This way is, however, a little bit more cumbersome, and is left for the reader’s exercise.)
Relations (27) and (34) provide a very convenient recipe for solving many forced oscillations problems. As a very simple example, let us calculate the transient process in an oscillator under the
effect of a constant force being turned on at \(t=0\), i.e. proportional to the theta-function of time: \[f(t)=f_{0} \theta(t) \equiv \begin{cases}0, & \text { for } t<0, \\ f_{0}, & \text { for } t>
0,\end{cases}\] provided that at \(t<0\) the oscillator was at rest, so that in Eq. (15), \(q_{\text {free }}(t) \equiv 0\). Then the second form of Eq. (27), and Eq. (34), yield \[q(t)=\int_{0}^{\
infty} f(t-\tau) G(\tau) d \tau=f_{0} \int_{0}^{t} \frac{1}{\omega_{0}{ }^{\prime}} e^{-\delta \tau} \sin \omega_{0}{ }^{\prime} \tau d \tau .\] The simplest way to work out such integrals is to
represent the sine function under it as the imaginary part of \(\exp \left\{i \omega_{0}^{\prime} t\right\}\), and merge the two exponents, getting \[q(t)=f_{0} \frac{1}{\omega_{0}^{\prime}} \
operatorname{Im}\left[\frac{1}{-\delta+i \omega_{0}^{\prime}} e^{-\delta \tau+i \omega_{0}^{\prime} \tau}\right]_{0}^{t}=\frac{F_{0}}{\kappa}\left[1-e^{-\delta t}\left(\cos \omega_{0}^{\prime} t+\frac{\
delta}{\omega_{0}^{\prime}} \sin \omega_{0}^{\prime} t\right)\right]\]
This result, plotted in Figure 3, is rather natural: it describes nothing more than the transient from the initial position \(q=0\) to the new equilibrium position \(q_{0}=f_{0} / \omega_{0}{ }^{2}=
F_{0} / \kappa\), accompanied by decaying oscillations. For this particular simple function \(f(t)\), the same result might be also obtained by introducing a new variable \(\widetilde{q}(t) \equiv q
(t)-q_{0}\) and solving the resulting homogeneous equation for \(\widetilde{q}\) (with the appropriate initial condition \(\widetilde{q}(0)=-q_{0}\)). However, for more complicated functions \(f(t)\)
the Green’s function approach is irreplaceable.
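Since the transient (37) follows from the convolution (27) with the Green’s function (34) and the step force (35), it can be verified numerically; here is a small sanity-check sketch (added here, not part of the original text):

```python
from math import exp, sin, cos, sqrt

w0, delta, f0 = 1.0, 0.1, 1.0   # same damping ratio as in Figure 3
w0p = sqrt(w0**2 - delta**2)    # omega_0'

def G(tau):
    """Green's function of the damped oscillator, Eq. (34)."""
    return exp(-delta * tau) * sin(w0p * tau) / w0p

def q_numeric(t, steps=20000):
    """Convolution (27) with the step force (35): q(t) = f0 * ∫_0^t G(tau) dtau."""
    d = t / steps
    return f0 * sum(G((i + 0.5) * d) for i in range(steps)) * d  # midpoint rule

def q_exact(t):
    """Closed-form transient, Eq. (37); note f0 / w0^2 = F0 / kappa."""
    return (f0 / w0**2) * (1 - exp(-delta * t)
                           * (cos(w0p * t) + (delta / w0p) * sin(w0p * t)))

for t in (1.0, 5.0, 20.0):
    assert abs(q_numeric(t) - q_exact(t)) < 1e-4  # the two agree
```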
Figure 5.3. The transient process in a linear oscillator, induced by a step-like force \(f(t)\), for the particular case \(\delta / \omega_{0}=0.1\) (i.e., \(Q=5\) ).
Note that for any particular linear system, its Green’s function should be calculated only once, and then may be repeatedly used in Eq. (27) to calculate the system response to various external
forces either analytically or numerically. This property makes the Green’s function approach very popular in many other fields of physics \(-\) with the corresponding generalization or re-definition
of the function. \({ }^{14}\)
\({ }^{1}\) For the notation brevity, in this chapter I will drop indices "ef" in the energy components \(T\) and \(U\), and parameters like \(m, \kappa\), etc. However, the reader should still
remember that \(T\) and \(U\) do not necessarily coincide with the actual kinetic and potential energies (even if those energies may be uniquely identified) - see Sec. 3.1.
\({ }^{2} \omega_{0}\) is usually called the own frequency of the oscillator. In quantum mechanics, the Germanized version of the same term, eigenfrequency, is used more often. In this series, I will use either of the terms, depending on the context.
\({ }^{3}\) Note that this is the so-called physics convention. Most engineering texts use the opposite sign in the imaginary exponent, \(\exp \{-i \omega t\} \rightarrow \exp \{i \omega t\}\), with
the corresponding sign implications for intermediate formulas, but (of course) similar final results for real variables.
\({ }^{4}\) Here Eq. (5) is treated as a phenomenological model, but in statistical mechanics, such dissipative term may be derived as an average force exerted upon a system by its environment, at
very general assumptions. As discussed in detail elsewhere in this series (SM Chapter 5 and QM Chapter 7), due to the numerous degrees of freedom of a typical environment (think about the molecules
of air surrounding the usual mechanical pendulum), its force also has a random component; as a result, the dissipation is fundamentally related to fluctuations. The latter effects may be neglected
(as they are in this course) only if \(E\) is much higher than the energy scale of the random fluctuations of the oscillator - in thermal equilibrium at temperature \(T\), the larger of \(k_{\mathrm{B}} T\) and \(\hbar \omega_{0} / 2\).
\({ }^{5}\) Systems with high damping \(\left(\delta>\omega_{0}\right)\) can hardly be called oscillators, and though they are used in engineering and physics experiments (e.g., for shock, vibration, and sound isolation), for their detailed discussion I have to refer the interested reader to the special literature - see, e.g., C. Harris and A. Piersol, Shock and Vibration Handbook, \(5^{\text{th}}\) ed., McGraw-Hill, 2002. Let me only note that the dynamics of systems with very high damping \((\delta \gg \omega_{0})\) has two very different time scales: a relatively short "momentum relaxation time" \(1 / \lambda_{-} \approx 1 / 2 \delta=m / \eta\), and a much longer "coordinate relaxation time" \(1 / \lambda_{+} \approx 2 \delta / \omega_{0}^{2}=\eta / \kappa\).
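The two relaxation times quoted in this footnote follow directly from the characteristic equation of the damped oscillator; here is a brief sketch in the chapter's notation (with \(\delta \equiv \eta / 2m\) and \(\omega_{0}^{2} = \kappa / m\)):

```latex
% Characteristic equation for solutions q(t) \propto e^{\lambda t}:
\lambda^{2} + 2\delta\lambda + \omega_{0}^{2} = 0,
\qquad \lambda_{\pm} = -\delta \pm \sqrt{\delta^{2} - \omega_{0}^{2}}.
% For \delta \gg \omega_{0}, expanding the square root gives
\sqrt{\delta^{2} - \omega_{0}^{2}} \approx \delta - \frac{\omega_{0}^{2}}{2\delta},
\quad\Rightarrow\quad
\left|\lambda_{-}\right| \approx 2\delta = \frac{\eta}{m},
\qquad
\left|\lambda_{+}\right| \approx \frac{\omega_{0}^{2}}{2\delta} = \frac{\kappa}{\eta}.
```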
\({ }^{6}\) This is a very unfortunate, but common jargon, meaning "the system described by linear equations of motion".
\({ }^{7}\) Here, in contrast to Eq. (3b), we may drop the operator Re, assuming that \(f_{-\omega}=f_{\omega}{ }^{*}\), so that the imaginary components of the sum compensate each other.
\({ }^{8}\) In physics, this mathematical property of linear equations is frequently called the linear superposition principle.
\({ }^{9}\) Note that the phase shift \(\varphi \equiv \arg [\chi(\omega)]\) between the oscillations and the external force (see the right panel in Figure 1) makes its steepest change, by \(\pi / 2
\), within the same frequency interval \(\Delta \omega\).
\({ }^{10}\) Such a function of frequency is met in many branches of science, frequently under special names, including the "Cauchy distribution", "the Lorentz function" (or "Lorentzian line", or "Lorentzian distribution"), "the Breit-Wigner function" (or "the Breit-Wigner distribution"), etc.
\({ }^{11}\) Let me hope that the reader knows that Eq. (23) may be used for periodic functions as well; in such a case, \(f_{\omega}\) is a set of equidistant delta functions. (A reminder of the
basic properties of the Dirac \(\delta\)-function may be found, for example, in MA Sec. 14.)
\({ }^{12}\) Eq. (26) remains true for any linear physical system in which \(f(t)\) represents a cause, and \(q(t)\) its effect. Following tradition, I discuss the frequency-domain expression of this causality relation (called the Kramers-Kronig relations) in the Classical Electrodynamics part of this lecture series - see EM Sec. 7.2.
\({ }^{13}\) Technically, for this integration, \(t'\) in Eq. (27) should be temporarily replaced with another letter, say \(t''\).
\({ }^{14}\) See, e.g., EM Sec. 2.7, and QM Sec. 2.2.
One of the most striking facts about neural networks is that they can compute any function at all. That is, suppose someone hands you some complicated, wiggly function, $f(x)$:
No matter what the function, there is guaranteed to be a neural network so that for every possible input, $x$, the value $f(x)$ (or some close approximation) is output from the network, e.g.:
This result holds even if the function has many inputs, $f = f(x_1, \ldots, x_m)$, and many outputs. For instance, here's a network computing a function with $m = 3$ inputs and $n = 2$ outputs:
This result tells us that neural networks have a kind of universality. No matter what function we want to compute, we know that there is a neural network which can do the job.
What's more, this universality theorem holds even if we restrict our networks to have just a single layer intermediate between the input and the output neurons - a so-called single hidden layer. So
even very simple network architectures can be extremely powerful.
The universality theorem is well known by people who use neural networks. But why it's true is not so widely understood. Most of the explanations available are quite technical. For instance, one of
the original papers proving the result* *Approximation by superpositions of a sigmoidal function, by George Cybenko (1989). The result was very much in the air at the time, and several groups proved
closely related results. Cybenko's paper contains a useful discussion of much of that work. Another important early paper is Multilayer feedforward networks are universal approximators, by Kurt
Hornik, Maxwell Stinchcombe, and Halbert White (1989). This paper uses the Stone-Weierstrass theorem to arrive at similar results. did so using the Hahn-Banach theorem, the Riesz Representation
theorem, and some Fourier analysis. If you're a mathematician the argument is not difficult to follow, but it's not so easy for most people. That's a pity, since the underlying reasons for
universality are simple and beautiful.
In this chapter I give a simple and mostly visual explanation of the universality theorem. We'll go step by step through the underlying ideas. You'll understand why it's true that neural networks can
compute any function. You'll understand some of the limitations of the result. And you'll understand how the result relates to deep neural networks.
To follow the material in the chapter, you do not need to have read earlier chapters in this book. Instead, the chapter is structured to be enjoyable as a self-contained essay. Provided you have just
a little basic familiarity with neural networks, you should be able to follow the explanation. I will, however, provide occasional links to earlier material, to help fill in any gaps in your knowledge.
Universality theorems are a commonplace in computer science, so much so that we sometimes forget how astonishing they are. But it's worth reminding ourselves: the ability to compute an arbitrary
function is truly remarkable. Almost any process you can imagine can be thought of as function computation. Consider the problem of naming a piece of music based on a short sample of the piece. That
can be thought of as computing a function. Or consider the problem of translating a Chinese text into English. Again, that can be thought of as computing a function* *Actually, computing one of many
functions, since there are often many acceptable translations of a given piece of text.. Or consider the problem of taking an mp4 movie file and generating a description of the plot of the movie, and
a discussion of the quality of the acting. Again, that can be thought of as a kind of function computation* *Ditto the remark about translation and there being many possible functions.. Universality
means that, in principle, neural networks can do all these things and many more.
Of course, just because we know a neural network exists that can (say) translate Chinese text into English, that doesn't mean we have good techniques for constructing or even recognizing such a
network. This limitation applies also to traditional universality theorems for models such as Boolean circuits. But, as we've seen earlier in the book, neural networks have powerful algorithms for
learning functions. That combination of learning algorithms + universality is an attractive mix. Up to now, the book has focused on the learning algorithms. In this chapter, we focus on universality,
and what it means.
Before explaining why the universality theorem is true, I want to mention two caveats to the informal statement "a neural network can compute any function".
First, this doesn't mean that a network can be used to exactly compute any function. Rather, we can get an approximation that is as good as we want. By increasing the number of hidden neurons we can
improve the approximation. For instance, earlier I illustrated a network computing some function $f(x)$ using three hidden neurons. For most functions only a low-quality approximation will be
possible using three hidden neurons. By increasing the number of hidden neurons (say, to five) we can typically get a better approximation:
And we can do still better by further increasing the number of hidden neurons.
To make this statement more precise, suppose we're given a function $f(x)$ which we'd like to compute to within some desired accuracy $\epsilon > 0$. The guarantee is that by using enough hidden
neurons we can always find a neural network whose output $g(x)$ satisfies $|g(x) - f(x)| < \epsilon$, for all inputs $x$. In other words, the approximation will be good to within the desired accuracy
for every possible input.
The second caveat is that the class of functions which can be approximated in the way described are the continuous functions. If a function is discontinuous, i.e., makes sudden, sharp jumps, then it
won't in general be possible to approximate using a neural net. This is not surprising, since our neural networks compute continuous functions of their input. However, even if the function we'd
really like to compute is discontinuous, it's often the case that a continuous approximation is good enough. If that's so, then we can use a neural network. In practice, this is not usually an
important limitation.
Summing up, a more precise statement of the universality theorem is that neural networks with a single hidden layer can be used to approximate any continuous function to any desired precision. In
this chapter we'll actually prove a slightly weaker version of this result, using two hidden layers instead of one. In the problems I'll briefly outline how the explanation can, with a few tweaks, be
adapted to give a proof which uses only a single hidden layer.
To understand why the universality theorem is true, let's start by understanding how to construct a neural network which approximates a function with just one input and one output:
It turns out that this is the core of the problem of universality. Once we've understood this special case it's actually pretty easy to extend to functions with many inputs and many outputs.
To build insight into how to construct a network to compute $f$, let's start with a network containing just a single hidden layer, with two hidden neurons, and an output layer containing a single
output neuron:
To get a feel for how components in the network work, let's focus on the top hidden neuron. In the diagram below, click on the weight, $w$, and drag the mouse a little ways to the right to increase
$w$. You can immediately see how the function computed by the top hidden neuron changes:
As we learnt earlier in the book, what's being computed by the hidden neuron is $\sigma(wx + b)$, where $\sigma(z) \equiv 1/(1+e^{-z})$ is the sigmoid function. Up to now, we've made frequent use of
this algebraic form. But for the proof of universality we will obtain more insight by ignoring the algebra entirely, and instead manipulating and observing the shape shown in the graph. This won't
just give us a better feel for what's going on, it will also give us a proof* *Strictly speaking, the visual approach I'm taking isn't what's traditionally thought of as a proof. But I believe the
visual approach gives more insight into why the result is true than a traditional proof. And, of course, that kind of insight is the real purpose behind a proof. Occasionally, there will be small
gaps in the reasoning I present: places where I make a visual argument that is plausible, but not quite rigorous. If this bothers you, then consider it a challenge to fill in the missing steps. But
don't lose sight of the real purpose: to understand why the universality theorem is true. of universality that applies to activation functions other than the sigmoid function.
To get started on this proof, try clicking on the bias, $b$, in the diagram above, and dragging to the right to increase it. You'll see that as the bias increases the graph moves to the left, but its
shape doesn't change.
Next, click and drag to the left in order to decrease the bias. You'll see that as the bias decreases the graph moves to the right, but, again, its shape doesn't change.
Next, decrease the weight to around $2$ or $3$. You'll see that as you decrease the weight, the curve broadens out. You might need to change the bias as well, in order to keep the curve in-frame.
Finally, increase the weight up past $w = 100$. As you do, the curve gets steeper, until eventually it begins to look like a step function. Try to adjust the bias so the step occurs near $x = 0.3$.
The following short clip shows what your result should look like. Click on the play button to play (or replay) the video:
We can simplify our analysis quite a bit by increasing the weight so much that the output really is a step function, to a very good approximation. Below I've plotted the output from the top hidden
neuron when the weight is $w = 999$. Note that this plot is static, and you can't change parameters such as the weight.
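This limiting behavior is easy to check numerically. The following minimal sketch (plain Python; the particular values are just for illustration) evaluates $\sigma(wx + b)$ with $w = 999$ and the bias chosen so the step sits near $x = 0.3$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hidden-neuron output sigma(w*x + b) with a very large weight.
# Choosing b = -w*s puts the (approximate) step at position s.
w, s = 999.0, 0.3
b = -w * s

print(round(sigmoid(w * 0.2 + b), 6))  # well below the step -> 0.0
print(round(sigmoid(w * 0.4 + b), 6))  # well above the step -> 1.0
```

Even moderately far from $x = 0.3$, the output is indistinguishable from $0$ or $1$ at double precision, which is why treating it as an exact step function is such a good approximation.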
It's actually quite a bit easier to work with step functions than general sigmoid functions. The reason is that in the output layer we add up contributions from all the hidden neurons. It's easy to
analyze the sum of a bunch of step functions, but rather more difficult to reason about what happens when you add up a bunch of sigmoid shaped curves. And so it makes things much easier to assume
that our hidden neurons are outputting step functions. More concretely, we do this by fixing the weight $w$ to be some very large value, and then setting the position of the step by modifying the
bias. Of course, treating the output as a step function is an approximation, but it's a very good approximation, and for now we'll treat it as exact. I'll come back later to discuss the impact of
deviations from this approximation.
At what value of $x$ does the step occur? Put another way, how does the position of the step depend upon the weight and bias?
To answer this question, try modifying the weight and bias in the diagram above (you may need to scroll back a bit). Can you figure out how the position of the step depends on $w$ and $b$? With a
little work you should be able to convince yourself that the position of the step is proportional to $b$, and inversely proportional to $w$.
In fact, the step is at position $s = -b/w$, as you can see by modifying the weight and bias in the following diagram:
It will greatly simplify our lives to describe hidden neurons using just a single parameter, $s$, which is the step position, $s = -b/w$. Try modifying $s$ in the following diagram, in order to get
used to the new parameterization:
As noted above, we've implicitly set the weight $w$ on the input to be some large value - big enough that the step function is a very good approximation. We can easily convert a neuron parameterized
in this way back into the conventional model, by choosing the bias $b = -w s$.
Up to now we've been focusing on the output from just the top hidden neuron. Let's take a look at the behavior of the entire network. In particular, we'll suppose the hidden neurons are computing
step functions parameterized by step points $s_1$ (top neuron) and $s_2$ (bottom neuron). And they'll have respective output weights $w_1$ and $w_2$. Here's the network:
What's being plotted on the right is the weighted output $w_1 a_1 + w_2 a_2$ from the hidden layer. Here, $a_1$ and $a_2$ are the outputs from the top and bottom hidden neurons, respectively* *Note,
by the way, that the output from the whole network is $\sigma(w_1 a_1+w_2 a_2 + b)$, where $b$ is the bias on the output neuron. Obviously, this isn't the same as the weighted output from the hidden
layer, which is what we're plotting here. We're going to focus on the weighted output from the hidden layer right now, and only later will we think about how that relates to the output from the whole
network.. These outputs are denoted with $a$s because they're often known as the neurons' activations.
Try increasing and decreasing the step point $s_1$ of the top hidden neuron. Get a feel for how this changes the weighted output from the hidden layer. It's particularly worth understanding what
happens when $s_1$ goes past $s_2$. You'll see that the graph changes shape when this happens, since we have moved from a situation where the top hidden neuron is the first to be activated to a
situation where the bottom hidden neuron is the first to be activated.
Similarly, try manipulating the step point $s_2$ of the bottom hidden neuron, and get a feel for how this changes the combined output from the hidden neurons.
Try increasing and decreasing each of the output weights. Notice how this rescales the contribution from the respective hidden neurons. What happens when one of the weights is zero?
Finally, try setting $w_1$ to be $0.8$ and $w_2$ to be $-0.8$. You get a "bump" function, which starts at point $s_1$, ends at point $s_2$, and has height $0.8$. For instance, the weighted output
might look like this:
Of course, we can rescale the bump to have any height at all. Let's use a single parameter, $h$, to denote the height. To reduce clutter I'll also remove the "$s_1 = \ldots$" and "$w_1 = \ldots$" markings:
Try changing the value of $h$ up and down, to see how the height of the bump changes. Try changing the height so it's negative, and observe what happens. And try changing the step points to see how
that changes the shape of the bump.
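The bump construction just described can be sketched in a few lines of Python. The step positions and height below are arbitrary illustrative values; the large first-layer weight $w = 1000$ follows the convention used in the text:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(x, s, w=1000.0):
    """Approximate step function at position s (i.e. bias b = -w*s)."""
    return sigmoid(w * (x - s))

def bump(x, s1, s2, h):
    """Weighted output of one pair of hidden neurons with output
    weights +h and -h: a bump of height h on the interval [s1, s2]."""
    return h * step(x, s1) - h * step(x, s2)

print(round(bump(0.5, 0.4, 0.6, 0.8), 3))  # inside the interval -> 0.8
print(round(bump(0.1, 0.4, 0.6, 0.8), 3))  # outside the interval -> 0.0
```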
You'll notice, by the way, that we're using our neurons in a way that can be thought of not just in graphical terms, but in more conventional programming terms, as a kind of if-then-else statement,
if input >= step point:
    add 1 to the weighted output
else:
    add 0 to the weighted output
For the most part I'm going to stick with the graphical point of view. But in what follows you may sometimes find it helpful to switch points of view, and think about things in terms of if-then-else.
We can use our bump-making trick to get two bumps, by gluing two pairs of hidden neurons together into the same network:
I've suppressed the weights here, simply writing the $h$ values for each pair of hidden neurons. Try increasing and decreasing both $h$ values, and observe how it changes the graph. Move the bumps
around by changing the step points.
More generally, we can use this idea to get as many peaks as we want, of any height. In particular, we can divide the interval $[0, 1]$ up into a large number, $N$, of subintervals, and use $N$ pairs
of hidden neurons to set up peaks of any desired height. Let's see how this works for $N = 5$. That's quite a few neurons, so I'm going to pack things in a bit. Apologies for the complexity of the
diagram: I could hide the complexity by abstracting away further, but I think it's worth putting up with a little complexity, for the sake of getting a more concrete feel for how these networks work.
You can see that there are five pairs of hidden neurons. The step points for the respective pairs of neurons are $0, 1/5$, then $1/5, 2/5$, and so on, out to $4/5, 5/5$. These values are fixed - they
make it so we get five evenly spaced bumps on the graph.
Each pair of neurons has a value of $h$ associated to it. Remember, the connections output from the neurons have weights $h$ and $-h$ (not marked). Click on one of the $h$ values, and drag the mouse
to the right or left to change the value. As you do so, watch the function change. By changing the output weights we're actually designing the function!
Contrariwise, try clicking on the graph, and dragging up or down to change the height of any of the bump functions. As you change the heights, you can see the corresponding change in $h$ values. And,
although it's not shown, there is also a change in the corresponding output weights, which are $+h$ and $-h$.
In other words, we can directly manipulate the function appearing in the graph on the right, and see that reflected in the $h$ values on the left. A fun thing to do is to hold the mouse button down
and drag the mouse from one side of the graph to the other. As you do this you draw out a function, and get to watch the parameters in the neural network adapt.
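The interactive diagram above can be mimicked non-interactively. The sketch below (the five heights are arbitrary example values) computes the weighted hidden-layer output for $N = 5$ pairs of neurons with step points $0, 1/5$, then $1/5, 2/5$, and so on:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hidden_output(x, heights, w=1000.0):
    """Weighted output of the hidden layer: N pairs of step neurons,
    pair j carrying output weights +h_j and -h_j, producing a roughly
    piecewise-constant function with value h_j on [j/N, (j+1)/N]."""
    N = len(heights)
    total = 0.0
    for j, h in enumerate(heights):
        s1, s2 = j / N, (j + 1) / N
        total += h * sigmoid(w * (x - s1)) - h * sigmoid(w * (x - s2))
    return total

heights = [0.2, 0.9, -0.3, 0.5, 0.1]         # freely chosen bump heights
print(round(hidden_output(0.3, heights), 3))  # middle of 2nd subinterval -> 0.9
```

Changing the `heights` list plays the same role as dragging the $h$ values in the diagram: we are designing the function directly through the output weights.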
Time for a challenge.
Let's think back to the function I plotted at the beginning of the chapter:
I didn't say it at the time, but what I plotted is actually the function \begin{eqnarray} f(x) = 0.2+0.4 x^2+0.3x \sin(15 x) + 0.05 \cos(50 x), \tag{113}\end{eqnarray} plotted over $x$ from $0$ to
$1$, and with the $y$ axis taking values from $0$ to $1$.
That's obviously not a trivial function.
You're going to figure out how to compute it using a neural network.
In our networks above we've been analyzing the weighted combination $\sum_j w_j a_j$ output from the hidden neurons. We now know how to get a lot of control over this quantity. But, as I noted
earlier, this quantity is not what's output from the network. What's output from the network is $\sigma(\sum_j w_j a_j + b)$ where $b$ is the bias on the output neuron. Is there some way we can
achieve control over the actual output from the network?
The solution is to design a neural network whose hidden layer has a weighted output given by $\sigma^{-1} \circ f(x)$, where $\sigma^{-1}$ is just the inverse of the $\sigma$ function. That is, we
want the weighted output from the hidden layer to be:
If we can do this, then the output from the network as a whole will be a good approximation to $f(x)$* *Note that I have set the bias on the output neuron to $0$..
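The inverse of the sigmoid has a simple closed form, $\sigma^{-1}(y) = \ln\big(y/(1-y)\big)$, valid for $0 < y < 1$. A quick sketch confirming that the final sigmoid undoes it:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def inv_sigmoid(y):
    """sigma^{-1}(y) = ln(y / (1 - y)), defined for 0 < y < 1."""
    return math.log(y / (1.0 - y))

# If the hidden layer's weighted output equals sigma^{-1}(f(x)), then the
# network's final output sigma(weighted output + 0) recovers f(x).
f = lambda x: 0.2 + 0.4 * x**2 + 0.3 * x * math.sin(15 * x) + 0.05 * math.cos(50 * x)
x = 0.5
print(abs(sigmoid(inv_sigmoid(f(x))) - f(x)) < 1e-9)  # True
```

Note this trick needs $f(x)$ to take values strictly inside $(0, 1)$, which our goal function does.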
Your challenge, then, is to design a neural network to approximate the goal function shown just above. To learn as much as possible, I want you to solve the problem twice. The first time, please
click on the graph, directly adjusting the heights of the different bump functions. You should find it fairly easy to get a good match to the goal function. How well you're doing is measured by the
average deviation between the goal function and the function the network is actually computing. Your challenge is to drive the average deviation as low as possible. You complete the challenge when
you drive the average deviation to $0.40$ or below.
Once you've done that, click on "Reset" to randomly re-initialize the bumps. The second time you solve the problem, resist the urge to click on the graph. Instead, modify the $h$ values on the
left-hand side, and again attempt to drive the average deviation to $0.40$ or below.
You've now figured out all the elements necessary for the network to approximately compute the function $f(x)$! It's only a coarse approximation, but we could easily do much better, merely by
increasing the number of pairs of hidden neurons, allowing more bumps.
In particular, it's easy to convert all the data we have found back into the standard parameterization used for neural networks. Let me just recap quickly how that works.
The first layer of weights all have some large, constant value, say $w = 1000$.
The biases on the hidden neurons are just $b = -w s$. So, for instance, for the second hidden neuron $s = 0.2$ becomes $b = -1000 \times 0.2 = -200$.
The final layer of weights are determined by the $h$ values. So, for instance, the value you've chosen above for the first $h$ means that the output weights from the top two hidden neurons are $+h$ and $-h$, respectively. And so on, for the entire layer of output weights.
Finally, the bias on the output neuron is $0$.
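This recap can be written out as a short conversion routine. The sketch below (illustrative heights only; $w = 1000$ as the large constant first-layer weight) builds the standard weights and biases from the $(s, h)$ parameterization and evaluates the result:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W = 1000.0  # large, constant first-layer weight

def build_network(heights):
    """For N pairs of hidden neurons with step points 0, 1/N, 1/N, 2/N, ...,
    return the first-layer biases (b = -W*s) and the output weights (+h, -h)."""
    N = len(heights)
    biases, out_weights = [], []
    for j, h in enumerate(heights):
        for s in (j / N, (j + 1) / N):
            biases.append(-W * s)
        out_weights += [h, -h]
    return biases, out_weights

def weighted_output(x, biases, out_weights):
    return sum(v * sigmoid(W * x + b) for b, v in zip(biases, out_weights))

biases, out_weights = build_network([0.2, 0.9, -0.3, 0.5, 0.1])
print(round(weighted_output(0.3, biases, out_weights), 3))  # -> 0.9
# Full network output: the output neuron has bias 0, so just apply sigma.
print(round(sigmoid(weighted_output(0.3, biases, out_weights)), 3))
```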
That's everything: we now have a complete description of a neural network which does a pretty good job computing our original goal function. And we understand how to improve the quality of the
approximation by increasing the number of hidden neurons.
What's more, there was nothing special about our original goal function, $f(x) = 0.2+0.4 x^2+0.3 x \sin(15 x) + 0.05 \cos(50 x)$. We could have used this procedure for any continuous function from $[0,
1]$ to $[0, 1]$. In essence, we're using our single-layer neural networks to build a lookup table for the function. And we'll be able to build on this idea to provide a general proof of universality.
Let's extend our results to the case of many input variables. This sounds complicated, but all the ideas we need can be understood in the case of just two inputs. So let's address the two-input case.
We'll start by considering what happens when we have two inputs to a neuron:
Here, we have inputs $x$ and $y$, with corresponding weights $w_1$ and $w_2$, and a bias $b$ on the neuron. Let's set the weight $w_2$ to $0$, and then play around with the first weight, $w_1$, and
the bias, $b$, to see how they affect the output from the neuron:
As you can see, with $w_2 = 0$ the input $y$ makes no difference to the output from the neuron. It's as though $x$ is the only input.
Given this, what do you think happens when we increase the weight $w_1$ to $w_1 = 100$, with $w_2$ remaining $0$? If you don't immediately see the answer, ponder the question for a bit, and see if
you can figure out what happens. Then try it out and see if you're right. I've shown what happens in the following movie:
Just as in our earlier discussion, as the input weight gets larger the output approaches a step function. The difference is that now the step function is in three dimensions. Also as before, we can
move the location of the step point around by modifying the bias. The actual location of the step point is $s_x \equiv -b / w_1$.
Let's redo the above using the position of the step as the parameter:
Here, we assume the weight on the $x$ input has some large value - I've used $w_1 = 1000$ - and the weight $w_2 = 0$. The number on the neuron is the step point, and the little $x$ above the number
reminds us that the step is in the $x$ direction. Of course, it's also possible to get a step function in the $y$ direction, by making the weight on the $y$ input very large (say, $w_2 = 1000$), and
the weight on the $x$ equal to $0$, i.e., $w_1 = 0$:
The number on the neuron is again the step point, and in this case the little $y$ above the number reminds us that the step is in the $y$ direction. I could have explicitly marked the weights on the
$x$ and $y$ inputs, but decided not to, since it would make the diagram rather cluttered. But do keep in mind that the little $y$ marker implicitly tells us that the $y$ weight is large, and the $x$
weight is $0$.
We can use the step functions we've just constructed to compute a three-dimensional bump function. To do this, we use two neurons, each computing a step function in the $x$ direction. Then we combine
those step functions with weight $h$ and $-h$, respectively, where $h$ is the desired height of the bump. It's all illustrated in the following diagram:
Try changing the value of the height, $h$. Observe how it relates to the weights in the network. And see how it changes the height of the bump function on the right.
Also, try changing the step point $0.30$ associated to the top hidden neuron. Witness how it changes the shape of the bump. What happens when you move it past the step point $0.70$ associated to the
bottom hidden neuron?
We've figured out how to make a bump function in the $x$ direction. Of course, we can easily make a bump function in the $y$ direction, by using two step functions in the $y$ direction. Recall that
we do this by making the weight large on the $y$ input, and the weight $0$ on the $x$ input. Here's the result:
This looks nearly identical to the earlier network! The only thing explicitly shown as changing is that there's now little $y$ markers on our hidden neurons. That reminds us that they're producing
$y$ step functions, not $x$ step functions, and so the weight is very large on the $y$ input, and zero on the $x$ input, not vice versa. As before, I decided not to show this explicitly, in order to
avoid clutter.
Let's consider what happens when we add up two bump functions, one in the $x$ direction, the other in the $y$ direction, both of height $h$:
To simplify the diagram I've dropped the connections with zero weight. For now, I've left in the little $x$ and $y$ markers on the hidden neurons, to remind you in what directions the bump functions
are being computed. We'll drop even those markers later, since they're implied by the input variable.
Try varying the parameter $h$. As you can see, this causes the output weights to change, and also the heights of both the $x$ and $y$ bump functions.
What we've built looks a little like a tower function:
If we could build such tower functions, then we could use them to approximate arbitrary functions, just by adding up many towers of different heights, and in different locations:
Of course, we haven't yet figured out how to build a tower function. What we have constructed looks like a central tower, of height $2h$, with a surrounding plateau, of height $h$.
But we can make a tower function. Remember that earlier we saw neurons can be used to implement a type of if-then-else statement:
if input >= threshold:
    output 1
else:
    output 0
That was for a neuron with just a single input. What we want is to apply a similar idea to the combined output from the hidden neurons:
if combined output from hidden neurons >= threshold:
    output 1
else:
    output 0
If we choose the threshold appropriately - say, a value of $3h/2$, which is sandwiched between the height of the plateau and the height of the central tower - we could squash the plateau down to
zero, and leave just the tower standing.
Can you see how to do this? Try experimenting with the following network to figure it out. Note that we're now plotting the output from the entire network, not just the weighted output from the
hidden layer. This means we add a bias term to the weighted output from the hidden layer, and apply the sigma function. Can you find values for $h$ and $b$ which produce a tower? This is a bit
tricky, so if you think about this for a while and remain stuck, here's two hints: (1) To get the output neuron to show the right kind of if-then-else behaviour, we need the input weights (all $h$ or
$-h$) to be large; and (2) the value of $b$ determines the scale of the if-then-else threshold.
With our initial parameters, the output looks like a flattened version of the earlier diagram, with its tower and plateau. To get the desired behaviour, we increase the parameter $h$ until it becomes
large. That gives the if-then-else thresholding behaviour. Second, to get the threshold right, we'll choose $b \approx -3h/2$. Try it, and see how it works!
Here's what it looks like, when we use $h = 10$:
Even for this relatively modest value of $h$, we get a pretty good tower function. And, of course, we can make it as good as we want by increasing $h$ still further, and keeping the bias as $b = -3h/2$.
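The tower construction is easy to verify numerically. The sketch below (step points $0.3$ and $0.7$ are illustrative; $h = 10$ matches the value discussed in the text) combines an $x$-bump and a $y$-bump, then thresholds with bias $-3h/2$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(u, s, w=1000.0):
    return sigmoid(w * (u - s))

def tower(x, y, h=10.0):
    """Two x-step neurons and two y-step neurons, combined with weights
    +h/-h; the output neuron's bias -3h/2 sits between the plateau (h)
    and the central tower (2h), squashing the plateau to ~0."""
    hidden = (h * step(x, 0.3) - h * step(x, 0.7)    # x-direction bump
              + h * step(y, 0.3) - h * step(y, 0.7)) # y-direction bump
    return sigmoid(hidden - 1.5 * h)

print(round(tower(0.5, 0.5), 3))  # inside both bumps: sigma(h/2) -> 0.993
print(round(tower(0.5, 0.9), 3))  # on the plateau: sigma(-h/2) -> 0.007
print(round(tower(0.9, 0.9), 3))  # outside everything -> 0.0
```

As the text notes, increasing $h$ sharpens the thresholding: with $h = 10$ the tower top is already $0.993$ rather than exactly $1$, and it approaches $1$ as $h$ grows.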
Let's try gluing two such networks together, in order to compute two different tower functions. To make the respective roles of the two sub-networks clear I've put them in separate boxes, below: each
box computes a tower function, using the technique described above. The graph on the right shows the weighted output from the second hidden layer, that is, it's a weighted combination of tower functions.
In particular, you can see that by modifying the weights in the final layer you can change the height of the output towers.
The same idea can be used to compute as many towers as we like. We can also make them as thin as we like, and whatever height we like. As a result, we can ensure that the weighted output from the
second hidden layer approximates any desired function of two variables:
In particular, by making the weighted output from the second hidden layer a good approximation to $\sigma^{-1} \circ f$, we ensure the output from our network will be a good approximation to any
desired function, $f$.
What about functions of more than two variables?
Let's try three variables $x_1, x_2, x_3$. The following network can be used to compute a tower function in four dimensions:
Here, the $x_1, x_2, x_3$ denote inputs to the network. The $s_1, t_1$ and so on are step points for neurons - that is, all the weights in the first layer are large, and the biases are set to give
the step points $s_1, t_1, s_2, \ldots$. The weights in the second layer alternate $+h, -h$, where $h$ is some very large number. And the output bias is $-5h/2$.
This network computes a function which is $1$ provided three conditions are met: $x_1$ is between $s_1$ and $t_1$; $x_2$ is between $s_2$ and $t_2$; and $x_3$ is between $s_3$ and $t_3$. The network
is $0$ everywhere else. That is, it's a kind of tower which is $1$ in a little region of input space, and $0$ everywhere else.
By gluing together many such networks we can get as many towers as we want, and so approximate an arbitrary function of three variables. Exactly the same idea works in $m$ dimensions. The only change
needed is to make the output bias $(-m+1/2)h$, in order to get the right kind of sandwiching behavior to level the plateau.
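The same sketch generalizes directly to $m$ dimensions (again my own illustrative code, built on the soft-step bumps used earlier): one bump per input, weight $h$ on each, and output bias $(-m + 1/2)h$:

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, s, t, w=1000.0):
    # approximate indicator of s <= x <= t using two sigmoid steps
    return sigma(w * (x - s)) - sigma(w * (x - t))

def tower_m(xs, intervals, h=10.0):
    m = len(xs)
    z = sum(h * bump(x, s, t) for x, (s, t) in zip(xs, intervals))
    z += (-m + 0.5) * h  # output bias (-m + 1/2) h
    return sigma(z)

box = [(0.2, 0.4), (0.5, 0.7), (0.1, 0.3)]  # (s_i, t_i) for m = 3
print(tower_m([0.3, 0.6, 0.2], box))  # all three conditions met: near 1
print(tower_m([0.3, 0.6, 0.9], box))  # one condition fails: near 0
```

When all $m$ bumps fire, the weighted input is $mh + (-m + 1/2)h = h/2 > 0$; if even one fails it is at most $-h/2 < 0$, which is exactly the sandwiching behaviour described above.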
Okay, so we now know how to use neural networks to approximate a real-valued function of many variables. What about vector-valued functions $f(x_1, \ldots, x_m) \in R^n$? Of course, such a function
can be regarded as just $n$ separate real-valued functions, $f^1(x_1, \ldots, x_m), f^2(x_1, \ldots, x_m)$, and so on. So we create a network approximating $f^1$, another network for $f^2$, and so
on. And then we simply glue all the networks together. So that's also easy to cope with.
• We've seen how to use networks with two hidden layers to approximate an arbitrary function. Can you find a proof showing that it's possible with just a single hidden layer? As a hint, try working
in the case of just two input variables, and showing that: (a) it's possible to get step functions not just in the $x$ or $y$ directions, but in an arbitrary direction; (b) by adding up many of
the constructions from part (a) it's possible to approximate a tower function which is circular in shape, rather than rectangular; (c) using these circular towers, it's possible to approximate an
arbitrary function. To do part (c) it may help to use ideas from a bit later in this chapter.
We've proved that networks made up of sigmoid neurons can compute any function. Recall that in a sigmoid neuron the inputs $x_1, x_2, \ldots$ result in the output $\sigma(\sum_j w_j x_j + b)$, where
$w_j$ are the weights, $b$ is the bias, and $\sigma$ is the sigmoid function:
What if we consider a different type of neuron, one using some other activation function, $s(z)$:
That is, we'll assume that if our neuron has inputs $x_1, x_2, \ldots$, weights $w_1, w_2, \ldots$ and bias $b$, then the output is $s(\sum_j w_j x_j + b)$.
We can use this activation function to get a step function, just as we did with the sigmoid. Try ramping up the weight in the following, say to $w = 100$:
Just as with the sigmoid, this causes the activation function to contract, and ultimately it becomes a very good approximation to a step function. Try changing the bias, and you'll see that we can
set the position of the step to be wherever we choose. And so we can use all the same tricks as before to compute any desired function.
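A quick numeric illustration (my own sketch) using a non-sigmoid activation, $s(z) = \tanh(z)$, whose limits at $\pm\infty$ are $-1$ and $+1$ and therefore distinct:

```python
import math

def s(z):
    # any activation with distinct, well-defined limits at +/- infinity works;
    # tanh has limits -1 and +1
    return math.tanh(z)

def neuron(x, w, b):
    return s(w * x + b)

# ramping up the weight turns the neuron into a step at x = -b/w
w, b = 100.0, -50.0          # step position: 0.5
print(neuron(0.4, w, b))     # well below the step: close to -1
print(neuron(0.6, w, b))     # well above the step: close to +1
```

The step values here are $-1$ and $+1$ rather than $0$ and $1$, but that only rescales the construction; the same tricks go through.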
What properties does $s(z)$ need to satisfy in order for this to work? We do need to assume that $s(z)$ is well-defined as $z \rightarrow -\infty$ and $z \rightarrow \infty$. These two limits are the
two values taken on by our step function. We also need to assume that these limits are different from one another. If they weren't, there'd be no step, simply a flat graph! But provided the
activation function $s(z)$ satisfies these properties, neurons based on such an activation function are universal for computation.
• Earlier in the book we met another type of neuron known as a rectified linear unit. Explain why such neurons don't satisfy the conditions just given for universality. Find a proof of universality
showing that rectified linear units are universal for computation.
• Suppose we consider linear neurons, i.e., neurons with the activation function $s(z) = z$. Explain why linear neurons don't satisfy the conditions just given for universality. Show that such
neurons can't be used to do universal computation.
Up to now, we've been assuming that our neurons can produce step functions exactly. That's a pretty good approximation, but it is only an approximation. In fact, there will be a narrow window of
failure, illustrated in the following graph, in which the function behaves very differently from a step function:
In these windows of failure the explanation I've given for universality will fail.
Now, it's not a terrible failure. By making the weights input to the neurons big enough we can make these windows of failure as small as we like. Certainly, we can make the window much narrower than
I've shown above - narrower, indeed, than our eye could see. So perhaps we might not worry too much about this problem.
Nonetheless, it'd be nice to have some way of addressing the problem.
In fact, the problem turns out to be easy to fix. Let's look at the fix for neural networks computing functions with just one input and one output. The same ideas work also to address the problem
when there are more inputs and outputs.
In particular, suppose we want our network to compute some function, $f$. As before, we do this by trying to design our network so that the weighted output from our hidden layer of neurons is $\sigma^{-1} \circ f(x)$:
If we were to do this using the technique described earlier, we'd use the hidden neurons to produce a sequence of bump functions:
Again, I've exaggerated the size of the windows of failure, in order to make them easier to see. It should be pretty clear that if we add all these bump functions up we'll end up with a reasonable
approximation to $\sigma^{-1} \circ f(x)$, except within the windows of failure.
Suppose that instead of using the approximation just described, we use a set of hidden neurons to compute an approximation to half our original goal function, i.e., to $\sigma^{-1} \circ f(x) / 2$.
Of course, this looks just like a scaled down version of the last graph:
And suppose we use another set of hidden neurons to compute an approximation to $\sigma^{-1} \circ f(x) / 2$, but with the bases of the bumps shifted by half the width of a bump:
Now we have two different approximations to $\sigma^{-1} \circ f(x) / 2$. If we add up the two approximations we'll get an overall approximation to $\sigma^{-1} \circ f(x)$. That overall
approximation will still have failures in small windows. But the problem will be much less than before. The reason is that points in a failure window for one approximation won't be in a failure
window for the other. And so the approximation will be a factor roughly $2$ better in those windows.
We could do even better by adding up a large number, $M$, of overlapping approximations to the function $\sigma^{-1} \circ f(x) / M$. Provided the windows of failure are narrow enough, a point will
only ever be in one window of failure. And provided we're using a large enough number $M$ of overlapping approximations, the result will be an excellent overall approximation.
The explanation for universality we've discussed is certainly not a practical prescription for how to compute using neural networks! In this, it's much like proofs of universality for NAND gates and
the like. For this reason, I've focused mostly on trying to make the construction clear and easy to follow, and not on optimizing the details of the construction. However, you may find it a fun and
instructive exercise to see if you can improve the construction.
Although the result isn't directly useful in constructing networks, it's important because it takes off the table the question of whether any particular function is computable using a neural network.
The answer to that question is always "yes". So the right question to ask is not whether any particular function is computable, but rather what's a good way to compute the function.
The universality construction we've developed uses just two hidden layers to compute an arbitrary function. Furthermore, as we've discussed, it's possible to get the same result with just a single
hidden layer. Given this, you might wonder why we would ever be interested in deep networks, i.e., networks with many hidden layers. Can't we simply replace those networks with shallow, single hidden
layer networks?
While in principle that's possible, there are good practical reasons to use deep networks. As argued in Chapter 1, deep networks have a hierarchical structure which makes them particularly well adapted to learn the hierarchies of knowledge that seem to be useful in solving real-world problems. Put more concretely, when attacking problems such as image recognition, it helps to use a system that understands not just individual pixels, but also increasingly more complex concepts: from edges to simple geometric shapes, all the way up through complex, multi-object scenes. In later chapters, we'll see evidence suggesting that deep networks do a better job than shallow networks at learning such hierarchies of knowledge. To sum up: universality tells us that neural networks can compute any function; and empirical evidence suggests that deep networks are the networks best adapted to learn the functions useful in solving many real-world problems.
Chapter acknowledgments: Thanks to Jen Dodd and Chris Olah for many discussions about universality in neural networks. My thanks, in particular, to Chris for suggesting the use of a lookup table to prove universality. The interactive visual form of the chapter is inspired by the work of people such as Mike Bostock, Amit Patel, Bret Victor, and Steven Wittens.
Asymptotics-based CI models for atoms: Properties, exact solution of a minimal model for Li to Ne, and application to atomic spectra
Configuration-interaction (CI) models are approximations to the electronic Schrödinger equation which are widely used for numerical electronic structure calculations in quantum chemistry. Based on
our recent closed-form asymptotic results for the full atomic Schrödinger equation in the limit of fixed electron number and large nuclear charge [SIAM J. Math. Anal., 41 (2009), pp. 631-664], we
introduce a class of CI models for atoms which reproduce, at fixed finite model dimension, the correct Schrödinger eigenvalues and eigenstates in this limit. We solve exactly the ensuing minimal
model for the second period atoms, Li to Ne, except for optimization of eigenvalues with respect to orbital dilation parameters, which is carried out numerically. The energy levels and eigenstates
are in remarkably good agreement with experimental data (comparable to that of much larger scale numerical simulations in the literature) and facilitate a mathematical understanding of various
spectral, chemical, and physical properties of small atoms.
• Atomic spectra
• Configuration interaction
• Schrödinger equation
• Second period
Trigonometry Identities: A Crash Course in Complex Math Concepts
Fundamental Trigonometry Identities, aka trig identities or trigo identities, are equations involving trigonometric functions that hold for any value you substitute into their variables.
These identities are essential tools if you want to solve trigonometric equations and perform complex calculations in mathematics, physics, or engineering. Understanding all the trigonometric
identities can help you simplify seemingly complicated problems, especially in geometry and calculus.
The Foundation of Trigonometry Identities
Trigonometry is a branch of mathematics. At the heart of trigonometry lie the trigonometric functions, which relate the angles of a triangle to the ratios of its sides.
The most basic trigonometric functions are sine, cosine, and tangent, which instructors often teach using the mnemonic SOH-CAH-TOA in right-angled triangles.
From these basic trig functions, we derive other crucial functions, such as secant, cosecant, and cotangent, all of which play vital roles in further developing trigonometric theory.
You might hear people refer to sine, cosine, tangent, secant, cosecant, and cotangent as the six trigonometric ratios or trig ratios.
Fundamental Trigonometric Identities
Trigonometric identities form a cornerstone of higher mathematics. They encapsulate all the trigonometric ratios and relationships in a framework that enhances the solving of equations and
understanding of geometric and algebraic concepts.
Trigonometric identities encompass a wide range of formulas, but people generally group them into categories based on their specific applications and forms.
There are three main categories comprising eight fundamental trigonometric identities. These categories include reciprocal identities, Pythagorean identities, and quotient identities.
Reciprocal Identities
These identities express the basic trigonometric functions in terms of their reciprocal functions:
• Sine and cosecant: csc(θ) = 1/sin(θ)
• Cosine and secant: sec(θ) = 1/cos(θ)
• Tangent and cotangent: cot(θ) = 1/tan(θ)
Pythagorean Identities
The Pythagorean trigonometric identities stem from the Pythagorean theorem, named after Pythagoras, the Greek scholar who came up with the mathematical statement.
The trig identities based on the Pythagorean theorem are fundamental to connecting the squares of the primary trigonometric functions:
• Basic Pythagorean identity: sin^2(θ) + cos^2(θ) = 1
• Derived for tangent: 1 + tan^2(θ) = sec^2(θ)
• Derived for cotangent: cot^2(θ) + 1 = csc^2(θ)
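These can be spot-checked numerically; a quick sketch (the angle 0.7 rad is an arbitrary choice where all six ratios are defined):

```python
import math

th = 0.7  # radians

# the three Pythagorean identities
assert abs(math.sin(th)**2 + math.cos(th)**2 - 1) < 1e-12
assert abs(1 + math.tan(th)**2 - (1 / math.cos(th))**2) < 1e-12
assert abs((1 / math.tan(th))**2 + 1 - (1 / math.sin(th))**2) < 1e-12
print("all three Pythagorean identities hold at th =", th)
```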
Quotient Identities
These identities relate the functions through division:
• Tangent as a quotient: tan(θ) = sin(θ)/cos(θ)
• Cotangent as a quotient: cot(θ) = cos(θ)/sin(θ)
Of course, there are many more trigonometric identities beyond just these core identities that have applications in specific scenarios, such as double angle, triple angle, half angle, and sum and
difference identities.
Double Angle Trigonometric Identities
The double angle formulas are trigonometric identities that express trigonometric functions of double angles — that is, angles of the form 2θ — in terms of trigonometric functions of single angles (θ).
These formulas are crucial in various mathematical computations and transformations, particularly in calculus, geometry, and solving trigonometric equations.
The primary double-angle formulas include those for sine, cosine, and tangent.
Cosine Double Angle Formula
The cosine double-angle formula is:
cos(2θ) = cos^2(θ) – sin^2(θ)
You can also represent this in two alternative forms using the Pythagorean identity sin^2(θ) + cos^2(θ) = 1:
cos(2θ) = 2cos^2(θ) – 1 = 1 – 2sin^2(θ)
Sine Double Angle Formula
The sine double angle formula is:
sin(2θ) = 2sin(θ)cos(θ)
This formula is derived from the sum identities and is useful for solving problems involving products of sine and cosine.
Tangent Double Angle Formula
The tangent double angle formula is:
tan(2θ) = (2tan(θ))/(1 – tan^2(θ))
This expression arises from dividing the sine double angle formula by the cosine double angle formula and simplifying using the definition of tangent.
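A quick numeric spot-check of all three double angle formulas (the angle is arbitrary, chosen to avoid points where tan(2θ) is undefined):

```python
import math

th = 0.6  # radians

# cosine, sine, and tangent double angle formulas
assert abs(math.cos(2*th) - (math.cos(th)**2 - math.sin(th)**2)) < 1e-12
assert abs(math.sin(2*th) - 2*math.sin(th)*math.cos(th)) < 1e-12
assert abs(math.tan(2*th) - 2*math.tan(th)/(1 - math.tan(th)**2)) < 1e-12
print("double angle formulas verified at th =", th)
```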
Triple Angle Trigonometric Identities
Triple angle formulas, while less commonly used, offer shortcuts in specific scenarios, such as in certain integrals and polynomial equations. These are identities that allow the calculation of the
sine, cosine, and tangent of three times a given angle (3θ) using the trigonometric functions of the angle itself (θ).
For example, the sine triple angle formula is:
sin(3θ) = 3sin(θ) – 4sin^3(θ)
This formula is derived by using the sine double angle formula and the angle sum identity.
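A quick spot-check at an arbitrary angle:

```python
import math

th = 0.5  # radians

# sine triple angle: sin(3θ) = 3sin(θ) - 4sin^3(θ)
assert abs(math.sin(3*th) - (3*math.sin(th) - 4*math.sin(th)**3)) < 1e-12
print("sin(3θ) formula verified at th =", th)
```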
Triple-angle formulas can be derived from double-angle and sum identities and are useful in specific mathematical and engineering contexts, such as simplifying complex trigonometric expressions or
solving higher-degree trigonometric equations.
Half Angle Identities
Half-angle identities are trigonometric formulas that allow you to prove trigonometric identities for the sine, cosine, and tangent of half of a given angle.
Half-angle formulas are particularly useful in solving trigonometric equations, integrating trigonometric functions, and simplifying expressions when the angle involved is halved. Half-angle formulas
are derived from the double-angle identities and other fundamental trigonometric identities.
The half-angle identities for sine, cosine, and tangent use the following half-angle formulas:
• Sine half angle identity: sin(θ/2) = ±√((1 – cosθ)/2)
• Cosine half angle identity: cos(θ/2) = ±√((1 + cosθ)/2)
• Tangent half angle identity: tan(θ/2) = sin(θ)/(1 + cos(θ)) = (1 – cos(θ))/sin(θ)
In the case of the sine and cosine half-angle formulas, the sign depends on the quadrant in which θ/2 resides. The tangent half-angle formula can also be expressed in terms of sine and cosine.
These identities are derived by manipulating the double-angle identities. For example, the cosine double angle identity cos(2θ) = 2cos^2(θ) – 1 can be rearranged to express cos^2(θ) in terms of cos(2θ), and then taking the square root (and adjusting for sign based on the angle’s quadrant) gives the half angle formula for cosine.
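A numeric spot-check, using an angle whose half lies in the first quadrant so the positive square roots apply:

```python
import math

th = 1.0  # radians; θ/2 is in the first quadrant, so the + sign applies

assert abs(math.sin(th/2) - math.sqrt((1 - math.cos(th)) / 2)) < 1e-12
assert abs(math.cos(th/2) - math.sqrt((1 + math.cos(th)) / 2)) < 1e-12
assert abs(math.tan(th/2) - math.sin(th) / (1 + math.cos(th))) < 1e-12
print("half angle formulas verified at th =", th)
```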
Half-angle identities are crucial for simplifying the integration of trigonometric functions, particularly when integral limits involve pi (π) or when integrating periodic functions. They also play a
vital role in various fields of science and engineering where wave functions and oscillations are analyzed.
Sum and Difference Identities
Sum identities in trigonometry are essential formulas that allow for the calculation of the sine, cosine, and tangent of the sum of two angles. Conversely, difference formulas allow you to calculate
the sine, cosine, and tangent of the difference between two angles.
These identities are incredibly useful for simplifying expressions, solving trigonometric equations, and performing complex calculations.
Polar Curves, what are they? • EVOLUTION (en)
Polar Curves, what are they?
Polar curves are part of today’s sailing world. They are commonplace in sailboat design, rating methods, and instrument systems. They are also at the core of software applications like EVOLUTION.
Polar curves are essential to yacht racing best practices. While racing, they help us solve performance, tactical, and strategic issues.
In this article, I will try to demystify what they are, what they represent, and where they come from.
And, no. God doesn’t have anything to do with them. Or does he? …
Well, let’s start by sailing! It is a perfect day. The wind is steady at 20 knots. This is our true wind speed (TWS). We decide to start by sailing at an angle of 30 degrees from where the wind is
coming. This will be our true wind angle (TWA). After an hour, we realize we have sailed six nautical miles through the water. Interesting… We then decided to sail at a TWA of 50º. The distance done
in an hour was 8.2 nm. The day is so nice that we repeat this routine for a few more TWAs.
Back home, we decide to plot the one-hour sail at each TWA. We choose a polar graph, as angles are shown radially from the center.
Finally, we join each consecutive end-point with a line. We sailed all distances in one hour; so, they are easily converted to boat speed in knots. Now we know how fast our boat sails through the
water in every possible TWA.
Please, meet our polar curve for 20 kt of TWS!
From the previous story, we get the answer. A polar curve represents the boat’s speed (BS), sailing at all possible angles to the wind (TWA) for a certain wind speed (TWS). In consequence:
• Each polar curve corresponds to a single TWS. In our example: 20 kt.
• As there are infinite possible TWS, by convention, we only plot the curves for certain TWS (for example, 6, 10, 14, 20, and 26 kt). For other TWS, we’ll need to interpolate.
• For convenience, we use the “polar” graph type, where the angles represent TWAs and the concentric rings, the boat’s speeds.
The result is a rudimentary way of visualizing the performance of a sailing boat in every possible sailing condition.
Expressed mathematically, the BS is a function of TWS and TWA. Quite logical!
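To make that concrete, here is a sketch of how a software application might look up a target boat speed from a polar table by interpolating in both TWA and TWS. The table values below are invented for illustration, not real performance data:

```python
import numpy as np

# illustrative polar table: rows = TWA (deg), columns = TWS (kt)
twa_grid = np.array([30, 50, 70, 90, 120, 150, 180])
tws_grid = np.array([6, 10, 14, 20, 26])
bs_table = np.array([        # boat speed through the water, knots
    [3.1, 4.5, 5.2, 6.0, 6.3],
    [4.4, 6.0, 6.9, 8.2, 8.6],
    [5.0, 6.7, 7.5, 8.9, 9.4],
    [5.3, 7.0, 7.9, 9.5, 10.2],
    [4.9, 6.8, 7.8, 9.8, 10.9],
    [4.0, 5.9, 7.1, 9.2, 10.6],
    [3.4, 5.1, 6.4, 8.5, 10.0],
])

def target_speed(twa, tws):
    """Bilinear interpolation of the polar table for arbitrary TWA/TWS."""
    # interpolate along TWS within each TWA row, then along TWA
    by_twa = np.array([np.interp(tws, tws_grid, row) for row in bs_table])
    return float(np.interp(twa, twa_grid, by_twa))

print(target_speed(60, 17))  # between grid points in both dimensions
```

For this made-up table, target_speed(60, 17) comes out at 7.875 kt, halfway between the four surrounding grid values.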
Where do I get my boat's polar curves?
We are now aware that we can predict from the polar curves the boat speed by measuring the actual TWA and TWS. Then, by comparing the real and the expected boat speed, we’ll have a rough idea of how
well we are sailing at that moment.
Great help if you are racing, but too much work. Checking the TWA and TWS and our polar curves graphic all the time is not a healthy option.
The solution is obvious! Load the performance information into a computer. Then connect it to the instruments to gather the wind data, and let the software do the job for you. Now, for example, you
can check how off the target speed you are sailing at. And this is only the tip of the iceberg!
But here, we face an extra challenge; computers like numbers, not graphics. We will need high-quality numeric data to feed the software. Let’s see how to get them…
Collecting data while sailing.
This is the basic method described in our short story above. But not always is God on our side…
• It is a challenging job. You will need tons of data and analysis to get each point.
• It would take a lot of time. You will need to sail on many days to collect data for different TWS.
• It won’t be exact. Your instruments would need thorough calibration each time you go out sailing.
• Finally, the performance data you get will only reflect the ability of this crew to sail the boat rather than the boat’s full potential.
This is not a brilliant way to go if you start from scratch.
A velocity prediction program.
In the early ’70s, engineers at MIT were asked to develop a performance simulation software. They came back with a velocity prediction program, or VPP for short.
This system would yield the boat’s speed in any sailing condition with only the hull, sails, and rig characteristics as input. The results were terrific; the numbers were exact.
At some point, someone realized that if a VPP could predict any boat’s performance, why not use it as part of a rating system? And so, the ORC IMS rules were born.
So, if your boat was recently measured under the ORC system, check your certificate, and you will find the polar curve numbers.
If you are starting from scratch, this is the way to go!
Although both methods are pretty different, they are complementary. You can use the ORC certificate as a first approach. Then, with EVOLUTION running aboard, collect information to confirm or adjust
the numbers.
The truth is that for some boats, there can be discrepancies between the VPP results and reality.
Consider this. As the ORC IMS rules are well understood, designers exploit their loopholes. They develop designs that trick the rules and beat the VPP (rating). For example, in the early 2000s, Code 0 sails and square-section boats had a slight advantage. Fortunately, the ORC improves the rules every year, closing the loopholes.
Another typical case is wide-transom planing boats. Depending on the swell of any particular day, surfing can start at different TWA/TWS. For these days, you might need an adjusted set of polar curves.
Some technical tidbits…
• Polar curves have only the water surface and the wind trajectory as their frame of reference. Boat speeds are ALWAYS through the water.
• They are symmetric to the wind. The port and starboard sides have the same values. So, we can graph, or input, one side only.
• Geographic coordinates, direction, and ground speed (SOG) have no meaning. This is the realm of performance through the water.
• All TWA are measured from the wind to the boat’s course, NOT to its center-line, as these angles include the leeway. I prefer to call them polar wind angles (PWA). We should be careful when
comparing them to the instrument’s TWA, like apples and oranges!
If the boat’s performance numbers don’t match the data coming from your instruments, there are many possible reasons. The last of which is incorrect polar curves.
Your instruments might need calibration. Correct TWA and TWS are essential to get an accurately predicted boat speed. Actual boat speed also needs to be correctly measured. Sometimes wind speed at
the mast top differs from the effective wind on the sails below.
Collecting and analyzing several races/trials is best before messing with the boat’s polar curves. Seeking advice from experts is always a good idea.
Learning Priority Queues
Last week, I wrote about my first foray into Binary Heaps, which can be found here. Having knowledge of heaps is really helpful when learning priority queues, since the optimal(but not only) way to
implement them is with a heap. In last weeks post, I also had a link to a repl I made showing how to construct a max heap in JavaScript(here), as well as some essential methods to implement. I have
added a priority queue class in that same repl.
What are priority queues?
A priority queue is an abstract data structure. It is a lot like a normal queue, except each node or element has a priority assigned to it in addition to its value. The nodes with the highest
priority are removed from the queue first. If two elements have the same priority, whichever one was added to the queue first will be removed first.
There are a variety of different ways that we can implement priority queues. We could use an array, linked list, or a heap. Binary heaps lend themselves well to use as priority queues because Max
Heaps and Min Heaps are organized based on the value of a node being less than or greater than the value of its parent, respectively. This means that a lot of the logic to create a priority queue
already exists within heaps. Because heaps are so easy to use for priority queues, there is a common misconception that priority queues are heaps. While you should probably always use a heap for your
priority queue, it is important to know that you don’t necessarily have to.
Why not an array or linked list?
If we were to use an array, the time complexity would be much greater. The greatest weakness of an array in terms of big O is re-indexing. For example, if we add an item to the beginning of an array,
every single element after that needs to be re-indexed. Adding to the end is much more efficient, but unfortunately, that would not solve our problems, since the array would very soon not be in order
of priority. At this point, we would have to iterate over the array to find out which item has the highest priority, which could take a really long time depending on the size of the array. While it
would get the job done, a heap is a much more efficient tool. Linked Lists have similar time complexity issues to arrays for tasks like this.
Real-world applications
Priority queues are used under the hood by our computers all of the time to both make sure that the most important operations are being handled first. They are also used in data compression, and
Djikstra’s Shortest Path Algorithm. However, the concept of a priority queue is not exclusive to computer science. The Covid-19 vaccine rollout is an excellent example of a priority queue. You could
think of the elderly being the highest priority, followed by people with suppressed immune systems, essential workers, and so on. When priority is equal, such as two people who are in their 90’s,
whoever called or applied for an appointment first(or ‘queued up’) will be the first to receive the vaccine.
The fun part
Now that you have a good idea of what a priority queue is(or, however good of an idea I have at least), here’s the code I used to implement a priority queue! I decided to make a Min Heap, since last
week I made a Max Heap. A priority queue can be made with either. When using a Min Heap, the nodes with the lowest number of priority will be removed first. While this may seem counterintuitive, it
is actually pretty common outside of programming for ‘priority 1’ to mean ‘of the greatest priority.’ This can of course be done with a Max Heap as well, just keep in mind that the highest priority
value will be removed first.
First, here is the node class that I created. These nodes will make up our priority queue, and are relatively simple. There are no pointers or anything like that, just a value and the priority of
that value.
Next, we have our basic priority queue, without any methods.
Here is our method for adding a new node to the queue.
And finally, here is the method for removing the highest priority item from the queue.
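The snippets in the original post appear as images; below is a minimal reconstruction along the lines described above (the node class, the min-heap-backed queue, the method for adding a node, and the method for removing the lowest-priority-number node). Names and details are mine, and note that this simple heap does not guarantee first-in-first-out order for equal priorities, which would need an extra insertion counter:

```python
class Node:
    def __init__(self, value, priority):
        self.value = value
        self.priority = priority

class PriorityQueue:
    def __init__(self):
        self.values = []  # array backing the min-heap

    def enqueue(self, value, priority):
        # add to the end, then bubble up while smaller than the parent
        self.values.append(Node(value, priority))
        i = len(self.values) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self.values[i].priority >= self.values[parent].priority:
                break
            self.values[i], self.values[parent] = self.values[parent], self.values[i]
            i = parent

    def dequeue(self):
        # remove the root (lowest priority number), move the last node
        # to the root, then sink it down to restore the heap property
        if not self.values:
            return None
        root = self.values[0]
        last = self.values.pop()
        if self.values:
            self.values[0] = last
            i, n = 0, len(self.values)
            while True:
                left, right, smallest = 2*i + 1, 2*i + 2, i
                if left < n and self.values[left].priority < self.values[smallest].priority:
                    smallest = left
                if right < n and self.values[right].priority < self.values[smallest].priority:
                    smallest = right
                if smallest == i:
                    break
                self.values[i], self.values[smallest] = self.values[smallest], self.values[i]
                i = smallest
        return root.value

pq = PriorityQueue()
pq.enqueue("restock shelves", 5)
pq.enqueue("fire in aisle 3", 1)
pq.enqueue("clean spill", 3)
print(pq.dequeue())  # "fire in aisle 3": priority 1 comes out first
```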
Thanks for reading, and I hope that you found this helpful and informative!
Don't Let Your Little Leaguers Grow Up To Be Right-Handed Power Hitters Who Strike Out Alot Because They Might Choke In the Clutch
This is prompted by a post by Tom Tango (aka tangotiger) titled
Best and Worst Clutch Hitters of the Retrosheet era
Tom has a clutch stat based on WPA or "win probability added." The idea there is that every plate appearance by a hitter either increases or decreases his team's probability of winning. A HR with the
score tied in the bottom of the 9th has more impact than one in the first inning with the score 10-0.
But Tom adjusts this by how often a hitter gets to hit in "high leverage" situations. Then that it is compared to what his WPA would be if he always hit in average leverage situations. I hope I got
that right. But, of course, Tom explains it much better. That stat ends up telling us how many more games a player's team wins (or loses) because he hits better or worse in high leverage situations
than he does overall.
Nellie Fox is #1 with +13.4 wins since 1950. That is, by hitting better than he normally did in high leverage situations, he added 13.4 wins to his teams over his whole career. Sammy Sosa was last
with -16.8 wins. That is, he hit worse in high leverage situations than he normally did and this cost his teams 16.8 wins over the course of his career. These two hitters maybe could not be more
different and they may be good illustrations of what is going on with this clutch stat.
So let's call Tom's stat Clutch. That's what it is called at Baseball Reference. I took all the right-handed batters and left-handed batters since 1950 who had 4000+ PAs (653 players). Then I divided
their Clutch stat by their PAs. I did the same thing for HRs and strikeouts. Then I ran a regression with Clutch/PA being the dependent variable and HR/PA and SO/PA being the independent variables. I
also added a dummy variable for being a righty (1 for righties and 0 for lefties).
Here is the regression equation
Clutch/PA = 0.0007 - .00025*Righty - .0169*HR/PA - .00157*SO/PA
All three variables seem to be significant. Here are the t-values:
Righty -6.31
HR/PA -10.49
SO/PA -3.06
R-squared is .314 (meaning that 31.4% of the variation in Clutch/PA across players is explained by the equation) and the standard error per 700 PAs is .33.
Multiplying the coefficient on Righty by 700 gives us -.172 (assuming 700 PAs is a full season). So simply being a righty means you will have a negative Clutch rating of -.172, meaning you will cost your team .172 wins.
This could be because righties can't use the hole at first base with a runner on as well as lefties. When a runner is on first, it makes for a slightly higher leverage situation. Also, righties might
have to face right-handed pitchers more often in high leverage situations than lefties face left-handed pitchers.
To see the impact of HRs and SOs, I found the standard deviation of HR/PA and SO/PA and then checked to see how much Clutch/PA would change with a one standard deviation increase in both stats. Here
they are
HR/PA: .014
SO/PA: .0449
The coefficient on HR/PA was -.0169. That times .014 = -0.00024. But that times 700 PAs is about -.166. So being one standard deviation above average in HR/PA costs your team .166 wins per season.
Maybe HR hitters cannot adapt well in high leverage situations since they generally just swing for the fences. But that is just a guess.
Something similar could be going on for guys who strike out a lot. The coefficient on SO/PA was -.00157. That times .0449 = -0.00007. That times 700 = -.049. So increasing your strikeout rate by one
standard deviation costs your team .049 wins per season. Maybe guys who don't strike out a lot have better bat control, so they can hit the ball through the hole at first base better than average or
they can adapt to the situation better.
Let's look at how all this affects Nellie Fox. He was a lefty, so he does not get the righty penalty. His career HR/PA = .003488. The average for all the players in the sample was .0268. So he was
.0233 below that. To see the effect for the whole season, we multiply that first by -.0169, the coefficient on HR/PA from the regression equation and then times 700. This gives us -.0233*-.0169*700 =
.276. So his lack of power added .276 wins to his teams each year.
What about his entire career? He had 10,035 career PAs, or 14.33 seasons. With 14.33*.276 = 3.96, Fox gets 3.96 clutch wins for his whole career just due to his lack of power.
For SO, Fox had a career rate of .0206. The average was .133. So he was .112 below that. Multiplying -.112 by the SO/PA coefficient of -.00157 and then by 700 gives -.112*-.00157*700 = .122. So his
ability to avoid strikeouts gave his teams .122 clutch wins per season, or 1.76 clutch wins for his career. Then 3.96 + 1.76 = 5.72. Just being a low-HR,
low-SO guy added 5.72 clutch wins. That is nearly half his total.
For Sosa, we have a HR/PA rate of .06154 and a SO/PA rate of .233. Doing the same exercise as I did above for Fox has him with the following "clutch losses" per season due to his high HR rate and
high SO rate:
HR/PA = .41
SO/PA = .11
Sosa had 9,986 career PAs or 14.14 seasons. His HR hitting cost him 5.81 clutch wins and his striking out cost him 1.54. And being a righty cost him 2.43 wins (14.14*.172 = 2.43). The .172 was how
many wins a righty lost per year, as explained above. Then 5.81 + 1.54 + 2.43 = 9.78. That is more than half of his clutch losses.
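The per-season arithmetic above can be reproduced mechanically. Here is a small Python sketch using the rounded coefficients and sample averages quoted in the post; because of the rounding, the last digit can differ slightly from the figures in the text (for instance, the righty penalty comes out at about -.175 rather than -.172):

```python
# Rounded regression coefficients from the post:
# Clutch/PA = 0.0007 - .00025*Righty - .0169*HR/PA - .00157*SO/PA
COEF_RIGHTY = -0.00025
COEF_HR = -0.0169
COEF_SO = -0.00157

AVG_HR_PA = 0.0268  # sample average HR/PA quoted in the post
AVG_SO_PA = 0.133   # sample average SO/PA quoted in the post
PA_SEASON = 700     # treating 700 PAs as a full season

def clutch_components(hr_pa, so_pa, righty):
    """Per-season clutch-win components relative to the sample averages."""
    hr_part = COEF_HR * (hr_pa - AVG_HR_PA) * PA_SEASON
    so_part = COEF_SO * (so_pa - AVG_SO_PA) * PA_SEASON
    righty_part = COEF_RIGHTY * PA_SEASON if righty else 0.0
    return hr_part, so_part, righty_part

# Nellie Fox: lefty, HR/PA = .003488, SO/PA = .0206
fox_hr, fox_so, fox_r = clutch_components(0.003488, 0.0206, righty=False)
# Sammy Sosa: righty, HR/PA = .06154, SO/PA = .233
sosa_hr, sosa_so, sosa_r = clutch_components(0.06154, 0.233, righty=True)

print(f"Fox:  HR {fox_hr:+.3f}, SO {fox_so:+.3f} wins/season")
print(f"Sosa: HR {sosa_hr:+.3f}, SO {sosa_so:+.3f}, righty {sosa_r:+.3f} wins/season")
```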
All of this is, of course, an approximation. The regression is not perfect, since the r-squared was only .314. But the variables were all significant, and the F-stat was 98 (which is significant,
meaning that the 3 variables together probably explain some part of the dependent variable).
So Tom Tango's clutch stat is great in terms of what clutch stats should do but it may have some biases. But those biases might be ones teams should care about since HR hitting ability and SO
avoidance ability are identifiable traits.
I did a very different kind of study several years ago called
Do Power Hitters Choke in the Clutch?
There I have a link to a similar study by Andrew Dolphin, and in that study it did not look like they choked. Also, here are some other comments I made at the tangotiger link:
I happened to have a list of players with 6000+ PAs from 1987-2001 with their OPS in close and late situations (CL) and their OPS in non-CL situations. I took the ratio of CL/nonCL. Tino Martinez did
the best, with 1.095, meaning that his OPS in CL situations was 9.5% higher than in nonCL situations. The correlation between CL OPS/nonCL OPS and SO/PA is -.364. So it looks like guys who strike out a lot have a
little harder time doing well in the clutch.
Also, if you go to the rankings, you can see that 10 of the 12 best players in maintaining their OPS in the CL were lefties or switch hitters.
And it looks like 8 of the bottom twelve are righties.
3 Specific Determinations
THE LOGIC OF CAUSATION
Phase One: Macroanalysis
Chapter 3 – The Specific Determinations.
We shall now look into the consistent combinations of the four genera of causation, symbolized as m, n, p, q, with each other or their negations. Implicit in our gradual development of these concepts of
causation from a common paradigm was the idea that they are abstractions, indefinite concepts that are eventually concretized in the more specific and definite compounds.
We have already found some of their combinations, namely mp and nq, to be inconsistent. This was due to incompatibilities between clauses of their definitions, or in other words, certain rows of their
matrices. Thus, row 6 of m (C + notE is impossible) is in conflict with row 22 (C1 + notE is possible) of p; similarly, row 7 of n (notC + E is impossible) is in conflict with row 23 (notC1 + E is
possible) of q.
It is also possible to prove certain other combinations to be logically impossible. This can be done formally, but not at the present stage of development, because we do not yet have the technical
means to treat negations of generic determinations. To define notm, notn, notp, notq in verbal terms would be extremely arduous and confusing. I will therefore for now merely affirm
that combinations of any one positive generic determination with the negations of the three other generic determinations, for the very same terms, are inconsistent.
By elimination, we are left with only four consistent compounds, i.e. remaining combinations that give rise to no inconsistency, whose respective clauses do not contradict each other. This means that,
from the logical point of view, they are conceivable, and therefore worthy of further formal treatment. We may refer to them as the specific determinations, or species of causation.
Table 3.1. Possible specifications of the 4 generic determinations.
No. of genera   Compound        m n p q   Modus
Four            mnpq            + + + +   mp, nq impossible
Three           mnp             + + + –   mp impossible
                mnq             + + – +   nq impossible
                mpq             + – + +   mp impossible
                npq             – + + +   nq impossible
Two             mp              + – + –   mp impossible
                nq              – + – +   nq impossible
Two             mn              + + – –   possible
                mq              + – – +   possible
                np              – + + –   possible
                pq              – – + +   possible
Only one        m-alone         + – – –   will be proved impossible
                n-alone         – + – –   will be proved impossible
                p-alone         – – + –   will be proved impossible
                q-alone         – – – +   will be proved impossible
None            non-causation   – – – –   possible
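The eliminations in Table 3.1 can be checked mechanically. Below is a small Python sketch (my own encoding, not part of the text) that enumerates all sixteen on/off combinations of m, n, p, q and applies the two exclusion rules stated here: the mp and nq incompatibilities, and the impossibility of the lone determinations (taken on faith at this stage, as the text asks):

```python
from itertools import product

def consistent(m, n, p, q):
    """True if this combination of generic determinations survives
    the eliminations of Table 3.1."""
    if m and p:                  # complete excludes (absolute) partial
        return False
    if n and q:                  # necessary excludes (absolute) contingent
        return False
    if sum((m, n, p, q)) == 1:   # lone determinations, proved impossible later
        return False
    return True

survivors = [combo for combo in product((True, False), repeat=4)
             if consistent(*combo)]

# Exactly five combinations remain: mn, mq, np, pq, and non-causation.
for m, n, p, q in survivors:
    name = "".join(s for s, flag in zip("mnpq", (m, n, p, q)) if flag)
    print(name or "non-causation")
```

Note that the triples and the quadruple need no separate rule: each of them already contains an mp or nq pair, so the first two checks eliminate them automatically.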
The formulae given in the above table for the specific determinations are as brief as possible. For instance, since m implies the negation of p and n implies the negation of q, 'mn' (meaning both complete
and necessary causation) tacitly implies 'notp and notq' (neither partial nor contingent causation, with whatever complement); the latter negations need not therefore be mentioned. Similarly, an
expression like m-alone signifies the affirmation of one generic determination (here, m) and the denial of all three others (i.e. notn and notq, as well as notp). This notation is far from ideal, but
suffices for our current needs, since many combinations are eliminated at the outset.
We see that four specific determinations, namely mn, mq, np, pq, are formed by conjunction of positive causative propositions; these we shall call (following J. S. Mill's nomenclature) joint determinations.
It follows from the above table that each generic determination has only two species. Each generic determination may therefore be interpreted as a disjunction of its two possible embodiments; thus, m
means mn or mq; n means mn or np; p means np or pq; and q means mq or pq. Also note, we could refer to mn as 'only-strong causation' and to pq as 'only-weak causation', while mq and np are 'mixtures of strong and weak'.
The four specific determinations formed by composing positive causative propositions with negative ones, namely m-alone, n-alone, p-alone, q-alone, will be called lone determinations. This expression is
introduced at this stage to contrast it with generic and joint determinations. Clearly, one should not confuse an isolated generic symbol such as m with the corresponding specific symbol m-alone; I use
this heavy notation to ensure no confusion arises. Moreover, nota bene: in the above table, these forms are eliminated at the outset, because they concern absolute partial or contingent causation, i.e.
they are irrespective of complement and mean m-alone[abs], etc. But as we shall later see, when they involve relative partial or contingent causation, i.e. when some complement is specified (in p[rel] or q
[rel] or their negations), so that they mean m-alone[rel], etc., they remain possible forms. This need not concern us at the moment, but is said to explain why these forms need to be named.
We would label as, simply, causation (or 'any causation'), the disjunctive proposition "m or n or p or q", or the more specific "mn or mq or np or pq". Such positive propositions merely imply causation, if they
involve fewer disjuncts or an isolated generic or joint determination. The contradictory of causation, non-causation, is the only remaining allowable combination, our table being exhaustive. This last
possible combination involves negation of all four generic or joint determinations, note well. That is, it means "neither m nor n nor p nor q" or equally "neither mn nor mq nor np nor pq".
The above table also allows us to somewhat interpret complex negations. The negation of any compound is equivalent to the disjunction of all remaining four compounds (three of causation and one of
non-causation). For instance, "not(mn)" means mq, np, pq, or non-causation. Similarly with any other formula.
Note that where one of the weak determinations is denied by reason of the affirmation of the contrary strong determination (m in the case of p, or n in the case of q), any and all proposed complements are
denied. Where one of the weaks is affirmed (even if the other is radically denied), at least one complement is implied; and of course, the contrary strong determination is denied. In all other cases,
we must remember to be careful and distinguish between restricted and radical negations of p or q, as already explained in the previous chapter.
We shall now examine in detail the four joint determinations, symbolized by mn, mq, np, and pq, each of which is obtained by consistent conjunction of two generic determinations. Each is thus a species
shared by the two genera constituting it. Thus, mn is a specific case of m and a specific case of n; and so forth.
We have already encountered one of these joint determinations, viz. complete and necessary causation, the paradigm of causation. We shall now examine it in further detail, and also treat the other
three joint determinations.
Complete and Necessary causation by C of E:
(i) If C, then E;
(ii) if notC, not-then E (may be left tacit);
(iii) where: C is possible.
(iv) if notC, then notE;
(v) if C, not-then notE (may be left tacit);
(vi) where: C is unnecessary.
Table 3.2.Complete necessary causation.
No. Element/compound Modus Source/relationship
1 C possible (iii)
2 notC possible (vi)
3 E possible implied by (v)
4 notE possible implied by (ii)
5 C E possible (v) or implied by (i) + (iii)
6 C notE impossible (i)
7 notC E impossible (iv)
8 notC notE possible (ii) or implied by (iv) + (vi)
Notice how the merger of clauses (i), (ii) and (iii) with (iv), (v) and (vi) renders clauses (ii) and (v) redundant (though still implicit). Rows 5-8 of the above table (shaded) constitute the matrix
of complete-necessary causation.
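Rows 5-8 of Table 3.2 can also be derived mechanically, by treating clauses (i) and (iv) as constraints striking out rows 6 and 7. A short Python sketch (my own encoding, not the author's notation):

```python
from itertools import product

# The four conjunctions of C/notC with E/notE (rows 5-8 of the table):
WORLDS = set(product((True, False), repeat=2))  # pairs (C, E)

def complete_necessary_modus():
    """Strike out the rows forbidden by clauses (i) and (iv); clauses (iii)
    and (vi) then guarantee the two remaining rows are genuinely possible."""
    allowed = set(WORLDS)
    allowed.discard((True, False))   # (i)  If C, then E:       C + notE impossible
    allowed.discard((False, True))   # (iv) If notC, then notE: notC + E impossible
    assert any(c for c, _ in allowed)        # (iii) C is possible
    assert any(not c for c, _ in allowed)    # (vi)  C is unnecessary
    return allowed

modus = complete_necessary_modus()
print(sorted(modus))  # only C+E and notC+notE survive, as in rows 5 and 8
```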
Complete but Contingent causation by C1 of E:
(i) If C1, then E;
(ii) if notC1, not-then E (may be left tacit);
(iii) where: C1 is possible (may be left tacit).
(iv) if (notC1 + notC2), then notE;
(v) if (C1 + notC2), not-then notE;
(vi) if (notC1 + C2), not-then notE;
(vii) where: (notC1 + notC2) is possible.
Table 3.3.Complete contingent causation.
No. Element/compound Modus Source/relationship
1 C1 possible (iii) or implied by (v)
2 notC1 possible implied by (vi) or (vii)
3 C2 possible implied by (vi)
4 notC2 possible implied by (v) or (vii)
5 E possible implied by (v) or (vi)
6 notE possible implied by (iv) + (vii)
7 C1 E possible implied by (v)
8 C1 notE impossible (i)
9 notC1 E possible implied by (vi)
10 notC1 notE possible (ii) or implied by (iv) + (vii)
11 C2 E possible implied by (vi)
12 C2 notE open if #12 is impossible, so is #24; and in view of (i): if #12 is possible, so is #24
13 notC2 E possible implied by (v)
14 notC2 notE possible implied by (iv) + (vii)
15 C1 C2 open if #15 is impossible, so is #19; and in view of (i): if #15 is possible, so is #19
16 C1 notC2 possible implied by (v)
17 notC1 C2 possible implied by (vi)
18 notC1 notC2 possible (vii)
19 C1 C2 E open if #19 is possible, so is #15; and in view of (i): if #19 is impossible, so is #15
20 C1 C2 notE impossible implied by (i)
21 C1 notC2 E possible (v)
22 C1 notC2 notE impossible implied by (i)
23 notC1 C2 E possible (vi)
24 notC1 C2 notE open if #24 is possible, so is #12; and in view of (i): if #24 is impossible, so is #12
25 notC1 notC2 E impossible (iv)
26 notC1 notC2 notE possible implied by (iv) + (vii)
Notice how the merger of clauses (i), (ii) and (iii) with (iv), (v), (vi) and (vii) renders clauses (ii) and (iii) redundant (though still implicit). Rows 19-26 of the above table constitute the
matrix of complete-contingent causation.
Concerning the four positions labeled open in the above table, note that the moduses of Nos. 12 and 24 are tied, and likewise those of Nos. 15 and 19. Proof for the first two: if #12 (C2 + notE) is
impossible, #24 (notC1 + C2 + notE) must also be impossible; if #24 (notC1 + C2 + notE) is impossible, then knowing #20 (C1 + C2 + notE) to be impossible, #12 (C2 + notE) must also be impossible; the
rest follows by contraposition. Proof for the other two: if #15 (C1 + C2) is impossible, #19 (C1 + C2 + E) must also be impossible; if #19 (C1 + C2 + E) is impossible, then knowing from (i) that #20
(C1 + C2 + notE) is impossible, #15 (C1 + C2) must also be impossible; the rest follows by contraposition. The interpretation of these open cases is as follows.
(a) Suppose #12 is impossible; this means that "If C2, then E". We know from #14 that "If notC2, not-then E"; and from #3 that "C2 is possible". Whence, C2 satisfies the definition for being a
complete cause of E, just like C1. Thus, in such case, C1 and C2 are simply parallel complete (and contingent) causes of E. This is quite conceivable, and as we have seen in an earlier section such
causes may be compatible or incompatible. If #15 is possible, they are compatible; and if #15 is impossible, they are incompatible.
(b) Suppose #12 is possible; this means that "If C2, not-then E", in which case C2 is not a complete cause of E. This is quite conceivable, covering situations where one of the contingent causes
(namely, C1) is also complete, while the other (C2) is not complete. Additionally, we can say: if #15 is possible, they are compatible; and if #15 is impossible, they are incompatible; there is no
problem of consistency either way.
However, a very interesting question arises in such case: is a contingent but not complete cause (like C2, here) bound to be a partial cause? C2 is certainly not a partial cause of E in conjunction
with C1, since C1 is a complete cause of E. Therefore, if C2 is a partial cause of E, it will be so in conjunction with some other partial cause of E, say C3. But since C3 is unmentioned in our original
givens, its existence is not formally demonstrable. We thus have no certainty that an incomplete contingent cause is implicitly a partial contingent cause! We will return to this issue later.
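The tie between the open positions #12 and #24 noted above can be verified by brute force. In the sketch below (my own encoding), a modus is any set of (C1, C2, E) combinations; restricting attention to moduses in which row #20 is impossible, as clause (i) requires, the exclusion of #12 turns out to be equivalent to the exclusion of #24 in every case:

```python
from itertools import chain, combinations, product

WORLDS = list(product((True, False), repeat=3))  # triples (C1, C2, E)

def excluded(modus, pattern):
    """True if no world in the modus matches the partial pattern (None = any)."""
    return not any(all(p is None or p == w for p, w in zip(pattern, world))
                   for world in modus)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

results = []
for subset in powerset(WORLDS):
    modus = set(subset)
    if not excluded(modus, (True, True, False)):   # keep only moduses obeying (i): #20 impossible
        continue
    tied = excluded(modus, (None, True, False)) == excluded(modus, (False, True, False))
    results.append(tied)                           # is #12 excluded iff #24 excluded?

verified = all(results)
print(len(results), verified)  # 128 qualifying moduses, all tied
```

This mirrors the contraposition argument in the text: with (C1 + C2 + notE) ruled out, (C2 + notE) is possible exactly when (notC1 + C2 + notE) is.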
Partial yet Necessary causation by C1 of E:
(i) If notC1, then notE;
(ii) if C1, not-then notE (may be left tacit);
(iii) where: C1 is unnecessary (may be left tacit).
(iv) if (C1 + C2), then E;
(v) if (notC1 + C2), not-then E;
(vi) if (C1 + notC2), not-then E;
(vii) where: (C1 + C2) is possible.
Table 3.4.Partial necessary causation.
No. Element/compound Modus Source/relationship
1 C1 possible implied by (vi) or (vii)
2 notC1 possible (iii) or implied by (v)
3 C2 possible implied by (v) or (vii)
4 notC2 possible implied by (vi)
5 E possible implied by (iv) + (vii)
6 notE possible implied by (v) or (vi)
7 C1 E possible (ii) or implied by (iv) + (vii)
8 C1 notE possible implied by (vi)
9 notC1 E impossible (i)
10 notC1 notE possible implied by (v)
11 C2 E possible implied by (iv) + (vii)
12 C2 notE possible implied by (v)
13 notC2 E open if #13 is impossible, so is #21; and in view of (i): if #13 is possible, so is #21
14 notC2 notE possible implied by (vi)
15 C1 C2 possible (vii)
16 C1 notC2 possible implied by (vi)
17 notC1 C2 possible implied by (v)
18 notC1 notC2 open if #18 is impossible, so is #26; and in view of (i): if #18 is possible, so is #26
19 C1 C2 E possible implied by (iv) + (vii)
20 C1 C2 notE impossible (iv)
21 C1 notC2 E open if #21 is possible, so is #13; and in view of (i): if #21 is impossible, so is #13
22 C1 notC2 notE possible (vi)
23 notC1 C2 E impossible implied by (i)
24 notC1 C2 notE possible (v)
25 notC1 notC2 E impossible implied by (i)
26 notC1 notC2 notE open if #26 is possible, so is #18; and in view of (i): if #26 is impossible, so is #18
Notice here again how the merger of clauses (i), (ii) and (iii) with (iv), (v), (vi) and (vii) renders clauses (ii) and (iii) redundant (though still implicit). Rows 19-26 of the above table (shaded)
constitute the matrix of partial-necessary causation.
Concerning the four positions labeled open in the above table, note that the moduses of Nos. 13 and 21 are tied, and likewise those of Nos. 18 and 26. These statements may be proved in the same manner
as done for the preceding table; this is left to the reader as an exercise. We can also interpret these situations in similar ways. If #13 is impossible, C2 is a partial and necessary cause of E,
parallel to C1; and notC2 is either compatible or incompatible with notC1 according to whether #18 is possible or impossible. If #13 is possible, C2 is a partial but not necessary cause of E, and
notC2 is either compatible or not with notC1, according to whether #18 is possible or not.
However, it is not formally demonstrable that an unnecessary partial cause is implicitly a contingent partial cause; and the implications of this finding (or absence of finding) will have to be
considered later.
Partial and Contingent causation by C1 of E:
(i) If (C1 + C2), then E;
(ii) if (notC1 + C2), not-then E;
(iii) if (C1 + notC2), not-then E;
(iv) where: (C1 + C2) is possible.
(v) if (notC1 + notC2), then notE;
(vi) if (C1 + notC2), not-then notE;
(vii) if (notC1 + C2), not-then notE;
(viii) where: (notC1 + notC2) is possible.
Table 3.5.Partial contingent causation.
No. Element/compound Modus Source/relationship
1 C1 possible implied by (iii) or (iv) or (vi)
2 notC1 possible implied by (ii) or (vii) or (viii)
3 C2 possible implied by (ii) or (iv) or (vii)
4 notC2 possible implied by (iii) or (vi) or (viii)
5 E possible implied by (vi) or (vii)
6 notE possible implied by (ii) or (iii)
7 C1 E possible implied by (vi)
8 C1 notE possible implied by (iii)
9 notC1 E possible implied by (vii)
10 notC1 notE possible implied by (ii)
11 C2 E possible implied by (vii)
12 C2 notE possible implied by (ii)
13 notC2 E possible implied by (vi)
14 notC2 notE possible implied by (iii)
15 C1 C2 possible (iv)
16 C1 notC2 possible implied by (iii) or (vi)
17 notC1 C2 possible implied by (ii) or (vii)
18 notC1 notC2 possible (viii)
19 C1 C2 E possible implied by (i) + (iv)
20 C1 C2 notE impossible (i)
21 C1 notC2 E possible (vi)
22 C1 notC2 notE possible (iii)
23 notC1 C2 E possible (vii)
24 notC1 C2 notE possible (ii)
25 notC1 notC2 E impossible (v)
26 notC1 notC2 notE possible implied by (v) + (viii)
Rows 19-26 of the above table (shaded) constitute the matrix of partial-contingent causation. We note that here none of the original clauses are made redundant by the combination of partial and
contingent causation. Furthermore, no position in the above table is left open, with regard to the possibility or impossibility of the item or combination concerned.
Additionally, we can say that if C1 and C2 are, as here, complementary partial contingent causes of E, then they have the same set of relations to each other and to E. But this does not mean that if C1
and C2 are complementary partial causes of E, they are bound to be complementary contingent causes of E, since as we have seen both or just one of them may be necessary cause(s) of E. Similarly, we
cannot say that if C1 and C2 are complementary contingent causes of E, they are bound to be complementary partial causes of E, since as we have seen both or just one of them may be complete cause(s)
of E.
There may, of course, be more than one complement to C1 (i.e. complements C3, C4…, in addition to C2) in the last three joint determinations, mq, np or pq. Such cases may be similarly treated, as we have
explained when considering the weaker generic determinations separately.
It is with reference to the joint determinations mq and np that the utility of reformatting sentences about partial or contingent causation becomes apparent. An mq proposition is best stated as "C1 is a
complete and (complemented by C2) a contingent cause of E", and an np proposition is best stated as "C1 is a necessary and (complemented by C2) a partial cause of E".
We must now consider the hierarchy between the above four forms, since there are clearly differences in degree in the 'bond' between cause(s) and effect. Causation is obviously at its strongest when both
complete and necessary (mn). It is difficult to say which of the next two forms (mq or np) is the stronger and which the weaker; they are not really comparable to each other. All we can say is that
they are both less determining than the first and more determining than the last; let us call them middling determinations. Causation is weakest for each factor involved in partial and contingent
causation (pq).
With regard to parallelism, we can infer that it is conditionally possible with reference to our previous findings in the matter.
Two complete-necessary causes, C, C[1], of the same effect E, may be parallel, provided they are neither exhaustive nor incompatible with each other, i.e. provided "if C, not-then notC[1] and if notC,
not-then C[1]" is true.
For complete-contingent causation, it is conceivable that C1, C2 have this relation to E and C3, C4 have this same relation to E, provided the complete causes C1 and C3 are not exhaustive and the
compounds (notC1 + notC2) and (notC3 + notC4) are not exhaustive. An interesting special case is when C2 = C4, i.e. when the two complete causes have the same complement in the contingent causation
of E.
For partial-necessary causation, it is conceivable that C1, C2 have this relation to E and C3, C4 have this same relation to E, provided the necessary causes C1 and C3 are not incompatible and the
compounds (C1 + C2) and (C3 + C4) are not exhaustive. An interesting special case is when C2 = C4, i.e. when the two necessary causes have the same complement in the partial causation of E.
For partial-contingent causation, the same condition of non-exhaustiveness between the parallel compounds involved applies. And here, too, note the special case when C2 = C4 as interesting.
Tables involving all the items concerned and their negations in all combinations may be constructed to analyze the implications of such parallelisms in detail.
The negations of the four joint determinations may be reduced to the denial of one or both of their constituent generic determinations. That is, not(mn) means 'not-m and/
or not-n'; not(mq) means 'not-m and/or not-q'; not(np) means 'not-p and/or not-n'; and not(pq) means 'not-p and/or not-q'. Each of these alternative denials in turn implies denial of one or more of the constituent clauses, obviously.
3. The Significance of Certain Findings.
Let us review how we have proceeded so far. We started with the paradigm of causation, namely, complete necessary causation. We then abstracted its constituent "determinations", the complete and the
necessary aspects of it, and by negation formulated another two generic determinations, namely partial and contingent causation. We then recombined these abstractions, to obtain all initially
conceivable formulas. Some of these formulas (mp, nq) could be eliminated as logically impossible by inspecting their definitions and finding contradictory elements in them. Others (the lone
determinations, obtained by conjunction of only one generic determination and the negations of all three others) were eliminated on the basis of later findings not yet presented here. This left us
with only five logically tenable specific causative relations between any two items, namely the four joint determinations (the consistent conjunctions of generic determinations) and non-causation (the
negation of all four generic determinations).
When I personally first engaged in the present research, I was not sure whether or not the (absolute) lone determinations were consistent. Because each lone determination involves three
negative causative propositions in conjunction, and each of these is defined by disjunction of the negations of the defining clauses of the corresponding positive form, it seemed very difficult to
reliably develop matrices for them. I therefore, as a logician[1], had to assume as a working hypothesis that they were logically possible. It is only in a later phase, when I developed "matricial
microanalysis", that I discovered that they can be formally eliminated. Take my word on this for now. This discovery was very instructive and important, because it signified that causation is more
"deterministic" than would otherwise have been the case.
If lone determinations had been logically possible, causation would have been moderately deterministic. For two items might be causatively related on the positive side, but not on the negative side,
or vice-versa. Something could be only a complete cause (or only a partial cause) of another without having to also be a necessary or contingent one; or it could be only a necessary cause (or only a
contingent cause) of another without having to also be a complete or partial one. But as it turned out, there is logically no such degree of freedom in the causative realm.
If two things are causatively related at all, they have to be ultimately related in one (and indeed only one) of the four ways described as the joint determinations[2], i.e. in the way of mn, mq, np, or pq.
The concepts m, n, p, q are common aspects of these four relations and no others. There is no "softer" causative relation. Causation is "full" or it is not at all; no "holes" are allowed in it.
We can formulate the following “laws of causation” in consequence:
● If something is a complete or partial cause of something, it must also be either a necessary or (with some complement or other) a contingent cause of it.
● If something is a necessary or contingent cause of something, it must also be either a complete or (with some complement or other) a partial cause of it.
● In short, since a lone determination is impossible, if something is at all a causative of anything, it must be related in the way of a joint determination with it.
These laws have the following corollaries:
● If something is neither a necessary nor a contingent cause of something, it must also be neither a complete nor (with whatever complement) a partial cause of it.
● If something is neither a complete nor a partial cause of something, it must also be neither a necessary nor (with whatever complement) a contingent cause of it.
● In short, since a lone determination is impossible, if two things are known not to be related in the way of either pair of contrary generic determinations (i.e. m and p, or n and q), they can be
inferred to be not causatively related at all.
● The complement of a partial cause of something, being also itself a partial cause of that thing, must either be a necessary or (with some complement or other) a contingent cause of that thing.
● The complement of a contingent cause of something, being also itself a contingent cause of that thing, must either be a complete or (with some complement or other) a partial cause of that thing.
With regard to the epistemological question, as to how these causative relations are to be established, we may say that they are ultimately based on induction (including deduction from induced
propositions): we have no other credible way to knowledge. Causative propositions may of course be built up gradually, clause by clause (see definitions in the previous chapter).
As I showed in my work Future Logic, the positive hypothetical (i.e. if/then) forms, from which causatives are constructed, result from generalizations from experience of conjunctions between the items
concerned (which generalizations are of course revised by particularization, when and if they lead to inconsistency with new information). The negative hypothetical (i.e. if/not-then) forms are assumed
true if no positive forms have been thus established, or are derived by the demands of consistency from positive forms thus established. In their case, an epistemological quandary may be translated
into an ontological fait accompli (at least until, if ever, reason is found to prefer a positive conclusion).
We may first, by such induction (or deduction thereafter), propose one of the four generic determinations in isolation. The proposed generic determination is effectively treated as a joint
determination "in-waiting", a convenient abstraction that does not really occur separately, but only within conjunctions. We are of course encouraged by methodology to subsequently vigorously research
which of the four joint determinations can be affirmed between the items concerned. In cases where all such research efforts prove fruitless, we are simply left with a problematic statement, such as
(to give an instance) "P is a complete cause, and either a necessary or a contingent cause, of Q".
But, since lone determination does not exist, we can never opt for a negative conclusion, like “P is a complete cause, but neither a necessary nor a contingent cause, of Q”. We may not in this context effectively generalize from “I did not find” to “there is not” (a further causative relation). We may not interpret a structural doubt as a negative structure, an uncertainty as an indeterminacy.
In the history of Western philosophy, until recent times, the dominant hypothesis concerning causation has been that it is applicable universally. Some philosophers mitigated this principle, reserving it for ‘purely physical’ objects, excepting beings with volition (humans, presumably G-d, and even perhaps higher animals). A few, notably David Hume, denied any such “law of causation” as it has been traditionally conceived.
But in the 20th Century, the idea that there might, even in Nature (i.e. among entities without volition), be ‘spontaneous’ events gained credence, due to unexpected developments in Physics. That
idea tended to be supported by the Uncertainty Principle of Werner Heisenberg for quantum phenomena, interpreted by Niels Bohr as an ontological (and not merely epistemological) principle of
indeterminacy, and the Big-Bang theory of the beginning of the universe, which Stephen Hawking considered as possibly implying anex nihiloand non-creationist beginning.
We shall not here try to debate the matter. All I want to do at this stage is stress the following nuances, which are now brought to the fore. The primary thesis of determinism is that there is causation in the world; i.e. that causal relations of the kind identified in the previous chapter (the four generic determinations) do occur in it. Our above-mentioned discovery that such causation has to fit in one of the four specific determinations may be viewed as a corollary of this thesis, or a logically consistent definition of it.
This is distinct from various universal causation theses, such as that nothing can occur except through causation (implying that causation is the only existing form of causality), or that at least
nothing in Nature can do so (though for conscious beings other forms of causality may apply, notably volition), among others.
We shall analyze such so-called laws of causation in a later chapter; it suffices for now to realize that they are extensions, attempted generalizations, of the apparent fact of causation, and not identical with it. Many philosophers seem to be unaware of this nuance, effectively regarding the issue as either ‘causation everywhere’ or ‘no causation anywhere’.
The idea that causation is present somewhere in this world is logically quite compatible with the idea that there may be pockets or borders where it is absent, a thesis we may call ‘particular (i.e. non-universal) causation’. We may even, more extremely, consider that causation is poorly scattered, in a world moved principally by spontaneity and/or volition.
The existence of causation thus does not in itself exclude the spontaneity envisaged by physicists (in the subatomic or astronomical domains); and it does not conflict with the psychological theory of volition or the creationist theory of matter[3].
Apparently, then, though determinism may be the major relation between things in this world, it leaves some room, however minor (in the midst or at the edges of the universe), for indeterminism.
We will give further consideration to these issues later, for we cannot deal with them adequately until we have clarified the different modes of causation.
[1] The logician must keep an open mind so long as an issue remains unresolved. Logic cannot at the outset, without good reason, close doors to alternatives. Where formal considerations leave spaces,
we cannot impose prejudices or speculations. The reason being that the aim of the science of logic is to prepare the ground for discourse and debate. If it takes arbitrary ‘metaphysical’ positions at
the outset, it deprives us of a language with which to even consider opposite views. So long as formal grounds for some thesis are lacking, its antithesis must remain utterable.
[2] It is interesting to note that, although J. S. Mill did not (to my knowledge) consider the issue of lone determinations, he turned out to be right in acknowledging only the four joint determinations.
[3] Note incidentally that to say that G-d created the world does not imply that He did so specifically as and when the Bible seems to describe it; He may equally well have created the first
concentration of matter and initiated the Big-Bang. Note also, that Creationism implies the pre-existence of G-d, a ‘spiritual’ entity; it is therefore a theory concerning the beginning of ‘matter’,
but not of existence as such. G-d is in it posited as Eternal and Transcendental, or prior to or beyond time and space, but still ‘existent’. With regard to such issues, including the compatibility
of spontaneity and volition with Creation, see my Buddhist Illogic, chapter 10.
Avi Sion, 2023-01-05
Assessing the accuracy of our solutions
If we make the same policy rule plot using EulerIteration.m as we did in VFI.m we find the following
If you compare these to those for VFIHoward.m you will notice that these are convex while those for VFIHoward.m are concave. We have used two methods to solve the same model and we should get the
same answer so something must be wrong here.
Actually both solutions are wrong in the sense that they are approximations to the true solution. Some error is unavoidable so our goal is not to eliminate error entirely, but to gain a sense of how
accurate our solution is and make sure it is accurate enough for the analysis we are doing.
Inaccuracies arise for two main reasons: numerical errors and programming mistakes. In principle we can find programming mistakes and eliminate them but doing so requires that we put effort to test
our code because not all programming mistakes will result in obvious problems like the program crashing. One way of finding programming mistakes is to solve the model in a special case where we know
the solution. Another way is to solve the model with different methods and compare the results. I joke that I often have to solve a model three times, because I solve it twice and get different
answers and then solve it a third time to figure out where the mistake is. It is important to put just as much care into the test code as the production code. It is easy to fall into the trap of
writing sloppy test code (after all it is only a test) only to have the test fail and spend hours looking for a bug in the production code.
We have already done one test: we have solved the same model with value function iteration and by iterating on the Euler equation. And as we saw, the results were not identical to say the least. So
what do we do now? My first hypothesis was that the value function algorithm is less accurate than the Euler equation algorithm because one degree of freedom of our approximating polynomial is spent fitting the intercept of the value function, which has no effect on the consumption-savings choice. To investigate this, I changed the polynomial interpolation scheme in the value function iteration
algorithm to allow for more curvature, in particular I added a seventh basis function equal to \(K^3\). When I run VFIHoward.m now, I get something much closer to the results of EulerIt.m. The
following figure plots the two sets of results on the same figure.
While the two sets of policy rules are not identical, they are much more similar now. From this test, I took away that my hypothesis had been confirmed and that if I am going to use value function iteration I need a richer set of basis functions. I still don’t know that the EulerIt.m solution is accurate, but I have some confidence because the two algorithms give very similar results.
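To make the richer-basis idea concrete, here is a small Python sketch of such a basis (illustrative only: the course code is MATLAB, and the function names and ordering below are my own, not the course's PolyBasis):

```python
def poly_basis(K, Z):
    """Second-order polynomial basis in (K, Z) plus a cubic K^3 term,
    mirroring the idea of allowing extra curvature in the K direction."""
    return [1.0, K, Z, K**2, K * Z, Z**2, K**3]

def eval_poly(b, K, Z):
    """A fitted policy is just the dot product of coefficients b
    with the basis evaluated at (K, Z)."""
    return sum(bi * xi for bi, xi in zip(b, poly_basis(K, Z)))

row = poly_basis(2.0, 1.0)
print(len(row))                                       # 7 basis functions
print(eval_poly([0, 0, 0, 0, 0, 0, 1.0], 2.0, 1.0))   # picks out K^3 = 8.0
```

With a seventh coefficient the fitted value function can bend more in K, which is exactly the extra flexibility the test above suggested was missing.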
Euler equation errors
One way of assessing accuracy is to compute the residuals in the equilibrium conditions. Our Euler equation iteration algorithm seeks a set of polynomial coefficients for the approximate consumption
function so the equilibrium conditions will be satisfied on our grid. As we have 140 grid points and only six polynomial coefficients, we do not have the degrees of freedom to match the function value at all of the grid points. Moreover, we are also interested in the accuracy of our algorithm at points in the state space that are not on the grid and we have no reason to think the equilibrium
conditions will be exactly satisfied at those points.
We already have a function, EulerRHS, that calculates the consumption implied by the right-hand side of the Euler equation. For the left-hand side of the Euler equation we can compute consumption
directly from the approximate policy rule. We compare these two values to assess the “Euler equation error.”
Suppose our consumption function is approximated by polynomial coefficients bC. We can then proceed as follows:
function Accuracy(Par,Grid,bC)

TestGrid = Grid;
TestGrid.nK = 200;
TestGrid.K = linspace(Grid.K(1),Grid.K(end),TestGrid.nK);
[Ztest,Ktest] = meshgrid(Grid.Z,TestGrid.K);
TestGrid.KK = Ktest(:);
TestGrid.ZZ = Ztest(:);

C = PolyBasis(TestGrid.KK,TestGrid.ZZ) * bC;
Kp = f(Par,TestGrid.KK,TestGrid.ZZ) - C;
CEuler = EulerRHS(Par,TestGrid,Kp,bC);

plot(100*(TestGrid.K/Par.Kstar-1), reshape(log10(abs(CEuler./C-1)), 200, 7))
ylim([-7 -2])
xlabel('K in % deviation from steady state')
ylabel('Absolute Euler equation error, log base 10')
We start by creating a new grid structure that will have many more points for capital so we are sure to get a good sense of the errors away from the levels of capital in the grid we used to solve the
problem. We could also create a finer grid for \(Z\), but that would involve a little more work to evaluate the Euler equation so we don’t do it here. We then compute two values for consumption. C is
computed directly from the approximate policy rule and CEuler is computed from the right-hand side of the Euler equation. We then plot the absolute percentage difference in terms of log base 10.
After running VFIHoward we can calculate the consumption function and plot the Euler equation errors as follows:
bC = PolyGetCoef(Grid.KK,Grid.ZZ,f(Par,Grid.KK,Grid.ZZ)-Kp);
After running EulerIteration we only need to call Accuracy(Par,Grid,bC), since bC is already available.
The two figures make clear that the results of EulerIteration have smaller Euler equation errors than VFIHoward. In particular the maximum error plotted for the former is around -3.6 while for the
latter it is around -2.7.
The Euler equation error has no units because it is the ratio of consumption over consumption. It can be interpreted as the magnitude of the mistake in percentage terms. So an Euler equation error
of \(10^{-3.6}\) is an error of 2.5 dollars per ten thousand spent. For most applications an error of that magnitude would not appreciably alter the conclusions of the analysis. However, we still
need to be cautious because even if the Euler equation errors appear small, they only refer to the error in one step of the solution and we cannot rule out that they accumulate to a large inaccuracy
over a number of periods.
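The dollars-per-ten-thousand interpretation is a quick computation to verify (a supplementary check, not part of the original code):

```python
err = 10 ** (-3.6)        # an Euler equation error of 10^(-3.6)
per_10k = err * 10_000    # the implied mistake per ten thousand units spent
print(round(per_10k, 1))  # about 2.5
```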
Quadratic Form Index -- from Wolfram MathWorld
The index of a quadratic form on a bilinear vector space is an integer invariant of the form; the defining formula was rendered as an image in the source and is not recoverable here.
As a concrete example, whether a pair consisting of a smooth manifold and a metric can be a Lorentzian manifold depends on the metric tensor having the appropriate signature, and hence on a quadratic form's signature.
The above example also illustrates the deep connection between the index of a quadratic form and the notion of the index of a metric tensor (Sachs and Wu 1977).
PUE Karnataka I PUC Maths Model Question Paper 2020-21
University: Karnataka Department of Pre University Education
Course & Year: I PUC
Subject: Maths
Download: Model Question Paper
Syllabus: Reduced Syllabus 2020-21
Document Type: PDF
Official Website: https://pue.karnataka.gov.in/english
PUE PUC I Maths Model Question Paper
Karnataka Department of Pre University Education (PUE) First Year PUC Maths Model Question Paper 2020-21 for reduced syllabus.
Download PUE PUC I Maths Model Question Paper
PUE PUC I Maths Model Questions
I. Answer all the following questions:
1. Find the geometric mean of the numbers 2 and 8.
2. Find the multiplicative inverse of the complex number
3. Find the slope of the line passing through the points (3, -2) and (7, -2)
4. Define sample space of a random experiment.
5. The arithmetic mean of 4 and another number is 10. Find the other number.
6. Find the distance of the point (3, -5) from the line 3x-4y-26=0
1. A wheel makes 360 revolutions in one minute. Through how many radians does it turn in one second?
2. In how many ways can the letters of the word PERMUTATIONS be arranged if the vowels are all together.
3. The mean of the six observations 5,15,25,35,45,55 is 30, find its variance.
4. A coin is tossed twice, what is the probability that atleast one tail occurs?
5. How many 4-digit numbers are there with no digit repeated?
6. Find the median of the data 36,72,46,42,60,45,53,46,51,49.
7. A card is selected from a pack of 52 cards. Calculate the probability that the card is i) an Ace ii) a Black card
8. Write the power set of the set A ={1,2,3}.
1. In a class of 35 students, 24 like to play cricket and 16 like to play football; also, each student likes to play at least one of the games. How many students like to play both cricket and football?
2. Find all pairs of consecutive even positive integers, both of which are larger than 5 such that their sum is less than 23.
3. Find the mean deviation about median for the following data: 3,9,5,3,12,10,18,4,7,19,21.
4. A fair coin with 1 marked on one face and 6 on the other and a fair die are both tossed. Find the probability that the sum of numbers that turn up is ( i ) 3 ( ii ) 12.
5. In a survey of 400 students in a school, 100 were listed as taking apple juice, 150 as taking orange Juice and 75 were listed as taking both apple and orange juices. Find how many students were
taking neither apple juice nor orange juice.
6. Ravi obtained 70 and 75 marks in first two unit tests. Find the minimum marks he should get in the third test to have an average of at least 60 marks.
7. . Insert 3 arithmetic means between 8 and 24.
8. The mean and standard deviation of 20 observations are found to be 10 and 2 respectively. On rechecking it was found that an observation 8 was incorrect. Calculate the correct mean if the wrong item is omitted.
9. A bag contains 9 discs of which 4 are red, 3 are blue and 2 are yellow. The discs are similar in shape and size. A disc is drawn at random from the bag. Calculate the probability that it will be (i) red (ii) not blue (iii) either red or blue.
Preservation of the Identities and Inverses Under Group Isomorphisms
Informally we say that the groups $(G, \cdot)$ and $(H, *)$ are isomorphic if they have the same structure, and the existence of a bijection $f : G \to H$ where for all $x, y \in G$ we have that $f(x
\cdot y) = f(x) * f(y)$ preserves this structure.
As we will see in the following propositions, if $e_1 \in G$ and $e_2 \in H$ are the identities with respect to $\cdot$ and $*$ then $f(e_1) = e_2$; and if $f(x) = y$ then we will also have that $f(x
^{-1}) = y^{-1}$. In other words, the existence of an isomorphism $f$ preserves the identities and inverses (and in fact, all other special properties of the groups).
Proposition 1: Let $(G, \cdot)$ and $(H, *)$ be groups such that $G \cong H$. If $e_1 \in G$ is the identity with respect to $\cdot$ and $e_2 \in H$ is the identity with respect to $*$ and $f : G \to
H$ is an isomorphism from $G$ to $H$ then $f(e_1) = e_2$.
• Proof: Let $f : G \to H$ be an isomorphism from $G$ to $H$. Then for all $x, y \in G$ we have that $f(x \cdot y) = f(x) * f(y)$. Set $x = y = e_1$. Then:
\quad f(e_1) = f(e_1 \cdot e_1) = f(e_1) * f(e_1) \\ \quad f(e_1) * [f(e_1)]^{-1} = [f(e_1) * f(e_1)] * [f(e_1)]^{-1} \\ \quad e_2 = f(e_1) * [f(e_1) * [f(e_1)]^{-1}] \\ \quad e_2 = f(e_1) * e_2 \\ \quad e_2 = f(e_1) \quad \blacksquare
Proposition 2: Let $(G, \cdot)$ and $(H, *)$ be groups such that $G \cong H$. If $f : G \to H$ is an isomorphism from $G$ to $H$ then for all $x, x^{-1} \in G$, $y, y^{-1} \in H$ we have that if $f
(x) = y$ then $f(x^{-1}) = y^{-1}$.
• Let $f : G \to H$ be an isomorphism from $G$ to $H$, let $e_1 \in G$ and $e_2 \in H$ be the identity elements with respect to $\cdot$ and $*$ and suppose that $x \in G$ and $y \in H$ is such that
$f(x) = y$. Since $e_1 = x^{-1} * x$, we have that:
\quad f(e_1) = f(x^{-1} \cdot x) = f(x^{-1}) * f(x) \\
• By Proposition 1 we are given $f(e_1) = e_2$, so:
\quad e_2 = f(x^{-1}) * f(x) \\ \quad e_2 = f(x^{-1}) * y \\ \quad y^{-1} = [f(x^{-1}) * y] * y^{-1} \\ \quad y^{-1} = f(x^{-1}) * [y * y^{-1}] \\ \quad y^{-1} = f(x^{-1}) * e_2 \\ \quad y^{-1} = f(x^{-1}) \quad \blacksquare
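As a concrete sanity check (a hypothetical example, not part of the original page), the map $f(n) = i^n$ is an isomorphism from $(\mathbb{Z}_4, +)$ onto the fourth roots of unity under multiplication, and a short script confirms both propositions:

```python
# f(n) = i^n maps (Z_4, +) isomorphically onto ({1, i, -1, -i}, *).
f = {0: 1, 1: 1j, 2: -1, 3: -1j}

# Homomorphism property: f(x + y) = f(x) * f(y) for all x, y in Z_4.
assert all(f[(x + y) % 4] == f[x] * f[y] for x in f for y in f)

# Proposition 1: the identity 0 of (Z_4, +) maps to the identity 1.
assert f[0] == 1

# Proposition 2: the inverse of x maps to the inverse of f(x).
assert all(f[(-x) % 4] * f[x] == 1 for x in f)

print("both propositions verified for this isomorphism")
```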
Lecture 31: Compact operators and the Fredholm alternative. Definition: a bounded linear operator K : H → B is compact if it maps bounded sets to precompact sets.
Now let K_n : H → B be compact operators and K : H → B be a bounded operator such that lim_{n→∞} ‖K_n − K‖_op = 0. We will now show K is compact. First Proof. Given ε > 0, choose N = N(ε) such that ‖K_N − K‖ < ε.
Theorem 4.1 (Fredholm Alternative): Let L be a Sturm-Liouville differential operator, and consider solutions to L[u] = f(x) with boundary conditions such that L is self-adjoint.
1. If the only solution to L[u] = 0 satisfying the boundary conditions is u = 0 (that is, if λ = 0 is not an eigenvalue of L), then there is a unique solution to the BVP.
Part of the result states that a non-zero complex number in the spectrum of a compact operator is an eigenvalue. The Fredholm alternative is a classical well-known result whose proof for linear equations of the form (I + T)u = f, where T is a compact operator in a Banach space, can be found in most texts on functional analysis, of which we mention just [1].
In particular we get the statement of the Fredholm alternative at z = 1. The following theorem by Riesz and Schauder may also be proved using the framework we have developed in this note. Here, we prove the basic Fredholm alternative on Banach spaces: that for compact T and non-zero λ ∈ ℂ, either λ − T is a bijection, or λ − T has closed image of codimension equal to the dimension of its kernel.
The Alternative Theorems state necessary and sufficient conditions for the equation (1 − A)u = f to have a solution u for some previously specified f. There are two alternatives: either the equation has a unique solution, or the homogeneous equation (1 − A)u = 0 has a non-trivial solution (cf. C.R. MacCluer, “A short proof of the Fredholm alternative”, 2008).
Let N(A) and R(A) be the null space and column space of a matrix A. The assumption on b implies b ∈ N(A^T)^⊥. The claim is b ∈ R(A). It remains to show R(A) = N(A^T)^⊥. First, R(A)^⊥ = N(A^T). If y ∈ R(A)^⊥ then y^T Ax = 0 for all x, which implies A^T y = 0. Conversely A^T y = 0 implies y^T Ax = 0 for all x, hence y ∈ R(A)^⊥.
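The matrix form of the alternative is easy to demonstrate numerically (an illustrative sketch with a hand-picked matrix, not taken from the quoted sources):

```python
# A = [[1, 1], [1, 1]] has rank 1, and N(A^T) is spanned by y = (1, -1).
# Matrix Fredholm alternative: Ax = b is solvable iff b is orthogonal
# to every vector in N(A^T).
y = (1, -1)

def solvable(b):
    # Ax = (x1 + x2, x1 + x2), so b lies in R(A) exactly when b . y == 0.
    return b[0] * y[0] + b[1] * y[1] == 0

print(solvable((2, 2)))   # True:  e.g. x = (2, 0) gives Ax = (2, 2)
print(solvable((1, 0)))   # False: (1, 0) is not orthogonal to (1, -1)
```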
School of Mathematics and Physics, College of Science and Engineering
50 faculty members found.
No. | Name | Faculty | Position | Specialities
41 | POZAR NORBERT | Faculty of Mathematics and Physics, Institute of Science and Engineering | Associate Professor | viscosity solutions, free boundary problems, crystalline mean curvature flow, porous medium equation, Hele-Shaw problem, Stefan problem
42 | MATSUMOTO Koichi | Faculty of Mathematics and Physics, Institute of Science and Engineering | Professor | Cryogenics, Low Temperature Physics
43 | MARUYAMA, Shuhei | Faculty of Mathematics and Physics, Institute of Science and Engineering | Assistant Professor | geometric topology, geometric group theory
44 | MIURA SHINICHI | Faculty of Mathematics and Physics, Institute of Science and Engineering | Professor | Quantum fluids, Path integral molecular dynamics method, biophysics, Extended ensemble method, Liquid State Theory, Quantum Monte Carlo, Molecular Simulation, Theoretical Molecular Science, Condensed Matter Theory
45 | Hideki Miyachi | Faculty of Mathematics and Physics, Institute of Science and Engineering | Professor | Complex analysis, Teichmuller space, Moduli space, Riemann surface, Quasiconformal mapping, Hyperbolic geometry, Discrete group
46 | YAMAGUCHI, Naoya | Nanomaterials Research Institute | Assistant Professor | (none listed)
47 | Yasuo Yoshida | Faculty of Mathematics and Physics, Institute of Science and Engineering | Associate Professor | Low-temperature Physics, Scanning tunneling microscope
48 | YONETOKU DAISUKE | Faculty of Mathematics and Physics, Institute of Science and Engineering | Professor | High Energy Astrophysics, Gamma-Ray Burst, Astro-E2, Semi-Conductor Devices, Infrared Astrophysics, Cosmology
49 | WAKATSUKI SATOSHI | Faculty of Mathematics and Physics, Institute of Science and Engineering | Professor | Automorphic form, Automorphic representation, Trace formula, Shintani zeta function, Automorphic period, Quaternion algebra, Quadratic form
50 | WATANABE, Shinji | WPI Nano Life Science Institute | Associate Professor | live cell imaging, scanning probe microscopy, nanopipette, scanning ion conductance microscopy
Enhancement of signal response in complex networks induced by topology and noise
Acebron, Juan; Lozano, S.; Arenas, A.
Applications of Nonlinear Dynamics Model and Design of Complex Systems, Springer-Verlag, (2009), 201-210
The effect of the topological structure of a coupled dynamical system in presence of noise on the signal response is investigated. In particular, we consider the response of a noisy overdamped
bistable dynamical system driven by a periodic force, and linearly coupled through a complex network of interactions. We find that the interplay among the heterogeneity of the network and the noise
plays a crucial role in the signal response of the dynamical system. This has been validated by extensive numerical simulations conducted in a variety of networks. Furthermore, we propose
analytically tractable models based on simple topologies, which explain the observed behavior.
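The model described, an overdamped bistable node with periodic forcing and additive noise, can be sketched with an Euler-Maruyama step as follows (all parameter values here are hypothetical and the network coupling term is omitted; this is not the paper's code):

```python
import math, random

# Euler-Maruyama integration of dx/dt = x - x^3 + A*sin(w*t) + noise,
# a single overdamped bistable node (illustrative parameters only).
random.seed(0)
dt, steps = 0.01, 5000
A, w, D = 0.3, 0.1, 0.2       # forcing amplitude, frequency, noise strength

x, path = 1.0, []
for n in range(steps):
    t = n * dt
    drift = x - x**3 + A * math.sin(w * t)
    x += drift * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    path.append(x)

print(len(path))              # 5000 samples of the driven, noisy node
```

In the paper's setting each node would additionally receive a linear coupling term summed over its network neighbours.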
How to Generate Random Numbers in Excel (3 Different Ways)
From time to time, you might need to generate a set of random numbers. If this is the case now, then you’re in the right place.
This tutorial covers how to generate random numbers from 0 to 1, but not only that. We’re gonna have a look at how to generate random numbers within any range you define, whether you need to generate
whole numbers or decimals.
If you want to generate random numbers from 0 to 1, click into the cell you selected, then enter the equal sign and type in ‘RAND’. Click on the suggested function, enter an opening bracket and right
after the closing bracket.
Now press Enter and Excel will generate a random number in the cell.
The number of decimal places can be adjusted at the top on the ribbon, in the section ‘Numbers’. Using the buttons ‘Increase-‘ and ‘Decrease Decimals’, you can add or remove decimal places just as
you need.
If you want to generate more random numbers between 0 and 1, hover with the cursor over the bottom right corner of the cell where the function is located.
Once you see the plus sign, click on the left mouse button and drag the function down to the last cell you want to populate. Like this.
Now, let’s see how we can generate random whole numbers within a chosen range.
Again, select a cell and click into it. Enter the equal sign and type in ‘RANDBETWEEN’.
Excel will need two details – the upper and the lower bound of the range within which you want to generate random numbers.
Let’s say we want to generate random numbers within the range of 100 up to 1 000. Let’s set the lower bound first, which is 100, enter a comma and then type in the upper bound, which is 1 000. Close
the brackets, hit ‘Enter’ and that’s all it takes.
If you need to generate more random numbers, simply copy the function to the rest of the cells by dragging down the bottom right corner of the cell.
And let’s have a look at how to generate random decimals within the range you need. As previously, we’ll use the range 100 to 1 000.
Click into the selected cell once more, enter the equal sign and here it comes – we need to use a slightly different formula that will help us generate these random decimals.
Enter the lower bound of the range, which is 100. Type in the plus sign and enter the number you get as the result of subtracting the lower bound value from the upper one.
This means that we will take 1 000, which is the upper bound of the range, and we will subtract 100 as its lower bound. The result is 900, so carry on typing 900 into the formula you’ve started and
multiply it with the function ‘RAND’.
Hit ‘Enter’ and that’s it! Excel has generated a random decimal number within the range of 100 to 1 000.
Just like before, if you need more random decimals within the chosen range, you can copy the function to the rest of the cells using the steps you’re already familiar with.
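If you'd like to reproduce these three formulas outside Excel, here is a rough Python analogue (illustrative only; note that Excel recalculates RAND and RANDBETWEEN on every sheet change, which plain variables don't):

```python
import random

r = random.random()                       # like =RAND(): uniform in [0, 1)
n = random.randint(100, 1000)             # like =RANDBETWEEN(100, 1000)
d = 100 + (1000 - 100) * random.random()  # like =100+(1000-100)*RAND()

assert 0 <= r < 1
assert 100 <= n <= 1000
assert 100 <= d < 1000
print(n)  # a random whole number between 100 and 1000
```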
And at last, let’s check out a little trick which can be useful when generating random numbers.
If you need to generate random numbers anew, you can simply refresh the formulas by going to the tab ‘Formulas’ and using ‘Calculate Now’.
Thanks to this function, Excel will generate a new set of random numbers in all cells that contain the formula.
To copy the values of the random numbers, not the formulas, watch our video tutorial ‘How to Copy and Paste Values Without Formula in Excel’. Link to the tutorial’s been included in the list below.
And here’s a question for you: Which of the three ways will you use first? Let us know by leaving a comment in the comment section below. We can’t wait to hear from you!
If you found this tutorial helpful, give us a like and watch other video tutorials by EasyClick Academy. Learn how to use Excel in a quick and easy way!
Is this your first time on EasyClick? We’ll be more than happy to welcome you in our online community. Hit that Subscribe button and join the EasyClickers!
Thanks for watching and I’ll see you in the next tutorial!
Non-Linear Models of Econometrics
An Excursion into Non-linearity Land
l Motivation: the linear structural (and time series) models cannot explain a number of important features common to much financial data
- leptokurtosis
- volatility clustering or volatility pooling
- leverage effects
l Our “traditional” structural model could be something like:
y[t] = b[1] + b[2]x[2t] + ... + b[k]x[kt] + u[t], or more compactly y = Xb + u.
Non-linear Models: A Definition
l Campbell, Lo and MacKinlay (1997) define a non-linear data generating process as one that can be written
y[t] = f(u[t], u[t-1], u[t-2], …)
where u[t] is an iid error term and f is a non-linear function.
l They also give a slightly more specific definition as
y[t] = g(u[t-1], u[t-2], …) + u[t] s^2(u[t-1], u[t-2], …)
where g is a function of past error terms only and s^2 is a variance term.
l Models with nonlinear g(•) are “non-linear in mean”, while those with nonlinear s^2(•) are “non-linear in variance”.
Types of non-linear models
l The linear paradigm is a useful one. Many apparently non-linear relationships can be made linear by a suitable transformation. On the other hand, it is likely that many relationships in finance are
intrinsically non-linear.
l There are many types of non-linear models, e.g.
- ARCH / GARCH
- Switching models
- Bilinear models
Testing for Non-linearity
l The “traditional” tools of time series analysis (acf’s, spectral analysis) may find no evidence that we could use a linear model, but the data may still not be independent.
l Portmanteau tests for non-linear dependence have been developed. The simplest is Ramsey’s RESET test, which took the form:
l Many other non-linearity tests are available, e.g. the “BDS test” and the bispectrum test.
l The assumption that the variance of the errors is constant is known as homoscedasticity, i.e. Var(u[t]) = sigma^2
l What if the variance of the errors is not constant?
- heteroscedasticity
- would imply that standard error estimates could be wrong.
l Is the variance of the errors likely to be constant over time? Not for financial data.
Modeling Volatility
l In monetary theory and the theory of finance, financial asset portfolios are functions of the expected means and variances of the rates of returns. Increased volatility of security prices or rates
of return are often indicators that the variances are not constant over time. Engle (1982) introduced a new approach to modeling heteroscedasticity in a time series context.
The ARCH Specification
l Autoregressive Conditional Heteroskedasticity (ARCH) models are specifically designed to model and forecast conditional variances. The variance of the dependent variable is modeled as a function of
past values of the dependent variable and independent or exogenous variables. In developing an ARCH model, you will have to provide two distinct specifications—one for the conditional mean and one
for the conditional variance.
ARCH Models
l So use a model which does not assume that the variance is constant.
l Recall the definition of the variance of u[t]:
sigma[t]^2 = Var(u[t] | u[t-1], u[t-2], ...) = E[(u[t] - E(u[t]))^2 | u[t-1], u[t-2], ...]
We usually assume that E(u[t]) = 0,
so sigma[t]^2 = Var(u[t] | u[t-1], u[t-2], ...) = E[u[t]^2 | u[t-1], u[t-2], ...].
l What could the current value of the variance of the errors plausibly depend upon?
l Previous squared error terms.
l This leads to the autoregressive conditionally heteroscedastic model for the variance of the errors:
sigma[t]^2 = a[0] + a[1]u[t-1]^2
This is known as an ARCH(1) model.
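To see what this recursion produces, here is a small simulation sketch of an ARCH(1) error process. The parameter values 0.2 and 0.5 are arbitrary illustrative choices, not taken from the notes:

```python
import math
import random

def simulate_arch1(a0, a1, n, seed=0):
    """Simulate ARCH(1) errors: sigma[t]^2 = a0 + a1*u[t-1]^2 and
    u[t] = v[t]*sigma[t], with v[t] ~ N(0, 1). Requires a0 > 0 and
    0 <= a1 < 1 for a finite unconditional variance a0 / (1 - a1)."""
    rng = random.Random(seed)
    u_prev = 0.0
    errors = []
    for _ in range(n):
        sigma2 = a0 + a1 * u_prev ** 2           # conditional variance
        u_prev = rng.gauss(0.0, 1.0) * math.sqrt(sigma2)
        errors.append(u_prev)
    return errors

u = simulate_arch1(a0=0.2, a1=0.5, n=5000)
sample_var = sum(x * x for x in u) / len(u)      # near 0.2/(1 - 0.5) = 0.4
```

Plotting u would show the volatility clustering mentioned earlier: quiet stretches punctuated by bursts of large errors.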
l The full model would be
y[t] = b[1] + b[2]x[2t] + ... + b[k]x[kt] + u[t], u[t] ~ N(0, sigma[t]^2)
where sigma[t]^2 = a[0] + a[1]u[t-1]^2
l We can easily extend this to the general case where the error variance depends on q lags of squared errors:
sigma[t]^2 = a[0] + a[1]u[t-1]^2 + a[2]u[t-2]^2 + ... + a[q]u[t-q]^2
l This is an ARCH(q) model.
l Instead of calling the conditional variance sigma[t]^2, in the literature it is usually called h[t], so the model is
y[t] = b[1] + b[2]x[2t] + ... + b[k]x[kt] + u[t], u[t] ~ N(0, h[t])
where h[t] = a[0] + a[1]u[t-1]^2 + a[2]u[t-2]^2 + ... + a[q]u[t-q]^2
l For illustration, consider an ARCH(1). Instead of the above, we can write
y[t] = b[1] + b[2]x[2t] + ... + b[k]x[kt] + u[t], u[t] = v[t]s[t], v[t] ~ N(0, 1), s[t]^2 = a[0] + a[1]u[t-1]^2
l The two are different ways of expressing exactly the same model. The first form is easier to understand while the second form is required for simulating from an ARCH model, for example.
Problems with ARCH(q) Models
l How do we decide on q?
l The required value of q might be very large
l Non-negativity constraints might be violated.
l When we estimate an ARCH model, we require a[i] >= 0 for all i = 1, 2, ..., q (since a variance cannot be negative)
l A natural extension of an ARCH(q) model which gets around some of these problems is a GARCH model.
Generalised ARCH (GARCH) Models
l Due to Bollerslev (1986). Allow the conditional variance to be dependent upon previous own lags:
h[t] = a[0] + a[1]u[t-1]^2 + b h[t-1]
l This is a GARCH(1,1) model, which is like an ARMA(1,1) model for the variance equation.
l An infinite number of successive substitutions would yield
l So the GARCH(1,1) model can be written as an infinite order ARCH model.
l But in general a GARCH(1,1) model will be sufficient to capture the volatility clustering in the data.
l Why is GARCH Better than ARCH?
- more parsimonious - avoids overfitting
- less likely to breach non-negativity constraints
The GARCH(1,1) Model
l The (1,1) in GARCH(1,1) refers to the presence of a first-order GARCH term and a first-order ARCH term. An ordinary ARCH model is a special case of a GARCH specification in which there are no
lagged forecast variances in the conditional variance equation.
l For example, if the asset return was unexpectedly large in either the upward or the downward direction, then the trader will increase the estimate of the variance for the next period. This model is
consistent with the volatility clustering often seen in financial returns data, where large changes in returns are likely to be followed by further large changes.
The Unconditional Variance under the GARCH Specification
l The unconditional variance of u[t] is Var(u[t]) = a[0]/(1 - (a[1] + b)), provided that a[1] + b < 1
l a[1] + b >= 1 is termed “non-stationarity” in variance
l a[1] + b = 1 is termed integrated GARCH
l For non-stationarity in variance, the conditional variance forecasts will not converge on their unconditional value as the horizon increases.
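A standard result (assuming the usual GARCH(1,1) parameterisation h[t] = a[0] + a[1]u[t-1]^2 + b h[t-1]) is that the unconditional variance equals a[0]/(1 - a[1] - b) whenever a[1] + b < 1; a small sketch:

```python
def garch11_unconditional_variance(alpha0, alpha1, beta):
    """Unconditional variance of a GARCH(1,1): alpha0 / (1 - alpha1 - beta),
    defined only when alpha1 + beta < 1 (covariance stationarity)."""
    persistence = alpha1 + beta
    if persistence >= 1.0:
        # alpha1 + beta == 1 is the integrated-GARCH case; at or above 1
        # the conditional-variance forecasts do not converge to a finite value.
        return float("inf")
    return alpha0 / (1.0 - persistence)

assert abs(garch11_unconditional_variance(0.1, 0.1, 0.8) - 1.0) < 1e-9
assert garch11_unconditional_variance(0.1, 0.2, 0.8) == float("inf")
```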
Estimation of ARCH / GARCH Models
l Since the model is no longer of the usual linear form, we cannot use OLS.
l We use another technique known as maximum likelihood.
l The method works by finding the most likely values of the parameters given the actual data.
l More specifically, we form a log-likelihood function and maximise it.
l The steps involved in actually estimating an ARCH or GARCH model are as follows
1. Specify the appropriate equations for the mean and the variance - e.g. an AR(1)-GARCH(1,1) model:
2. Specify the log-likelihood function to maximise:
3. The computer will maximise the function and give parameter values and their standard errors
Extensions to the Basic GARCH Model
l Since the GARCH model was developed, a huge number of extensions and variants have been proposed. Three of the most important examples are EGARCH, GJR, and GARCH-M models.
l Problems with GARCH(p,q) Models:
- Non-negativity constraints may still be violated
- GARCH models cannot account for leverage effects
l Possible solutions: the exponential GARCH (EGARCH) model or the GJR model, which are asymmetric GARCH models.
The EGARCH Model
l Advantages of the model
- Since we model log(s[t]^2), then even if the parameters are negative, s[t]^2 will be positive.
- We can account for the leverage effect: if the relationship between volatility and returns is negative, g will be negative.
The GJR Model
l For a leverage effect, we would see g > 0.
l We require a[1] + g >= 0 and a[1] >= 0 for non-negativity
News Impact Curves
The news impact curve plots the next period volatility (h[t]) that would arise from various positive and negative values of u[t-1], given an estimated model.
News Impact Curves for Returns using Coefficients from GARCH and GJR Model Estimates:
GARCH-in-Mean (GARCH-M) Models
l We expect a risk to be compensated by a higher return. So why not let the return of a security be partly determined by its risk?
l d can be interpreted as a sort of risk premium.
l It is possible to combine all or some of these models together to get more complex “hybrid” models - e.g. an ARMA-EGARCH(1,1)-M model.
Testing Non-linear Restrictions or Testing Hypotheses about Non-linear Models
l Usual t- and F-tests are still valid in non-linear models, but they are not flexible enough.
l There are three hypothesis testing procedures based on maximum likelihood principles: Wald, Likelihood Ratio, Lagrange Multiplier.
l Consider a single parameter, theta, to be estimated. Denote the MLE as theta-hat and a restricted estimate as theta-tilde.
Likelihood Ratio Tests
l Estimate under the null hypothesis and under the alternative.
l Then compare the maximised values of the LLF.
l So we estimate the unconstrained model and achieve a given maximised value of the LLF, denoted L[u]
l Then estimate the model imposing the constraint(s) and get a new value of the LLF denoted L[r].
l Which will be bigger?
l L[r] <= L[u], comparable to RRSS >= URSS
l The LR test statistic is given by
LR = -2(L[r] - L[u]) ~ chi^2(m)
where m = number of restrictions
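The statistic itself is a one-liner; the maximised log-likelihood values below are made-up numbers purely for illustration:

```python
def likelihood_ratio_stat(llf_restricted, llf_unrestricted):
    """LR = -2*(L_r - L_u), asymptotically chi-square with m degrees of
    freedom, m = number of restrictions. Since L_r <= L_u, LR >= 0."""
    return -2.0 * (llf_restricted - llf_unrestricted)

lr = likelihood_ratio_stat(llf_restricted=-1050.3, llf_unrestricted=-1047.1)
# For m = 1 restriction the 5% chi-square critical value is about 3.84,
# so an LR statistic of 6.4 would lead to rejection of the restriction.
```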
Hypothesis Testing under Maximum Likelihood
l The vertical distance forms the basis of the LR test.
l The Wald test is based on a comparison of the horizontal distance.
l The LM test compares the slopes of the curve at A and B.
l We know at the unrestricted MLE, L(theta-hat), the slope of the curve is zero.
l But is it “significantly steep” at L(theta-tilde)?
l This formulation of the test is usually easiest to estimate.
Estimating ARCH Models
a) Option:
Heteroskedasticity Consistent Covariances: You should use this option if you suspect that the residuals are not conditionally normally distributed.
b) The Mean Equation:
You can enter the specification in list form by listing the dependent variable followed by the regressors. You should add the C to your specification if you wish to include a constant. If your
specification includes an ARCH-M term, you should add an appropriate specification.
c) The Variance Equation:
1 Under the ARCH specification label, you should choose the number of ARCH and GARCH terms.
2 In the Variance Regressors, you may optionally list variables you wish to include in the variance specification.
ARCH Estimation Output
l The output from ARCH estimation is divided into two sections:
1) The upper part provides the standard output for the mean equation.
2) The lower part, labeled “Variance Equation”, contains the coefficients, standard errors, z-statistics and p-values for the coefficients of the variance equation. The ARCH parameters correspond to alpha and the GARCH parameters to beta.
3) Note that measures such as R^2 may not be meaningful if there are no regressors in the mean equation. Here, for example, the R^2 is negative.
4) The sum of the ARCH and GARCH coefficient (α+β) is very close to one, indicating that volatility shocks are quite persistent.
Working with ARCH Model
l The ARCH LM test statistic is computed from an auxiliary test regression. To test the null hypothesis that there is no ARCH up to order q in the residuals, we run the regression
l where e is the residual. This is a regression of the squared residuals on a constant and lagged squared residuals up to order q. EViews reports two test statistics for this test regression. The
F-statistic is an omitted variable test for the joint significance of all lagged squared residuals. The Obs*R-squared statistic is Engle’s LM test statistic, computed as the number of observations
times the R^2 from the test regression. The exact finite sample distribution of the F-statistic under H[0] is not known, but the LM test statistic is asymptotically distributed chi^2(q) under quite
general conditions.
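For q = 1 the auxiliary regression has a closed-form R^2 (the squared sample correlation between e[t]^2 and e[t-1]^2), so the Obs*R-squared statistic can be sketched in plain Python. This mirrors the description above rather than EViews’ exact output:

```python
import random

def arch_lm_stat(residuals):
    """Engle's LM test for ARCH(1): regress e[t]^2 on a constant and
    e[t-1]^2, then return T * R^2, asymptotically chi-square(1) under
    the null of no ARCH."""
    sq = [e * e for e in residuals]
    y, x = sq[1:], sq[:-1]                  # e[t]^2 regressed on e[t-1]^2
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = (sxy * sxy) / (sxx * syy) if sxx > 0 and syy > 0 else 0.0
    return n * r2

rng = random.Random(1)
iid = [rng.gauss(0, 1) for _ in range(2000)]
stat = arch_lm_stat(iid)                    # iid noise: no ARCH expected
```

Under the null the statistic is chi-square(1), so for iid residuals it should usually sit well below the 5% critical value of about 3.84.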
The TARCH Model: Threshold ARCH
where d[t]=1 if ε[t]<0 , and 0 otherwise.
l In this model, good news (ε[t]>0) and bad news (ε[t]<0) have differential effects on the conditional variance: good news has an impact of α, while bad news has an impact of α + r. If r > 0, we say that the leverage effect exists. If r ≠ 0, the news impact is asymmetric.
The EGARCH Model
l The specification for the conditional variance is
l Note that the left-hand side is the log of the conditional variance. This implies that the leverage effect is exponential, and that forecasts of the conditional variance are guaranteed to be nonnegative. The presence of leverage effects can be tested by the hypothesis that r < 0. The impact is asymmetric if r ≠ 0.
Multivariate GARCH Models
l Multivariate GARCH models are used to estimate and to forecast covariances and correlations. The basic formulation is similar to that of the GARCH model, but where the covariances as well as the
variances are permitted to be time-varying.
l There are 3 main classes of multivariate GARCH formulation that are widely used: VECH, diagonal VECH and BEKK.
VECH and Diagonal VECH
l e.g. suppose that there are two variables used in the model. The conditional covariance matrix is denoted H[t], and would be 2 × 2. H[t] and VECH(H[t]) are
BEKK and Model Estimation for M-GARCH
l The BEKK Model uses a Quadratic form for the parameter matrices to ensure a positive definite variance / covariance matrix H[t].
l Neither the VECH nor the diagonal VECH ensure a positive definite variance-covariance matrix.
l An alternative approach is the BEKK model (Engle & Kroner, 1995).
l In matrix form, the BEKK model is
l Model estimation for all classes of multivariate GARCH model is again performed using maximum likelihood with the following LLF:
where N is the number of variables in the system (assumed 2 above), theta is a vector containing all of the parameters to be estimated, and T is the number of observations.
Presented by Dr. Babar Zaheer Butt to the students of MS/Ph.D at Iqra University Islamabad.
Stationarity and Unit Root Testing
l The stationarity or otherwise of a series can strongly influence its behaviour and properties - e.g. persistence of shocks will be infinite for nonstationary series
l Spurious regressions. If two variables are trending over time, a regression of one on the other could have a high R^2 even if the two are totally unrelated
l If the variables in the regression model are not stationary, then it can be proved that the standard assumptions for asymptotic analysis will not be valid. In other words, the usual “t-ratios” will not follow a t-distribution, so we cannot validly undertake hypothesis tests about the regression parameters.
Stationary and Non-stationary Time Series
Stationary Time Series
l A series is said to be stationary if the mean and autocovariances of the series do not depend on time.
(A) Strictly Stationary: For a strictly stationary time series the distribution of y(t) is independent of t.
Journal of Noncommutative Geometry | EMS Press
Issues of this journal published between 1 January 2021 and 31 December 2024 are accessible as open access under our Subscribe to Open model.
The Journal of Noncommutative Geometry covers the noncommutative world in all its aspects. It is devoted to publication of research articles which represent major advances in the area of
noncommutative geometry and its applications to other fields of mathematics and theoretical physics. Topics covered include in particular:
• Hochschild and cyclic cohomology
• K-theory and index theory
• Measure theory and topology of noncommutative spaces, operator algebras
• Spectral geometry of noncommutative spaces
• Noncommutative algebraic geometry
• Hopf algebras and quantum groups
• Foliations, groupoids, stacks, gerbes
• Deformations and quantization
• Noncommutative spaces in number theory and arithmetic geometry
• Noncommutative geometry in physics: QFT, renormalization, gauge theory, string theory, gravity, mirror symmetry, solid state physics, statistical mechanics
The journal is indexed in zbMATH Open, and Mathematical Reviews, as well as in Scopus and Web of Science (in the Science Citation Index Extended database, thus in the Journal Citation Reports).
Furthermore, it is listed in the DOAJ.
The journal is owned by the European Mathematical Society and it adheres to its Code of Practice and the publisher's Code of Conduct and Publishing Ethics.
The Journal of Noncommutative Geometry is published in one volume per annum, four issues per volume, approximately 1500 pages.
FastFieldSolvers Forum - Microstrip/CPW Impedance
Nico Posted - Sep 06 2023 : 09:41:43
I'm working on an open source program to implement HF simulation, parasitic extraction and impedance analysis into KiCAD/KLayout with the option to analyze bent FPCs, and after simulating
around with OpenEMS and Elmer I think that FastImp, FastHenry and FasterCap are the programs to go when it comes to impedance and parasitics. The problem right now is that I can't seem to
get correct values out of FastImp (or am misinterpreting the ones I got).
The structure I'm simulating is a Microstrip that is later turned into a grounded coplanar waveguide (GCPW) as soon as the results are correct. Currently, I'm using the old input format,
but as the new one supports meshes by using triangular and quad panels instead of just straight segments divided into filaments, I will change to that one later.
The file looks like this:
# Structure type of each conductor is specified by a number
# 1: straight wire
# 2: ring
# 3: spiral
# 4: ground
# The unit of size is 1e-6 m, or um
1e-6 unit
2 number of conductors
{ cond 1
1 structure type
{ leftEnd point 0
} leftEnd point 0
{ leftEnd point 1
} leftEnd point 1
{ leftEnd point 2
} leftEnd point 2
{ rightEnd point
} rightEnd point
90 number of panels along width
12 number of panels along thickness
300 number of panels along length
5.8e7 conductivity of copper
} cond 1
{ cond 3
4 structure type
{ leftEnd point 0
} leftEnd point 0
{ leftEnd point 1
} leftEnd point 1
{ leftEnd point 2
} leftEnd point 2
{ rightEnd point
} rightEnd point
100 number of panels along width
3 number of panels along thickness
60 number of panels along length
2.9e5 conductivity of ground
} cond 3
#--- End of file
I'm calling FastImp via:
fastImp -i t2_GND_50ohm.inp -s2
The impedance I get is as follows:
1e+09 Zm[0, 0] = (0.0942394,2.81459)
I also tried the -t1 and -s1 options to use the EQMS solver instead of full-wave, but I still can't seem to get the right impedance. With a trace width of 200 um, a substrate thickness of
100 um and copper thickness of 35 um I should land around 50 ohms. That's the value I get by TXLine and also evaluated in OpenEMS.
I would be very happy if you could help me here! Would be great if I could integrate a pipeline for your tools in my project (:
best wishes,
Posted - Sep 23 2023 : 12:23:48
Hi Nico,
I'm afraid you are misunderstanding the usage and the results from FastImp.
From your description, I understand you want to extract the impedance of a transmission line. For doing so, you specified a segment of a uniform microstrip with a length of 1mm. Now,
FastImp will take that structure for what it is, i.e. a 1mm long structure, and provide you the impedance of such structure when the port, defined by the two ends of the conductor, is
stimulated. Here lies the second issue, i.e. the port is actually defined over the near end and the far end of the first conductor in your input file. The second conductor, i.e. the ground
plane, is not taken into account from the ports definition point of view, while in a txline, for your goal, you are interested in defining a port between the near end of the conductor and
the near end of the ground plane (in fact, the distinction is practical, as both structures together constitute your txline). Here is the third issue: defining the "ground" plane of the
txline (that is part of the txline, actually) as '4: ground' means that this won't be stimulated by any port, but currents/charges will only be induced on it.
Correcting the input file defining two straight wire conductors, one of which is acting as the 'return' plane for the microstrip, we get the following impedance values @ 1e8 Hz (taking
only the imaginary part).
4.67E-10 1.67E-10
1.62E-10 1.81E-10
The reason why we can use only the imaginary part is that, for the txline you are interested, we consider the transversal modes only, to be able to use the telegrapher's equations. At the
frequency of interest at which you ran the simulation, the cross section is definitely small with respect to the wavelength.
So, let's derive the L value per unit length of the microstrip. We must divide by 2*PI*f and multiply by 1000, as the section of the microstrip is 1mm long, and we want H/m.
Besides that, we also want a single L value; at the moment, the L11 and L22 values of the matrix correspond to the self-inductance across the 'conductor' and across the 'ground plane'
respectively, while L12 and L21 are the mutual coupling. We are interested in the total inductance (or 'loop inductance') of the structure, to be able to divide it by the length, to have
(approximately, as the structure we simulate has not infinite length) the inductance per unit length, as said.
So, treating the matrix entries as the per-segment inductances in henries (i.e. Im(Z) already divided by 2*PI*f), the calculation goes:
(4.67E-10 + 1.81E-10 - 1.67E-10 - 1.62E-10)*1000 = 3.19e-7 H/m
Now, using the formulae from telegraph's equations, we have that Z = c * L, where c is the speed of light in vacuum. Therefore
Z = 3e8 * 3.19e-7 = 96 ohm
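The arithmetic above can be replayed directly. Note that it treats the four matrix entries as per-segment (1 mm) inductance contributions in henries, which is what the calculation as written implies:

```python
# Loop ('total') inductance of the 1 mm segment from the 2x2 matrix:
# L_loop = L11 + L22 - L12 - L21, then scale from 1 mm to 1 m.
L11, L12 = 4.67e-10, 1.67e-10
L21, L22 = 1.62e-10, 1.81e-10
L_loop = L11 + L22 - L12 - L21      # henries, for the 1 mm segment
L_per_m = L_loop * 1000.0           # H/m

# For a lossless line in vacuum, Z0 = c * L (from Z0 = sqrt(L/C) and
# c = 1/sqrt(L*C)), with c the speed of light.
Z0 = 3.0e8 * L_per_m
```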
Now, from the parameters you used, I believe that a 200 um wide microstrip with a gap of 100 um between the bottom of the 'conductor' and the top of the 'ground plane' should lead to an
impedance of approx 82 ohm and not 50; maybe there is a small mismatch in the definition of the conductor - ground plane distance (e.g. considering or not the copper thickness; if the 100
um distance is from the center of the conductor to the center of the ground plane, i.e. the gap is 65 um, the value approaches 50 ohm).
Still, 96 is a bit off as a value. The reasons here are probably linked to the definition of the ports of the conductors. In the 'ground plane', the whole end side, which is very wide, is
used as a port, while we know from the physics that the current will have a maximum below the conductor; in fact, you should short the line at the far side. If you would model an actual
connection at the far side, I expect the results to be closer to the expected ones, but this requires a custom definition of the conductors. Also, you are considering a 1000 um long line,
while the width of the conductor is 200 um. This is not very long to be a good approximation of an infinite length transmission line. The discretization plays a role as well, as the
frequency at which you run the simulation.
However, said that, if you want to simulate transmission lines as you do in TXLine, you probably would be much better off using the FasterCap 2D capabilities. The input file is much
simpler also for arbitrary geometries, you get directly the capacitance per unit length, and you can also consider dielectrics. Running just two simulations (one with dielectrics, the
other in free space) you can derive the C and the L values (L is derived from Cfree in free space from the relation c = SQRT(L*Cfree) and inverting it) and calculate Z = SQRT(L/C) - and
these relations also work when L and C are matrices.
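That two-run recipe can be sketched as follows; the two capacitance values are made-up placeholders standing in for FasterCap output, not real results:

```python
import math

def txline_impedance(c_with_dielectric, c_free):
    """Characteristic impedance from two 2-D capacitance extractions:
    C (with dielectrics) and C_free (free space), both in F/m.
    From c = 1/sqrt(L * C_free): L = 1/(c^2 * C_free); then Z0 = sqrt(L/C)."""
    c_light = 299792458.0                       # speed of light, m/s
    inductance = 1.0 / (c_light ** 2 * c_free)  # H/m
    return math.sqrt(inductance / c_with_dielectric)

z0 = txline_impedance(c_with_dielectric=1.2e-10, c_free=4.0e-11)
```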
I condensed the explanation a bit and used some simplifications, but I hope this could help you and guide you in the right direction for your task.
Black Holes microstates in Canonical Quantum Gravity
In the context of Loop Quantum Gravity, Black Holes are closely related to Chern-Simons theory on a punctured 2-sphere with SU(2) gauge group. Using this link, one can describe precisely the space of
microstates for the Black Holes and compute the corresponding statistical entropy. However, it turns out that the entropy depends on the unphysical Immirzi parameter γ. But, using a suitable analytic
continuation of γ to complex values, we show that the entropy reproduces the expected Bekenstein-Hawking expression when γ = ± i at the semi-classical limit. This remarkable result has a nice and
clear geometric interpretation and many very interesting physical consequences. In particular, we show that, at the semi-classical limit, the Black Hole microstates (at the vicinity of the horizon)
are particles in equilibrium at the Unruh temperature.
Extension of the Apollo Mission D (CSM-103/LM-3) launch window using the SPS/DPS ΔV capability
Informal Distribution
MAY 27 1968
68-FM64-171
FM6/Critical Mission Analysis Branch
1. Conway, H. L.; Merriam, R. S.; Spurlin, H. C.; Calvin, C.; MSC Internal Note 68-FM-93 entitled, “Apollo Mission D (AS-502/CSM-103/LM-3) Spacecraft Reference Trajectory, Volume I – Nominal
Trajectory”, dated April 30, 1968.
2. Rose, R. G.; MSC memorandum entitled, “Ninth Mission D Flight Operations Plan (FOP) Meeting”, dated April 18, 1968.
A study was conducted to determine the ΔV capability of the SPS/DPS to perform nodal shift and phasing maneuvers so that the launch window for Apollo Mission D could be extended. The results show that the launch window can be extended to approximately 2.75 hours provided the second SPS, third SPS, the docked DPS, and an additional 450 fps SPS burn are designed nominally to optimally shift the line of nodes eastward. Following the nodal shifts, a series of phasing maneuvers (nominally zero for on-time launches) would then be required to complete the launch window extension. Operational considerations (such as non-optimum geographical burn locations to provide MSFN coverage) will prevent full realization of the nodal shift capability; and, as a result, the launch window of 2.75 hours theoretically possible may not be achieved. Redesign of the ΔV maneuvers from that of reference 1 will have no impact on test objectives. The data and analysis leading to these conclusions are discussed in this memorandum.
To satisfy requests contained in reference 2, a study was conducted to determine the maximum launch window considering only the ΔV capability of the SPS/DPS to return a vehicle delayed at launch to the nominal lighting and MSFN coverage for the LM-active rendezvous (defined in reference 1). The launch window for the D Mission Reference Trajectory published in reference 1 is less than 1 hour. There are basically two methods under consideration by which MSFN coverage and lighting requirements during the rendezvous may be satisfied in the event of a launch delay. The first method is to adjust the time from CDH to TPI by small changes in the differential heights during the concentric coasts of the LM-active rendezvous. A study of this technique is currently underway. The second method, which is the subject of this memorandum, is to “rendezvous” the CSM/LM vehicle which may be lifting off late with an imaginary vehicle which lifts off at the nominal time and executes all nominal maneuvers. If the launch delay is long enough (over 15 minutes) then phasing maneuvers (apogee and/or perigee adjustments) alone may not reestablish nominal lighting and coverage since out-of-planeness arises due to earth rotation. A nodal shift is then also required, the magnitude of which is directly proportional to the rotation of the earth during the delay period.
Nodal Shift Requirements
The nodal shift required is due to the orbital plane of a vehicle launched on time becoming fixed at insertion (neglecting apsidal advancement and nodal regression) while the orbital plane of the
delayed vehicle has an eastward shift in the line of nodes equal to the amount of earth's rotation during the delay period. The ΔV required for a nodal correction ΔΩ is given in the following
ΔV = 2V[x] cos γ[x] sin((ΔΩ sin i)/2)     (1)
where V[x], is the inertial velocity at transfer, ft/sec
γ[x], is the flight path angle at transfer, degrees
ΔΩ, is the nodal shift required, degrees
i, is the orbital inclination, degrees
For example, from reference 1, if V[x] = 25771 fps (130 n. mi. circular orbit), i = 30°, cos γ[x] = 1, and if ΔΩ = 1°, the ΔV required from equation 1 is 240 fps/deg. The total ΔV available in the D
Mission which might practically be used for nodal shifts amounts to 5050 fps as shown in Tables 1 and 2. The 5050 fps represents the ΔV from the second SPS, third SPS, the docked DPS and a 450 fps
SPS burn available from presently unused SPS propellant. If used optimally it will provide a total of about 21.4 degrees of nodal shift.
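Reading the reconstructed formula as ΔV = 2 V[x] cos γ[x] sin((ΔΩ sin i)/2), the per-degree cost can be checked numerically with the memo's own inputs; the result (~225 fps/deg) is of the same order as the memo's quoted 240 fps/deg:

```python
import math

def nodal_shift_dv(vx_fps, gamma_deg, d_omega_deg, incl_deg):
    """Delta-V for a nodal shift d_omega_deg at inclination incl_deg,
    using dV = 2*Vx*cos(gamma)*sin(dOmega*sin(i)/2) (small plane-change
    angle theta ~= dOmega*sin(i) between the two orbit planes)."""
    gamma = math.radians(gamma_deg)
    theta = math.radians(d_omega_deg) * math.sin(math.radians(incl_deg))
    return 2.0 * vx_fps * math.cos(gamma) * math.sin(theta / 2.0)

# Memo's inputs: Vx = 25771 fps, i = 30 deg, gamma = 0, dOmega = 1 deg
dv = nodal_shift_dv(25771.0, 0.0, 1.0, 30.0)
```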
In actual flight operations, consideration must be given to locating the SPS burns over MSFN stations (thus not necessarily at maximum Northerly or Southerly latitudes) and as a result the 21.4
degrees of nodal shift represents a theoretical value, with 20.0 degrees being an operationally more realistic value. Thus, figure 1 shows that if the 5050 fps are nominally used to shift the line of
nodes eastward then as the launch delay increases (the vehicle rests on the pad for an increasing period of time as the earth rotates at approximately 15 degrees/hour) the ΔV required to shift the
plane of the delayed vehicle back to nominal decreases. At about 86 minutes of delay, no nodal shift would be required as shown in figure 1; in this case, the SPS burns required to reduce the CSM
mass and accomplish the CSM autopilot test objectives, would be designed so as not to shift the line of node. Delays of over 86 minutes will require shifting the node westward. The magnitude of the
shift westward increases until about 2 3/4 hours of delay when the available ΔV for nodal shifts is exceeded. Thus, the launch window for the Apollo Mission D would be 2 3/4 hours. Although such
factors as lighting for end of mission and MODE IV aborts also influence the length of the launch window, preliminary studies indicate that the ΔV capability to adjust coverage and lighting for the
rendezvous is the most constraining and the other launch window constraints serve only to establish a rather wide 6 hour period in which launch could occur.
Phasing Requirements
After correcting the nodal differences between the orbits of a “phantom” vehicle launched on time and a delayed vehicle, the two vehicles are basically in the same orbital plane although not in the
same position in the orbit. To correct lighting and coverage, this position difference must be eliminated, and this is accomplished with a series of phasing maneuvers (apogee and/or perigee
adjustments to change the orbital period).
Figure 2 illustrates a typical problem which might be encountered in a launch delay. Figure 2 deals with a phasing situation in which the launch delay is about 30 minutes and thus the on-time vehicle
is about 120 degrees ahead of the delayed vehicle (the phase angle increasing at about 4 degrees per minute, for an orbital period of approximately 90 minutes).
Two choices are available to the maneuvering (delayed) vehicle once the nodal differential is corrected (figures 2a and 2b). The first choice is to maneuver to a higher apogee orbit (greater period)
and “dwell” for a sufficient time to allow the on-time vehicle to catch up 240°. The second choice is to reduce the apogee altitude and catch up 120° with the on-time vehicle. The problem is now
reduced to simply making the proper choice based on minimizing the ΔV expended or the orbit change required.
Figure 3 shows the apogee (or perigee) adjustment (Δh) required for the maneuvering vehicle to catch up (“go below”) or to dwell (“go above”). The magnitude of Δh is approximated by the following relation:

ΔP = (1/50) (Δh) (n)

where ΔP is the delay time in minutes,
Δh is the apogee or perigee adjustment in n. mi.,
n is the number of orbits over which the phasing interval is desired.
In the D Mission, n is dictated by operational considerations and thus the Δh is determined as a function of n from figure 3. In the example,
assuming n = 20, the Δh to go above is 150 n. mi. and the Δh to go below is 75 n. mi. The best choice of Δh in this case is Δh = 75 n. mi., provided low-perigee problems do not exist (see figure 2c).
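The go-above/go-below trade in the example can be sketched numerically. The function and variable names are mine; the 4 deg/min phase rate and the ΔP = (1/50)(Δh)(n) relation are from the text:

```python
def phasing_options(delay_min, n_orbits):
    """Apogee/perigee adjustment (n. mi.) for the two phasing choices,
    from dP = (1/50) * dh * n, i.e. dh = 50 * dP / n."""
    catch_up_deg = 4.0 * delay_min      # phase angle opened up by the launch delay
    dwell_deg = 360.0 - catch_up_deg    # "go above": let the on-time vehicle come around
    # Convert each phase angle back to equivalent delay minutes (4 deg/min),
    # then invert the dP relation to get dh.
    dh_below = 50.0 * (catch_up_deg / 4.0) / n_orbits
    dh_above = 50.0 * (dwell_deg / 4.0) / n_orbits
    return dh_above, dh_below

above, below = phasing_options(30, 20)  # the example: 30-minute delay, n = 20
# above = 150.0 n. mi. ("go above"), below = 75.0 n. mi. ("go below")
```

With the example's 30-minute delay and n = 20, this reproduces the 150 n. mi. and 75 n. mi. adjustments quoted in the text.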
Revision of the D Mission
To extend the launch window the SPS/DPS burns in reference 1 must be redesigned in both orientation and duration and plans drawn up to accommodate launch delays up to 2.75 hours. Studies are now in
progress to identify the operational problems associated with implementing the burn schedule, and reference 1 is currently being updated.
Using the SPS/DPS ΔV capability the D Mission launch window can be extended to nearly 2.75 hours. The extension can be accomplished by a series of nodal shifts and phasing burns which must be
incorporated into the operational trajectory planning.
What is the Golden Ratio — Composition Technique Explained
As a mathematical and artistic principle of mythical scale, the Golden Ratio is often misunderstood and mislabeled. DNA helixes, Renaissance art works, ancient architectural designs, and even fruits
have all been observed as taking on the golden ratio. So what is the Golden Ratio exactly? When was it discovered and what is its influence on art and design? Let’s dive in.
Watch: The Ultimate Guide to Film Composition
What is the Golden Ratio in Math and Art?
First, let’s define Golden Ratio
Before taking a look at the history of the Golden Ratio and examples of it in nature and design, we must first take a look at the Golden Ratio definition to properly understand what it is.
What is the Golden Ratio?
The Golden Ratio is a principle in mathematics used to express the ratio of a line segment divided into two segments of different lengths whereby the ratio of the complete segment to the longer
segment is equal to that of the ratio of the longer segment to the shorter segment.
Also called the Divine Proportion or Golden Section, the ratio is expressed mathematically as (1 + √5)/2, often denoted by the Greek letter ϕ or τ and pronounced “phi.” Numerically, the
irrational number is approximately equal to 1.618.
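The definition can be checked numerically. This short sketch (mine, not from the article) verifies the defining property of the ratio:

```python
import math

# The Golden Ratio: phi = (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2

# Defining property: for a line cut into segments a > b,
# (a + b) / a == a / b == phi, which is equivalent to phi**2 == phi + 1.
a = phi   # take the longer segment to be phi units
b = 1.0   # and the shorter segment to be 1 unit

print(phi)                  # -> 1.618033988749895
print((a + b) / a, a / b)   # both ratios equal phi
```

The algebraic identity phi² = phi + 1 is what makes phi the unique positive number with this self-similar segment property.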
The Divine Proportion can be found in mathematics, nature, architecture, and art throughout history.
Famous artists who have used the Golden Ratio:
• Michelangelo
• Leonardo Da Vinci
• Georges Seurat
• Sandro Botticelli
Divine Proportion in Art
Golden Ratio History
The Golden Ratio has technically existed in nature and mathematics for all of time. But when it was discovered and by whom specifically is unknown. Its discovery can be traced back to multiple
instances as far back as Ancient Greece. Since then, it has been discovered and rediscovered in principles of mathematics, architecture, nature, and art.
Hence its many names, such as the “extreme and mean ratio” used by Greek mathematician Euclid of Alexandria in his mathematical textbook Elements (c. 300 BC). In the 15th century, Renaissance
artist Leonardo Da Vinci illustrated what Italian mathematician Luca Pacioli referred to as the divine proportion in his book De Divina Proportione.
Some say the oldest use of the Divine Proportion can be found in the construction of the Great Pyramids. Phidias (500 BC – 432 BC), a sculptor who was one of the designers of the Parthenon, is said
to have used the Divine Ratio within the Doric-style temple.
Parthenon • Golden Ratio examples
A term often correlated with the ratio is the Fibonacci Sequence. The Fibonacci Sequence, named after Leonardo Fibonacci, was demonstrated in his mathematical textbook Liber Abaci. In that text,
Fibonacci introduced the Arabic decimal system, a much more efficient mathematical system than the commonly used Roman numerals.
The Fibonacci sequence is a sequence of numbers whose consecutive ratios trend toward the Golden Ratio, 1.618… Check out this video produced by PBS to learn more about how the Fibonacci sequence was demonstrated and its
relevance to the Divine Ratio.
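That convergence is easy to see in a few lines (a sketch of the idea, not taken from the PBS video):

```python
def fib_ratios(n):
    """Ratios of consecutive Fibonacci numbers, which trend toward phi ~ 1.618."""
    a, b, ratios = 1, 1, []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

# The ratios oscillate around and rapidly close in on the Golden Ratio:
print(fib_ratios(8))
```

Each successive ratio alternates above and below phi, with the error shrinking roughly geometrically.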
The Golden Ratio History • Is It Myth or Math?
While the Fibonacci sequence undoubtedly had relevance to the evolution in understanding of the Golden Ratio, it was not until the 1600s that mathematicians such as Johannes Kepler and others made
the connection.
What is the Golden Ratio in Nature and Design?
Golden Ratio examples
Now that we’ve laid down the history of the Divine Proportion and its definition, what is the ratio used for in design? Many describe the ratio as the most pleasing proportions to the human eye.
Thus, it is incorporated into much of man-made design such as architecture and art, but it can also be found in the natural world.
To understand this, let’s look at a few examples.
One of the most famous examples of the Divine Proportion is in spirals found in nature. While some are not exact examples of the Golden Ratio but rather approximations, there are occurrences of the
exact Divine Proportion in spirals found in pinecones, pineapples, roses, succulents, and more.
Golden Ratio examples • Pinecone
Since we already talked about the Parthenon, what is the Golden Ratio used for in more modern architectural design? Let’s take a look.
The United Nations Secretariat Building in New York City was designed with the ratio in mind. Starting with the larger bottom sections, you can find the ratio. Looking more closely
at the shape of the windows, you can also find it.
Golden Ratio composition • Secretariat Building in New York City
Finally, the Divine Proportion can be found in artworks from the Renaissance to modern art. You may have seen the ratio in modern art examples. However, many are approximations rather than precise
uses of the Golden Ratio as exactly 1.618…
Perhaps the most exact and intentional use of it among prominent artists is by Salvador Dalí in his 1955 painting The Sacrament of the Last Supper. Dalí used the Divine Proportion in two ways within
the painting.
First, the dimensions of the painting itself form a Golden Ratio. Furthermore, the dodecahedron in the background of the painting, measured from its edges, corresponds with the Golden Ratio.
The Sacrament of the Last Supper (1955) • Golden Ratio art
The Golden Ratio has taken on a bit of a mythical force, as many have claimed to find it across mathematics, science, and art. However, you cannot help but wonder whether the idea of it as the most
pleasing proportion to the human eye has led mere approximations of the ratio to pass as the Golden Ratio.
It is true that the ratio can be found in science, nature, art, and design across the board. That being said, not all claims of the Golden Ratio are accurate. Hopefully this article has given
you enough context to understand what is and what is not the Golden Ratio, and how, even in approximation, it has influenced design throughout history.
Up Next
Rules of Composition Within the Frame
If your curiosity in the Golden Ratio derives from a love for photography or cinematography, you may want to check out our next article. We dive deep into the rules and principles of composition as
it pertains to camerawork and how to tell better stories within a single shot.
Using this algorithm I can achieve the league of Heroes in the hero walk | Java development combat - Moment For Technology
This is the 8th day of my participation in the More text Challenge. For details, see more text Challenge
This article is participating in the “Java Theme Month – Java Development in action”. See the link to the event for more details
The A* (A-Star) algorithm is the most effective direct search method for finding the shortest path in a static road network, and it is also an effective algorithm for many other search problems. The
closer the estimated distance is to the actual value, the faster the final search.
The basic concept
• First of all, the algorithms we encountered most at university were Dijkstra, Floyd, breadth-first search, and depth-first search. We will study those in later articles, but today we focus on the A*
algorithm. A* is a heuristic algorithm: unlike the algorithms above, it takes into account the cost toward the target node as well as the cost from the starting node.
• In the A* algorithm we define some properties for each node, the most basic being the three base values described below ("three base values" is a term I define myself). What makes the algorithm
heuristic is that, when exploring a path, we not only choose the node closest to the starting point but also consider the cost toward the target node.
The three base values
• Some of the concepts above may be confusing, so let's start with the definitions.
F = G + H: the total cost of a node, i.e., the sum of the costs measured toward the starting node and toward the target node. G: the actual cost from the starting node to this node. H: the estimated
cost from this node to the target node. (Note that H is an estimate, because before searching we cannot determine the exact path to the target node and so cannot obtain H exactly. This article
mentions three methods of estimating it, of which the most widely used is the Manhattan distance.)
Calculating the three values
Conventions
• On the square map, we agree that a horizontal or vertical move costs 10 units.
• On the square map, a diagonal move costs 14 units.
• By common sense, we cannot cut diagonally past the corner of an obstacle (a wall, river, etc.). Each moving object occupies its own space; as shown in the figure below, S' in the move S–>E
would overlap the Q (wall) square.
• Away from corners, diagonal movement can be set as passable or impassable as needed; here we set it as passable.
Calculating G
• In the definition above we listed the formula, which may not be entirely clear, so let's explain it in detail here.
• G represents the actual cost of moving from the start node to the current node, taking impassable nodes into account. In the figure below, S is the start, E is the target, N denotes a node, and
the black squares are walls (sets of impassable nodes). By our convention we cannot cut a corner diagonally, but elsewhere diagonal moves are allowed. With this convention we can work out figure 3:
□ S–>N[1] costs 10.
□ S–>N[2] costs 14, because the move from S to N[2] does not cut a corner.
□ The move N[3]–>N[4] obviously cuts a corner, so it must go around, and its cost is 20.
Calculating H
• The H value is the opposite of the G value: it is the estimated cost from this node to the target node.
First, G is measured from the start node, while H is measured toward the target node. Second, G is an actual value, while H is an estimate. Finally, the calculation of G allows diagonal moves, but H is computed using only horizontal and vertical steps.
• In figure 4, H for N[2]–>E is 40 + 10, and 40 of that estimated path passes through the wall; since H is only an estimate, we do not consider walls. That completes the basic definitions.
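As a small sketch of these definitions (the function names are mine, not from the article), the three values under the grid convention above (10 per straight step, 14 per diagonal, Manhattan distance for H) can be written as:

```python
def h_manhattan(node, goal):
    """H: estimated cost to the goal using the Manhattan distance.
    Only horizontal/vertical steps are counted, at 10 per square,
    and walls are ignored, matching the article's convention."""
    (x1, y1), (x2, y2) = node, goal
    return 10 * (abs(x1 - x2) + abs(y1 - y2))

def step_cost(a, b):
    """G increment between adjacent squares: 10 straight, 14 diagonal."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return 14 if dx == 1 and dy == 1 else 10

def f_value(g, node, goal):
    """F = G + H for a node reached with accumulated cost g."""
    return g + h_manhattan(node, goal)

print(step_cost((2, 3), (3, 3)), step_cost((2, 3), (3, 4)))  # -> 10 14
```

Note that H deliberately passes "through" walls: that is exactly what keeps it a cheap estimate rather than an exact distance.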
The two lists
• In the A* pathfinding process we need two sets: one we call the openList, and the other we call the closedList.
• By analogy: when we go shopping at a supermarket, we push a trolley and put the things we like into it. Before checkout we may swap something already in the trolley for a better-value equivalent
product.
• A* works the same way. The openList is like the shopping cart: we add the nodes we see and like, but adding a node to the cart does not necessarily mean we will buy it. As the search proceeds we
gradually replace nodes already selected with "more affordable" ones, i.e., nodes reached more cheaply. What we finally pack into our bags at the checkout is what we take home, and that corresponds
to what ends up in the closedList.
The pathfinding process
^The images in this section are taken from the following articles, which were also references for the approach described below:
□ Blog.csdn.net/zgwangbo/ar…
□ Blog.csdn.net/hitwhylz/ar…
• With the agreed definitions introduced above, the process that follows should be easy to understand.
The initial map
• In Figure 5, the blue squares on the map represent walls, as defined above. The blue square on the left is S (the start node), and the red square on the right is E (our target node). The dots mark
one of the best paths from S to E. Notice that the path never cuts a corner diagonally, but it does move diagonally in other sections. Below, nodes are referred to by their coordinates in Figure 5;
for example, the start node is called (2,3).
□ Blue squares are impassable.
□ A cyan-bordered square has been added to the openList.
□ A highlighted border means the square has been added to the closedList.
□ Cyan marks the current node.
□ Red marks the target node.
Recursive search
• First of all, in normal pathfinding the start and the target are never the same node; but if the current node is in fact the target node on the map, then the path is simply the current node.
• Select the usable nodes around the current node. For each one not already in the openList, calculate its three base values and add it to the openList; at the same time, set its parent to the
current node.
• If a surrounding node is already in the openList and the stored G value is greater than the G value obtained via the current node, update the entry in the openList. Otherwise, do not add or
update it.
• After processing the surrounding nodes, add the current node (initially the start node) to the closedList, then select the node with the smallest F value from the openList and repeat the three
steps above.
Here is the process
• Following figure 5, from the start node (2,3) we obtain the list of surrounding nodes shown in figure 6, and we analyze one by one whether to add each to the openList.
• By inspection, all eight surrounding nodes are valid and not yet in the openList, so we add them, calculate their three base values, and set their parent to (2,3), as shown in figure 7.
• From figure 7 we can see that among the nodes around the start node (2,3), the F value of (3,3) is the lowest in the openList. So we remove (2,3) from the openList and add it to the closedList;
(2,3) is now shown highlighted to indicate it has been closed.
• Node (2,3) has finished its mission. Nodes added to the closedList are never considered again, so we can treat them as walls. We then select the node with the lowest F value, (3,3), from the
openList as the new current node and repeat the process. This time we find that the squares above-right, right, and below-right of (3,3) are walls, and that (2,3) to its left is already in the
closedList, so those four points need no consideration. That leaves (2,2), (3,2), (2,4), and (3,4). But all four of these points happen to be in the openList already. Following the process above,
we compare the G value of reaching each of these four points via (3,3) with the G value of the corresponding point already in the openList; whichever is cheaper stays. As can be seen from figure 8,
the G values of the four newly computed points are all greater than the G values of the corresponding points in the openList, so we give up and discard these four candidates. At the end of this
round we add (3,3) to the closedList. (If the G value of a new node had been less than that of the corresponding point in the openList, we would update the openList entry, replacing the stored node
with the new one. Note that the new node differs from the stored point in its G value and its parent.)
• The reason the G value of each new node increases by 10 here is that the path started at (2,3) and the current node is (3,3). For example, if we calculate (2,2), the G value is actually for the
path (2,3)–>(3,3)–>(2,2), so it is 10 plus 14.
• From figure 9 we can see that the minimum F value in our openList now belongs to (3,4), and the steps above are repeated from there.
• The loop ends when our target node becomes the current node, at which point we can trace back through the parent pointers in the closedList to recover the entire path.
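The whole procedure above can be condensed into a short sketch. The article is part of a Java series, but this compact Python version (using a binary heap to pick the lowest-F node from the openList) is my own condensation, not the author's source code:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* following the article's conventions: 10 per straight step,
    14 per diagonal, Manhattan-distance H, and no cutting diagonally past a
    wall corner. grid[y][x] == 1 marks a wall; nodes are (x, y) tuples."""
    def h(n):
        return 10 * (abs(n[0] - goal[0]) + abs(n[1] - goal[1]))

    def neighbors(n):
        x, y = n
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                    continue
                if grid[ny][nx]:
                    continue
                # Disallow squeezing diagonally past a wall corner.
                if dx and dy and (grid[y][nx] or grid[ny][x]):
                    continue
                yield (nx, ny), 14 if dx and dy else 10

    open_heap = [(h(start), start)]   # the "openList", keyed by F = G + H
    g = {start: 0}                    # best known G per node
    parent = {start: None}
    closed = set()                    # the "closedList"
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:              # walk the parents back to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in neighbors(node):
            new_g = g[node] + cost
            if nbr not in g or new_g < g[nbr]:   # keep the cheaper G, as in the text
                g[nbr] = new_g
                parent[nbr] = node
                heapq.heappush(open_heap, (new_g + h(nbr), nbr))
    return None   # no path exists

grid = [[0, 0, 0],
        [0, 1, 0],   # one wall square at (1, 1)
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 2))
```

Pushing duplicate heap entries and skipping already-closed nodes is a common substitute for updating openList entries in place. Also note that a Manhattan estimate can exceed the true cost once diagonal moves are allowed, so the returned path is good but not always guaranteed optimal.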
Limitations
• The key lies in the choice of the evaluation function h(n). If the estimate satisfies h(n) <= the actual distance from n to the target node, many points are searched, the search range is large,
and efficiency is low, but the optimal solution is guaranteed. If h(n) = d(n), that is, the estimated distance equals the shortest distance, the search proceeds strictly along the shortest path and
search efficiency is highest. If the estimate exceeds the actual value, fewer points are searched, the range is smaller, and efficiency is higher, but the optimal solution can no longer be
guaranteed.
The source code
Source code download: see the link in the original post.
References:
□ A* translation: blog.csdn.net/coutamg/art…
□ Other minimum-path algorithms: www.cnblogs.com/biyeymyhjob…
□ Heuristic algorithms: baike.baidu.com/item/%E5%90…
stack<T, Sequence>
Categories: containers, adaptors Component type: type
A stack is an adaptor that provides a restricted subset of Container functionality: it provides insertion, removal, and inspection of the element at the top of the stack. Stack is a "last in first
out" (LIFO) data structure: the element at the top of a stack is the one that was most recently added. [1] Stack does not allow iteration through its elements. [2]
Stack is a container adaptor, meaning that it is implemented on top of some underlying container type. By default that underlying type is deque, but a different type may be selected explicitly.
#include <cassert>
#include <stack>
using namespace std;

int main() {
  stack<int> S;
  S.push(8); S.push(7); S.push(4);
  assert(S.size() == 3);
  assert(S.top() == 4); S.pop();
  assert(S.top() == 7); S.pop();
  assert(S.top() == 8); S.pop();
  assert(S.empty());
}
Defined in the standard header stack, and in the nonstandard backward-compatibility header stack.h.
Template parameters
│Parameter│ Description │Default │
│T │The type of object stored in the stack. │ │
│Sequence │The type of the underlying container used to implement the stack. │deque<T>│
Model of
Assignable, Default Constructible
Type requirements
• T is a model of Assignable.
• Sequence is a model of Back Insertion Sequence.
• Sequence::value_type is the same type as T.
• If operator== is used, then T is a model of Equality Comparable
• If operator< is used, then T is a model of LessThan Comparable.
Public base classes
None.
│ Member │ Where defined │ Description │
│value_type │stack │See below. │
│size_type │stack │See below. │
│stack() │Default Constructible│The default constructor. Creates an empty stack. │
│stack(const stack&) │Assignable │The copy constructor. │
│stack& operator=(const stack&) │Assignable │The assignment operator. │
│bool empty() const │stack │See below. │
│size_type size() const │stack │See below. │
│value_type& top() │stack │See below. │
│const value_type& top() const │stack │See below. │
│void push(const value_type&) │stack │See below. │
│void pop() [3] │stack │See below. │
│bool operator==(const stack&, const stack&)│stack │See below. │
│bool operator<(const stack&, const stack&) │stack │See below. │
New members
These members are not defined in the Assignable and Default Constructible requirements, but are specific to stack.
│ Member │ Description │
│value_type │The type of object stored in the stack. This is the same as T and Sequence::value_type. │
│size_type │An unsigned integral type. This is the same as Sequence::size_type. │
│bool empty() const │Returns true if the stack contains no elements, and false otherwise. S.empty() is equivalent to S.size() == 0. │
│size_type size() const │Returns the number of elements contained in the stack. │
│value_type& top() │Returns a mutable reference to the element at the top of the stack. Precondition: empty() is false. │
│const value_type& top() const │Returns a const reference to the element at the top of the stack. Precondition: empty() is false. │
│void push(const value_type& x) │Inserts x at the top of the stack. Postconditions: size() will be incremented by 1, and top() will be equal to x. │
│void pop() │Removes the element at the top of the stack. [3] Precondition: empty() is false. Postcondition: size() will be decremented by 1. │
│bool operator==(const stack&, const │Compares two stacks for equality. Two stacks are equal if they contain the same number of elements and if they are equal element-by-element. This is a global │
│stack&) │function, not a member function. │
│bool operator<(const stack&, const │Lexicographical ordering of two stacks. This is a global function, not a member function. │
│stack&) │ │
[1] Stacks are a standard data structure, and are discussed in all algorithm books. See, for example, section 2.2.1 of Knuth. (D. E. Knuth, The Art of Computer Programming. Volume 1: Fundamental
Algorithms, second edition. Addison-Wesley, 1973.)
[2] This restriction is the only reason for stack to exist at all. Note that any Front Insertion Sequence or Back Insertion Sequence can be used as a stack; in the case of vector, for example, the
stack operations are the member functions back, push_back, and pop_back. The only reason to use the container adaptor stack instead is to make it clear that you are performing only stack operations,
and no other operations.
[3] One might wonder why pop() returns void, instead of value_type. That is, why must one use top() and pop() to examine and remove the top element, instead of combining the two in a single member
function? In fact, there is a good reason for this design. If pop() returned the top element, it would have to return by value rather than by reference: return by reference would create a dangling
pointer. Return by value, however, is inefficient: it involves at least one redundant copy constructor call. Since it is impossible for pop() to return a value in such a way as to be both efficient
and correct, it is more sensible for it to return no value at all and to require clients to use top() to inspect the value at the top of the stack.
See also
queue, priority_queue, Container, Sequence
The error of dot product between two matrices
See my following example:
In[10]:= Array[a, 4, 0] . Array[PauliMatrix, 4, 0]
Out[10]= {{a[0] + a[3], a[1] - I a[2]}, {a[1] + I a[2], a[0] - a[3]}}
In[13]:= Array[PauliMatrix, 4, 0] . Array[a, 4, 0]
During evaluation of In[13]:= Dot::dotsh: Tensors {{{1,0},{0,1}},{{0,1},{1,0}},{{0,-I},{I,0}},{{1,0},{0,-1}}} and {a[0],a[1],a[2],a[3]} have incompatible shapes.
Out[13]= {{{1, 0}, {0, 1}}, {{0, 1}, {1, 0}}, {{0, -I}, {I, 0}}, {{1,
0}, {0, -1}}} . {a[0], a[1], a[2], a[3]}
How can I fix the second method above?
Regards, Zhao
The array Array[PauliMatrix, 4, 0] has three levels, the first of length 4 and the last with length 2. The array Array[a, 4, 0] has length 4. You can multiply them only in one direction, not in the
other. To do what you have in mind, you need a generalization of Dot, with the instruction to use only the first level of the Pauli array:
Inner[Times, Array[PauliMatrix, 4, 0], Array[a, 4, 0], Plus, 1]
The Array[PauliMatrix, 4, 0] is not a matrix, it is a list of 4 matrices.
Part of my problem with Subscript is that I am very old guard, and I first learned the language the ASCII way. People who came to it later often prefer to have something that looks more like
familiar mathematical notation. The real drawback that comes to mind right now is that it is much more complicated to write patterns with Subscript. Oh, another: compare the simplicity:
Array[Subscript[a, #1, #2] &, {3, 3}]
Array[a, {3, 3}]
But, as you can see, the following two methods give the same result:
In[2]:= a[0] PauliMatrix[0] + a[1] PauliMatrix[1] +
a[2] PauliMatrix[2] + a[3] PauliMatrix[3]
Out[2]= {{a[0] + a[3], a[1] - I a[2]}, {a[1] + I a[2], a[0] - a[3]}}
In[3]:= PauliMatrix[0] a[0] + PauliMatrix[1] a[1] +
PauliMatrix[2] a[2] + PauliMatrix[3] a[3]
Out[3]= {{a[0] + a[3], a[1] - I a[2]}, {a[1] + I a[2], a[0] - a[3]}}
Therefore, I want to rewrite them into corresponding more concise forms.
Regards, Zhao
Thank you. It works.
Now I changed to use Subscript as follows for nicer forms in the ultimate results:
In[20]:= Inner[Times, Array[PauliMatrix, 4, 0],
Array[Subscript[a, #] &, 4, 0], Plus, 1]
Out[20]= {{Subscript[a, 0] + Subscript[a, 3],
Subscript[a, 1] - I Subscript[a, 2]}, {Subscript[a, 1] +
I Subscript[a, 2], Subscript[a, 0] - Subscript[a, 3]}}
On the other hand, I also noticed the following comment here:
If you want to do anything with the variables, indeed do not use Subscript, it will generally be very confusing and frustrating...
The new function Indexed
might be useful as well...
So, I would like to know whether I should preferentially choose to use Indexed instead of Subscript?
Regards, Zhao
I have thought of another solution, using Dot and Inactivate:
Array[Inactive[PauliMatrix], 4, 0] .
Array[Subscript[a, #] &, 4, 0] // Activate
I tend to avoid Subscript in calculations too. If needed for better display, it is easy to restore subscripts with a replacement rule:
prodForCalculation =
Inner[Times, Array[PauliMatrix, 4, 0], Array[a, 4, 0], Plus, 1]
forDisplay = a[i_] :> Subscript[a, i];
prodForDisplay = prodForCalculation /. forDisplay
I am not very familiar with Indexed.
I tend to avoid Subscript in calculations too.
What are the disadvantages of using Subscript for calculations? Is it not suitable to appear as an expression in calculations? Furthermore, even if this is true, why do many people still use it this way?
Applying Graph Analytics to Game of Thrones
by Amy Hodler & Mark Needham, Neo4j (guest author), 12 June 2019
Tags: Graph DBMS, Neo4j
Neo4j provides native graph storage, compute, and analytics in a unified platform. Our goal is to help organizations reveal how people, processes, locations and systems are interrelated using a
connections-first approach. The Neo4j Graph Platform powers applications tackling artificial intelligence, fraud detection, real-time recommendations and master data.
Merging Transactions and Analytics Processing
The lines between transaction and analytics processing have been blurring for some time. Online transaction processing (or OLTP) operations are typically short activities like booking a ticket or
crediting an account. It implies a lot of low-latency query processing and high data integrity. This has been approached very differently from online analytical processing (OLAP), which facilitates
more complex queries and analysis over historical data with multiple data sources, formats, and types.
Modern data-intensive applications now combine real-time transactional operations with analytics. This merging of processing has been driven by advances in software as well as lower-cost,
large-memory hardware. Bringing together analytics and transactions enables continual analysis as a natural part of regular operations.
We can now simplify our architecture by using a single, unified platform for both types of processing. This means our analytical queries can take advantage of real-time data and we can streamline the
iterative process of analysis in what has been described as a hybrid transactional and analytical processing (HTAP).
A hybrid platform supports the low latency query processing and high data integrity required for transactions while integrating complex analytics over large amounts of data.
Graph Analytics & Algorithms
As data becomes increasingly interconnected and systems increasingly sophisticated, it's essential to make use of the rich and evolving relationships within our data. If you're already using a graph
database, this is a great time to add graph analytics to your practice to reveal structural and predictive patterns in your data.
At this highest level, graph analytics are applied to understand or forecast behavior in dynamic groups. This requires understanding a group’s connections and topologies. Graph algorithms accomplish
this by examining the overall nature of networks through their connections using mathematics specifically developed for using connections. With this approach, we can understand the structure of
connected systems and model their processes.
Using graphs, we can model dynamic environments from financial markets to IT services, find more predictive elements for machine learning to combat financial crimes, or uncover communities for
personalized experiences and recommendations. Graph analytics help us infer relationships and predict behavior.
Categories of Graph Algorithms
Graph algorithms provide one of the most potent approaches to analyzing connected data because their mathematical calculations are specifically built to operate on relationships. There are many types
of graph algorithms and categories. The three classic categories consider the overall nature of the graph: pathfinding, centrality, and community detection. However, other graph algorithms such as
similarity and link prediction algorithms consider and compare specific nodes.
• Pathfinding (and search) algorithms are fundamental to graph analytics and algorithms and explore routes between nodes. These algorithms are used to identify optimal routes for uses such as
logistics planning, least cost routing, and gaming simulation.
• Centrality algorithms help us understand the roles and impact of individual nodes in a graph. They’re useful because they identify the most important nodes and help us understand group dynamics
such as credibility, accessibility, the speed at which things spread, and bridges between groups.
• Community algorithms evaluate related sets of nodes, finding communities where members have more relationships within the group. Identifying these related sets reveals clusters of nodes, isolated
groups, and network structure. This helps infer similar behavior or preferences of peer groups, estimate resiliency, find nested relationships, and prepare data for other analyses.
• Similarity algorithms look at how alike individual nodes are. By comparing the properties and attributes of nodes, we can identify the most similar entity and score differences. This helps build
more personalized recommendations as well as develop ontologies and hierarchies.
• Link Prediction algorithms consider the proximity of nodes as well as structural elements, such as potential triangles between nodes, to estimate the likelihood of a new relationship forming or
that undocumented connections exist. This class of algorithms has many applications from drug repurposing to criminal investigations.
Applying Graph Analytics to Game of Thrones
Now let’s dive into applying graph algorithms on a dataset of everyone's favorite fantasy show, Game of Thrones.
NEuler - The Graph Algorithms Playground
We'll use the NEuler Graph Algorithms Playground Graph App to do this. NEuler provides an intuitive UI that lets users execute various graph algorithms without typing any code. It’s a Neo4j labs
project to help people quickly get familiar with graph algorithms and explore interesting data. More information about the app, including installation instructions, is available in the release blog
post.
Once NEuler is installed, we'll need to load the Game of Thrones Sample Graph, as shown in the screenshot below:
This dataset is based on Andrew Beveridge's Network of Thrones, and contains characters and their interactions across the different seasons.
Analyzing Game of Thrones
With the dataset loaded we're ready to start analyzing it. Our focus will be on season 2 of the TV show, but we will occasionally show the results from other seasons for comparison.
We'll use community detection algorithms to find clusters of characters in Westeros and centrality algorithms to find the most important and influential characters.
The Louvain Modularity algorithm detects communities in networks based on a heuristic that maximizes modularity scores. (Modularity scores range from -1 to 1 and compare the density of relationships
inside communities to the density of relationships between communities.) If we run it for season 2 of the Game of Thrones dataset and select the visualization output format, we'll see the following graph:
In the upper left purple cluster, we can see the Daenerys group is off on their own, disconnected from everybody else. The people in that cluster didn't interact with anybody else. We initially
thought there must be a problem with the data or algorithm, and ran another community detection algorithm, Connected Components, to confirm our findings.
The Connected Components algorithm is a community detection algorithm that detects clusters of users based on whether there's any path between them. If we run that algorithm, we'll see the following output:
Here, we have just two communities: the one with Daenerys on the left, and the vast majority of other characters on the right. This confirms our findings from the Louvain Modularity algorithm, and if
we stretch our memory back to season 2, we'll remember that Daenerys was off on an island away from the rest of the main characters.
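For intuition, the Connected Components idea can be sketched in a few lines of plain Python (a toy stand-in, not Neo4j's implementation). The graph below mimics the season 2 situation, with Daenerys's group cut off from everyone else:

```python
from collections import deque

def connected_components(adj):
    """Return the connected components of an undirected graph
    given as an adjacency dict {node: set(neighbours)}."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        # BFS from an unvisited node collects one whole component.
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.add(node)
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        components.append(comp)
    return components

# Toy interaction graph: Daenerys's group is isolated from the rest.
graph = {
    "Daenerys": {"Jorah"}, "Jorah": {"Daenerys"},
    "Tyrion": {"Cersei", "Joffrey"}, "Cersei": {"Tyrion", "Joffrey"},
    "Joffrey": {"Tyrion", "Cersei"},
}
print(len(connected_components(graph)))  # 2
```

Any path between two characters puts them in the same component, which is exactly why the isolated Daenerys cluster shows up as its own community.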
Another way of analyzing community structure is to compute the number of triangles that a user is a part of. A triangle in this graph means that Character A interacts with Character B, Character B
interacts with Character C, and Character C interacts with Character A. We can see an example of a triangle in the diagram below:
If we run the Triangle Count algorithm and select the table output format, we'll see the following output:
We'll also notice that this algorithm returns a coefficient score. This clustering coefficient measures how well a node's neighbors are connected compared to the maximum they could be. A score
of 1 would indicate that all of a node's neighbors interact with each other. So while Joffrey scores very well on overall triangles (raw number of neighbors interacting), we notice that the neighbors of
Littlefinger and Sansa have a higher probability (clustering coefficient) of being connected.
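Triangle counting and the clustering coefficient are also easy to sketch in plain Python (again a toy illustration, not the Neo4j algorithm):

```python
from itertools import combinations

def triangles(adj, node):
    """Number of triangles through `node`: pairs of its neighbours
    that are themselves connected."""
    return sum(1 for a, b in combinations(adj[node], 2) if b in adj[a])

def clustering_coefficient(adj, node):
    """Fraction of possible neighbour pairs that are actually linked."""
    k = len(adj[node])
    if k < 2:
        return 0.0
    return triangles(adj, node) / (k * (k - 1) / 2)

# A---B, B---C, C---A form one triangle; D hangs off A.
adj = {
    "A": {"B", "C", "D"}, "B": {"A", "C"},
    "C": {"A", "B"}, "D": {"A"},
}
print(triangles(adj, "A"))                         # 1
print(round(clustering_coefficient(adj, "A"), 3))  # 0.333
```

Node A sits in one triangle, but only one of the three possible pairs among its neighbours is linked, so its clustering coefficient is 1/3. That is the Joffrey-versus-Sansa distinction in miniature: many triangles in absolute terms, yet a modest coefficient.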
Next, we’ll use centrality algorithms to find important characters.
Centrality Algorithms
The simplest of the centrality algorithms is Degree Centrality, which measures the number of relationships connected to a node. We can use this algorithm to find the characters that have the most interactions.
When we run the algorithm we'll see the following output:
Joffrey and Tyrion are interacting with the largest number of people, which tells us that season 2 of the show is mainly based on these characters. It doesn't necessarily mean that these are the most
influential characters, but they're certainly the ones who are talking a lot!
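As a rough stand-alone sketch of Degree Centrality (plain Python on a made-up interaction graph, not the NEuler output):

```python
def degree_centrality(adj):
    """Degree of each node, normalised by the maximum possible
    degree (n - 1), as many library implementations do."""
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical, simplified interaction graph (undirected, symmetric).
interactions = {
    "Joffrey": {"Tyrion", "Cersei", "Sansa"},
    "Tyrion": {"Joffrey", "Cersei", "Bronn"},
    "Cersei": {"Joffrey", "Tyrion"},
    "Sansa": {"Joffrey"},
    "Bronn": {"Tyrion"},
}
dc = degree_centrality(interactions)
ranked = sorted(dc, key=dc.get, reverse=True)
print(ranked[:2])  # Joffrey and Tyrion lead
```

Counting relationships is all there is to it, which is why a high degree score means "talks a lot" rather than "is influential."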
The Betweenness Centrality algorithm detects the amount of influence a node has over the flow of information in a graph. It’s often used to find nodes that serve as a bridge from one part of a graph
to another.
We can use this algorithm to find people who are well connected to sub-communities within Westeros. If we run the algorithm and select the chart output type, we'll see the following output:
The chart option is displayed when applicable and is a nice way to look at many of the centrality algorithms, where ranking is more significant than actual scores. We see here that Joffrey has
fallen from rank 1 for degree centrality to rank 6, while Arya has moved up from rank 5 for degree centrality to rank 1. In season 2 Arya is on the road, and acts as a bridge
node between the people she interacts with and those in other parts of the kingdom.
We can also take a peek forward to season 7 and see how things have changed:
Jon is now overwhelmingly the top-ranked character based on betweenness centrality. His score is twice as high as the next person. He's likely acting as the glue between groups of people who don't
interact with people outside their core group, except with Jon.
Another measure of importance is PageRank, which measures overall, including indirect, influence. It will find not only people who are significant in their own right but also those who are
interacting with more influential people.
For the above PageRank results we see some familiar faces - Joffrey and Tyrion also ranked highly for Degree Centrality and Arya was top-ranked for Betweenness Centrality. Note that for other
datasets, especially those with complex relationships, we would likely see more variation in centrality rankings.
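For the curious, the heart of PageRank is a short power iteration. Here's a simplified plain-Python sketch on a three-node toy graph (not the Neo4j implementation, which also handles dangling nodes and relationship weights):

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict
    {node: set(out-neighbours)}; assumes every node has out-links."""
    n = len(adj)
    rank = {node: 1.0 / n for node in adj}
    for _ in range(iters):
        # Each node keeps a teleport share, plus what its in-links send.
        new = {node: (1 - damping) / n for node in adj}
        for node, nbrs in adj.items():
            share = damping * rank[node] / len(nbrs)
            for nbr in nbrs:
                new[nbr] += share
        rank = new
    return rank

# B and C both point at A, so A inherits their influence.
adj = {"A": {"B"}, "B": {"A"}, "C": {"A"}}
ranks = pagerank(adj)
top = max(ranks, key=ranks.get)
print(top)  # A
```

A ranks highest not just because it has the most in-links, but because B (itself pointed at by A) keeps feeding influence back: that indirect component is what distinguishes PageRank from plain degree.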
Now let’s take a journey back in time and compare running PageRank on season 1, which gives the following output:
Ned was clearly the most influential character at this stage, but sadly it didn't last! By comparing results over segmented data (perhaps by time, geography, or demographic) we can reveal deeper insights.
Finally, let’s conclude this post by showing how to combine the results of community detection and centrality algorithms in the visualization output format.
The following diagram colors nodes based on their Louvain Modularity cluster and sizes them based on their PageRank score:
Now we can not only see the clusters, but also the most important characters in a particular cluster. Unsurprisingly we learn that Daenerys is the most important character in the isolated cluster.
We'll also see some other familiar faces including Arya and Tywin in the blue cluster, Tyrion, Cersei, and Joffrey in the yellow one, and Jon in the green one.
We hope you've had as much fun reading this analysis as we've had writing it. It’s fascinating that you can learn so much about Game of Thrones by looking only at its metadata.
If you’d like to learn more about graph analytics and their application, you can find practical examples and working code for Spark and Neo4j in the free digital copy of the O’Reilly Graph Algorithms book.

About the Authors:

Mark Needham is a graph advocate and developer relations engineer at Neo4j. He works to help users embrace graphs and Neo4j, building sophisticated solutions to challenging data problems. Mark has deep expertise in graph data, having previously helped to build Neo4j’s Causal Clustering system. He writes about his experiences of being a graphista on his popular blog at markhneedham.com/blog and tweets @markhneedham.

Amy E. Hodler is a network science devotee and AI and graph analytics program manager at Neo4j. She promotes the use of graph analytics to reveal structures within real-world networks and predict dynamic behavior. Amy helps teams apply novel approaches to generate new opportunities at companies such as EDS, Microsoft, Hewlett-Packard (HP), Hitachi IoT, and Cray Inc. Amy has a love for science and art with a fascination for complexity studies and graph theory. She tweets @amyhodler.
A Short Text Classification Model for Electrical Equipment Defects Based on Contextual Features
Issue Wuhan Univ. J. Nat. Sci.
Volume 27, Number 6, December 2022
Page(s) 465 - 475
DOI https://doi.org/10.1051/wujns/2022276465
Published online 10 January 2023
© Wuhan University 2022
0 Introduction
Power equipment inspection is essential to maintain the system's regular operation. The equipment defects found in the inspection will be presented in the power defect system. Determining the type of
equipment defects is a prerequisite for eliminating them. However, the present defect classification work is mainly completed by manual classification. As the scale of the power system continues to
expand, the number of devices is increasing exponentially, which significantly increases the inspection workload^[1-4]. Making better use of short text classification to improve the
efficiency of defect identification is therefore an urgent problem in the power industry^[5].
During the operation of electrical equipment, a large amount of defect data is generated^[6,7], usually recorded manually by inspectors and classified by professionals according to their experience. In
addition, these data are characterized by a lack of semantic information, data sparsity, and high dependency. Improving the short text classification model is therefore the key to classifying
defects in power equipment^[8].
With the development of deep learning, remarkable achievements have been made in computer vision^[9], speech recognition^[10,11], and text classification^[12]. Convolutional neural networks
^[13] (CNN) and recurrent neural networks^[14] (RNN) are the most commonly used deep learning architectures. However, CNN ignores dependency features among local information, and RNN is prone to the problems
of gradient vanishing and gradient explosion. Liu et al^[15] used CNN to classify short texts of electrical equipment defects, which reduced the classification error rate compared with
traditional machine learning classification methods. On this basis, scholars proposed the long short-term memory network^[16] (LSTM), which effectively solved the problems of RNN. Further,
bi-directional long short-term memory^[17] (BiLSTM) was developed to obtain contextual features from text sequences both forward and backward. Wei et al^[18] proposed a fault detection classification
method combining BiLSTM and CNN, which extracts local feature information through the max pooling layer in CNN but cannot extract global features. The network therefore needs further
optimization to retain global information.
Currently, graph neural networks (GNN) designed for short texts have achieved good results. Hao et al^[19] first transformed the text into a text-graph structure, obtained word embeddings by graph
convolution operations, and fed them to a classifier for text classification. Yao et al^[20] transformed the text classification problem into a node classification problem and applied GNN to corpus
graphs, which eventually achieved excellent text classification results. Hu et al^[21] modeled corpus-level latent topic, entity, and document graphs, while Ye et al^[22] operated on corpus-level
latent topic, document, and word graphs. Both connect documents to different types of entities, such as latent topics and named entities, but neither connects documents to one another, so they
cannot capture similarities between short texts.
For short text classification of electrical equipment defects, a large amount of irrelevant topic information involved in BiLSTM training will lead to degradation of classification performance. The
attention mechanism assigns weights according to word importance and highlights contextually essential information. The combination of BiLSTM and the attention mechanism can further improve
classification accuracy. But the ability of BiLSTM to capture contextual features is weak. Therefore, we introduce CNN to capture salient topic features and make full use of contextual features to
improve the classification accuracy. Due to the random initialization of weight values, the gradient descent method used by CNN may fall into a local optimum solution, for which an optimization
algorithm can be used to find the appropriate parameter values.
Above all, the main contributions of this paper are as follows:
1) In this paper, we propose a text classification model that combines BiLSTM with the Attention Mechanism and CNN optimized by the Genetic Algorithm. The feature vectors of the model inputs are
constructed and selected by the bidirectional encoder representation from transformers (BERT) model, thus improving the accuracy of the short text classification model.
2) In order to obtain crucial semantic information in the sequence, this paper integrates BiLSTM with the Attention Mechanism to capture the important semantic information in the sentence by
assigning different weights to the information extracted from the forward hidden layer and the backward hidden layer.
3) We introduce CNN to capture important local word order features from textual contextual features, and optimize CNN weight vectors with the help of the genetic algorithm to find the model with the
best weight values.
1 Model Architecture
The model architecture proposed in this paper is shown in Fig. 1, which includes five parts: encoding words with BERT, capturing semantic information with BiLSTM, giving different weights with the
attention mechanism, capturing salient features with CNN and Softmax classification.
First, the character embedding part uses the BERT model as the initialization method for text representation, converting words into word feature vectors. Second, the input to the BiLSTM layer is the
feature vector obtained from the word embedding layer. The contextual features are obtained from the forward and backward hidden layers while capturing the bidirectional semantic dependencies. Then,
the attention mechanism assigns higher weights to the words that affect the semantic information. Meanwhile, CNN extracts locally significant features while ultimately maintaining long-term
dependencies. Finally, the classification layer fuses the pooled features together to form a feature map, which is used for Softmax classification. Algorithm 1 shows the algorithmic representation of
the BAGC model architecture.
Algorithm 1 BAGC model algorithm flow
Input: text $S=\{x_1, x_2, \cdots, x_T\}$, $x_i\ (i\in[1,T])$
Output: class label $\hat{C}$
1. Input to the word embedding layer and convert to word vector form:
2. Input the word vectors to the BiLSTM layer:
3. Input the $h_t$ produced by the BiLSTM layer to the Attention layer, and obtain the implicit information through nonlinear transformation:
4. Randomly initialize the attention matrix $v$ and multiply by $u_t$ for normalization to form the attention weights:
5. Form the output:
6. Taking the text embedding vectors $\{s_1, s_2, \cdots, s_n\}$ as the input of CNN, a window of $h$ words is convolved through a filter to generate a new feature:
7. Using $y_i$ as the input of the Softmax layer, generate the probability of the corresponding category of the text:
8. Classify the text: $\hat{C}=\arg\max P(S|C)$
9. Return $\hat{C}$
1.1 BERT Word Vector Encoding
In this paper, WordPiece is used to segment the input sentence. Each word is processed into a vector form of words, texts, and positions and simultaneously added to the BERT coding layer. As shown in
Fig. 2, the sentence P, "Insulation aging of stator winding of Longhu line", is divided into several words: "Insulation," "aging," "of," "stator," "winding," "of," "Longhu," "line." They
correspond to Token layer information, Segment layer information, and Position layer information, respectively. The Token layer is the vector embedding of words, the Segment layer is used to
distinguish which sentence a token belongs to, and the Position layer is used to distinguish the position in the sequence.
Also, we set the input sentence length of the BERT encoding layer to 32 characters, where [CLS] and [SEP] together take 2 characters. When the length exceeds 30 characters, the remainder of the
sentence is cut off; when the length is less than 30 characters, <padding> is appended. Here, [CLS] represents the special symbol for the classification output, and [SEP] represents the end of a
sentence, each occupying one character. The calculation formula for the input layer is as follows:
$h_1 = E_{\mathrm{Token}} + E_{\mathrm{Segment}} + E_{\mathrm{Position}}$ (1)
For a text S consisting of T words, the text is converted into word vector form using the BERT pre-trained language model:
$S = \{e_1, e_2, \cdots, e_T\}$ (2)
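Eq. (1) is an element-wise sum of three lookup tables. A toy sketch in plain Python (random stand-ins for the learned embedding tables, and smaller sizes than the paper's length-32 input):

```python
import random

random.seed(0)
SEQ_LEN, DIM = 8, 4   # toy sizes; the paper pads/truncates to length 32

def rand_table(rows, cols):
    """Stand-in for a learned embedding table."""
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

token_emb = rand_table(SEQ_LEN, DIM)                  # E_Token
segment_emb = [[0.0] * DIM for _ in range(SEQ_LEN)]   # E_Segment: one sentence
position_emb = rand_table(SEQ_LEN, DIM)               # E_Position

# Eq. (1): the encoder input is the element-wise sum of the three layers.
h1 = [[t + s + p for t, s, p in zip(tr, sr, pr)]
      for tr, sr, pr in zip(token_emb, segment_emb, position_emb)]
print(len(h1), len(h1[0]))  # 8 4
```

With a single sentence the segment embedding contributes the same vector at every position, which is why it is shown here as a zero table for simplicity.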
1.2 BiLSTM Captures Bidirectional Semantic Dependencies
LSTM solves the problem of gradient vanishing and explosion to a certain degree. However, LSTM cannot extract contextual information, whereas BiLSTM can better capture bidirectional semantic
dependencies. Therefore, this paper captures the contextual features of the text through BiLSTM, which is composed of forward and backward LSTM units and gathers information from the two opposite
directions separately. The output of sentence P through the BERT encoding layer is $P=\{e_1, e_2, \cdots, e_T\}$, which is input to the BiLSTM layer, where the dimension of the matrix
$P$ is the number of batch training samples × the maximum sentence length × the number of BiLSTM hidden units. BiLSTM encodes the sentence P, and each output word vector contains contextual features. The
formulas are as follows:
$\overrightarrow{h}_i^p = \overrightarrow{LSTM}(\overrightarrow{h}_{i-1}^p, p_i)$ (3)
$\overleftarrow{h}_i^p = \overleftarrow{LSTM}(\overleftarrow{h}_{i+1}^p, p_i)$ (4)
$h_i^p = \overrightarrow{h}_i^p \oplus \overleftarrow{h}_i^p$ (5)
where $\overrightarrow{h}_i^p$ and $\overleftarrow{h}_i^p$ are the outputs of the forward and backward LSTM at time $i$, respectively, and $h_i^p$ is the output of the BiLSTM layer at time $i$.
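To make the bidirectional wiring of Eqs. (3)-(5) concrete, here is a toy sketch in plain Python; a single tanh recurrence deliberately stands in for the full LSTM gates, since only the forward/backward wiring and concatenation are the point:

```python
import math

def toy_cell(h_prev, x):
    """Stand-in for an LSTM cell: a single tanh recurrence.
    A real LSTM adds input/forget/output gates."""
    return [math.tanh(0.5 * h + 0.5 * v) for h, v in zip(h_prev, x)]

def bilstm(inputs, dim):
    zero = [0.0] * dim
    fwd, h = [], zero
    for x in inputs:                            # Eq. (3): left-to-right pass
        h = toy_cell(h, x)
        fwd.append(h)
    bwd, h = [None] * len(inputs), zero
    for i in range(len(inputs) - 1, -1, -1):    # Eq. (4): right-to-left pass
        h = toy_cell(h, inputs[i])
        bwd[i] = h
    # Eq. (5): concatenate the two directions at each position.
    return [f + b for f, b in zip(fwd, bwd)]

seq = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.5]]
out = bilstm(seq, dim=2)
print(len(out), len(out[0]))  # 3 positions, 4 features each (2 x dim)
```

Each position's output carries information from both directions, which is exactly the "bidirectional semantic dependency" the text describes.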
1.3 The Attention Mechanism Assigns Different Weights
The contribution of each word in the text sequence to the classification is different. For example, in sentence P, "Insulation aging of stator winding of Longhu line", after the BiLSTM layer
operation each word plays an equally important role, but "Insulation aging" obviously plays a more critical role in practice. Accordingly, this paper introduces the Attention Mechanism, which
calculates the correlation coefficients between words in the text sequence, and weights are assigned to the word vectors according to these correlations.
The input of the attention layer is $h_t$ generated by the BiLSTM layer, and the hidden information is obtained by nonlinear transformation. After the Attention Mechanism runs, the output of sentence
P is a word vector. The word vector of "Insulation aging" has a greater impact on text classification than other word vectors, so "Insulation aging" should be given a higher weight in the output word
vector. In this paper, we convert $h_t$ to $u_t$ by a fully connected layer operation, with the formula as follows:
$u_t = \tanh(W_h h_t + b_h)$ (6)
The word-to-word correlation coefficients are calculated with the help of a scoring function, which measures the correlation between words and is converted into a probability distribution:
$\alpha_t = \frac{\exp(u_t^{\mathrm{T}} v)}{\sum_t \exp(u_t^{\mathrm{T}} v)}$ (7)
Finally, the feature vector is formed after the weighting operation:
$s_i = \alpha_i h_i$ (8)
where $h_t$ is the feature vector output by the BiLSTM at time $t$, $b_h$ is the corresponding offset, $W_h$ is the weight coefficient matrix, $v$ is the
attention matrix initialized as circumstances warrant, $\alpha_t$ is the weight of each word in the sentence, and $s_i$ is the weighted output vector.
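Assuming the attention layer keeps one weighted vector per position (so the CNN stage can convolve over $\{s_1,\cdots,s_n\}$), Eqs. (6)-(8) can be sketched in plain Python; the matrices $W$, $b$, $v$ below are illustrative toy values, not trained parameters:

```python
import math

def attention(hs, W, b, v):
    """Score each BiLSTM state, softmax-normalise the scores,
    and weight each state by its attention coefficient."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]
    # Eq. (6): u_t = tanh(W_h h_t + b_h)
    us = [[math.tanh(z + bi) for z, bi in zip(matvec(W, h), b)] for h in hs]
    # Eq. (7): alpha_t = softmax(u_t^T v)
    scores = [sum(ui * vi for ui, vi in zip(u, v)) for u in us]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # Eq. (8): weight each hidden state by its attention coefficient
    ss = [[a * x for x in h] for a, h in zip(alphas, hs)]
    return ss, alphas

hs = [[0.2, -0.1], [0.9, 0.4], [0.1, 0.0]]   # toy BiLSTM hidden states
W, b, v = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [1.0, 1.0]
ss, alphas = attention(hs, W, b, v)
print(alphas.index(max(alphas)))  # 1 — the strongest state wins the weight
```

The second hidden state scores highest against $v$ and therefore receives the largest weight, mirroring how "Insulation aging" should dominate the sentence representation.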
1.4 CNN Extracts Salient Features
Based on the contextual features of the text, this paper uses CNN to further extract the salient topic features of the text sequence from its contextual attributes. The feature map is obtained by a
convolution operation, and the convolution result is down-sampled by the pooling layer to decrease the dimension of the convolution vector and avoid over-fitting.
In this paper, the convolution operation is performed using multiple convolution kernels, and the $\{s_1, s_2, \cdots, s_n\}$ obtained from the Attention layer is expressed as:
$s_{1:n} = s_1 \oplus s_2 \oplus \cdots \oplus s_n$ (9)
Passing a convolutional filter with an $h$-width window, the new features are obtained:
$y_i = f(W_k * s_{i:i+h-1} + b)$ (10)
where $\oplus$ is the concatenation operator, $b$ is the bias, $W_k$ is the weight matrix corresponding to the different convolution kernels, $i$ denotes the $i$-th eigenvalue, $h$
is the size of the convolution kernel, $f$ is the ReLU nonlinear activation function, and $y_i$ is the result of the convolution calculation.
Then, the extracted critical information is pooled. The largest eigenvalue is extracted in each sampling window, and all sampled eigenvalues are combined into $\{y_1, y_2, y_3, \cdots, y_n\}$, the
output of the CNN, as in formula (11):
$y = \sum_{i=1}^{n-h+1} \max(y_i)$ (11)
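Eqs. (9)-(11) amount to sliding a filter over concatenated word windows, applying ReLU, and pooling. A minimal single-filter sketch in plain Python (using max-over-time pooling; multiple filters would simply concatenate their pooled outputs):

```python
def conv_maxpool(vectors, W, b, h):
    """Slide a width-h filter over the word vectors, apply ReLU,
    then max-pool the resulting feature map."""
    feats = []
    for i in range(len(vectors) - h + 1):
        # Eq. (9): concatenate the h word vectors in the window.
        window = [x for vec in vectors[i:i + h] for x in vec]
        # Eq. (10): filter response plus bias, through ReLU.
        z = sum(w * x for w, x in zip(W, window)) + b
        feats.append(max(0.0, z))
    # Eq. (11): pool the feature map down to one value per filter.
    return max(feats)

# Four 2-d word vectors, one width-2 filter (so 4 weights).
vecs = [[0.1, 0.0], [0.5, 0.2], [0.3, -0.1], [0.0, 0.4]]
W, b = [1.0, 1.0, 1.0, 1.0], 0.0
print(round(conv_maxpool(vecs, W, b, h=2), 6))  # 0.9
```

The middle window $s_{2:3}$ gives the largest filter response, so pooling keeps 0.9; that one number is the filter's "most salient feature" for the whole sentence.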
1.5 Genetic Algorithm to Optimize CNN Parameters
The core of the genetic algorithm^[23-25] (GA) is parameter encoding, population initialization, and determination of the fitness function; the optimal solution is then obtained by search. The
classical CNN learning method uses the steepest descent algorithm, and its learning performance is strongly affected by the initial weight settings of the convolutional and fully
connected layers. The optimal weights obtained after selection, crossover, and mutation operations are used as the initial weights of the CNN, and the resulting learning performance is better
than with initial weights randomly selected by the steepest descent algorithm.
The main problem of training CNN with the steepest descent algorithm is falling into a local optimum. To solve this problem, we introduce the GA: the fundamental idea is to use the GA to
determine the initial weights of the CNN classifier. First, multiple sets of initial weights are selected, and each combination of weights is encoded as a chromosome. Different
weight combinations are generated by the selection, crossover, and mutation operations on chromosomes. Then, the fitness value of each chromosome is used to select the optimal weight combination, where
the fitness value is the CNN classification accuracy obtained with that weight combination.
The optimization improvement model process is shown in Fig. 3.
The steps of chromosome encoding and fitness evaluation are as follows:
1) decode the chromosomes to obtain the initial weights of the CNN's convolutional and fully connected layers;
2) train the CNN with the steepest descent algorithm for $d$ iterations;
3) calculate the CNN's accuracy as the fitness value of the corresponding chromosome.
The optimization algorithm is shown in Algorithm 2, where the parameter $d$ is the number of training iterations, $M$ is the population size, $P_C$ is the crossover probability, $P_M$ is the mutation
probability, $weight$ is the CNN weight, $learning\_rate$ is the learning rate, $fitness(x_i)$ is the fitness function, and $sorted()$ is the sorting function.
Algorithm 2 Genetic Algorithm to optimize the CNN network
Input: Genetic Algorithm population, $M$=20, $P_C$=0.65, $P_M$=0.05
Output: final population $G$
1. Encode chromosomes from initialized weight combinations
2. Do {
3. Initialize the CNN classifier, train it with the steepest descent algorithm, and update the network weights:
4. Using the accuracy of each network as the fitness of its chromosome, calculate and rank:
5. Perform genetic operations; the higher the fitness, the greater the probability of being selected:
6. } while (the fitness value meets the termination condition or the number of generations reaches the upper limit)
7. Return $G$
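The GA loop of Algorithm 2 can be sketched in plain Python. Here a simple distance-based fitness stands in for the CNN validation accuracy (training a real CNN per chromosome is out of scope for a sketch), while the population size ($M$=20), crossover probability ($P_C$=0.65), and mutation probability ($P_M$=0.05) follow the inputs above:

```python
import random

random.seed(42)
TARGET = [0.3, -0.7, 0.5]   # stand-in: the weights the "network" wants

def fitness(w):
    """Stand-in for CNN validation accuracy: higher when the
    chromosome is closer to TARGET."""
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def evolve(pop_size=20, generations=40, pc=0.65, pm=0.05):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3) if random.random() < pc else 0
            child = a[:cut] + b[cut:]            # single-point crossover
            child = [g + random.gauss(0, 0.1) if random.random() < pm else g
                     for g in child]             # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best])  # should land near TARGET
```

In the paper's setting, evaluating `fitness` would mean decoding the chromosome into CNN initial weights, training for $d$ steps, and reading off the accuracy, which is why the GA step is by far the most expensive part of the pipeline.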
2 Experiments and Analysis
2.1 Dataset Introduction
In order to study the classification effect of the proposed model on defective texts of electrical equipment, 6 750 defect text records from the electrical equipment records of the
State Grid Corporation of China are randomly selected as the research object. There are six fault categories: blockage, leakage, misalignment, failure, invalidation, and others. Among them, 5 400
texts are used for the training set, and the test set and validation set each contain 675 texts. A sample of the dataset is shown in Table 1.
Short texts on electrical defects are different from other Chinese texts, which have the following four characteristics on the whole:
1) Most defects have an inseparable relationship with the exclusive domain of power equipment, and there are many electrical specialized words in the text. In addition, due to the unique expression
habits of the inspectors, there may be different descriptions for the same component, such as "shake meter" and "megger".
2) Due to the complexity of defects and the different recording habits of the inspectors, the lengths of the defect texts vary considerably: the shortest can be as few as four words, and
the longest as many as 40 words.
3) The same fault location can lead to different classifications due to different fault types. For example, faults caused by display panels are classified into two types: display panel black
screen and display panel unclear.
4) The fault texts are numerous, and different defect records may be highly similar while lacking sufficient semantic information. Traditional text classification
models have unavoidable limitations in classifying highly similar texts. Meanwhile, classifying defect texts demands high storage space and computational power from classification models.
2.2 Hyperparameters Settings
Based on the experimental process, some hyperparameters of this model are set as shown in Table 2.
1) Word vector dimension: The setting of the word vector dimension affects the accuracy of the word representation. As shown in Fig. 4, as the word vector dimension increases, the classification
accuracy first increases and then decreases. The accuracy reaches its highest value when the word vector dimension is 200, indicating that word meaning cannot be represented accurately when
the dimension is too low, while a higher dimension makes the vector representation too sparse and causes redundancy.
2) Neural network hidden layer: The representational capability of the network depends on the size of the hidden layer. The following conclusion can be drawn from Fig. 5: as the hidden
layer size increases, the classification accuracy first increases and then decreases. When the hidden layer size is set to 256, the model reaches its best accuracy.
3) CNN convolution kernel size: The size of the convolution kernels affects CNN's ability to obtain local features. As shown in Fig. 6, with convolution kernel sizes of [2, 3, 4] the model
achieves higher accuracy than with other kernel sizes.
4) Number of epochs: The $MF1$ score and training loss $FLoss$ for different epochs are shown in Fig. 7 and Fig. 8, respectively. As the number of iterations increases, the training process
stabilizes, and the model's $MF1$ score and $FLoss$ converge. When the iterations reach 30, the training set $MF1$ = 93.65% and the validation set $MF1$ = 91.58%.
2.3 Description of Evaluation Indicators
In binary classification problems, accuracy, precision, recall, and F1 score are the commonly used metrics of model performance, with the corresponding formulas given below. Accuracy is
the proportion of correctly classified samples among all samples. Precision is the proportion of correctly predicted positive classes among all predicted positive classes. Recall
is the proportion of correctly predicted positive classes among all actual positive classes. F1 jointly evaluates precision and recall. TP (true positive) indicates that positive
samples are correctly identified as positive. FP (false positive) indicates that negative samples are incorrectly identified as positive. TN (true negative) indicates that negative
samples are correctly identified as negative. FN (false negative) indicates that positive samples are incorrectly identified as negative.
$Accuracy = \frac{TP + TN}{TP + FP + TN + FN}$(12)
$Precision = \frac{TP}{TP + FP}$(13)
$Recall = \frac{TP}{TP + FN}$(14)
$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$(15)
Since the power defect text contains six categories, it is a multi-classification problem. In order to comprehensively evaluate the classification effect of the model, we use the macro-average
composite index. Among them, the macro precision rate $MP$ and the macro recall rate $MR$ are defined as follows, with $n$ representing the number of experimental sample data categories.
$MF1 = \frac{2 \times MP \times MR}{MP + MR}$(16)
$MP = \frac{1}{n} \sum_{i=1}^{n} Precision_i$(17)
$MR = \frac{1}{n} \sum_{i=1}^{n} Recall_i$(18)
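As a concrete illustration, equations (12)–(18) can be sketched in Python as follows (my own minimal helper functions, not code from the paper):

```python
def precision(tp, fp):
    # Correctly predicted positives / all predicted positives (Eq. 13)
    return tp / (tp + fp)

def recall(tp, fn):
    # Correctly predicted positives / all actual positives (Eq. 14)
    return tp / (tp + fn)

def macro_f1(per_class):
    # per_class: list of (tp, fp, fn) tuples, one per category (Eqs. 16-18)
    n = len(per_class)
    mp = sum(precision(tp, fp) for tp, fp, _ in per_class) / n
    mr = sum(recall(tp, fn) for tp, _, fn in per_class) / n
    return 2 * mp * mr / (mp + mr)
```

Note that, as in the paper, precision and recall are averaged per category first and the macro F1 is then computed from those averages.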
2.4 Comparative Experiment and Analysis
To further validate the effectiveness of the BAGC model in text classification, we conducted comparison experiments with the baseline models on the dataset. Table 3 shows the results of the comparative experiments.
1) TextCNN^[26]: This model applies a CNN to the text classification task; by using convolution kernels of multiple sizes, critical information in the sentence can be extracted, thus
enabling better capture of local relevance. The model is then connected to Softmax for classification.
2) TextRNN^[27]: This model uses a two-layer RNN, which is good at capturing more comprehensive sequence information. The most critical features are automatically screened out by max
pooling, and a fully connected layer is then used for classification.
3) FastText^[28]: The input of the model is a word sequence and output is the belongingness probability of this sequence to different categories. The words and phrases in the sequence are formed into
feature vectors, which are mapped to the middle layer by a linear transformation, and then mapped to labels. Meanwhile, a nonlinear activation function is used to predict the categorical labels.
4) BERT^[29]: The model takes the CLS-token output vector from the last encoder of BERT and generates the probability values for each label through a fully connected layer; the
largest one is then chosen as the prediction result.
5) BiLSTM_Attention ^[30]: The model is a dynamic pre-trained word vector model that captures textual, contextual feature information using a multi-layer Bi-directional Transformer architecture. The
hidden layer acquires the global classification feature information, and generates the probability value of each label through the fully connected layer to get the final prediction result.
Table 3 shows that the model in this paper outperforms the baseline method. The $MF1$ of the BAGC model improves by 3.4%, 4.17%, 3.54%, 1.2%, and 2.94% when compared with the TextCNN model, TextRNN
model, FastText model, BERT model, and BiLSTM_Attention model, respectively, which means a better learning performance.
Figure 9 shows the single-label classification accuracy of the BAGC model and the baseline models on the data samples. Fig. 9 shows that our model tends to be more accurate than
these baseline models; only on a single label is our model's accuracy lower than that of the other models.
To verify the model's generalization ability, the classification accuracy of each defect category is tested. As shown in Table 4, the $MF1$ values for the "blockage" and "leakage" categories are
95.27% and 96.59%, respectively. The reason may be that in the text preprocessing stage, the data of these two categories are more different from other categories. At the same time, the weaker
correlation with other categories allows the data in this category to be better distinguished. In addition, we have applied the model to the customer service datasets provided by the partners of the
project on which this paper relies (two automobile components manufacturing companies). For the classification of defects in the example sentence "The fuel pump suddenly stops working and the
throttle suddenly sticks", the model gives the correct classification result; this category belongs to "engine part failure". The experimental results demonstrate that the model addresses the actual
needs of automotive enterprises, helps them to quickly obtain product defect information and further improve defect management, and validates this method in specific
industrial application scenarios.
2.5 Analysis of Ablation Experiments
To further verify the validity of the components of the BAGC model, we performed ablation experiments. BAGC/B is the BAGC model removing the BiLSTM module, which extracts contextual semantics. BAGC/A
is the BAGC model removing the Attention Mechanism. BAGC/C is the BAGC model removing the process of CNN module, which captures local word order features of the text. BAGC/G is the BAGC model
removing the GA optimization in CNN weight vector operation. Table 5 shows the experimental results.
As can be seen from Table 5, the Attention Mechanism and the BiLSTM layer have a large impact on the classification ability of the BAGC model. Compared with BAGC/B, the $MF1$ of the BAGC model is
improved by 2.63%. Once the BiLSTM layer is removed, the effectiveness of the BAGC model will be significantly reduced, proving that BiLSTM can make up for the shortcomings of a single deep learning
model and better extract contextual semantic information from text data.
The $MF1$ of the BAGC model is enhanced by 1.39% compared with BAGC/A. It is demonstrated that the Attention Mechanism can assign more weights to the words that affect the semantics, which improves
the performance of the model. The $MF1$ of the BAGC model is improved by 1.05% compared with BAGC/C, which shows that the convolutional layer has less impact on the model than other components, but
it still helps to improve the classification accuracy. Compared with BAGC/G, the $MF1$ of the BAGC model is improved by 0.79%, which indicates the validity of the Genetic Algorithm in CNN layer on
improving the classification performance.
To sum up, each component of the BAGC model is necessary, and the results show that the BAGC model with any of them removed is suboptimal.
3 Conclusion
To address the problems of sparse data and insufficient semantic features in short text classification, a deep short text classification model incorporating contextual features is
proposed. The input to the model is a text vector generated using the BERT word vector model. In order to better extract contextual semantic information from the samples, words are given unequal
weight values according to their different importance. Based on this, a CNN is optimized by the Genetic Algorithm to capture important local word order features. According to the experimental
results, the model can effectively identify defect classes and improve short text classification ability by fusing contextual features. Our model can better identify the data in power
equipment defect texts, effectively complete the structuring of the text data, and assist in devising emergency defect-processing schemes.
As the level of intelligent grid detection increases, there will be more unstructured data related to the grid, such as images and audio. These diverse fault expressions can present defects from
multiple dimensions. In the future, multi-source heterogeneity and data fusion are the development trends. We can integrate unstructured and structured data, build a Knowledge Graph in the field of
electric power, and realize the query of electric power knowledge base, so as to further improve the fault diagnosis accuracy.
NCERT Solutions for Class 7 Maths chapter-wise book pdf
NCERT Solutions for Class 7 Maths PDF
Date: 13th Nov 2024
NCERT solutions for class 7 Maths are provided here for class 7 students so that they can prepare and score high marks in their CBSE class 7 Maths exam 2022-23.
CBSE is an education board presiding over several schools across the country. National Council of Educational Research and Training (NCERT) is responsible for creating the curriculum followed by
schools operating under the CBSE board.
NCERT solutions for class 7 Maths are created by subject experts based on the latest CBSE syllabus and exam pattern. It also lays the foundation of many concepts which are commonly asked in exams and
in many competitive exams.
These chapter-wise NCERT Solutions for class 7 Maths PDFs will help students tremendously in their preparation for the CBSE class 7 Maths exam 2022-23.
NCERT solutions for class 7 Maths are in accordance with the latest CBSE Maths syllabus 2022-23 that aids in covering all the questions which may appear in the Maths exams.
NCERT Solutions for class 7 Maths are one of the best study materials for class 7 Maths students willing to learn and improve their knowledge to score high marks in their Maths exams 2022-23.
CBSE class 7 Maths students are highly suggested to focus on every single question and topic available in NCERT solutions for class 7 Maths.
To score high marks in your CBSE class 7 Maths exam, you have to regularly practice CBSE Class 7 Question Papers, Class 7 Sample Papers, Class 7 Notes, Class 7 NCERT Books, NCERT Solutions for Class
7, Class 7 Exemplar and Class 7 Syllabus 2022-23.
Download NCERT Solutions for Class 7 Maths 2022-2023
Class 7 Maths students can easily make use of these complete NCERT Solutions for Class 7 Maths pdf in English & Hindi medium from below links by downloading them:
CBSE Class 7 Maths NCERT Books
Class 7 Maths students can easily buy CBSE Class 7 Maths NCERT Book in English & Hindi medium from below links:
Importance of NCERT Solutions for Class 7 Maths
TutorialsDuniya.com has provided chapter-wise NCERT solutions for class 7 Maths pdf as they are helpful for class 7 Maths students to score well in the upcoming Maths exam 2022-23. NCERT
solutions for class 7 Maths help students to simplify complex topics and formulas and to clear their doubts.
Class 7 Maths students should practice all the questions in NCERT solutions for class 7 Maths to score high marks in their Maths exams 2022-23.
To achieve higher marks in the Maths exam, students will need to answer various types of questions such as MCQs, case studies and many theory-based questions. All the questions that are asked in
the CBSE exams are entirely based on NCERT books.
Importance of NCERT solutions for class 7 Maths in exam 2022-23 are as follows:
• NCERT solutions for class 7 Maths help students to gather in-depth knowledge as they present a comprehensive study of any topic and include all the possible and relevant facts.
• The data and information presented in these NCERT solutions for class 7 Maths are verified and collected from reliable sources.
• NCERT solutions for class 7 Maths are helpful for students in simplifying complex topics, formulas and clearing their doubts.
• CBSE class 7 Maths students can use these NCERT solutions for revision and assess their knowledge gap with help of these NCERT solutions and study accordingly.
• Homework and assignments are given to class 7 Maths students based on the concepts from these NCERT solutions for class 7 Maths.
• NCERT solutions for class 7 Maths use easy language, and their choice of words and writing style is suitable for every student.
• NCERT solutions for class 7 Maths are very detailed as they discuss every aspect of a topic and include multiple examples, exercises, and diagrams which help students to understand the concepts.
Download CBSE Study Material App for FREE high-quality educational resources for school & college students.
We hope our NCERT Solutions for Class 7 Maths in english & hindi medium has helped you. Please share these NCERT Solutions for Class 7 Maths 2022-23 with your friends as well 🙏
TutorialsDuniya.com wishes you Happy Learning! 🙂
NCERT Books Class 7
NCERT Solutions for Class 7
NCERT Exemplar Class 7
CBSE Class 7 Notes
NCERT Solutions for Class 7 Maths FAQs
Where can I get NCERT solutions for class 7 Maths?
TutorialsDuniya.com has provided NCERT solutions for class 7 Maths so that you can score good marks in your CBSE class 7 Maths exam.
Where can I get NCERT solutions for class 7 Maths in english & hindi medium?
You can easily get NCERT solutions for class 7 Maths in english & hindi medium 2022-23 at TutorialsDuniya.com
Mathematics and computer science master’s track 2024/25
Data Sciences, Geometry and Combinatorics
The “mathematics and computer science track” is a master’s (M2) track aiming at dual training in mathematics and computer science, with courses at the border of the two disciplines (almost every
course is taught jointly by at least one mathematician and one computer scientist). A significant fraction of the attendees is non-French speaking, and the lectures are delivered in English.
This track is simultaneously a track of the M2 “Computer Science” master’s program, which leads to a diploma in Computer Science and of the M2 “Mathematics” master’s program, which leads to a diploma
in Mathematics. It is also possible to get both diplomas by registering in each program. It is jointly supported by the Bézout Labex, the UFR of Mathematics and the Institut Gaspard Monge. The
persons in charge are Laurent Hauswirth (LAMA) and Cyril Nicaud (LIGM). This track is open to students, supported or not by the Bézout scholarship program.
The information below concerns the academic year 2024-2025. The archives on previous years are available here: 2018-2019, 2019-2020, 2020-2021, 2021-2022, 2022-2023. Before 2018, the Bézout Labex
supported individual courses at the interface of mathematics and computer science.
REGISTRATION 2024-2025: Registration are possible on the following pages from the mathematics master’s program and the computer science master’s program (depending on the degree aimed for). These
pages are not completely up-to-date concerning the pedagogical aspects… see below for the newest information.
(Note: UE=”Unité d’enseignement”=indivisible piece of lectures; HETD=”Heure équivalent TD”: 1 HETD corresponds to 40 minutes of lecture.)
• 4 weeks on basics: complements in mathematics and complements in computer science. Each UE has 6 ECTS and 48 HETD. (In 2024-2025: from September 16 to October 11.)
• 10 weeks for a general large background: data sciences, probabilistic methods, discrete maths and geometric calculus. Each UE has 6 ECTS and 60 HETD, split into 2 courses of 3 ECTS and 30 HETD
each. (In 2024-2025: from October 14 to December 20. Exams the week of January 6.)
• 8 weeks for two UE of specialization chosen by students among four UE, having each 6 ECTS and 40 HETD. (In 2024-2025: from January 20 to March 21, with one week break. Exams the week of March
• A research memoir/internship of 18 ECTS. (In 2024-2025: from March 31.) It takes place in an academic laboratory or in a private company, within a department of applied mathematics or data
sciences (aeronautics, software, transport logistics, …).
Preliminary list of courses
Semester Name ECTS Hours
1 UE Basics Mathematics (analysis, algebra, probability, geometry) 6 48
1 UE Basics Computer Science (complexity, algorithms, programming, graphs) 6 48
1 UE Discrete and continuous optimisation 6 60
Discrete optimisation 3 30
Continuous optimisation 3 30
1 UE Geometry and Combinatorics 6 60
Geometry 3 30
Combinatorics 3 30
1 UE Data Sciences 6 60
Introduction to data sciences 3 30
Computational aspects of data sciences 3 30
2 UE Advanced Data Sciences 6 40
Deep learning methods 3 20
Computational aspects of deep learning methods 3 20
2 UE Maths specialization: Advanced Geometry 6 40
2 UE CS specialization: Algebraic Combinatorics and formal calculus 6 40
Combinatorics Hopf algebra 3 20
Operads 3 20
Detailed description of the courses of the first semester
Basics in Mathematics.
• Algebra and linear algebra (Fradelizi): Groups: order, quotient group, cyclic groups, finite groups, finite abelian groups, group actions; Rings, polynomials and fields: ideals, principal ideal
domains, finite fields; Linear algebra: endomorphisms, eigenvectors, spectral theorem.
• Analysis (Sester): Normed vector spaces : equivalent norms, topology, continuous functions, compactness, the finite dimensional case; Examples of metric spaces; Differential calculus. Extremum
problems; Convex functions, convexity inequalities. Asymptotic analysis.
• Probability (Martinez): Random experiment and probability spaces. Law, mean value, moments,… of a random variable. Applications to combinatorics on graphs; Deviation’s inequality, concentration
inequalities (Markov, Tchebychev, Hoeffding inequality…); Martingales, inequalities with Martingales; Markov chains. References : The Probabilistic Method (N. Alon, J. H. Spencer).
• Geometry (Sabourau): We will give an introduction to graph theory: connectedness, degree, trees/forests, adjacency matrix etc.
Basics in Computer sciences.
• Algorithmics: data structures (Nicaud): The data structures studied include: arrays, lists, stacks, …; dynamic arrays; trees, well-balanced trees; heaps, priority queues; hash tables; minimal range
queries; suffix arrays; suffix trees.
• Complexity (Thapper): The course is an introduction to computational complexity theory. We will cover the following notions: Turing machines, the Church-Turing thesis, (un)decidability, the
halting problem; P, NP, polynomial-time reductions, NP-completeness, the Cook-Levin theorem, co-NP; PSPACE; the time and space hierarchy theorems, Ladner’s theorem; the polynomial hierarchy and
collapses; approximation of NP-hard problems. References: Introduction to the Theory of Computation (Michael Sipser).
• Programming (Borie): Python and Sage.
Quick review of the basics of programming. Solve simple mathematical-algorithmic problems with Python (gcd, f(x) = 0, numerical integration algorithm, knapsack, backtracking, …). Getting started
with Sage, a computer algebra system. Programming project at the interface mathematics and computer science.
• Graph theory (Bulteau and Weller): Fundamentals; connectivity; planar graphs; flow/cut; examples of graph classes; examples of problems; matchings; P/NP, reductions; parameterized algorithms;
examples of parameters; kernels; minors.
Discrete and continuous optimization.
• Discrete optimization (Thapper): Min-max results in combinatorial optimization provide elegant mathematical statements, are often related to the existence of efficient algorithms, and illustrate
well the power of duality in optimization. The course aims at being a gentle introduction to the richness of this type of results, and especially those that belong to the theory of perfect
graphs. It will make connections with the course of continuous optimization, in particular in what concerns linear programming and polyhedra, and will rely on concrete examples taken from
industry that illustrate the relevance of tools from combinatorial optimization for real-world applications.
The preliminary plan of the course is as follows: discrete optimization in bipartite graphs (Hall’s marriage theorem, König’s theorems, algorithms); chains and antichains in posets (theorems of
Dilworth and Mirsky); chordal graphs (interval graphs, coloring, duality, decomposition); perfect graphs (definition, weak and strong theorems); perfect graphs (polyhedra, algorithms); Lovász’ theta
function (definition, computation, sandwich theorem, Shannon capacity).
• Continuous optimization (Zitt): The course will cover the theory and main examples in convex optimization. The tentative list of topics covered is as follows: Convex sets and functions, convex
optimization problems. Duality and optimality conditions. Among examples we will see Linear programming, Quadratic programming, Second order cone programming. Additional topics will include
sparse solutions via L1 penalization, and notions on algorithms, including the simplex algorithm and interior point methods.
Geometry and combinatorics.
• Geometry (De Mesmay and Hauswirth): Algorithms and combinatorics of embedded graphs. This course will provide an introduction to the study of graphs arising in geometric settings, with a focus on
planar graphs and graphs embedded on surfaces. The main objective of the course is to explore the interactions between the geometry and topology of low-dimensional spaces on the one hand, and the
combinatorics of their discrete structures on the other hand, as well as to showcase algorithmic techniques tailored for these objects. Topics that will be investigated include: Basics of planar
graphs: Jordan’s curve theorem, combinatorial representations, duality, Euler’s formula, Kuratowski-Wagner theorem, Planarity testing, Tutte embedding, Efficient algorithms for planar graphs,
Classification of surfaces, basics of topological graph theory, Topological algorithms: homotopy testing and shortest loops.
• Combinatorics (Novelli): The lectures on enumerative combinatorics will consist in the study of classical objects: permutations, trees, partitions, parking functions; classical sequences:
factorial, Catalan, Schroder; classical methods: bijections, group actions, induction, generating series.The lectures will be heavily based on the study of various examples, some very easy and
others trickier.
Data Sciences.
• Theoretical aspects of Data Sciences (Bonis) :
This course will provide the necessary tools to understand data sciences from the theoretical perspective. The goal of this course is to introduce notions of statistical estimation with a focus
on parametrics statistics and cast machine learning problems as a statistical estimation problem. In particular, the students will be familiar with the following concepts:
– The basics of statistical estimation (estimator, loss function, …)
– Classical estimators (moment method, maximum likelihood, …)
– Properties of statistical estimators (mean squared error, bias-variance tradeoff, …)
– Linear regression
– General machine learning problems (regression and classification)
• Computational aspects of Data Sciences (Lacombe):
This course will present the basics of data sciences from the practitioner perspective. The goal is to understand the typical machinery (from theory and numerics) used by data scientists when
they design machine learning models given a set of data. At the end of the course, the students will be familiar with most notions any data scientist should know, including:
– Standard terminology (supervised / unsupervised learning, regression / classification, etc.);
– Optimization through gradient descent
– Basics of supervised learning (linear regression, classification…)
– Basics of unsupervised learning (k-means, PCA…).
– Software: Python with pandas, numpy, scikit-learn, and jax.
Detailed description of the courses of the second semester (choose two courses)
Advanced Geometry and graph theory:
Advanced Geometry and Graph theory (Fanoni and Sabourau): This course will focus on families of expander graphs. These are sequences of graphs, with growing number of vertices, which are at the same
time sparse and highly connected. For their interesting properties, they have many applications in mathematics and computer science. We will talk about constructions of examples of expanders, the
different viewpoints which can be used to define them and some of their properties. We will also present two applications, one in computer science (error correcting codes) and one in mathematics
(embeddings in Euclidean spaces).
Advanced Data Sciences :
Introduction to modern machine learning problems (Lacombe)
This course will be in continuation with the course of the first semester, and will be dedicated to modern machine learning models, mainly (deep) neural networks and their different flavors. At the
end of the course, the students will have notions on: Feedforward fully-connected networks and their training trough back-propagation, Convolutional and Residual neural networks, diffusion models,
transformer architecture… In terms of software, we will rely on Python with TensorFlow and/or PyTorch. Importantly, students will be able to quickly understand modern machine learning problems and
adapt to new models when they encounter them in academia or industry.
Theoretical aspects of deep learning methods (Hebiri)
In this lecture, we will provide statistical controls (bound on generalization error) for general supervised learning algorithms. First, we will derive a bound for the Empirical Risk Minimizer (ERM)
using tools from the Vapnik–Chervonenkis theory. Then, we will consider several algorithms based on the convexification of the risk in the context of binary classification. The last part of the
course will explore modern multi-class classification problems and present some techniques for addressing the problem based for instance on set-valued approaches.
By the end of the course, students will be familiar with classic ML algorithms such as trees, random forests, SVMs, boosting, and neural networks. In addition, the methods introduced in the course
will be compared on real data.
Algebraic combinatorics and formal calculus (Borie and Novelli)
Operads in combinatorics (Borie): Informally, an operad is a space of operations having one output and several inputs that can be composed. Each operad leads to the definition of category of
algebras. This theory offers a tool to study situations wherein several operations interact with each others. This lecture begins by presenting some elementary objects of algebraic combinatorics:
combinatorial classes and combinatorial algebras. We introduce then (non-symmetric) operads and study some tools allowing to establish presentations by generators and relations of operads. Koszul
duality in non-symmetric operads is an important part of this theory which shall be presented. We end this lecture by reviewing some generalizations: colored operads, symmetric operads, and pros. We
shall also explain how the theory of operads offers a tool to obtain enumerative results.
Algebraic combinatorics (Novelli): The lectures on algebraic combinatorics will consist in the study of: classical symmetric functions and a short discussion about representation theory;
noncommutative symmetric functions (NCSF); the definition of Hopf algebras; the dual algebra of NCSF, quasi-symmetric functions; the modern generalizations of those algebras; and the use of all these
algebraic properties (transition matrices, expressions in various bases, morphisms of Hopf algebras) to solve (classical) combinatorial questions. As in the lectures in combinatorics of the first
semester, the lectures will be heavily based on the study of examples.
mate-calc: Calculate a specific mathematical expression; used tool: mate-calc
mate-calc: Calculate a specific mathematical expression.
$ mate-calc --solve ${2 + 5}
The command "mate-calc --solve ${2 + 5}" uses the terminal command "mate-calc" to open the calculator application. The "--solve" option indicates that the command should solve a
mathematical expression.
In this case, the expression to be solved is "${2 + 5}". The expression inside the curly braces is an arithmetic operation, adding the numbers 2 and 5.
When the command is executed, the calculator application will open, evaluate the expression, and display the result, which in this case would be 7.
This explanation was created by an AI. In most cases these are correct. But please always be careful and never run a command if you are not sure it is safe.
Blog #3: Converting measured value to Amperes
I welcome you to my next blog post as part of the Experimenting with Current Sense Amplifiers Design Challenge. In my first blog I described my experiment plan, and in my second blog post I showed
the basics of using the MAX40080 sensor as part of the MikroE Current 6 Click Board. In this third blog I will continue with the fundamentals of the MAX40080 sensor. I will describe approaches
to finding a formula for converting the measured digital value, in the range between -4095 and 4095, to a real value in amperes. I will show two approaches: first an experimental approach of
determining the formula from values measured at known currents, and then an accurate approach providing the exact conversion formula.
Experimental approach
The first approach which I will describe is experimental. If we do not know how to convert the measured value exactly, we can make some assumptions and then estimate a function (for example, a linear function)
that we can use for converting the value. In the case of current sensing, we can luckily assume linearity of the value, because we measure the voltage drop on a shunt resistor, which is linear in the current
according to Ohm's law, and the ADC inside the MAX40080 is also almost linear. So we can set up two experiments with two different static, known currents flowing through the circuit, read the values
measured by the sensor, and compute the coefficients of the linear function using linear interpolation. So, let's experiment.
For the first measurement we can reuse the values measured in the previous blog. In that blog I used a 100-ohm resistor powered by the 3.3 V rail of the Raspberry Pi; the sensor returned values between around 20
and 30, and we computed the average of 100 samples as 24.1. So, for the first measurement at a theoretical current of 0.033 A (computed using Ohm's law as U / R, where U is the voltage on the resistor, 3.3 V,
and R is the value of the resistor) we know that the sensor returns 24.1.
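This expected current is just Ohm's law; a quick sanity check in Python (my own sketch, unrelated to the sensor code):

```python
# First measurement: 100-ohm resistor on the Raspberry Pi 3.3 V rail
U = 3.3    # voltage across the resistor, in volts
R = 100.0  # resistance, in ohms
I = U / R  # Ohm's law: current in amperes
print(f"expected current: {I:.3f} A")  # 0.033 A
```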
For the second measurement I upgraded my setup. Because I have no adjustable high-power bench power supply, I was limited to standard power adapters. Instead of the Raspberry Pi's 3.3 V rail, which is limited to sourcing about 50 mA, I decided to use 12 V from a barrel jack. I bought a barrel jack connector and soldered wires to it:
For power I used an old router power supply which can source 5 A at 12 V. The next material issue I faced was the resistor. At this point I realised that it is easy to make a high current like 1 A flow, but do I have a resistor which can handle this current without overheating? I don't. All the resistors I have can handle 0.25 W or 0.6 W, but handling such a "high" current (about 1 A) at a "high" voltage (about 12 V) requires 12 watts (P = U × I = 12 × 1 = 12). After some calculations and estimation, I decided to build a setup of 16 × 100 Ω resistors connected as 8 parallel branches, each branch having 2 resistors in series. This results in a final resistance of 25 Ω (100 × 2 / 8), allows 0.48 A to flow at 12 V, and reduces the load on each resistor to 0.36 W (the limit is 0.25 W, but I did not consider this a significant violation at room temperature). The final connection looks schematically as follows:
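As a quick sanity check on the arithmetic above, the network values can be recomputed in a few lines (a minimal sketch using the numbers stated in the text):

```python
# 16 resistors: 8 parallel branches, each branch = 2 x 100 ohm in series.
R_SINGLE = 100.0          # ohms, each individual resistor
BRANCHES = 8
PER_BRANCH = 2

R_branch = PER_BRANCH * R_SINGLE       # 200 ohms per series branch
R_total = R_branch / BRANCHES          # parallel combination -> 25 ohms

V = 12.0                               # supply voltage
I_total = V / R_total                  # total current drawn: 0.48 A
I_branch = I_total / BRANCHES          # current through each branch: 0.06 A
P_resistor = I_branch ** 2 * R_SINGLE  # dissipation per resistor: 0.36 W

print(R_total, I_total, P_resistor)
```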
And in reality:
After connecting everything, I ran the same program as in the previous blog post, with the following output:
Now we get higher numbers than in the previous case, which is the expected behaviour because we are measuring a higher current. So we have two measurements of two different currents, 0.033 A and 0.48 A, and two different values, 24.1 and 355.42, calculated from the sensor data.
Experimentally determining the conversion formula using linear interpolation
Now we know two inputs and the corresponding outputs of some linear function. I will refer to the value returned by the sensor as VAL and to the real current as I. The linear function and our known points look like this on a plot:
The generic formula of a linear function looks as follows:
And if we replace X and Y with our variables:
So it is a function that takes VAL as a parameter and outputs the real current. Our task is to find the parameters A and B used in the formula. The idea of linear interpolation is to build two equations (with the two unknown variables A and B) and solve them. We replace VAL with the known values returned by the sensor and I with the theoretically known corresponding currents. The equations look like this:
And now we can solve them. You can use an online solver like WolframAlpha, but I did it manually by multiplying the first equation by a constant and then adding it to the second equation. After that, one unknown variable disappears (it is multiplied by zero), and we can get the value of one coefficient by solving a simple equation with a single unknown:
And now we can substitute the newly discovered value of the B parameter into one of the original equations and simply solve it:
Now we know the A and B parameters. I used WolframAlpha to check my results. The computed parameters look correct, and the rounding errors I made do not look significant:
So now we can put the parameters A and B into the original formula and get the resulting linear function for converting the measured value to the real current in amperes:
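The interpolation can be reproduced in a few lines (a minimal sketch using the two calibration points and the 42.38 test value quoted in this post):

```python
# Two calibration points: (raw sensor value, current in amperes).
val1, i1 = 24.1, 0.033     # 100 ohm resistor on the 3.3 V rail
val2, i2 = 355.42, 0.48    # 25 ohm network on 12 V

# Solve I = A * VAL + B through both points.
A = (i2 - i1) / (val2 - val1)
B = i1 - A * val1

def current_from_value(val):
    """Experimentally calibrated conversion from raw value to amperes."""
    return A * val + B

print(round(A, 5), round(B, 5))              # roughly 0.00135 and 0.0005
print(round(current_from_value(42.38), 4))   # the 220 ohm test: ~0.0577 A
```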
Testing the Formula
To test the formula, I used another, third configuration. I used 12 V as in the previous case and a 220 Ω resistor (in fact it was a 2 × 2 series–parallel configuration). The expected current from Ohm's law is 12 / 220, which is approximately 55 mA. So, let's check. I used the same program, with the following output:
So now we can "call" our function F with the value 42.38 and see what we get:
We got 0.0577 A, which is 57.7 mA after converting from A to mA, and this is very near the known theoretical result of 55 mA. It looks like our experimentally determined formula is correct. The minor error is caused by noise, the accuracy of the voltage and resistor values used, and so on. Our formula is not perfectly accurate because we computed it from similarly inaccurate values.
Only a single sample suffices
One possible improvement of our formula removes the need for the B parameter with another small assumption: we can assume that the sensor returns 0 when measuring a 0 A current, and then we need only one calibration point. This assumption is quite natural. Effectively, we reduce the equations by the B parameter, which would be zero. Note that the B parameter we calculated is near zero anyway. A non-zero B parameter introduces an offset into our calculations, which is problematic especially when measuring negative values.
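Under this zero-offset assumption the calibration collapses to a single division; a sketch using only the higher-current point from above:

```python
# One-point calibration: assume the conversion line passes through 0 (B = 0).
val, current = 355.42, 0.48   # single known point: raw value vs. amperes
A = current / val             # amperes per raw unit

print(round(A, 5))  # roughly 0.00135, matching the two-point estimate
```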
Accurate Approach
In the previous section you have seen an experimental approach to estimating the coefficients for converting the measured value to a current. But the MAX40080 is not black magic, and all of its internal structures are documented. We can derive an exact formula using parameters from the datasheet. I will describe the process in reverse: from the 12-bit digital value to the real current. The important image for this section is the block diagram of the MAX40080 sensor from the datasheet:
The first step in deriving the formula is to realize what the sensor actually measures. The digital value we receive comes from the ADC. The ADC in the MAX40080 is 12-bit, so it converts a voltage in the range 0 V to the reference voltage into a value between 0 and 4095 (2^12 − 1). The MAX40080 is bidirectional, so it adds a 13th (sign) bit and returns a value in the range −4095 to 4095. The first question is: what is the reference voltage? The answer is in the datasheet: it is 1.25 V:
So the MAX40080's ADC converts a value in the range −1.25 V to 1.25 V. (I personally think that it internally measures a voltage in the range 0 to 1.25 V and the sign bit is provided externally by some comparator. This is an internal detail, and I did not find a precise answer to this question in the datasheet, but it is not important for us.)
From this knowledge we can convert the digital value to a voltage (still not a current) by simple cross-multiplication (or linear interpolation with a single point, as used in the previous sections):
Now we have the voltage at the input of the ADC. The input of the ADC is connected through a multiplexer and a filtering circuit to the amplifier. This amplifier has a fixed gain of 25 or 125 depending on the configuration. Note that the datasheet has a wrong label, which I crossed out to make it less confusing.
So, if we want the voltage at the input rather than the amplified one, we just need to do the reverse operation of amplification, which is division. The formula will look as follows:
So now we have computed the real voltage at the input of the MAX40080 chip, but we do not need a voltage; we need the current. So now we must understand what we are really measuring. The answer is also visualized in the block diagram from the MAX40080 datasheet:
So we are measuring the voltage drop across a resistor which is outside the MAX40080. In our case it is a resistor on the MikroE Click Board. It is this resistor:
It is physically large because it must handle a large current. In contrast, its resistance is very small, because it should be transparent to the measured circuit. Its value is only 10 mΩ (milliohms):
So now we know the formula for getting the measured voltage, and we know the value of the resistor across which that voltage drops, so we can use simple Ohm's law to get the final formula for computing the current from the measured value:
We can simplify the nested fractions:
And this is the final formula which we can use for converting the measured value (VAL) to the current I in amperes.
Testing the Formula
Our script used the standard 50 mV range, so the G value is 25. We can try evaluating our formula for the measured values from my previous experiments. We know that the sensor returned values around 355.42 (the average of 100 samples) for a theoretically computed current of 0.48 A. So, let's check the formula:
We are near the expected result. In this case the error is not caused by experimentally determined coefficients, but rather by noise, and also by the imprecise calculation of the theoretical current, which assumed a voltage of exactly 12 V and a resistance of exactly 25 Ω; in the real world both have some tolerance.
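The exact conversion, with the parameter values used in this post (Vref = 1.25 V, 12-bit full scale, G = 25 for the 50 mV range, and the 10 mΩ shunt on the Click Board), can be sketched as:

```python
# I = VAL * VREF / (FULL_SCALE * GAIN * R_SHUNT)
VREF = 1.25         # V, ADC reference voltage
FULL_SCALE = 4095   # 12-bit magnitude; a sign bit covers direction
GAIN = 25           # amplifier gain for the 50 mV input range
R_SHUNT = 0.010     # ohms, shunt resistor on the Click Board

def current_from_value(val):
    """Datasheet-based conversion from raw sensor value to amperes."""
    return val * VREF / (FULL_SCALE * GAIN * R_SHUNT)

print(round(current_from_value(355.42), 3))  # ~0.434 A for the 0.48 A setup
print(round(current_from_value(1), 5))       # ~0.00122 A per raw unit
```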
Comparing Experimental and Theoretical Results
Finally, we can compare our theoretical formula with the experimentally deduced one. The theoretical formula matches our assumption that the B parameter should be zero: when you measure a value of 0, you get 0 A of current. Now think about the A parameter. A is the coefficient that corresponds to the current in amperes per one measured unit. So if we evaluate our exact formula with VAL = 1, we should get the A parameter of our experimentally deduced formula. Let's do this:
Our experimentally estimated coefficient was 0.00135. 0.00122 and 0.00135 are very similar numbers, which confirms that both methods resulted in very similar formulas. Lastly, we can compare with results from somebody else. If you download the official library from MikroE and open the current6.h file, you can see that this library uses a coefficient of 0.00125, which is also very similar to the coefficients computed by me:
In this blog post you have seen two ways to derive a formula for converting the measured value from the MAX40080 CSA sensor to a real value in amperes. I showed an experimental way of obtaining the formula from measurements using linear interpolation, and you have also seen an exact method deriving the formula from datasheet parameters. From a practical point of view this information is not strictly necessary, because we can just use a formula and coefficients computed by someone else. But I still like it; I learned a lot. This process allowed me to deeply understand the operation of the MAX40080 CSA and of CSAs in general. In fact, the process is very similar for other CSAs regardless of vendor; only parameters like G and Vref differ, and sometimes the internal structure differs slightly.
Experiments Status and Future Plans
At the time of writing this blog post I have started experimenting with using the MAX40080 from a MAX32625 microcontroller. After the first MCU experiments I faced some issues that cost me a lot of time, and I also found some mistakes in the datasheet which I will report to Maxim. I am currently testing my own library and I plan to post some first thoughts soon. I also plan to prepare a simple Python library, because I sense some interest from other challengers due to the lack of any official library for the Raspberry Pi. I also have an idea of how to use my MCU library on the Raspberry Pi, and I am considering creating a CLI utility for interacting with the MAX40080 without any programming needed. I am still waiting for the PCBs for my more advanced experiments. OSHPark panelized them and sent them to the fab, but has not yet shipped them to me. In the meantime, I ordered and received the required components.
Last Words
That is all for this blog. Thank you for reading it, and stay tuned. If you find any mistake or other issue in my formulas, computations, or anywhere else, feel free to write a comment. I would also like to hear any feedback in the comments.
Next blog: Blog #4: Using MAX40080 CSA with MAX32625 MCU – First thoughts
• misaz in reply to colporteur
Thank you for the feedback. I do not plan any private alpha testing, because time is passing and that would only postpone the release date. I will rather publish the library and then fix possible bugs after release.
• guillengap in reply to misaz
Your answer makes sense. I think another alternative is to use two current sensors. In a laptop, the battery can be charging while I work with it at the same time... Kind regards
• colporteur in reply to misaz
Excellent work misaz.
The E14 community is very fortunate in your offer to share the intellectual property. I have neither the skill nor knowledge to develop the library needed to continue this challenge. I have
followed your blog posts in anticipation of your success.
I have been using the crumbs of knowledge that have fallen from your table to further my challenge project along. My hope was that I might garner a method to get a measurement. Your offer to share a library is what I look forward to.
My challenge is to measure motor current in a 12VDC system. I look forward to using your code. If it can help your efforts, I would welcome the opportunity to be a beta tester.
• misaz in reply to aspork42
Thank you for feedback.
• misaz in reply to guillengap
I have very little experience with battery charging circuits, but I think it should work. The MAX40080 can sense currents in both directions without any issue, and the direction of the current can freely change at runtime. Both sensing voltages, RS+ and RS−, have to be higher than GND, but that is OK in this case. I think you just need to disconnect the circuit at some point and connect RS+ to one end of the disconnected wire and RS− to the other end, like connecting a standard ammeter. I recommend measuring the voltages before connecting, and if all conditions from the Absolute Maximum Ratings in the datasheet are met (and preferably also the corresponding recommended parameters from the Electrical Characteristics), it should work well. Note that Maxim/ADI mentions battery-operated devices as candidates for using the MAX40080 on the main page of the datasheet:
Unit Vectors and Angles
In my job, I work with a ton of unit vectors. While tracking things in three dimensions can be tricky, it helps to start breaking down as many aspects of the dimensions as possible to their base components. I'm sure we're all familiar with common spatial axis labeling: you get X, Y, and Z. Their application isn't uniform; some machines use X for the vertical axis, some might use Z. What you might not yet be familiar with would be the orientations of objects in space. Yes, you can track the Cartesian coordinates rather handily with XYZ, but that will only tell you the position of those objects. It won't tell you the direction they're facing, and for that we use unit vectors labeled IJK. Their components are values between -1 and 1, and they can define rotation completely.
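As a small illustration (with a made-up example vector, not data from any particular machine), an IJK direction is just a vector scaled down to length 1, which is what keeps each component between -1 and 1:

```python
import math

# Normalize an arbitrary direction vector into IJK unit-vector components.
v = (3.0, 4.0, 12.0)
length = math.sqrt(sum(c * c for c in v))   # 13.0 for this vector
i, j, k = (c / length for c in v)

print(round(i, 4), round(j, 4), round(k, 4))
print(math.sqrt(i * i + j * j + k * k))     # magnitude is 1 by construction
```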
There are times when you might need to calculate an angle from a unit vector. In those instances, you're going to need the two unit vector components you're concerned with. Let's say hypothetically we have an angle we're calculating and our two components are 0.5402 (the sine-like term) and 0.8415 (the cosine-like term). This brings us to:

$$\frac{\operatorname{atan2}(0.5402,\ 0.8415) \times 180}{\pi}$$

You should get a result of roughly 32.7 degrees. Neato. Of course, you can go in the other direction easily as well.
$$\sin(32.7^\circ) \approx 0.5402$$ and $$\cos(32.7^\circ) \approx 0.8415$$
It's a pretty damn cool party trick, for sure. It actually has a few strong niche use cases, though, especially in robotics.
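The round trip can be sketched in a few lines; note that the standard two-argument arctangent takes its arguments as (y, x), i.e. the sine-like component first:

```python
import math

# Recover the angle from the two unit-vector components (cos-like 0.8415,
# sin-like 0.5402), then go back the other way.
angle_deg = math.degrees(math.atan2(0.5402, 0.8415))
print(round(angle_deg, 1))  # -> 32.7

print(round(math.cos(math.radians(angle_deg)), 4))  # -> 0.8415
print(round(math.sin(math.radians(angle_deg)), 4))  # -> 0.5402
```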
Scale invariant forces in 1d shuffled lattices

Andrea Gabrielli¹,²
¹ Istituto dei Sistemi Complessi – CNR, Via dei Taurini 19, 00185 Rome, Italy
² SMC-INFM, Department of Physics, University “La Sapienza” of Rome, P.le Aldo Moro 2, 00185 Rome, Italy
arXiv:cond-mat/0506365v2 [cond-mat.stat-mech] (Dated: February 2, 2008)

In this paper we present a detailed and exact study of the probability density function P(F) of the total force F acting on a point particle belonging to a perturbed lattice of identical point sources of a power law pair interaction. The main results concern the large F tail of P(F), for which two cases are mainly distinguished: (i) a Gaussian-like, fast decreasing P(F) for lattices with perturbations forbidding any pair of particles from being found arbitrarily close to each other; (ii) a Lévy-like, power law decreasing P(F) when this possibility is instead permitted. It is important to note that in the second case the exponent of the power law tail of P(F) is the same for all perturbations (apart from very singular cases), and is in a one-to-one correspondence with the exponent characterizing the behavior of the pair interaction with the distance between the two particles.

PACS numbers: 02.50.-r, 05.40.-a, 61.43.-j

I. INTRODUCTION

The knowledge of the statistical properties of the force acting on a particle belonging to a gas, exerted by all the other particles, provides important information in many physical contexts and applications. Typical examples are: (i) the distribution of the gravitational force in a gas of masses in cosmological and stellar astrophysical applications [1, 2, 3], (ii) the distribution of molecular and dipolar interactions [4] in a gas of particles, (iii) the theory of defects in condensed matter physics [5], and (iv) granular materials [6, 7]. The first seminal work in this field was due to Chandrasekhar [1] and deals, among many subjects, with the gravitational force probability distribution in a homogeneous Poisson spatial distribution of identical particles. By studying the characteristic function of the sum of the stochastic forces due to the single particles, the probability distribution of this total force is found exactly to be given by the so-called Holtzmark distribution, which is a three-dimensional analog of the one-dimensional fat tailed stable Lévy distributions. In [2, 4, 8, 9] approximate extensions of this approach, to different branches of physics, can be found for more complex particle distributions (i.e., point processes) obtained by perturbing a homogeneous Poisson point process. In this paper we present a study of the total force probability distribution for a very different class of spatial particle distributions (i.e., point processes), the perturbed lattices of point particles, in the case in which the pair interaction decays spatially as a general power law. We think that this study can be very useful for applications both in solid state physics (e.g., in the case of Coulomb or dipolar pair interactions) [4], and in cosmology, where n-body gravitational simulations (introduced to study the problem of “structure formation” due to gravitational collapse from primordial cosmological mass density fluctuations) are usually performed starting from suitable perturbed lattice initial conditions [10]. We limit the study to the one-dimensional case in order to avoid difficulties related to the anisotropies of higher dimensional lattices. However, the exact results we present in 1d are suggestive of the behavior of the same quantities in higher dimensions. In fact one can see [11] that the change of spatial dimension only renders the calculation not explicitly performable, keeping qualitatively the behavior we present below.

II. DEFINITIONS AND FORMALISM

In order to approach in the proper way the problem of the global force probability distribution in a perturbed lattice of particles interacting via a power law, spatially decreasing pair interaction, let us consider first a gas of such identical particles with microscopic density

$$ n(x) = \sum_i \delta(x - x_i), $$

where $x_i$ is the position of the $i$th particle. Let us assume that the average number density $n_0 = \langle n(x) \rangle > 0$ (where the average $\langle \ldots \rangle$ is to be intended as an ensemble average) is well defined (i.e., the particle distribution is uniform on sufficiently large scales [9]). We then suppose that particles interact via a pair force $f(x)$ depending on the mutual pair distance $x$ as

$$ f(x) = -C \frac{x}{|x|^{\alpha+1}}. $$

This means that $f(x)$ gives the force exerted by a particle at the origin on another particle at $x$ (the force is attractive if $C > 0$, and repulsive if $C < 0$). Therefore the force field at the point $x$ of space will be

$$ \mathcal{F}(x) = C \sum_i \frac{x_i - x}{|x_i - x|^{\alpha+1}} \equiv C \int dy\, n(y)\, \frac{y - x}{|y - x|^{\alpha+1}}, \tag{1} $$

where the last integral is over all space. However, from the last expression of Eq. (1), being $n_0 > 0$, we have that for a given realization of the stochastic density field $n(x)$ the infinite volume limit of $\mathcal{F}(x)$ is not univocally defined for $\alpha \le 1$ (i.e., the integral is absolutely diverging, and its value depends on how this limit is taken). The same feature is present in higher spatial dimensions: for example, in $d = 3$ the same problem is present for $\alpha \le 3$, and in general for $\alpha \le d$ in $d$ dimensions. For instance this is the case of the gravitational force in a self-gravitating homogeneous gas of identical masses [1], and of the Coulomb interaction in the one component plasma (OCP) [12] of identical electrical charges, both in the disordered and in the ordered (i.e., the Coulomb lattice [13]) phases. This problem is well known in condensed matter physics in connection with the OCP. However, in that case the problem is automatically solved by the presence in the physical system of a uniform background charge density $n_b(x) = -n_0$, of opposite sign with respect to the identical charged particles, and such as to conserve global charge neutrality in the system. Once the attractive force of the background on a charged particle is considered together with the repulsive forces exerted by the other particles, the problem of the infinite volume limit of $\mathcal{F}$ is solved and its value is unique. Concerning self-gravitating systems, in Newtonian gravitation an analog of the uniform background of the OCP (i.e., a negative uniform mass density $n_b(x) = -n_0$ such as to generate a repulsive force on the particles) does not exist, and it has to be introduced in the system artificially to regularize the problem (an approach usually called the Jeans swindle [14]). However, this negative background comes out naturally, as an effect of space expansion, when the gravitational motion of particles is described, starting from the equations of general relativity, in comoving coordinates in a quasi-uniform expanding Einstein–De Sitter universe [15], which is the main model of the universe used in cosmology. In practice, considering the presence of such a balancing background gives for $\mathcal{F}$ the following expression:

$$ \mathcal{F}(x) = C \int dy\, \delta n(y)\, \frac{y - x}{|y - x|^{\alpha+1}}, \tag{2} $$

where $\delta n(x) = n(x) - n_0$. This makes $\mathcal{F}$ well defined also for smaller $\alpha$, depending on the small-$k$ behavior of the power spectrum $S(k) \equiv \langle |\tilde{\delta n}(k)|^2 \rangle$ of the density field, where $\tilde{\delta n}(k)$ is the Fourier transform of $\delta n(x)$. By studying the large distance scaling behavior of the integrated fluctuations of $n(x)$ [9], it is simple to show that, assuming $S(k) \sim k^\beta$ at small $k$, in $d$ dimensions $\mathcal{F}$ is a well defined statistical quantity (i.e., its value does not depend on the way in which the infinite volume limit is taken) for $\alpha > (d - \beta)/2$ if $\beta < 1$, and for $\alpha > (d - 1)/2$ if $\beta \ge 1$ (see also the discussion in Appendix II on the definiteness of the force $\mathcal{F}$ respectively in the shuffled lattice and in the homogeneous Poisson particle distributions). Note finally that, taking the infinite volume limit symmetrically with respect to the point $x$ at which the force is calculated, the background gives a zero net force at the point $x$. Therefore the value of $\mathcal{F}$ obtained by calculating Eq. (1) with the infinite volume limit taken symmetrically with respect to the point $x$ gives automatically the well defined value obtained by Eq. (2) (i.e., subtracting the effect of the background). This preliminary discussion of the statistical definiteness of the force $\mathcal{F}$ is useful to justify the symmetrical way in which we take the infinite volume limit in the one-dimensional shuffled lattice case we analyze in the rest of the paper. The statistical properties of $\mathcal{F}$ we will find in this peculiar way (no background and symmetrical limit) coincide with those of the case in which the effect of a negative background is considered independently of the way in which the infinite volume limit is taken, which therefore can be considered the real physical case. Moreover, from the above considerations we deduce that, since in the shuffled lattice $S(k) \sim k^2$ at small $k$ (see [16]), the results are valid for all values $\alpha > 0$ in $d = 1$.

Let us take, therefore, a 1d regular chain of $2N + 1$ unitary mass particles with a lattice spacing $a > 0$ (we will eventually take the limit $N \to +\infty$), i.e., the position of the $n$th particle is $X_n = na$. Therefore the microscopic density can be written as

$$ n_{in}(x) = \sum_{n=-N}^{N} \delta(x - na). $$

Clearly the average density of particles in the system is $n_0 = 1/a$. We now apply an uncorrelated displacement field (i.e., a random shuffling) to this system: a random displacement $U_n$ is applied to the generic $n$th particle independently of the other particles. This displacement field is completely characterized by the one-displacement probability density function (PDF) $p(u)$ (i.e., $\mathrm{Prob}(u \le U_n < u + du) = p(u)\,du$). After the application of the displacements the new microscopic density will be

$$ n(x) = \sum_{n=-N}^{N} \delta(x - na - u_n), \tag{3} $$

the $u_n$ being the realizations of the random variables $U_n$, all extracted from $p(u)$ independently of one another. We consider the case $p(u) = p(-u)$ for simplicity. Equation (3) says that the particle originally at $X_n = na$ will be, after the displacement, at $X_n = na + U_n$. For the analysis of spatial density correlations in such a system see [16]. Let us call $q_n(x)$ the PDF of the position of the $n$th particle (i.e., $\mathrm{Prob}(x \le x_n < x + dx) = q_n(x)\,dx$). Clearly it is given by

$$ q_n(x) \equiv p(x - na). $$

Let us now assume, as above, that the $n$th particle creates a force field at the point $x$ of the type

$$ f_n(x) = C \frac{X_n - x}{|X_n - x|^{\alpha+1}} $$

with $\alpha > 0$ and $C$ a constant. Therefore the total stochastic field generated at a generic point $x$ of space by all the system particles is

$$ \mathcal{F}(x) = C \sum_{n=-N}^{N} \frac{X_n - x}{|X_n - x|^{\alpha+1}}. \tag{4} $$

Note that it is a sum of random variables. Let us call $W_0(F)$ its PDF; it will be given by

$$ W_0(F) = \int\cdots\int_{-\infty}^{\infty} \left[ \prod_{n=-N}^{N} dx_n\, p(x_n - na) \right] \delta\!\left( F - C \sum_{n=-N}^{N} \frac{x_n - x}{|x_n - x|^{\alpha+1}} \right). $$

It is immediate to see that $\mathcal{F}(x)$ is the sum of the independent random variables $C (x_n - x)/|x_n - x|^{\alpha+1}$. However, as the PDFs $q_n(x)$ change with $n$, these variables are not identically distributed. As shown below, this, together with the fact that $\mathcal{F}$ needs no normalization in $N$ to be well defined in the large $N$ limit, is the reason why we do not obtain in general an exact Gaussian or Lévy limit [17, 18] for $W_0(F)$. In order to study the asymptotic behavior in $F$ and $N$, it is usual to introduce the so-called characteristic function of $\mathcal{F}$, i.e., the Fourier transform (FT) of $W_0(F)$:

$$ \hat{W}_0(k) \equiv \int_{-\infty}^{\infty} dF\, W_0(F)\, e^{ikF} = \prod_{n=-N}^{N} \int_{-\infty}^{\infty} dy\, p(y - na) \exp\!\left( iCk\, \frac{y - x}{|y - x|^{\alpha+1}} \right). \tag{5} $$

By studying the small-$k$ behavior of the single integrals in Eq. (5) and taking appropriately the limit $N \to \infty$, we can deduce the moments and the large $F$ behavior of $W_0(F)$.

However, we will study a slightly different and more difficult problem, which is of particular interest if we want to study the dynamics of the system particles under the effect of only this mutual force. We study directly the statistical properties of the stochastic force acting on one generic system particle. In particular we calculate the total force acting on the particle initially located at the origin of space and displaced to $X_0 = U_0$:

$$ \mathcal{F} = \sum_{n \neq 0}^{-N,N} f_n(X_0) = C \sum_{n \neq 0}^{-N,N} \frac{X_n - X_0}{|X_n - X_0|^{\alpha+1}}. \tag{6} $$

In this way, taking the limit $N \to \infty$, we get the symmetric infinite volume limit, with no negative background, of Eq. (1) with $x = X_0$, which is, as explained in the first paragraph of this section, equal to Eq. (2) with an arbitrary way of taking the infinite volume limit, and therefore gives the force on any particle belonging to the system with the uniform balancing background. Note that now, because of the presence of the variable $X_0$ in each term of the sum (6), $\mathcal{F}$ is no longer a sum of independent terms. However, we show below how to reduce the problem to that of a sum of independent stochastic terms, by introducing the concept of a conditional probability density function. The solution of this conditional problem will also give the way to face the study of the first, unconditional case given by Eq. (4).

About $\mathcal{F}$ in Eq. (6), we want again to find the PDF $W(F)$ of the value $F$ of this force. As before, since the displacements applied to the particles are independent of one another, we have the exact relation

$$ W(F) = \int\cdots\int_{-\infty}^{+\infty} \left[ \prod_{n=-N}^{N} dx_n\, p(x_n - na) \right] \delta\!\left( F - C \sum_{n \neq 0}^{-N,N} \frac{x_n - x_0}{|x_n - x_0|^{\alpha+1}} \right), $$

which, through the simple change of variables $\Delta_n = x_n - x_0$ for $n \neq 0$, can be rewritten as

$$ W(F) = \int_{-\infty}^{+\infty} dx_0\, p(x_0) \int\cdots\int_{-\infty}^{+\infty} \prod_{n \neq 0} d\Delta_n\, p(\Delta_n + x_0 - na)\; \delta\!\left( F - C \sum_{n \neq 0}^{-N,N} \frac{\Delta_n}{|\Delta_n|^{\alpha+1}} \right). $$

Let us analyze the behavior of the conditional PDF $P(F; x_0)$, conditioned on the fact that the particle on which the force is evaluated is at $X_0 = x_0$:

$$ P(F; x_0) = \int\cdots\int_{-\infty}^{+\infty} \prod_{n \neq 0} d\Delta_n\, p(\Delta_n + x_0 - na)\; \delta\!\left( F - C \sum_{n \neq 0}^{-N,N} \frac{\Delta_n}{|\Delta_n|^{\alpha+1}} \right). \tag{7} $$

In this way, once $x_0$ is fixed, the total force $\mathcal{F}$ is the sum of the independent contributions

$$ f_n = C \frac{\Delta_n}{|\Delta_n|^{\alpha+1}}. \tag{8} $$

It is interesting to see in which cases $\mathcal{F}$ satisfies the central limit theorem. We will see that it never satisfies this theorem, even when its PDF is rapidly decreasing at large values. More precisely, we will see that even in this last case its PDF depends on the details of $p(u)$.

For the sake of simplicity of notation, let us assume in the rest of the paper that $C = 1$ (the repulsive case $C = -1$ will be trivially deduced from this). By performing the simple change of variables given by Eq. (8), it is possible to find the conditional (i.e., conditioned on $X_0 = x_0$) PDF $g_n(f; x_0)$ of the single stochastic force generated by the particle at $X_n$ on the particle fixed at $X_0 = x_0$:

$$ g_n(f; x_0) = \frac{|f|^{-1-1/\alpha}}{\alpha}\; p\!\left( \frac{f}{|f|^{1+1/\alpha}} + x_0 - na \right). \tag{9} $$

Clearly the forces $f_n$ so distributed are also independent of one another. The support of $g_n(f; x_0)$ can be simply deduced from that of $p(u)$.

III. LARGE F ANALYSIS AND LIMIT THEOREMS

Let us take the FT of Eq. (7) in order to evaluate the conditional characteristic function of the stochastic variable $\mathcal{F}$:

$$ \hat{P}(k; x_0) \equiv \int_{-\infty}^{+\infty} dF\, P(F; x_0)\, e^{ikF} = \prod_{n \neq 0}^{-N,N} \int_{-\infty}^{+\infty} dx\, p(x + x_0 - na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right). \tag{10} $$

Note that the quantity

$$ \int_{-\infty}^{+\infty} dx\, p(x + x_0 - na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) = \int_{-\infty}^{+\infty} df\, g_n(f; x_0)\, e^{ikf} \equiv \{ e^{ikf} \}_{n;x_0}, \tag{11} $$

where $g_n(f; x_0)$, given by Eq. (9), is the conditional PDF of the field $f$ felt by the particle at $x_0$ due to the particle at $x_n$, is the conditional characteristic function of this field $f$. Moreover $\{ a(f) \}_{n;x_0} = \int_{-\infty}^{+\infty} df\, g_n(f; x_0)\, a(f)$. It is important to note that, if $x_0 = 0$ (i.e., if the particle on which we calculate the force is stuck at the origin), the condition $p(u) = p(-u)$ on the displacements of any particle would imply

$$ \int_{-\infty}^{+\infty} dx\, p(x + na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) = \int_{-\infty}^{+\infty} dx\, p(x - na) \exp\!\left( -ik\, \frac{x}{|x|^{\alpha+1}} \right). $$

Consequently, if one fixes $x_0 = 0$, Eq. (10) becomes

$$ \hat{W}(k) = \prod_{n=1}^{N} \left| \int_{-\infty}^{+\infty} dx\, p(x - na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right|^2 . $$

However, the shift $x_0 \neq 0$ of the particle initially at the origin, on which we calculate the force, breaks this symmetry, which is anyway recovered when the average over $p(x_0)$ is performed to proceed from $P(F; x_0)$ to $W(F)$ (however, we will see that this is a further source of noise when we calculate explicitly the variance of the force $\mathcal{F}$).

In order to proceed with the analysis of the PDF of $\mathcal{F}$, we have to distinguish two basically different cases:

1. Non-overlapping condition (NOC): No particle can be found arbitrarily close to any other particle; i.e., the supports [20] of $p(u)$ and of $p(u - na)$, for all integers $n \neq 0$, have an empty overlap. The main case of physical interest in this class of displacement fields is when there exists $0 < u_0 < a/2$ such that $p(u) = 0$ for $|u| > u_0$.

2. Overlapping condition (OC): Particles can cross one another, and at least one pair of particles can be found arbitrarily close to each other; i.e., the supports of $p(u)$ and of $p(u - na)$, for at least one integer $n \neq 0$, have a non-zero overlap. The main case of physical interest in which this happens is when there exists $\epsilon > 0$ such that $p(u) > 0$ for all $|u| < a/2 + \epsilon$.

We will see that in the first case we obtain a rapidly decreasing $W(F)$, even though there is no constraint toward Gaussianity in the large $N$ limit, while in the second case we have a power law tailed $W(F)$, similar to that of the three-dimensional Holtzmark distribution [1].

IV. DETAILED ANALYSIS OF EQ. (10)

Let us analyze the single factor of Eq. (10) which, as aforementioned, is the FT of the conditional PDF $g_n(f; x_0)$ of the force felt by the particle at $X_0 = x_0$ due to only the particle at $X_n$:

$$ \int_{-\infty}^{+\infty} dx\, p(x + x_0 - na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) = \left\langle \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right\rangle_{n,x_0}, \tag{12} $$

where $\langle s(x) \rangle_{n,x_0}$ denotes the average of the function $s(x)$ over the shifted PDF $h_{n,x_0}(x) = p(x + x_0 - na)$. In practice, if we indicate simply $\langle s(u) \rangle = \int_{-\infty}^{+\infty} du\, p(u)\, s(u)$, then we can say that

$$ \langle s(x) \rangle_{n,x_0} = \langle s(u + na - x_0) \rangle . $$

We want to study Eq. (11) in the limit of small $k$. Similarly to what was pointed out in the previous section, the small-$k$ behavior of $\langle \exp(ik\, x/|x|^{\alpha+1}) \rangle_{n,x_0}$ is different in the two cases in which, as a consequence of the displacements, the pair of particles initially at $x = 0$ and $x = na$ cannot or can be found arbitrarily close to each other, i.e., respectively, if the supports of $p(u)$ and $p(u - na)$ have an empty or a non-zero overlap.

Let us start with case (i). If there exists $0 < u^* < |n|a/2$ such that $p(u) = 0$ for $|u| \ge u^*$, the exponent of $\exp(ik\, x/|x|^{\alpha+1})$ can take only limited values in the integral (12) (i.e., the support of $g_n(f; x_0)$ is restricted to a finite interval of values of $f$). Note that, in the given hypothesis (i), if $n > 0$ the quantity $x$ can take only strictly positive values, while if $n < 0$ it takes only strictly negative values. In this case, if $n > 0$, we can write

$$ \left\langle \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right\rangle_{n,x_0} = \int_{-\infty}^{+\infty} dx\, p(x + x_0 - na) \sum_{m=0}^{+\infty} \frac{(ik\, x^{-\alpha})^m}{m!} = \sum_{m=0}^{+\infty} \frac{(ik)^m}{m!} \left\langle x^{-\alpha m} \right\rangle_{n,x_0} = \sum_{m=0}^{+\infty} \frac{(ik)^m}{m!} \left\langle (u + na - x_0)^{-\alpha m} \right\rangle, \tag{13} $$

where

$$ \left\langle (u + na - x_0)^{-\alpha m} \right\rangle = \int_{-u^*}^{u^*} du\, p(u)\, (u - x_0 + na)^{-\alpha m}. $$

If instead $n < 0$, Eq. (13) becomes

$$ \left\langle \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right\rangle_{n,x_0} = \sum_{m=0}^{+\infty} \frac{(-ik)^m}{m!} \left\langle (-x)^{-\alpha m} \right\rangle_{n,x_0}. $$

In the overlapping case we can instead rewrite it as in Eq. (11):

$$ \left\langle \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right\rangle_{n;x_0} \equiv \int_{-\infty}^{+\infty} df\, g_n(f; x_0)\, e^{ikf} = \int_{-\infty}^{+\infty} dx\, p(x + x_0 - na) \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right). $$

Note that in this case, differently from the previous one, the support of $g_n(f; x_0)$ includes arbitrarily large values of $f$, for which, using Eq. (9), we have

$$ g_n(f; x_0) = \frac{p(x_0 - na)}{\alpha}\, |f|^{-(\alpha+1)/\alpha} + o\!\left( |f|^{-(\alpha+1)/\alpha} \right). $$

Let us call $M = [\alpha^{-1}]$ the integer part of $\alpha^{-1}$. By using the results presented in Appendix I we can conclude that

$$ \left\langle \exp\!\left( ik\, \frac{x}{|x|^{\alpha+1}} \right) \right\rangle_{n;x_0} = \sum_{m=0}^{M} \frac{(ik)^m}{m!} \left\langle (u + na - x_0)^{-\alpha m} \right\rangle + S_n(k; x_0), \tag{15} $$

where $S_n(k; x_0)$ contains all the terms of order higher than $M$, including the
singular part of the small k ex- = +∞ (−mik!)m (−u−na+x0)−αm , (14) kp.anBsyionusoinfgDeExqp.(cid:16)(3ik2)|x,|xαw+e1(cid:17)caEnn;fix0nawllhyicwhriisteo:f order 1/α in m=0 X (cid:10) (cid:11) S (k;x
)= (16) Note that, as p(u) = p( u), we have n 0 − ( u na+x0)−αm = (u na+x0)−αm . In (−1)(M+1)/2πp(x0−na) k1/α+o(k1/α) for odd M bho−th −cases i h − i αΓ[(α+1)/α]cos α−1−Mπ 2 (cid:16) (cid:17)
(cid:12)(cid:12)(cid:12)(cid:12)(cid:28)(cid:18)|x|xα+1(cid:19)m(cid:29)n,x0(cid:12)(cid:12)(cid:12)(cid:12)<(|n|a+2u∗)−αm Here,αfoΓr[((αs−+im11))Mp/lα/i2]cπsiitpny((cid:16)x,0αw−−en12−ah)
Maπv(cid:17)eke1x/cαlu+deod(kt1h/eα)cafsoerinevwenhiMch for any m(cid:12) 0, and therefor(cid:12)e the series in Eq. (13) ab- ≥ exactly M = 1/α for which we have logarithmic correc- solutely
converges. It is very important to note that, if tions in k to the above equations. u∗ <a/2 (i.e., no pair ofparticles canbe found arbitrar- ilyclosetooneeachother)allthe factorsinEq.(10)can be
represented as a Taylor series (13) to all orders m. V. FINDING W(F) As shown below in more detail, this implies that when u∗ < a/2, W(F) has all finite moments and therefore is rapidly decreasing at
large F. Atthispointwecangofurtherandclassifythepossible behaviors of P(F;x ) and W(F). In the secondcase (ii) instead 0<ǫ<a/2 suchthat, 0 at least u satisfying na/2 ǫ∃< u < na/2+ǫ, we Basicallywe
againdistinguish the following two cases: ∀ | | − | | | | have p(u) > 0. In this case the particle initially at na ± 1. ǫ>0 such that u >a/2 ǫ one has p(u)=0; and the particle initially at the origin
can be found ar- ∃ ∀| | − bitrarily close. This implies that the quantity x (i.e., 2. 0<ǫ<a/2suchthat,atleast usatisfyinga/2 u x +na) in (12) is permitted to take arbitrary small ∃ ∀ − 0 ǫ< u a/2+ǫ,
p(u)>0. − values up to zero, and therefore in the last expression | |≤ of Eq. (13) there would be an infinite number of diverg- ing terms of the last series. In other words the Taylor A. Case 1: fast
decreasing W(F) series sum in the second expression of Eq. (13) cannot be exchanged with the average operation ... , and h in,x0 In this case the system satisfies the NOC and all the we expect a
singular part in the small k expansion of factors in Eq. (10) can be expanded in the Taylor series the average exp ik x . In order to find it, we |x|α+1 (13) and (14) for all different n. n,x0 D
(cid:16) (cid:17)E 6 We can then write It is simple to see that +∞ 2 N Pˆ(k;x) dF P(F;x)eikF (17) F = (u x+na)−2α (19) ≡ 2 − u x ZN−∞ nX=1n(cid:10)(cid:10) (cid:11) (cid:11) = ei(u−x+kna)α e−i
(u+x+kna)α − (u−x+na)−α u (u+x+na)−α u x nY=1D ED E 1(cid:10),(cid:10)N (cid:11) (cid:10) (cid:11) (cid:11) o N +∞ (ik)m + (u x+na)−α (u+x+na)−α = (u x+na)−αm − u− u " m! − Xn<l(cid:10)(cid:2)
(cid:10) (cid:11) (cid:10) (cid:11) (cid:3) nY=1 mX=0 (cid:10) (cid:11) (u x+la)−α (u+x+la)−α , +∞( ik)l × − u− u x −l! (u+x+na)−αl , where f(cid:2)o(cid:10)r clarity we hav(cid:11)e red(cid:10)efined
(cid:11) (cid:3)(cid:11) # l=0 X (cid:10) (cid:11) u∗ a(u) = dua(u)p(u) where in the average ... over the displacement u we h iu −uu∗∗ hthaavtefurosemdEthqe. (s1y7m),mweetrhhyaipveroPpˆe(krt;y−px()u
=) =Pˆ†p((k−;xu)),.wNhoertee hhahb((xu),ixx)=iuRiRx−u=∗d−xuu∗a∗(x−)upu∗(∗xd)udxb(u,x)p(u)p(x). A† indicates the complex conjugate of A. By calling again u∗ <a/2 the maximal permitted dis- It is
matter of simplRe algeRbra to show that, for p(u) = p( u), Eq. (19) can be rewritten as placement for each particle (i.e., the support of p(u) is − included in [ u∗,u∗]), we can find Wˆ(k)= [W(F)] by
− F 2 N simply calculating the following average F = (u x+na)−2α (u x+na)−α 2 2 − u− − u x u∗ nX=1D(cid:10) (cid:11) (cid:10) (cid:11) E 1,N Wˆ(k)= dxp(x)Pˆ(k;x). (18) + (u x+na)−α [ (u x+la)−α Z−u∗
− u − u n,l X(cid:10)(cid:10) (cid:11) (cid:10) (cid:11) It is simple to verify that, as Pˆ(k; x) = Pˆ†(k;x) and (u+x+la)−α ] (20) − u x − p(u) = p(−u), the function Wˆ(k) is real and Wˆ(k) = We se
(cid:10)e that the force(cid:11)va(cid:11)riance is composed of two differ- Wˆ( k). TheTaylorexpansioninkofWˆ(k)isobtainable − ent contributions: the former, given by the first sum in from Eqs.(17) and
(18). Since it is a real function and Eq.(20),ismainlyduetothefluctuationsinthedisplace- is the characteristic function only even powers of k are ments u of all the sources of the force (in this term
the present. In particular the coefficient of the k2 term is averageover x is only a smoothing operation), while the 2/2 where −F latter, given by the second sum, is determined basically by the
fluctuations created by the stochastic displace- +∞ ment x of the particle initially in the originon which we h( )= dF W(F)h(F). F evaluate the force (in this term is the averagesover u to Z−∞ play a
role of simple smoothing). It is interesting and useful in applications to calculate Actually, rigorously speaking, we should show that all the coefficients of the Taylorexpansionof Wˆ(k) arecon- all
the above expressions by evaluating all the terms in the sums in Eq. (20) to the second order in (u x)/na. vergent to finite values in the limit N . It is simple ± to show it by expanding the terms
(→u ∞x +na)−αm In order to do this, we use the following second order 0 h ± i Taylor expansion for B A: of Eq. (17) in Taylor series of (u x0)/na for n 1 ≪ ± ≥ whichisjustifiedbythefactthatinthe
givenhypothesis −γ B u + x0 2u∗ <a, and considering that (A+B)−γ =A−γ 1+ | | | |≤ A (cid:18) (cid:19) +∞ (ik)m(na)−αm +∞(−ik)l(na)−αl =1, n 1. =A−γ 1 γB + γ(γ+1) B 2+o B 2 . m! × l! ∀ ≥ " − A 2
(cid:18)A(cid:19) (cid:18)A(cid:19) # m=0 l=0 X X From this, substituting respectively A with na and B Therefore we conclude that we can write Wˆ(k) in the with u x, we have that ± following form: (u
x+na)−γ ± +∞ 2n u x γ(γ+1) u x 2 Wˆ(k)= ( 1)n F k2n. (na)−γ 1 γ ± + ± . n=0 − (2n)! ≃ " − na 2 (cid:18) na (cid:19) # X 7 Moreover we have that (u x)2n+1 = 0 for any B. Case 2: power law tailed W(F)
± u x integer n due to the symmetry p(u) = p( u), while we have that (u x)2 (cid:10)(cid:10)= 2σ2 wher(cid:11)e(cid:11)σ−2 = u2 = Asshownabove,thisisthecaseinwhichtheOCissat- ± u x u −uu∗∗duu2p
(cid:10)((cid:10)u)isthe(cid:11)var(cid:11)ianceofthe singledispla(cid:10)cem(cid:11)ent. iisnfiiteida,l il.aet.t,icpearptoicslietsioanrsebpeeyromnidttethdetolimjuitmpofotuhteorfeltahteeidr Therefore
we can write R unitary cell in such a way to be found arbitrarily close σ2 (u x+na)−γ (na)−γ 1+γ(γ+1) , to some other particle. Note that this is always the case ± u x ≃ (na)2 when the support of p
(u) is unlimited, i.e., if p(u) > 0, (cid:20) (cid:21) a(cid:10)n(cid:10)d (cid:11) (cid:11) u IR. However the same kind of W(F) is also ob- ∀ ∈ tained if u∗ > a/2 such that p(u) > 0, u [ u∗,u∗] (u
x+na)−α 2 (na)−2α 1+α(3α+2) σ2 . and zero o∃utside. The difference between∀thes∈e t−wo sub- ± u x≃ (cid:20) (na)2(cid:21) cases is only in the amplitude of the power law tail of D(cid:10) (cid:11) E
Henceforth W(F) but not in its exponent. In general,if the particle (u x+na)−2α (u x+na)−α 2 initially at the lattice site x = na is permitted, through ± u− ± u x displacements, to be found
arbitrarily close to the par- D(cid:10) α2σ2 (cid:11) (cid:10) (cid:11) E ticle initially at x = 0, it will contribute to the product = . (10) through a factor of the type (15). If instead this is
(na)2(α+1) not permitted, it will contribute to (10) through a fac- Moreover tor of the form (13) or (14) depending respectively on (u x+na)−α (u x+la)−α whether n > 0 or n < 0. In any case if M = [1
/α], in − u ± u x order to find the main terms of the small k expansion of (cid:10)(cid:10)(na)−α(la)−α (cid:11) (cid:10) (cid:11) (cid:11) Wˆ(k) (so to determine the large F tail of W(F)), it is ≃
α2σ2 1 1 sufficienttotruncateallthesmallk expansionofthe dif- 1 +α(α+1)σ2 + ; × ∓ (la)(na) (na)2 (la)2 ferentfactorsinEq.(10)atmosttotheorderM+1. For (cid:20) (cid:18) (cid:19)(cid:21) the sake of
simplicity, let us limit the discussion to the from which case in which strictly αM < 1 in such a way to exclude (u x+na)−α [ (u x+la)−α (21) logarithmic corrections in k. We can write − u − u
(cid:10)(cid:10) (u+x+la)−α(cid:11) ](cid:10) 2α2σ2(cid:11) . Pˆ(k;x0) (24) − u x ≃ (la)α+1(na)α+1 (OC) M (ik)m x m (cid:10) (cid:11) (cid:11) +A(n,x ,α)k1/α Uobstinaignall these results in all the
terms of Eq. (20), we ≃ Yn "mX=0 m! (cid:28)(cid:18)|x|α+1(cid:19) (cid:29)n,x0 0 # 2 N 1 N 1 2 (NOC) M+1(ik)m x m , F2 ≃α2σ2nX=1(na)2(α+1) +2"nX=1(na)α+1# × Yl "mX=0 m! (cid:28)(cid:18)|x|α+1
(cid:19) (cid:29)l,x0# (22) where A(n,x ,α) is the coefficient of the term k1/α in 0 It is simple toverify that both sums in Eq. (22) arecon- Eq.(16),andthe firstproductonn is onthe pa∼rticlesin
vergingforN + forallα>0,forwhichwecanthen X withn=0whichcanbefoundarbitrarilyneartothe n → ∞ 6 rewrite particleinX (i.e., satisfyingthe OCwithrespecttothe 0 F2 α2σ2 particleinX0), while the
productonl is onthe particles ζ(2α+2)+2ζ2(α+1) , (23) in X with l =0 which have a positive minimal distance 2 ≃ a2(α+1) l 6 tothesameparticle(i.e.,satisfyingtheNOCwithrespect where ζ(t) is the Ri
(cid:2)emann zeta function (n(cid:3)ote that for to the particlein X ). If p(u)>0 u IR allthe system 0 t 1+ wee have ζ(t) 1/(t 1)). Again the first term particles with n=0 are included i∀n th∈e first
product. If is→due to the fluctuatio≃ns in t−he position of the sources, instead p(u) > 06 for u [ u∗,u∗] with u∗ > a/2 and while the second one is due to the fluctuations in the zerooutside, the
firstpro∈du−ctinclude only contributions position of the particle on which we are calculating the from the particles with 2u∗/a<n<2u∗/a and n=0, force. In particular In Eq. (22) the generic term of
the whiletheothersareinclu−dedinthe secondproduct.6The firstsumgivetherelativeweightofthenth nearestneigh- large F behavior of P(F;x ) and consequently of W(F) 0 bor particles in determining the force
on the particle in is completely determined by the singular term of order X0. At last we can say that, in the case of displace- k1/α in the smallk expansionof Pˆ(k;x0). It is simple to ments limited
within a box well contained in a unitary see that up to the order k1/α cell around the initial lattice position, we can approxi- M mate W(F) with a Gaussian PDF with zero mean and Pˆ(k;x )= c (x )
km+c (x )k1/α, variance given by Eq. (23). However, as already pointed 0 m 0 1/α 0 out, there is no constraint, in the limit N , toward mX=0 →∞ rigorousGaussianityandnon-Gaussiancorrectionsarein
where the the coefficients c (x ) can be deduced m 0 general present. by counting from Eq. (24) (in particular c (x ) = m 0 8 imFm(x0)/m!, where Fm(x0) = −+∞∞dF P(F;x0)Fm is VI. CONCLUSIONS the mth
moment of P(F;x ) and m M) and 0 R ≤ g g We havepresenteda detailedstudy of the PDF W(F) n6=0 of the stochastic force generated by a randomly per- c1/α = A(n;x0), turbed lattice of sourceFs of a
scale invariant attractive −2u∗/aX<n<2u∗/a pair interaction field f(x) = Cx/xα+1 with α > 0 at − | | where the formula includes also the case u∗ . The distance x from the source. → ∞ small k expansion
up to the order k1/α of W(F) will be In general we distinguish two cases: consequently 1. The NOC is satisfied and no pair of particles can M befoundatanarbitrarilysmallreciprocaldistance; Wˆ(k) b
km+b k1/α, m 1/α ≃ m=0 2. TheOCissatisfiedanditexistsatleastoneofsuch X pairs of particles. where +∞ im In the first case we have a fast decreasing W(F) similar b = dx p(x )c (x )= m with m M m 0 0 m 0
m!F ≤ toaGaussianPDFatlargeF,eventhoughnoconstraint Z−∞ towardanexactGaussiancentrallimittheoremisfound. +∞ b = dx p(x )c (x ), In the second case a power law tailed W(F) is found. 1/α 0 0 1/α 0 Z−∞
Theuniqueexponentofsuchpowerlawisdirectlyrelated to the pair interaction exponent α, while its amplitude where m = +∞dF W(F)Fm = F −∞ depends also on the lattice spacing a (with respect to −+∞∞dx0p
(x0)Fm(x0) ≡ FRm(x0) . It is possible the unit distance through which we measure x in f(x)) and in general on the shape of the perturbations PDF tRo evaluate explicitly b1/α bDy using EEq. (16). We
are nowgin the situatiogn to connect the singular p(u). InparticularinthiscaseW(F)hasapowerlawtail term b k1/α of the small k expansion of Wˆ(k) to the with the same exponent as the stable L´evy
distribution 1/α found in the Poisson case (see Appendix II) but with a large F tail of W(F) by using directly the arguments in reducedamplitude, eventhough,analogouslyto the case Appendix I. This
gives simply (i), no constrainthas been found towardthe stable L´evy W(F) BF−1−1/α distribution. ≃ Some further general considerations have can now be with done: 1 +∞ n6=0 B = dx p(x ) p(x na). (25)
Inthecaseinwhichtheprobabilityoffindingarbi- α 0 0 0− • Z−∞ −2u∗/aX<n<2u∗/a trarilyclosetooneeachother,thelargeF behavior ofW(F)isbasicallydeterminedbythe smallxbe- Note that if the support of p(x na)
is much larger 0 havior of f(x) and not at all by the the fact if it − thana andp(u)is smooth(i.e., approximatelyconstant) is long range or not. Therefore if we considered a on the scale a, we can
approximate Eq. (25) with fast decreasing f(x) but with the same divergence 1 p(0)a in x = 0 we would have deduced the same con- B = − . clusions about the exponent of the large F tail of αa W(F)
[21]. Note that this last approximated expression is not de- pendent on the details of p(u) for u = 0. Finally, we For this reason, even if we consider a lattice per- 6 • can observe that we have
obtained a power law tailed turbed by correlated displacements, we expect to W(F) characterizedby the same exponent of the case of obtain the partition into the two cases (i) and (ii)
ahomogeneousPoissonparticledistributionpresentedin above considered depending on the possibility or Appendix II. The only differences are the two following: not to find pair of particles arbitrarily
close. The amplitude of this power law tail is reduced • ActuallydifferentcasesforthelargeF tailofW(F) in the shuffled lattice with respect to that of the • between the Gaussian-like “fast decreasing”
and Poisson particle distribution, given by Eq. (37), of L´evy-like “power law” tailed PDF with exponent a factor β =(α+1)/αarepossibleinveryparticularcases. +∞ n6=0 These cases correspond to the
choice of p(u) such dx0p(x0) ap(x0 na) 1 p(0)a. that p(u) > 0 exactly for u [ a/2,a/2] and zero − ≃ − ∈ − Z−∞ −2u∗/aX<n<2u∗/a outside. By changing the limit behaviors of p(u) when u a/2 we can obtain
different large F Intheshuffledlatticewehavethispowerlawtailfor behavior→sof±W(F). Inparticularifp(a/2)>0and • each α > 0, while in the Poisson case the problem finite(weconsiderp(u)=p( u))wehavethesame
is not well defined for α 1/2 (see Appendix II). case as (ii) described above−with β =(α+1)/α. If ≤ 9 instead p(a/2) = 0, depending on the behavior of Using the general relation valid for 0<β <1 p(u)
foru (a/2)− wewillhavedifferentvaluesof β but in g→eneral larger than (α+1)/α. If, finally, ∞ qβ π dq = , (29) pw(aay/2in)t=egr+a∞ble()ininsugecnhearawlawyetohbattapi(nuβ)rsemmaallienrstahnayn- Z0 1+q2
2cos β2π (α+1)/α (but always > 1 so that W(F) remains (cid:16) (cid:17) we can conclude integrable). Bπ hˆ(k)= kβ−1+o(kβ−1). (30) Appendix I: Fourier transform of power law tailed Γ(β)cos βπ 2 PDFs
(cid:16) (cid:17) If instead of using the integral representation (28) one We are interested in the small k behavior of the char- used acteristic function fˆ(k) of a given power law tailed PDF 1 ∞ f
(x) which for large x behaves as Ax−α with α > 1. x−β = dzzβ/2−1e−x2z, Let us call [α] = n ≥| |1 the integer p|a|rt of α. In this | | Γ β2 Z0 hypothesisfˆ(k)hasaregularTaylorexpansionuptothe (cid:16)
(cid:17) order n 1 followed by a singular term proportional to one should obtain the alternative expression containing kα−1: − only Gamma functions: +∞ n−1 (ik)m fˆ(k) dxf(x)eikx = xm+fˆs(k), (26)
B√πΓ 1−β ≡Z−∞ m=0 m! hˆ(k)= 2 kβ−1+o(kβ−1). X 2βΓ(cid:16)β (cid:17) where xm = +∞dxxmp(x) and fˆ(k) contains the sin- 2 −∞ s (cid:16) (cid:17) gular part ofRfˆ(k) and at small k is an infinitesimal of
Note that the coefficient of the term kβ−1 is real and order α 1 in k (if α is an integer it contains also loga- positive. Another important case is when the function − rithmic corrections). h(x) has
an odd non-integrable power law tail, i.e.: Now we show that effectively at sufficiently small k, fˆs(k) ∼ Bkα−1 (where now a(k) ∼ b(k) means that h(x)=B|x|−β[2θ(x)−1]+h0(x), (31) lim [a(k)/b(k)] = 1)
giving an explicit expression for k→0 B as a function of both the amplitude A and the expo- where θ(x) is the usual Heaviside step function, B > 0, nent α. 0<β <1, andh (x) the same of Eq.(27). By
using the 0 First of all, let us study the case of a function h(x) same integral transformation leading to Eq. (30), we in that can be written as this case we obtain: h(x)=B|x|−β +h0(x), (27) hˆ(k)=i
Bπ kβ−1+o(kβ−1). where B > 0, 0 < β < 1 and h (x) is a smooth func- Γ(β)sin βπ 0 2 tion, integrable in x = 0 and such that xβh (x) 0 0 (cid:16) (cid:17) → for x . This means that h(x) presents an
even At this point we can go back to the problem of finding | | → ∞ power law tail. In this case the small k behavior of the dominant small k contribution of the term fˆ(k) in hˆ(k)= +∞dxh(x)eikx is
completelydeterminedbythe Eq.(26)forthePDFf(x)decayingatlarge x asAsx−α. −∞ | | | | Fourier transform of B x−β, i.e.: Notethatnowwecannotapplydirectlytheargumentwe R | | have used for the above
function h(x). In fact if, from +∞ hˆ(k) B dxx−βeikx one side, also in this case we can write ∼ | | Z−∞ f(x)=Ax−α+f (x) In order to perform this Fourier transform we introduce | | 0 the integral
representation: with xαf (x) 0for x ,fromthe othersideα> 0 | | → | |→∞ x−β = 1 ∞dzzβ−1e−|x|z, (28) 1 (for definiteness of probability) and f0(x) contains a nonintegrablesingularityatx=
0suchthattocancelthe | | Γ(β) Z0 non integrable contribution of the Ax−α term at small whereΓ(β)istheEulerGammafunction. UsingEq.(28) x. Inordertocircumventthis difficul|ty|weintroducethe we can write:
function +∞ 1 ∞ +∞ dxx−βeikx = dzzβ−1 dxeikx−|x|z g(x)=xnf(x), | | Γ(β) Z−∞ Z0 Z−∞ 2kβ−1 ∞ qβ where n is the integer part of α. In this way g(x) is = dq . Γ(β) 1+q2 similar to the function h(x) of
Eq. (27) if n is even and Z0 10 totheh(x)ofEq.(31)ifnisodd. Therefore,bydefining Let us now study the characteristic function as usual gˆ(k)= +∞dxg(x)eikx, we can say that −∞ +∞ Wˆ (k)= [W (F)]= dF W
(F)eikF . R Aπ P F P P gˆ(k)= kα−n−1+o(kα−n−1) Z−∞ Γ(α n)cos α−nπ − 2 By taking the FT of Eq. (34) we obtain if n is even, and (cid:0) (cid:1) N L/2 dx x gˆ(k)=i Aπ kα−n−1+o(kα−n−1) WˆP(k)= L exp ik
xα+1 Γ(α n)sin α−nπ "Z−L/2 (cid:18) | | (cid:19)# − 2 if n is odd. Now in orde(cid:0)r to fin(cid:1)d the singular part fˆ(k) By adding and subtracting 1 inside the square brackets, s and taking the
limit L with N =ρ L we arrive at offˆ(k) it is sufficientto integraten times gˆ(k) (the inte- →∞ 0 the final expression: grationconstantsgivingriseto thefinite moments terms of fˆ(k) in Eq. (26)). In
this way we obtain +∞ t Wˆ (k)=exp ρ k1/α dt 1 exp i . fˆs(k) (32) P (cid:20)− 0 Z−∞ (cid:18) − (cid:18) |t|α+1(cid:19)(cid:19)(cid:21) (−1)n/2Aπ kα−1+o(kα−1) for even n Note that Γ(α)cos(α−nπ) = 2
+∞ t ∞ dt 1 exp i =2 dt 1 cos(t−α) Γ((−α1))s(inn+(1α)/−22nAππ)kα−1+o(kα−1) for odd n, Z−∞2 ∞(cid:18) − (cid:18) |t|α+1(cid:19)(cid:19) Z0 (cid:0) − (cid:1) where we have used the following
property of Gamma = α duu−1−1/α(1−cosu), function: Γ(x+1)=xΓ(x), hence[(α 1)...(α n)Γ(α Z0 n)]=Γ(α). Note that in both case th−e coeffici−ent of th−e where the last passage is due to the change of
variable term kα−1 is real. t−α = u. Let us now use, as in the previous appendix, the integral representation: 1 ∞ Appendix II: Force PDF in a Poisson particle u−1−1/α = dzz1/αe−uz. distribution Γ
αα+1 Z0 Through this transform(cid:0)ation(cid:1) we arrive finally to the re- Let us consider the case in which the particles are dis- lation tributed on the line interval ( L/2,L/2]of length L fol-
− lowing a spatially stationary Poisson process with aver- +∞ t dt 1 exp i age density ρ0 > 0. We want to know the PDF WP(F) − tα+1 of the field Z−∞ (cid:18) (cid:18) | | (cid:19)(cid:19) 2 ∞ z−1+1/α
N = dz . (35) F = xi (33) αΓ αα+1 Z0 1+z2 x α+1 i Xi=1 | | Note that the last in(cid:0)tegra(cid:1)l is diverging for α 1/2, in- ≤ generated at the origin of the space by all the N system dicating
that the problem is not well defined for these particles(wecanconsiderN =ρ Lasthefluctuationsof valuesofαasF isnotawelldefinedstochasticquantity. 0 order√ρ L,duetothePoissonstatistics,arecompletely This
means that the sum in Eq. (33) needs a L depen- 0 unimportant for this problem in the large L limit). We dent normalization to become a well defined stochastic will follow the procedure to find W (F)
in three dimen- variable. In fact, differently to the shuffled lattice case, P sions used by Chandrasekhar in [1] for the gravitational wherethetypicalmassfluctuationonregionsofsizeRis force.
Notethat,asthepositionsofdifferentparticlesare proportionalto R0, in the Poissoncase this is due to the uncorrelated,thejointPDFp(x ,...,x )ofthepositions fact that such fluctuation is proportional to
R1/2. The 1 N of the N system particles is simply: field due to the mass fluctuation in a sphere of radius R on the origin of the sphere is of order R−α for the shuf- N fled lattice and R−α+1/2 in the
Poisson point process. p (x ,...,x )= p(x ), N 1 N i This explains why for a shuffled lattice the problem is iY=1 well defined for any α > 0 and not only for α > 1/2. where p(x )=1/L. Therefore we can
write This also says that for α < 1/2, in order to have a well i defined stochastic field also in the Poisson case, we have L/2 N dx N x to divide the field in Eq. (33) by L−α+1/2 where L is i i WP(F)=Z
Z−L/2"i=1 L #δF −j=1 |xj|α+1 . dthimeesnyssitoenms,sfiozre.whTichhetshaemseamaregmumasesntfluccatnuabteiounsseadreinred- Y X (34) spectivelyproportionaltoR(d−1)/2intheshuffledlattice,
| {"url":"https://www.zlibrary.to/dl/scale-invariant-forces-in-1d-shuffled-lattices","timestamp":"2024-11-10T22:48:51Z","content_type":"text/html","content_length":"214341","record_id":"<urn:uuid:473a79a8-58ad-4904-ae80-87f6362d53da>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00585.warc.gz"}
Simulation of Photonic Quantum Computers Enhanced by Data-Flow Engines
by Peter Rakyta (ELTE), Ágoston Kaposi, Zoltán Kolarovszki, Tamás Kozsik (ELTE), and Zoltán Zimborás (Wigner)
We are at the start of an exciting era for quantum computing, in which we are learning to manipulate many quantum degrees of freedom in a controlled way. At this point, it is vital for us to develop
classical simulators of quantum computers to enable the study of new quantum protocols and algorithms. As the experiments move closer to realising quantum computers of sufficient complexity, their
work must be guided by an understanding of what tasks we can hope to perform. Within the framework of the Quantum Information National Laboratory of Hungary [L1], we have developed a highly efficient
photonic quantum computer simulator system, which is composed of Piquasso [L2], a flexible user-friendly general simulator of photonic quantum computers and of Piquasso Boost [L3], a high-performance
simulator software stack. We report on this software system’s performance in simulating the Boson Sampling protocol, focusing on Piquasso Boost and on its enhancement by a data-flow-engine-based permanent calculator device developed in collaboration with Maxeler Technologies [L4].
Experimental setups successfully demonstrating boson-sampling (BS) provide an important milestone in the race to build universal quantum computers. Since BS experiments rely on multiphoton
interference in linear optical interferometers, they do not have all the problem-solving ability of a universal quantum computer, but are suitable to solve some specific problems faster than today’s
machines. There is currently a quest to find a set of practically important problems that could be mapped to this family of sampling schemes. This is partially motivated by the reasoning of Scott
Aaronson showing the equivalence between searching and sampling problems [1].
The idea of Boson Sampling (BS) was introduced in the seminal work of Aaronson and Arkhipov [1]. They formulated a well-defined computational problem (sampling from the output distribution of n indistinguishable photons that interfere during their evolution through an interferometer network) that could provide a demonstration of a scalable quantum advantage over classical computers already with near-term photonic quantum devices. When scaling up the number of photons passing through the interferometer at the same time, it becomes difficult to calculate the distribution of
the output photons using conventional computers. The central mathematical problem to determine the probability of finding a given number of particles in the individual output modes of the
interferometer is to evaluate the permanent function of the unitary matrix U describing the physics working behind the n-port interferometer:
perm(U) = Σ_{σ∈S_n} Π_{i=1}^{n} U_{i,σ(i)} ,

where S_n labels the set of all permutations constructed from 1, 2, ..., n. In fact, the permanent function is inherently encoded in the nature of the quantum world via the fully symmetric wavefunctions used
to describe indistinguishable bosonic particles. Thus, the ability to efficiently calculate the permanent is the key ingredient in simulating BS, which is crucial for exploring its possible applications.
Currently the most efficient scalable approach to calculating the permanent of an n × n matrix A has a computational complexity of O(n²·2ⁿ), which can be further reduced to O(n·2ⁿ) if data recycling of intermediate results is implemented via Gray-code ordering. The Ryser and BB/FG (named after Balasubramanian–Bax–Franklin–Glynn) formulas follow quite different approaches to evaluating the permanent, which also results in quite different numerical properties [3].
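To make the scaling difference concrete, here is a minimal Python sketch (our own illustration, not the Piquasso Boost C++ code; the function names are invented here) contrasting the O(n!·n) evaluation straight from the definition with the O(n·2ⁿ) Gray-code-ordered BB/FG (Glynn) formula:

```python
from itertools import permutations
from math import prod

def permanent_naive(a):
    """Permanent straight from the definition: a sum over all n! permutations."""
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def permanent_glynn(a):
    """BB/FG (Glynn) formula with Gray-code recycling of column sums, O(n * 2^n).

    perm(A) = 2^(1-n) * sum over delta in {-1,+1}^n (delta_0 fixed to +1) of
              (prod_i delta_i) * prod_j (sum_i delta_i * A[i][j]).
    """
    n = len(a)
    if n == 0:
        return 1
    col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]  # delta = (+1,...,+1)
    sign, total = 1, prod(col_sums)
    g_prev = 0
    for k in range(1, 2 ** (n - 1)):
        g = k ^ (k >> 1)               # Gray code: consecutive codes differ in one bit
        i = (g ^ g_prev).bit_length()  # flipped bit b corresponds to row b+1
        g_prev = g
        delta_i = -1 if (g >> (i - 1)) & 1 else 1
        for j in range(n):             # recycle column sums instead of recomputing
            col_sums[j] += 2 * delta_i * a[i][j]
        sign = -sign                   # one delta flipped, so the delta product flips
        total += sign * prod(col_sums)
    return total / 2 ** (n - 1)
```

Both agree on small inputs, e.g. perm([[1, 2], [3, 4]]) = 1·4 + 2·3 = 10; recycling the column sums between consecutive Gray-code terms is exactly the data reuse that brings the cost down from O(n²·2ⁿ) to O(n·2ⁿ).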
In our work we designed a novel scalable recursive implementation to calculate the permanent via the BB/FG formula, following the idea of [4]. Our permanent algorithm has a computational complexity of O(n·2ⁿ), similar to the Gray-code-ordered algorithm of [3], but without the overhead of processing the logic associated with generating the auxiliary data needed for the Gray-code ordering. Instead, our algorithm relies on a recursive spawning of parallel tasks, maximising the amount of computational data recycled during the evaluation process. We compared the performance of our implementation, provided in the Piquasso Boost library, to that of the TheWalrus package [L5], which also provides parallelised C++ engines to evaluate the permanent function. Our results (see Figure 1) show the logarithm of the average execution time needed to calculate the permanent of n×n random unitary matrices as a function of the matrix size n.
Figure 1: Benchmark comparison of individual implementations to calculate the permanent of an n x n unitary matrix. For better illustration, the discrete points corresponding to the individual
matrices are connected by solid lines. The numbers associated with the individual implementations describe the speedup compared to the TheWalrus package at matrix size indicated by the vertical
dashed line.
Our implementation is four times faster than the TheWalrus code executed on 24 threads of an Intel Xeon Gold 6130 CPU. We also compared the numerical performance of the Piquasso Boost simulation framework to the benchmark of Ref. [3]. In this case our benchmark comes very close to the execution time achieved on a single node of the Tianhe-2 supercomputer, consisting of an Intel Xeon E5 processor with 48 threads and three Xeon Phi 31S1P cards with 684 threads in total. Our results indicate that the Piquasso package provides a high-performance simulation framework even on smaller hardware. Our recursive implementation scales well on shared-memory architectures; however, its scalability over distributed computational resources is limited [L6].
To efficiently perform permanent calculations on large-scale computing resources, we need an alternative approach. Here we report on a data-flow-engine (DFE) based permanent calculator device. We developed a full-fledged permanent calculator implementation on Xilinx Alveo U250 FPGA cards using high-level data-flow programming tools developed by Maxeler Technologies. The data-flow engines are driven by the CPU of the host machines, providing high scalability of our implementation over the MPI communication protocol: it is possible to divide the overall computational problem into chunks and distribute them over the nodes of a supercomputer cluster, just as in the case of the CPU implementation of [3].
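This chunking can be illustrated with a plain-Python sketch (illustrative only — the function names, chunk layout, and worker count are our own, and a real deployment would hand the index ranges to MPI ranks rather than a loop). The 2^(n−1) terms of the BB/FG (Glynn) permanent formula are independent once the Gray-code state is reconstructed at the start of each range, so partial sums can be computed on separate nodes and reduced at the end:

```python
from math import prod

def glynn_chunk(a, start, stop):
    """Partial BB/FG (Glynn) sum over Gray-code indices [start, stop).

    Chunks are independent, so each can run on a different node and the
    partial results can be summed afterwards (e.g. with an MPI reduction).
    """
    n = len(a)
    g = start ^ (start >> 1)  # Gray code of the first index in the range
    # delta for row 0 is fixed to +1; rows 1..n-1 follow the Gray-code bits
    delta = [1] + [-1 if (g >> b) & 1 else 1 for b in range(n - 1)]
    col_sums = [sum(delta[i] * a[i][j] for i in range(n)) for j in range(n)]
    sign = -1 if bin(g).count("1") % 2 else 1  # product of the deltas
    total = sign * prod(col_sums)
    for k in range(start + 1, stop):
        gk = k ^ (k >> 1)
        i = (gk ^ g).bit_length()  # flipped bit b corresponds to row b+1
        g = gk
        delta[i] = -delta[i]
        for j in range(n):         # recycle the column sums within the chunk
            col_sums[j] += 2 * delta[i] * a[i][j]
        sign = -sign
        total += sign * prod(col_sums)
    return total

def permanent_chunked(a, n_chunks=4):
    """perm(A) assembled from independent chunk contributions."""
    m = 2 ** (len(a) - 1)
    bounds = [m * w // n_chunks for w in range(n_chunks + 1)]
    return sum(glynn_chunk(a, lo, hi)
               for lo, hi in zip(bounds, bounds[1:]) if lo < hi) / m
```

Because glynn_chunk only needs the matrix and an index range, each worker can evaluate its own range and a single reduction of the partial sums recovers the permanent.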
Our FPGA-based DFEs have several advantages over CPU and GPU technologies. Probably the most important aspect of FPGAs is the possibility of hardware support for arithmetic operations exceeding the precision of the 64-bit arithmetic units of CPU and GPU hardware. In practical situations the permanents of the unitary matrices describing photonic interferometers are typically much smaller in magnitude than the individual elements of the input matrix. In this situation one needs to increase the numerical precision of the calculations to obtain a result that can be trusted. In fact, the implementations in the TheWalrus package and the Piquasso Boost library already use extended precision for floating-point operations on the CPU side to calculate the permanent. On the DFE side we used a 128-bit fixed-point number representation to perform the calculations, providing the most accurate result among the benchmarked implementations.
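The idea behind a wide fixed-point format can be emulated in a few lines of Python (illustrative only: the FRAC_BITS split below is our own choice, not the format implemented on the actual hardware). Arbitrary-precision integers make addition exact and confine multiplication error to the final rescaling shift, which is what protects a small permanent from cancellations among much larger intermediate terms:

```python
FRAC_BITS = 96  # assumed fraction width; the real DFE format is fixed in hardware

def to_fixed(x: float) -> int:
    """Encode a real number with FRAC_BITS fractional bits."""
    return round(x * (1 << FRAC_BITS))

def from_fixed(v: int) -> float:
    """Decode back to a float (this readout is where precision is finally lost)."""
    return v / (1 << FRAC_BITS)

def fixed_add(a: int, b: int) -> int:
    """Exact: Python integers never overflow."""
    return a + b

def fixed_mul(a: int, b: int) -> int:
    """Multiply and rescale; only the final shift discards low-order bits."""
    return (a * b) >> FRAC_BITS
```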
Figure 1 also shows the performance of our DFE permanent calculator compared to the previously discussed CPU implementations. We observe that on the DFE we can calculate the permanents of larger matrices
much faster than with CPU-based implementations, with a numerical precision exceeding them. Note that for smaller matrices the execution time is dominated by the data transfer through the PCI slot.
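The article does not reproduce any of the benchmarked implementations, but as a hedged illustration of what a plain CPU-side, double-precision reference computation of the permanent looks like, here is a small Python sketch using Ryser's inclusion-exclusion formula. It runs in O(2^n · n^2) time and is nowhere near the optimized Piquasso Boost or DFE code, but it is handy for checking results on small matrices:

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of a square matrix via Ryser's formula:

        perm(A) = (-1)^n * sum over column subsets S of
                  (-1)^{|S|} * prod_i sum_{j in S} a[i][j]

    Exponential time: only practical for small n.
    """
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity checks: perm([[1,2],[3,4]]) = 1*4 + 2*3 = 10,
# and the permanent of the n x n all-ones matrix is n!.
print(permanent_ryser([[1, 2], [3, 4]]))             # 10.0
print(permanent_ryser([[1] * 3 for _ in range(3)]))  # 6.0
```

As the article notes, for unitary matrices the true permanent can be far smaller than the individual matrix elements, which is exactly the regime where a fixed-precision sum like this one loses accuracy to cancellation.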
In our future work we aim to use our highly optimized Piquasso simulation framework to address computational problems related to BS and examine the possibility of using BS quantum devices to solve
real-world computational problems. Our efficient method to evaluate the permanent would be valuable in other research works as well, making the evaluation of the amplitudes of many-body bosonic
states faster and more reliable.
This project has received funding from the Ministry of Innovation and Technology and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of
[L1] https://qi.nemzetilabor.hu/
[L2] https://github.com/Budapest-Quantum-Computing-Group/piquasso
[L3] https://github.com/Budapest-Quantum-Computing-Group/piquassoboost
[L4] https://www.maxeler.com/
[L5] https://github.com/XanaduAI/thewalrus
[L6] https://arxiv.org/abs/2109.04528
[1] S. Aaronson, A. Arkhipov: “The computational complexity of linear optics”, in Proc. of STOC’11 https://dl.acm.org/doi/10.1145/1993636.1993682
[2] J. Wu, et al.: “A benchmark test of boson sampling on Tianhe-2 supercomputer”, National Science Review, Volume 5, Issue 5, September 2018, Pages 715–720, https://doi.org/10.1093/nsr/nwy079
Please contact:
Peter Rakyta
Eötvös Loránd University, Hungary
Question #d94de | Socratic
Distance traveled before coming to rest: $\approx 1066.67\ \text{m}$
Time before the train completely stops: $80\ \text{s}$
First, you need to look for the acceleration. You can make use of this formula:
$v^2 = u^2 + 2as$
• $v$ is the final velocity
• $u$ is the initial velocity
• $a$ is the acceleration
• $s$ is the distance traveled
Now let's take a look at our given:
$v = 48\ \text{km/h}$
$u = 96\ \text{km/h}$
$s = 800\ \text{m} = 0.8\ \text{km}$
Plugging these values into the equation:
[1] $v^2 = u^2 + 2as$
[2] $(48\ \text{km/h})^2 = (96\ \text{km/h})^2 + 2a(0.8\ \text{km})$
Now you just have to isolate $a$.
[3] $(48\ \text{km/h})^2 - (96\ \text{km/h})^2 = 2a(0.8\ \text{km})$
[4] $a = \dfrac{(48\ \text{km/h})^2 - (96\ \text{km/h})^2}{2(0.8\ \text{km})}$
Get your scientific calculator and solve for $a$.
[5] $a = -4320\ \text{km/h}^2$
Now that you know the acceleration, you can start working on the problem. To find the distance the train travels before stopping, we will again make use of the formula $v^2 = u^2 + 2as$.
This time, we will look for $s$.
[1] $v^2 = u^2 + 2as$
The train is at rest when its velocity is $0$. Therefore, we will use $0$ as the final velocity.
[2] $(0)^2 = (96\ \text{km/h})^2 + 2(-4320\ \text{km/h}^2)s$
Isolate $s$.
[3] $(0)^2 - (96\ \text{km/h})^2 = 2(-4320\ \text{km/h}^2)s$
[4] $s = \dfrac{-(96\ \text{km/h})^2}{2(-4320\ \text{km/h}^2)}$
Use a scientific calculator.
[5] $s = 1.0\overline{6}\ \text{km} = 1066.\overline{6}\ \text{m} \approx 1066.67\ \text{m}$
To find the time it takes for the train to make a complete stop, you can use this formula:
$v = u + at$
• $v$ is the final velocity
• $u$ is the initial velocity
• $a$ is the acceleration
• $t$ is the time
Plugging these values into the formula:
[1] $v = u + at$
[2] $0 = (96\ \text{km/h}) + (-4320\ \text{km/h}^2)t$
Isolate $t$.
[3] $t = \dfrac{-96\ \text{km/h}}{-4320\ \text{km/h}^2}$
Solve using a scientific calculator.
[4] $t = 0.0\overline{2}\ \text{h} = 80\ \text{s}$
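The arithmetic above is easy to double-check with a few lines of code. This sketch reproduces each step in km and hours and converts at the end:

```python
# Given values (km and km/h)
u = 96.0   # initial velocity, km/h
v = 48.0   # velocity after the first s = 0.8 km
s = 0.8    # distance over which the train slows, km

# v^2 = u^2 + 2as  ->  a = (v^2 - u^2) / (2s)
a = (v**2 - u**2) / (2 * s)
print(a)              # -4320.0 km/h^2

# Stopping distance: 0 = u^2 + 2*a*s_stop
s_stop = -u**2 / (2 * a)
print(s_stop * 1000)  # ~1066.67 m

# Stopping time: 0 = u + a*t
t = -u / a
print(t * 3600)       # 80.0 s
```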
Boundary Value Problems/Ordinary-Differential-Equations - Wikiversity
In mathematics, an ordinary differential equation (or ODE) is a relation that contains functions of only one independent variable, and one or more of its derivatives with respect to that variable.
A simple example is Newton's second law of motion, which leads to the differential equation
${\displaystyle m{\frac {d^{2}x(t)}{dt^{2}}}=F(x(t)),\,}$
for the motion of a particle of constant mass m. In general, the force F depends upon the position of the particle x(t) at time t, and thus the unknown function x(t) appears on both sides of the
differential equation, as is indicated in the notation F(x(t)).
Ordinary differential equations are distinguished from partial differential equations, which involve partial derivatives of several variables.
Ordinary differential equations arise in many different contexts including geometry, mechanics, astronomy and population modelling. Many famous mathematicians have studied differential equations and
contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Alexis Claude Clairaut, d'Alembert and Euler.
Much study has been devoted to the solution of ordinary differential equations. In the case where the equation is linear, it can be solved by analytical methods. Unfortunately, most of the
interesting differential equations are non-linear and, with a few exceptions, cannot be solved exactly. Approximate solutions are obtained using numerical methods (see numerical ordinary
differential equations).
The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.
Ordinary differential equation
Let y be an unknown function
${\displaystyle y:\mathbb {R} \to \mathbb {R} }$
in x with ${\displaystyle y^{(n)}}$ the n-th derivative of y, then an equation of the form
${\displaystyle F(x,y,y',\ \dots ,\ y^{(n-1)})=y^{(n)}}$
is called an ordinary differential equation (ODE) of order n; for vector valued functions,
${\displaystyle y:\mathbb {R} \to \mathbb {R} ^{m}}$ ,
it is called a system of ordinary differential equations of dimension m.
When a differential equation of order n has the form
${\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n)}\right)=0}$
it is called an implicit differential equation whereas the form
${\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n-1)}\right)=y^{(n)}}$
is called an explicit differential equation.
A differential equation not depending on x is called autonomous.
A differential equation is said to be linear if F can be written as a linear combination of the derivatives of y
${\displaystyle y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)}$
with a_i(x) and r(x) continuous functions in x. The function r(x) is called the source term; if r(x) = 0 then the linear differential equation is called homogeneous, otherwise it is called
non-homogeneous or inhomogeneous.
Given a differential equation
${\displaystyle F(x,y,y',\dots ,y^{(n)})=0}$
a function u: I ⊂ R → R is called the solution or integral curve for F, if u is n-times differentiable on I, and
${\displaystyle F(x,u,u',\ \dots ,\ u^{(n)})=0\quad x\in I.}$
Given two solutions u: J ⊂ R → R and v: I ⊂ R → R, u is called an extension of v if I ⊂ J and
${\displaystyle u(x)=v(x)\quad x\in I.\,}$
A solution which has no extension is called a global solution.
A general solution of an n-th order equation is a solution containing n arbitrary variables, corresponding to n constants of integration. A particular solution is derived from
the general solution by setting the constants to particular values, often chosen to fulfill given initial conditions or boundary conditions. A singular solution is a solution
that cannot be derived from the general solution.
Reduction to a first order system
Any differential equation of order n can be written as a system of n first-order differential equations. Given an explicit ordinary differential equation of order n and dimension 1,
${\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n-1)}\right)=y^{(n)}}$
we define a new family of unknown functions
${\displaystyle y_{i}:=y^{(i-1)},\quad i=1,\ldots ,n.}$
We can then rewrite the original differential equation as a system of differential equations with order 1 and dimension n.
${\displaystyle y_{1}^{'}=y_{2}}$
${\displaystyle y_{2}^{'}=y_{3}}$
${\displaystyle \vdots }$
${\displaystyle y_{n-1}^{'}=y_{n}}$
${\displaystyle y_{n}^{'}=F(x,y_{1},\dots ,y_{n}).}$
which can be written concisely in vector notation as
${\displaystyle \mathbf {y} ^{'}=\mathbf {F} (\mathbf {y} ,x)}$
${\displaystyle \mathbf {y} :=(y_{1},\ldots ,y_{n}).}$
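To make the reduction concrete, here is a short Python sketch (function names are my own) that rewrites the second-order equation y'' = −y as the first-order system y₁' = y₂, y₂' = −y₁ and integrates it with a classical fourth-order Runge–Kutta step. With y(0) = 0, y'(0) = 1 the exact solution is sin(x):

```python
import math

def f(x, y):
    """Right-hand side of the first-order system for y'' = -y:
    y1' = y2,  y2' = -y1."""
    y1, y2 = y
    return (y2, -y1)

def rk4(f, x0, y0, x1, steps=1000):
    """Classical 4th-order Runge-Kutta integration of y' = f(x, y)."""
    h = (x1 - x0) / steps
    x, y = x0, list(y0)
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

y1, y2 = rk4(f, 0.0, (0.0, 1.0), math.pi / 2)
print(y1)   # ~1.0  (sin(pi/2))
print(y2)   # ~0.0  (cos(pi/2))
```

Note that the integrator only ever sees the system form y' = f(x, y); this is why most numerical ODE solvers accept exactly this interface.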
Linear ordinary differential equations
A well understood particular class of differential equations is linear differential equations. We can always reduce an explicit linear differential equation of any order to a system of differential
equation of order 1
${\displaystyle y_{i}'(x)=\sum _{j=1}^{n}a_{i,j}(x)y_{j}+b_{i}(x)\,\mathrm {,} \quad i=1,\ldots ,n}$
which we can write concisely using matrix and vector notation as
${\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)+\mathbf {b} (x)}$
${\displaystyle \mathbf {y} (x):=(y_{1}(x),\ldots ,y_{n}(x))}$
${\displaystyle \mathbf {b} (x):=(b_{1}(x),\ldots ,b_{n}(x))}$
${\displaystyle \mathbf {A} (x):=(a_{i,j}(x))\,\mathrm {,} \quad i,j=1,\ldots ,n.}$
Homogeneous equations
The set of solutions for a system of homogeneous linear differential equations of order 1 and dimension n
${\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)}$
forms an n-dimensional vector space. Given a basis for this vector space ${\displaystyle \mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x)}$ , which is called a fundamental system, every solution ${\displaystyle \mathbf {s} (x)}$ can be written as
${\displaystyle \mathbf {s} (x)=\sum _{i=1}^{n}c_{i}\mathbf {z} _{i}(x).}$
The n × n matrix
${\displaystyle \mathbf {Z} (x):=(\mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x))}$
is called the fundamental matrix. In general there is no method to explicitly construct a fundamental system, but if one solution is known, d'Alembert reduction can be used to reduce the dimension of the
differential equation by one.
Nonhomogeneous equations
The set of solutions for a system of inhomogeneous linear differential equations of order 1 and dimension n
${\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)+\mathbf {b} (x)}$
can be constructed by finding the fundamental system ${\displaystyle \mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x)}$ to the corresponding homogeneous equation and one particular solution ${\displaystyle \mathbf {p} (x)}$ to the inhomogeneous equation. Every solution ${\displaystyle \mathbf {s} (x)}$ to the nonhomogeneous equation can then be written as
${\displaystyle \mathbf {s} (x)=\sum _{i=1}^{n}c_{i}\mathbf {z} _{i}(x)+\mathbf {p} (x).}$
A particular solution to the nonhomogeneous equation can be found by the method of undetermined coefficients or the method of variation of parameters.
Fundamental systems for homogeneous equations with constant coefficients
If a system of homogeneous linear differential equations has constant coefficients
${\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} \mathbf {y} (x)}$
then we can explicitly construct a fundamental system. The fundamental system can be written as a matrix differential equation
${\displaystyle \mathbf {Y} ^{'}=\mathbf {A} \mathbf {Y} }$
with solution as a matrix exponential
${\displaystyle e^{x\mathbf {A} }}$
which is a fundamental matrix for the original differential equation. To explicitly calculate this expression we first transform A into Jordan normal form
${\displaystyle e^{x\mathbf {A} }=e^{x\mathbf {C} ^{-1}\mathbf {J} \mathbf {C} }=\mathbf {C} ^{-1}e^{x\mathbf {J} }\mathbf {C} }$
and then evaluate the Jordan blocks
${\displaystyle J_{i}={\begin{bmatrix}\lambda _{i}&1&\;&\;\\\;&\ddots &\ddots &\;\\\;&\;&\ddots &1\\\;&\;&\;&\lambda _{i}\end{bmatrix}}}$
of J separately as
${\displaystyle e^{x\mathbf {J_{i}} }=e^{\lambda _{i}x}{\begin{bmatrix}1&x&{\frac {x^{2}}{2}}&\dots &{\frac {x^{n-1}}{(n-1)!}}\\\;&\ddots &\ddots &\ddots &\vdots \\\;&\;&\ddots &\ddots &{\frac {x^{2}}{2}}\\\;&\;&\;&\ddots &x\\\;&\;&\;&\;&1\end{bmatrix}}.}$
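The closed form for a Jordan block can be verified numerically. The sketch below (helper names are my own, no external libraries assumed) compares a truncated power series for e^{xJ} against the formula e^{λx}·[[1, x], [0, 1]] for a 2×2 Jordan block:

```python
import math

def mat_mul(a, b):
    """Plain matrix product for small square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp_series(a, terms=30):
    """Truncated power series e^A = sum_k A^k / k!  (fine for small ||A||)."""
    n = len(a)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, a)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

# 2x2 Jordan block J = [[lam, 1], [0, lam]], scaled by x
lam, x = 0.5, 0.8
xJ = [[x * lam, x * 1.0], [0.0, x * lam]]

series = mat_exp_series(xJ)
closed = [[math.exp(lam * x), math.exp(lam * x) * x],
          [0.0, math.exp(lam * x)]]
print(series)
print(closed)   # the two should agree to well below 1e-9
```

Real code would use a library routine (e.g. a scaling-and-squaring matrix exponential) rather than the raw series, which is only stable for matrices of small norm.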
Math Learning – Digital Promise
How do children develop an understanding of math concepts? How can educators support math learners?
Research suggests that the mind is hard‐wired to view the world numerically.[i] Children begin to understand and use numbers very early in life, and can distinguish between different quantities of
objects as early as six months of age.[ii] As they grow, children learn to count, to understand numbers as symbols for representing quantities, to discriminate between different quantities, and to
put things in numerical order.[iii] This numerical competence is the foundation for the development of more complex math skills.
Formal math activities like counting and using numbers rely in part on number sense, an innate human awareness of how numbers work. Number sense includes the intuitive ability to distinguish between
quantities, understand how numbers are related to each other, and perceive what happens when quantities are combined or separated (i.e. addition and subtraction).[iv] Students typically enter school
with a basic number sense that has been forming since infancy. Educators can build on students’ numerical understanding by exploring numbers in a variety of ways in the classroom. For instance,
research has shown that playing number board games can help students develop skills including number line estimation, magnitude comparison, numeral identification, and counting.[v]
Of course, math encompasses more than numbers. Children also learn key skills including measurement and comparison, pattern recognition, abstract and symbolic thinking, and spatial reasoning in math
class. While abstract math is often seen as advanced subject matter, research has shown that even elementary school students are capable of complex mathematical processes including algebraic reasoning.
The sections below highlight some key findings from the research on the learning and teaching of math.
The most familiar and well-documented math skill that children can learn at a young age is an understanding of numbers. Basic number skills — the understanding of numbers as symbols, the ability to
count and put things in numerical order, and the ability to discriminate between different quantities — build the foundation for more advanced math achievement. Children who have strong numerical
competence by kindergarten and first grade perform better on standardized math tests later in elementary school.[vii] Those who lack basic number skills can continue to fall behind their peers as
they progress through elementary and secondary math classes.[vii] Emerging research is also showing that conceptual aspects of fractions (including equal partitioning or “fair sharing”), algebra, and
geometry can be learned in the early grades, which can improve students’ later math performance.[viii]
Spatial ability is the capacity to visualize and mentally manipulate objects; for example, to imagine what an object would look like if it were rotated. Spatial ability plays an important role in
math learning, and is a strong predictor of a student’s achievement in math, science, and engineering.[ix] While some people naturally develop strong spatial ability, a large body of research
demonstrates that it can also be learned through activities ranging from video games, to blocks and puzzles, to instructor‐led spatial lessons.[x] Spatial training can even improve performance on
tasks that are not explicitly spatial.[xi] In one study, for instance, children who received training in the mental rotation of objects performed better on arithmetic exercises than those who did not
receive the training.[xii]
There are no innate differences between males and females in mathematical or spatial reasoning abilities.[xiii] However, a recent study showed that male and female 10th grade students had the same
confidence in handling tough concepts in English, but boys rated their own ability to tackle the toughest math problems much higher than girls even though they were getting the same grades. Math
confidence predicted whether students of both genders persisted in science, technology, engineering or math (STEM) courses in college, but boys with the highest belief in their own math abilities in
their senior year of high school had more than three times higher chance of majoring in a STEM field than a similar group of girls.[xiv] Some strategies, including pairing girls with female role
models in STEM fields and informal learning activities, like afterschool math clubs and science museum visits, have been effective in increasing female students’ interest, and persistence, in math
and other STEM subjects.[xv]
Both children and adults can experience anxiety around math in the classroom as well as the real world.[xvi] Teachers who feel math anxiety can unintentionally pass it on to their students. Research
has found a link between teachers’ math anxiety and the math anxiety and math confidence of their students; this link is particularly noticeable in female students.[xvii] Teachers can inadvertently
transmit negative attitudes about math either explicitly, by saying that they dislike numbers or that math is hard, or implicitly, by cutting math work short or treating it as a chore to be completed
in order to get to more “fun” learning activities. Studies show that training courses on the fundamentals of how to teach math, as opposed to how to do math, can reduce teachers’ math anxiety.[xviii]
Research has also found that dialogue around math concepts and discussions of mathematical reasoning in the classroom can reduce students’ math anxiety.[xix] When teachers focus on cultivating
positive mathematical mindsets in their students, students understand that they can work hard and improve at math; it is not about innate talent.[xx]
[xxi] However, recent research supports a more interactive approach that asks students to explain and elaborate on their mathematical reasoning rather than simply provide an answer or show
calculations.[xxii] There are many ways a student can tackle the same problem, and even end up with the same solution,[xxiii] so listening carefully to students’ explanations to tease apart the
mathematical reasoning behind a solution can help instructors identify, and directly address, any gaps in students’ mathematical thinking.[xxiv] Observant teachers who attend to the developing ideas
of students – by listening, questioning, and building a collaborative community of problem solvers – support meaningful math learning.[xxv]
[i] Cantlon, J. F., Libertus, M. E., Pinel, P., Dehaene, S., Brannon, E. M., & Pelphrey, K. A. (2009). The neural development of an abstract concept of number. Journal of Cognitive Neuroscience.
[ii] Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.) (2000). How People Learn: Brain, Mind, Experience, and School. Chapter 4, "How Children Learn."
[iii] Siegler, R. S. (2003). Implications of cognitive science research for mathematics education. In Kilpatrick, J., Martin, W. B., & Schifter, D. E. (Eds.), A research companion to principles and standards for school mathematics (pp. 219-233). Reston, VA: National Council of Teachers of Mathematics.
[iv] Jordan, N. C., Glutting, J., Ramineni, C., & Watkins, M. W. (2010). Validating a number sense screening tool for use in kindergarten and first grade: Prediction of mathematics proficiency in third grade. School Psychology Review, 39(2), 181.
[v] Ramani, G. B., Siegler, R. S., & Hitti, A. (2012). Taking it to the classroom: Number board games as a small group learning activity. Journal of Educational Psychology, 104(3), 661.
[vi] Bastable, V., & Schifter, D. (2008). Classroom stories: Examples of elementary students engaged in early algebra. In Kaput, J., Carraher, D., & Blanton, M. (Eds.), Algebra in the Early Grades (pp. 165-184). New York: Lawrence Erlbaum Associates and National Council of Teachers of Mathematics.
[vii] Jordan, N. C., Glutting, J., Ramineni, C., & Watkins, M. W. (2010). Validating a number sense screening tool for use in kindergarten and first grade: Prediction of mathematics proficiency. School Psychology Review. Geary, D. C. (2011). Cognitive predictors of achievement growth in mathematics: A 5-year longitudinal study. Developmental Psychology.
[viii] Gearhart, M., Saxe, G., Seltzer, M., Schlackman, J., Ching, C., Nasir, N., ... Sloan, T. (1999). Opportunities to learn fractions in elementary mathematics classrooms. Journal for Research in Mathematics Education, 30(3), 286-315. Bastable, V., & Schifter, D. (2008). Classroom stories: Examples of elementary students engaged in early algebra. In Kaput, J., Carraher, D., & Blanton, M. (Eds.), Algebra in the Early Grades (pp. 165-184). New York: Lawrence Erlbaum Associates and National Council of Teachers of Mathematics.
[ix] Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101(4), 817-835. doi:10.1037/a0016127. Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93(3), 604-614. doi:10.1037/0022-0663.93.3.604
[x] Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2013). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139(2), 352.
[xi] Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2013). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139(2), 352.
[xii] Cheng, Y. L., & Mix, K. S. (2014). Spatial training improves children's mathematics ability. Journal of Cognition and Development, 15(1), 2-11.
[xiii] Spelke, E. S. (2005). Sex differences in intrinsic aptitude for mathematics and science? A critical review. American Psychologist.
[xiv] Perez-Felkner, L., Nix, S., & Thomas, K. (2017). Gendered pathways: How mathematics ability beliefs shape secondary and postsecondary course and degree field choices. Frontiers in Psychology.
[xv] Weber, K. (2011). Role models and informal STEM-related activities positively impact female interest in STEM. Technology and Engineering Teacher, 71, 18-21.
[xvi] Beilock, S. L., & Willingham, D. T. (2014). Math anxiety: Can teachers help students reduce it? American Educator, Summer: 28-32.
[xvii] Beilock, S. L., Gunderson, E. A., Ramirez, G., & Levine, S. C. (2010). Female teachers' math anxiety affects girls' math achievement. Proceedings of the National Academy of Sciences, 107(5): 1860-1863.
[xviii] Beilock, S. L., & Willingham, D. T. (2014).
[xix] Walshaw, M., & Anthony, G. (2008). The teacher's role in classroom discourse: A review of recent research into mathematics classrooms. Review of Educational Research, 78(3): 516-551.
[xx] Dweck, C. (2008). Mindsets and math/science achievement. The Carnegie Corporation of New York–Institute for Advanced Study Commission on Mathematics and Science Education.
[xxi] Franke, M. L., Kazemi, E., & Battey, D. (2007). Mathematics teaching and classroom practice. Second Handbook of Research on Mathematics Teaching and Learning, 1, 225-256.
[xxii] Franke, M. L., Kazemi, E., & Battey, D. (2007). Mathematics teaching and classroom practice. Second Handbook of Research on Mathematics Teaching and Learning, 1, 225-256. https://www.air.org/sites/default/files/downloads/report/An-UpClose-Look-at-Student-Centered-Math-Teaching.pdf
[xxiii] Siegler, R. S. (2003). Implications of cognitive science research for mathematics education. In Kilpatrick, J., Martin, W. B., & Schifter, D. E. (Eds.), A research companion to principles and standards for school mathematics (pp. 219-233). Reston, VA: National Council of Teachers of Mathematics.
[xxiv] https://www.air.org/sites/default/files/downloads/report/An-UpClose-Look-at-Student-Centered-Math-Teaching.pdf
[xxv] Mueller, M., Yankelewitz, D., & Maher, C. (2014). Teachers promoting student mathematical reasoning. Investigations in Mathematics Learning, 7(2), 1-20.
definite integral problem HELP
I need an explanation for this Calculus question to help me study.
For this discussion, you will work in groups to visualize the area under a curve using the online graphing tool Desmos. You will use the Desmos applet for a definite integral. To get started:
Use the Definite Integral tool to explore the area under a curve. Complete the following.
• Replace the default equation with f(x) = 3x^5 – 2x^3.
• First set the limits as a = 1 and b = 2 . You may need to zoom in or out to see the graph and the shaded region.
• Then, experiment with a and b both being negative.
• Then, let a be negative and b be positive. Try using the slider and have a and b as decimals.
• Take a screenshot of one of your cases (including the right column of input information and the graph with the shaded region) to post into the discussion.
Post your step-by-step work and a screenshot of one of your cases.
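Away from Desmos, the first case (a = 1, b = 2) can be checked by hand: the antiderivative of f(x) = 3x^5 − 2x^3 is x^6/2 − x^4/2, so the exact area is (32 − 8) − (1/2 − 1/2) = 24. A short Python sketch (helper names are my own) confirms this with composite Simpson's rule:

```python
def f(x):
    """The assigned function f(x) = 3x^5 - 2x^3."""
    return 3 * x**5 - 2 * x**3

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

exact = (2**6 / 2 - 2**4 / 2) - (1**6 / 2 - 1**4 / 2)
print(exact)                  # 24.0
print(simpson(f, 1.0, 2.0))   # ~24.0
```

Trying a negative a (say a = −1, b = 1) the same way makes it clear why Desmos shades regions below the x-axis differently: the signed integral there is 0, since f is odd.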
PDA: Privacy-preserving Data Aggregation in Wireless Sensor Networks
Wenbo He
Xue Liu
Hoang Nguyen
Klara Nahrstedt
Tarek Abdelzaher
Department of Computer Science
University of Illinois at Urbana-Champaign
Champaign, IL, 61801, United States
Abstract— Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks research. In this paper, we present two privacy-preserving
data aggregation schemes for additive aggregation functions. The first scheme – Cluster-based Private Data Aggregation (CPDA) – leverages clustering protocol and algebraic properties of polynomials.
It has the advantage of incurring less communication overhead. The second scheme – Slice-Mix-AggRegaTe (SMART) – builds on slicing techniques and the associative property of addition. It has the
advantage of incurring less computation overhead. The goal of our work is to bridge the gap between collaborative data collection by wireless sensor networks and data privacy. We assess the two
schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. We present simulation results of our schemes and compare their performance to a typical data
aggregation scheme – TAG, where no data privacy protection is provided. Results show the efficacy and efficiency of our schemes. To the best of our knowledge, this paper is among the first on
privacy-preserving data aggregation in wireless sensor networks.
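The paper's full SMART protocol is not reproduced in this excerpt, but its core idea — each node slices its private value into random shares that sum to the value, keeps one share, and hands the rest to different peers before anything is summed — can be illustrated with a hedged, non-cryptographic Python sketch. The topology and helper names are my own, and real SMART additionally encrypts each slice in transit:

```python
import random

def smart_aggregate(private_values, num_slices=3, seed=42):
    """Toy illustration of SMART-style additive slicing.

    Each node splits its private value v into num_slices shares that
    sum to v, keeps one share, and sends the others to randomly chosen
    peers. Every node then reports only the sum of the shares it ends
    up holding, so no single report reveals any one private value, yet
    the global sum is preserved by the associativity of addition.
    """
    rng = random.Random(seed)
    n = len(private_values)
    held = [0.0] * n  # shares each node ends up holding

    for i, v in enumerate(private_values):
        # Split v into num_slices shares summing exactly to v.
        cuts = [rng.uniform(0, v) for _ in range(num_slices - 1)]
        shares = cuts + [v - sum(cuts)]
        held[i] += shares[0]            # keep one share locally
        for s in shares[1:]:            # send the rest to other nodes
            j = rng.choice([k for k in range(n) if k != i])
            held[j] += s

    # Each node reports held[i]; the sink sums the reports.
    return sum(held)

values = [10.0, 4.0, 7.5, 3.5]
print(smart_aggregate(values))  # ~25.0, the sum of the private values
```

The sketch shows why the scheme favors additive aggregates such as SUM and AVERAGE: correctness rests entirely on addition being order-independent, which does not hold for MAX/MIN.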
A wireless sensor network (WSN) is an ad-hoc network composed of small sensor nodes deployed in large numbers to sense the physical world. Wireless sensor networks have very broad application
prospects including both military and civilian usage. They include surveillance [1], tracking at critical facilities [2], or monitoring animal habitats [3]. Sensor networks have the potential to
radically change the way people observe and interact with their environment.
Sensors are usually resource-limited and power-constrained. They suffer from restricted computation, communication, and power resources. Sensors can provide fine-grained raw data. Alternatively, they
may need to collaborate on in-network processing to reduce the amount of raw data sent, thus conserving resources such as communication bandwidth and energy. We refer to such in-network processing
generically as
data aggregation. In many sensor network applications, the
designer is usually concerned with aggregate statistics such as
SUM, AVERAGE, or MAX/MIN of data readings over a certain
region or period. As a result, data aggregation in WSNs has received substantial attention.
As sensor network applications expand to include increasingly sensitive measurements of everyday life, preserving data privacy becomes an increasingly important concern. For example, a future
application might measure household details such as power and water usage, computing average trends and making local recommendations. Without providing proper privacy protection, such applications of WSNs will not be practical, since participating parties may not allow tracking of their private data.
In this paper, we discuss how to carry out privacy-preserving data aggregation in wireless sensor networks. In the following, we first elaborate on two specific motivating applications of using wireless sensor networks to carry out private data aggregation.
sensor network to carry out private data aggregation. 1) As alluded above, wireless sensors may be placed in houses to collect statistics about water and electricity consumption within a large
neighborhood. The aggre-gated population statistics may be useful for individual, business, and government agencies for resource planning purposes and usage advice. However, the readings of sensors
could reveal daily activities of a household, such as when all family members are gone or when someone is taking a shower (different water appliances have distinct signatures of consumption that can
reveal their identity). Hence we need a way to collect the aggregated sensor readings while at the same time preserve data privacy.
2) Future in-home floor sensors, collecting weight information, are used together with shoe-mounted sensors, collecting exercise-related information, in an obesity study to correlate exercise and weight loss. Aggregate statistics from those data are useful to agencies such as the Department of Health and Human Services, as well as insurance companies, for medical research and financial planning purposes. However, an individual's health data should be kept private and not be known to other people. From these data aggregation examples, we see why preserving the privacy of individual sensor readings while obtaining accurate aggregate statistics can be an important requirement. The protection of privacy also gives us add-on benefits including enhanced security. Consider the scenario in which an adversary compromises a portion of the sensor nodes: when there is no privacy protection, the compromised nodes can overhear the data messages and decrypt them to get sensitive information. However, with privacy protection, even if data are overheard and decrypted, it is still difficult for the adversary to recover sensitive information.
Consequently, providing a reasonable guideline on building systems that perform private data aggregation is desirable. It is well known that end-to-end data encryption can protect private communications between two parties (such as the data source and data sink), as long as the two parties agree on encryption keys. However, neither end-to-end encryption nor link-level encryption alone is a good candidate for private data aggregation, for two reasons:
1) If end-to-end communications are encrypted, the intermediate nodes cannot easily perform in-network processing to get aggregated results.
2) Even when data are encrypted at the link level, the other end of the communication is still able to decrypt them and obtain the private data. Hence privacy is violated.
Though research on privacy-preserving computation has been active in other domains, including cryptography and data mining, previously studied schemes are not readily applicable to private data aggregation in WSNs. Most of them are either not suitable for, or too computationally expensive to be used in, resource-constrained sensor networks, as we will discuss in detail in Section II.
In this paper, we present two privacy-preserving data aggregation schemes, called Cluster-based Private Data Aggregation (CPDA) and Slice-Mix-AggRegaTe (SMART) respectively, for additive aggregation functions in WSNs. The goal of our work is to bridge the gap between collaborative data aggregation and data privacy in wireless sensor networks. When there is no packet loss, in both CPDA and SMART, the sensor network can obtain a precise aggregation result while guaranteeing that no private sensor reading is released to other sensors. Observe that this is a stronger result than previously proposed protocols, which are able to compute approximate aggregates only (without violating privacy). Our presented schemes can be built on top of existing secure communication protocols. Therefore, both security and privacy are supported by the proposed data aggregation schemes.
In the CPDA scheme, sensor nodes are randomly formed into clusters. Within each cluster, our design leverages algebraic properties of polynomials to calculate the desired aggregate value. At the same time, it guarantees that no individual node knows the data values of other nodes. The intermediate aggregate values in each cluster will be further aggregated (along an aggregation tree) on their way to the data sink. In the SMART scheme, each node hides its private data by slicing it into pieces. It sends encrypted data slices to different intermediate aggregation nodes. After the pieces are received, intermediate nodes calculate intermediate aggregate values and further aggregate them to the sink. In both schemes, data privacy is preserved while aggregation is carried out.
We evaluate the two schemes in terms of efficacy of privacy preservation, communication overhead, and data aggregation accuracy, comparing them with TAG [4], a commonly used data aggregation scheme that provides no data privacy. Simulation results demonstrate the efficacy and efficiency of our schemes.
The rest of the paper is organized as follows. Section II summarizes the related work. Section III describes the model and requirements of privacy-preserving data aggregation in wireless sensor networks. Section IV presents our two algorithms for private data aggregation. Section V evaluates the proposed schemes. We summarize our findings and lay out future research directions in Section VI.
II. RELATED WORK
In typical wireless sensor networks, sensor nodes are usually resource-constrained and battery-limited. In order to save resources and energy, data must be aggregated to avoid overwhelming amounts of traffic in the network. There has been extensive work on data aggregation schemes in sensor networks, including [4], [5], [6], [7], [8], [9]. These efforts share the assumption that all sensors are trusted and all communications are secure. However, in reality, sensor networks are likely to be deployed in an untrusted environment, where links, for example, can be eavesdropped. An adversary may compromise cryptographic keys and manipulate the data.
Work presented in [10], [11], [12] investigates secure data aggregation schemes in the face of adversaries who try to tamper with nodes or steal information. Work presented in [13], [14] shows how to set up secret keys between sensor nodes to guarantee secure communications. For most existing secure data aggregation schemes, though, an intermediate aggregation node has to decrypt the received data, aggregate the data according to the corresponding aggregation function, and finally encrypt the aggregated result before forwarding it. This sequence is fairly expensive for data aggregation in sensor networks. To reduce computational overhead, Girao et al. [16] and Castelluccia et al. [17] propose using homomorphic encryption ciphers, which allow efficient aggregation of encrypted data without decryption at the intermediate nodes. Though these schemes are more efficient and can provide end-to-end privacy, they do not protect the private data of a node from being known by other neighboring or intermediate nodes. This is because when the neighboring or intermediate nodes know the encryption key, they can decrypt the private data. In contrast, the private data aggregation schemes we present in this paper guarantee that the private data of a sensor node is not released to any other nodes.
Privacy has also been studied in the data mining domain [18], [19], [20], [21]. Two major classes of schemes are used. The first class is based on data perturbation (randomization) techniques. In a data perturbation scheme, a random number drawn from a certain distribution is added to the private data. Given the distribution of the random perturbation, recovering the aggregated result is possible. At the same time, by using the randomized data to mask the private values, privacy is achieved. However, data perturbation techniques have the drawback that they do not yield accurate aggregation results. Furthermore, as shown by Kargupta et al. in [20] and by Huang et al. in [21], certain types of data perturbation might not preserve privacy well.
The second class of privacy-preserving data mining schemes [22], [23], [24] is based on Secure Multi-party Computation (SMC) techniques [25], [26], [27]. SMC deals with the problem of jointly computing a function over multi-party private inputs. SMC usually leverages public-key cryptography; hence SMC-based privacy-preserving data mining schemes are usually computationally expensive, which makes them inapplicable to resource-constrained wireless sensor networks.
As we will show in the rest of this paper, unlike previous privacy-preserving approaches, our new private data aggregation schemes have the following advantages: (1) they preserve data privacy such that individual sensor data is known only to its owner; (2) the aggregation result is accurate when there is no data loss; (3) they are more efficient and hence more suitable for resource-constrained wireless sensor networks.
III. MODEL AND BACKGROUND
A. Sensor Networks and the Data Aggregation Model
In this paper, a sensor network is modeled as a connected graph G(V, E), where sensor nodes are represented as the set of vertices V and wireless links as the set of edges E. The number of sensor nodes is defined as |V| = N.
A data aggregation function is defined as

$$y(t) \triangleq f(d_1(t), d_2(t), \dots, d_N(t)),$$

where d_i(t) is the individual sensor reading at time t for node i. Typical choices of f include sum, average, min, max, and count. If d_i (i = 1, ..., N) is given, the computation of y at a query server (data sink) is trivial. However, due to the large data traffic in sensor networks, bandwidth constraints on wireless links, and the large power consumption of packet transmission¹, data aggregation techniques are needed to save resources and power.
In this paper, we focus on additive aggregation functions, that is,

$$f(t) = \sum_{i=1}^{N} d_i(t).$$

It is worth noting that restricting attention to additive aggregation functions is not too restrictive, since many other aggregation functions, including average, count, variance, standard deviation, and any other moment of the measured data, can be reduced to the additive aggregation function sum [17].
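As a quick illustration of this reduction, the following sketch (our own example values, not from the paper) computes COUNT, AVERAGE, and VARIANCE purely from additive sums, which is all a sum-only protocol needs to provide:

```python
# Illustrative sketch (our example, not paper code): COUNT, AVERAGE, and
# VARIANCE all reduce to additive sums, so a sum-only aggregation
# protocol such as CPDA or SMART can compute them.
readings = [3.0, 5.0, 7.0, 9.0]          # hypothetical sensor readings d_i

n     = sum(1 for _ in readings)         # COUNT: a sum of ones
total = sum(readings)                    # SUM of readings
sq    = sum(d * d for d in readings)     # sum of squares (second moment)

average  = total / n
variance = sq / n - average ** 2         # E[d^2] - (E[d])^2

print(n, total, average, variance)       # 4 24.0 6.0 5.0
```

Each node would contribute the tuple (1, d_i, d_i²); the sink combines the three running sums and derives the non-additive statistics locally.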
B. Requirements of Private Data Aggregation
Protecting data privacy is a major concern in many wireless sensor network applications. The following criteria summarize the desirable characteristics of a private data aggregation scheme:
1) Privacy: Each node's data should be known only to itself. Furthermore, the private data aggregation scheme should be able to handle, to some extent, attacks on and collusion among compromised nodes. When a sensor network is under a malicious attack, some nodes may collude to uncover the private data of other node(s). Furthermore, wireless links may be eavesdropped by attackers to reveal private data. A good private data aggregation scheme should be robust to such attacks.
2) Efficiency: The goal of data aggregation is to reduce the number of messages transmitted within the sensor network, thus reducing resource and power usage. Data aggregation achieves bandwidth efficiency through in-network processing. In private data aggregation schemes, additional overhead is introduced to protect privacy; a good private data aggregation scheme should keep that overhead as small as possible.
3) Accuracy: An accurate aggregation of sensor data is desired, under the constraint that no other sensors should know the exact value of any individual sensor. Accuracy should be a criterion for evaluating the performance of private data aggregation schemes.
¹A Berkeley mote consumes approximately the same amount of energy to compute 800 instructions as it does to send a single bit of data [4].
C. Key Setup for Encryption
To set the context for our work, in this section we briefly review the random key distribution mechanism proposed in [13], on which our proposed schemes operate.
Security Assumptions and Key Setup:
In the new private data aggregation algorithms, CPDA and SMART, some messages are encrypted to prevent attackers from eavesdropping. Our schemes can be built on top of existing key distribution and encryption schemes in wireless sensor networks. Here, we briefly review the random key distribution mechanism proposed in [13], which we use in the design of our schemes.
In [13], key distribution consists of three phases: (1) key pre-distribution, (2) shared-key discovery, and (3) path-key establishment. In the pre-distribution phase, a large key pool of K keys and their corresponding identities are generated. For each sensor within the sensor network, k keys are randomly drawn from the key pool; these k keys form the sensor node's key ring. During the key-discovery phase, each sensor node finds out which neighbors share a common key with itself by exchanging discovery messages. If two neighboring nodes share a common key, then there is a secure link between them. In the path-key establishment phase, a path key is assigned to each pair of neighboring sensor nodes that do not share a common key but can be connected by two or more multi-hop secure links at the end of the shared-key discovery phase.
In the random key distribution mechanism mentioned above, the probability that any pair of nodes possesses at least one common key is:
$$p_{connect} = 1 - \frac{((K-k)!)^2}{(K-2k)!\,K!}. \quad (1)$$

Let p_overhear denote the probability that some other node can overhear a message encrypted with a given key, i.e., the probability that a third node possesses the same key. Therefore,

$$p_{overhear} = \frac{k}{K}. \quad (2)$$
The key distribution algorithm discussed above is efficient in terms of using a small number of keys to support secure communication in a large-scale sensor network, hence preventing eavesdropping. This is illustrated in the following numerical example.
Assume a key pool of size K = 10000 and a key ring size of k = 200. The probability that any pair of nodes can find a shared key in common is p_connect = 98.3% by Equation (1). In other words, the probability that a pair of nodes does not share a common key is 1.7%. Pairs that do not share a common key can use the path-key establishment procedure described above to establish a shared key. Once a pair of nodes selects a shared key, the probability that any other node owns the same key is p_overhear = k/K = 2%, which is small.
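These probabilities can be checked numerically. The sketch below (our own code, evaluating Equation (1) as reconstructed here) uses log-Gamma arithmetic so the factorials of large K never overflow:

```python
# Numeric check of the key-sharing probabilities, using lgamma so that
# the huge factorials in Equation (1) never materialize.
from math import lgamma, exp

def p_connect(K: int, k: int) -> float:
    # ln n! = lgamma(n + 1); p_no_share = ((K-k)!)^2 / ((K-2k)! K!)
    log_p_no_share = 2 * lgamma(K - k + 1) - lgamma(K - 2 * k + 1) - lgamma(K + 1)
    return 1.0 - exp(log_p_no_share)

K, k = 10000, 200
print(f"p_connect  = {p_connect(K, k):.3f}")   # ~0.983, as in the text
print(f"p_overhear = {k / K:.3f}")             # k / K, Equation (2)
```

The same function can be reused to explore other (K, k) choices, e.g. to see how fast connectivity degrades as the key ring shrinks.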
IV. PRIVATE DATA AGGREGATION PROTOCOLS
In this section, we present two private data aggregation protocols focusing on additive data aggregation. The first scheme, called Cluster-based Private Data Aggregation (CPDA), consists of three phases: cluster formation, calculation of the aggregate results within clusters, and cluster data aggregation. The second scheme is called "Slice-Mix-AggRegaTe (SMART)". In SMART, each node hides its private data by slicing the data and sending encrypted data slices to different aggregators. The aggregators then collect and forward data to a query server. When the server receives the aggregated data, it calculates the final aggregation result.
A. Cluster-based Private Data Aggregation (CPDA)
1) Formation of Clusters: The first step in CPDA is to
construct clusters to perform intermediate aggregations. We propose a distributed protocol for this purpose.
The cluster formation procedure is illustrated in Figure 1. A query server Q triggers a query by a HELLO message. Upon receiving the HELLO message, a sensor node elects itself as a cluster leader
with a probability pc, which is a preselected parameter for all nodes. If a node becomes a cluster leader, it will forward the HELLO message to its neighbors; otherwise, the node waits for a certain
period of time to get HELLO messages from its neighbors, then it decides to join one of the clusters by broadcasting a JOIN message. As this procedure goes on, multiple clusters are constructed.
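A toy simulation (our own sketch, not the paper's implementation) of this leader election and JOIN procedure might look like the following; the graph model and parameters are invented for illustration:

```python
# Toy simulation (our sketch) of CPDA cluster formation: each node
# self-elects as a leader with probability p_c; every non-leader joins
# a random neighboring leader, if one exists.
import random

random.seed(1)
N, d, p_c = 100, 8, 0.2
# Hypothetical neighbor lists for a random graph of average degree ~d
# (not symmetric; good enough for a sketch of the election logic).
neighbors = {i: random.sample([j for j in range(N) if j != i], d)
             for i in range(N)}

leaders = {i for i in range(N) if random.random() < p_c}   # self-election
clusters = {lead: [lead] for lead in leaders}
unclustered = 0
for i in range(N):
    if i in leaders:
        continue
    candidate_leaders = [j for j in neighbors[i] if j in leaders]
    if candidate_leaders:                        # broadcast a JOIN message
        clusters[random.choice(candidate_leaders)].append(i)
    else:
        unclustered += 1                         # would trigger merge/retry

sizes = sorted(len(members) for members in clusters.values())
print(f"{len(clusters)} clusters, sizes {sizes}, {unclustered} unclustered")
```

Nodes left without a neighboring leader correspond to the coverage problem discussed later for small p_c.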
2) Calculation within Clusters: The second step of CPDA is the intermediate aggregation within clusters. To simplify the discussion, we use a simple scenario where a cluster contains three members: A, B, and C, with a, b, and c representing the private data held by nodes A, B, and C, respectively. Let A be the cluster leader, and let B and C be cluster members. Our privacy-preserving aggregation protocol is based on the additive property of polynomials. Figure 2 illustrates the message exchange among the three nodes to obtain the desired sum without releasing individual private data.
First, nodes within a cluster share common (non-private) knowledge of three non-zero numbers, referred to as seeds x, y, and z, which are distinct from each other (as shown in Figure 2(1)). Then node A calculates:

$$v^A_A = a + r^A_1 x + r^A_2 x^2, \quad v^A_B = a + r^A_1 y + r^A_2 y^2, \quad v^A_C = a + r^A_1 z + r^A_2 z^2,$$
(a) Query server Q triggers a query by a HELLO message; a recipient of the HELLO message elects itself as a cluster leader randomly. (b) A and X become cluster leaders, so they broadcast the HELLO message to their neighbors. (c) Node E receives multiple HELLO messages, then E randomly selects one cluster to join. (d) Several clusters have been constructed and the aggregation tree of cluster leaders is formed.
Fig. 1. Formation of clusters
Fig. 2. Message exchange: the encrypted shares Enc(v^A_B, k_AB), Enc(v^A_C, k_AC), Enc(v^B_A, k_AB), Enc(v^B_C, k_BC), Enc(v^C_A, k_AC), and Enc(v^C_B, k_BC) are exchanged among A, B, and C.
where r^A_1 and r^A_2 are two random numbers generated by node A and known only to node A. Similarly, nodes B and C independently calculate v^B_A, v^B_B, v^B_C and v^C_A, v^C_B, v^C_C as:

$$\text{Node } B:\; v^B_A = b + r^B_1 x + r^B_2 x^2, \quad v^B_B = b + r^B_1 y + r^B_2 y^2, \quad v^B_C = b + r^B_1 z + r^B_2 z^2.$$
$$\text{Node } C:\; v^C_A = c + r^C_1 x + r^C_2 x^2, \quad v^C_B = c + r^C_1 y + r^C_2 y^2, \quad v^C_C = c + r^C_1 z + r^C_2 z^2.$$
Then node A encrypts v^A_B and sends it to B using the key shared between A and B; it also encrypts v^A_C and sends it to C using the key shared between A and C (Figure 2(2)). Similarly, node B encrypts and sends v^B_A to A and v^B_C to C; node C encrypts and sends v^C_A to A and v^C_B to B. When node A receives v^B_A and v^C_A, it has knowledge of v^A_A = a + r^A_1 x + r^A_2 x^2, v^B_A = b + r^B_1 x + r^B_2 x^2, and v^C_A = c + r^C_1 x + r^C_2 x^2. Next, node A calculates the assembled value F_A = v^A_A + v^B_A + v^C_A = (a + b + c) + r_1 x + r_2 x^2, where r_1 = r^A_1 + r^B_1 + r^C_1 and r_2 = r^A_2 + r^B_2 + r^C_2. Similarly, nodes B and C calculate their assembled values F_B = v^A_B + v^B_B + v^C_B = (a + b + c) + r_1 y + r_2 y^2 and F_C = v^A_C + v^B_C + v^C_C = (a + b + c) + r_1 z + r_2 z^2, respectively. Then nodes B and C broadcast F_B and F_C to the cluster leader A (Figure 2(3)). At this point, node A knows all the assembled values:

$$\begin{aligned} F_A &= v^A_A + v^B_A + v^C_A = (a + b + c) + r_1 x + r_2 x^2,\\ F_B &= v^A_B + v^B_B + v^C_B = (a + b + c) + r_1 y + r_2 y^2,\\ F_C &= v^A_C + v^B_C + v^C_C = (a + b + c) + r_1 z + r_2 z^2. \end{aligned} \quad (3)$$

Then the cluster leader A can deduce the aggregate value (a + b + c), because x, y, z, F_A, F_B, and F_C are all known to A.
Rewriting Equation (3) as

$$U = G^{-1} F, \quad (4)$$

where

$$G = \begin{pmatrix} 1 & x & x^2 \\ 1 & y & y^2 \\ 1 & z & z^2 \end{pmatrix}, \quad U = \begin{pmatrix} a + b + c \\ r_1 \\ r_2 \end{pmatrix}, \quad F = (F_A, F_B, F_C)^T,$$

the sum a + b + c is obtained as the first element of U. Note that G is of full rank, because x, y, and z are distinct numbers.
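To make the in-cluster computation concrete, here is a small worked example (our own numbers, chosen for illustration) of the share/assemble/solve steps, using exact rational arithmetic and Cramer's rule in place of a general matrix inverse:

```python
# Worked example (our numbers) of the CPDA calculation in a 3-node cluster.
from fractions import Fraction as F

a, b, c = 7, 11, 23                       # private readings of A, B, C
x, y, z = 1, 2, 3                         # distinct public seeds
rA = (4, 9); rB = (5, 2); rC = (8, 6)     # (r1, r2), secret to each node

def shares(v, r):
    # One node's shares: its polynomial evaluated at seeds x, y, z.
    return [v + r[0] * s + r[1] * s * s for s in (x, y, z)]

vA, vB, vC = shares(a, rA), shares(b, rB), shares(c, rC)
# Assembled values: each seed position sums the three nodes' shares.
FA = vA[0] + vB[0] + vC[0]
FB = vA[1] + vB[1] + vC[1]
FC = vA[2] + vB[2] + vC[2]

# Leader A solves U = G^{-1} F (Equation (4)); Cramer's rule on 3x3.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

G  = [[1, x, x * x], [1, y, y * y], [1, z, z * z]]
G0 = [[FA, x, x * x], [FB, y, y * y], [FC, z, z * z]]  # column 1 -> F
total = F(det3(G0), det3(G))              # first element of U

print(total)                              # a + b + c = 41
```

The random r terms cancel exactly, so the leader recovers the sum without ever seeing b or c individually.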
It is necessary to encrypt v^A_B, v^A_C, v^B_A, v^B_C, v^C_A, and v^C_B. For example, if node C overhears the value v^B_A, then C knows v^B_A, v^C_A, and F_A, so C can deduce v^A_A = F_A - v^B_A - v^C_A; it can then obtain a if x, v^A_A, v^A_B, and v^A_C are all known. However, if node A encrypts v^A_B and sends it to node B, then node C cannot get v^A_B. With only v^A_C, F_A, and x from node A, node C cannot deduce the value of a. However, if nodes B and C collude by releasing A's information (v^A_B and v^A_C) to each other, then A's data will be disclosed. To prevent such collusion, the cluster size should be large: in a cluster of size m, if fewer than (m - 1) nodes collude, the data will not be disclosed.
3) Cluster Data Aggregation: A common technique for data aggregation is to build a routing tree. We implement CPDA on top of the TAG (Tiny AGgregation) [4] protocol. Each cluster leader routes the sum derived within its cluster back towards the query server through a TAG routing tree rooted at the server.
Discussion of Parameter Selection in CPDA
In CPDA, a larger cluster size introduces a larger computational overhead (Equation (4)). However, a larger cluster size is preferred for the sake of improved privacy under node collusion attacks. In CPDA, we should guarantee a cluster size m >= 3. More generally, let us define m_c as the minimum cluster size; we should set m_c >= 3. Next, we discuss how to ensure that every cluster has a size of at least m_c, and how to tune the parameter p_c to reduce communication overhead in the cluster formation phase.
If a cluster C_i has a size smaller than m_c (|C_i| < m_c), the cluster leader of C_i needs to broadcast a "merge" request to join another cluster. In the following, we show that given a proper p_c, the percentage of clusters that need to merge is small, and the cluster size stays in a reasonable range.
We model the sensor network as a random network, with d the average degree of a node. If node i is the cluster leader of a cluster C_i, then the probability that a given neighbor of i joins C_i is

$$p_i = P(\text{a neighbor of } i \text{ joins } C_i) = (1 - p_c)\,\frac{1}{d p_c}, \quad (5)$$

where 1 - p_c is the probability that the neighbor is not the leader of another cluster; only in this case is the neighbor able to join C_i. A neighbor is surrounded by d p_c cluster leaders including i, so 1/(d p_c) is the probability that a non-leader neighbor of i joins C_i. The probability that cluster C_i has k members is:

$$P(|C_i| = k) = \binom{d}{k-1} p_i^{\,k-1} (1 - p_i)^{d-k+1}. \quad (6)$$

Therefore, the percentage of clusters that need to merge is given by:

$$P(|C_i| < m_c) = \sum_{k=1}^{m_c-1} P(|C_i| = k) = \sum_{k=0}^{m_c-2} \binom{d}{k} p_i^{\,k} (1 - p_i)^{d-k}. \quad (7)$$

Fig. 3. Distribution of cluster size (degree d = 20) for p_c = 1/4, 1/5, and 1/6
For a fixed network density, for example d = 20, Equation (7) gives P(|C_i| < 3) = 6.9% if p_c = 1/5, and P(|C_i| < 3) = 1.8% if p_c = 1/6. Figure 3 shows that the distribution of cluster size can be controlled by the parameter p_c without merging. From the local observation of any sensor node, the expected number of cluster leaders among a node and its neighbors is (d + 1) p_c. On the other hand, if we desire k nodes in each cluster, the desired number of leaders in such a neighborhood is (d + 1)/k. Therefore, if we target a cluster size of around k, we choose p_c = 1/k.
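The merge percentages can be reproduced from Equations (5) and (7). The sketch below is our own evaluation of the formulas as reconstructed here, so the numbers may differ slightly from the figures reported in the text:

```python
# Evaluating Equations (5) and (7): the fraction of clusters smaller
# than m_c = 3, for average degree d = 20 and several choices of p_c.
from math import comb

def p_small_cluster(d: int, p_c: float, m_c: int = 3) -> float:
    p_i = (1 - p_c) / (d * p_c)              # Equation (5)
    return sum(comb(d, k) * p_i**k * (1 - p_i)**(d - k)
               for k in range(m_c - 1))      # Equation (7)

for p_c in (1/4, 1/5, 1/6):
    print(f"p_c = {p_c:.3f}: P(|Ci| < 3) = {p_small_cluster(20, p_c):.1%}")
```

For p_c = 1/5 this yields roughly 6.9%, matching the value quoted above.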
B. Slice-Mix-AggRegaTe (SMART)
One drawback of the cluster-based protocol is the computational overhead of data aggregation within clusters (Equation (4)). In this section, we present a new scheme, SMART, which reduces computational overhead at the cost of slightly increased communication bandwidth consumption. As the name suggests, "Slice-Mix-AggRegaTe (SMART)" is a three-step scheme for privacy-preserving data aggregation.
Step 1 ("Slicing"): Each node i (i = 1, ..., N) randomly selects a set S_i of nodes (J = |S_i|) within h hops; for a dense WSN, we can take h = 1. Node i then slices its private data d_i randomly into J pieces (i.e., represents it as a sum of J numbers). One of the J pieces is kept at node i itself; the remaining J - 1 pieces are encrypted and sent to the nodes in the randomly selected set S_i. We denote by d_ij the piece of data sent from node i to node j; for nodes to which node i does not send any slice, d_ij = 0. The desired aggregate result can be expressed as

$$f = \sum_{i=1}^{N} d_i = \sum_{i=1}^{N} \sum_{j=1}^{N} d_{ij}, \quad (8)$$

where d_ij = 0 for all j not in S_i.
Step 2 ("Mixing"): When a node j receives an encrypted slice, it decrypts the data using the key it shares with the sender. Upon receiving the first slice, the node waits for a certain time, chosen to guarantee that all slices of this round of aggregation are received. Then it sums all the received slices:

$$r_j = \sum_{i} d_{ij}, \quad \text{where } d_{ij} = 0 \text{ for } j \notin S_i.$$
Step 3 ("Aggregation"): All nodes aggregate the data and send the result to the query server. As in the aggregation step of CPDA, the aggregation uses tree-based routing protocols. When a node has all its data slices, it forwards a message containing its sum to its parent, which in turn forwards the message along the tree. Eventually the aggregate reaches the root (query server). Since

$$\sum_{j=1}^{N} r_j = \sum_{j=1}^{N} \sum_{i=1}^{N} d_{ij} = \sum_{i=1}^{N} \sum_{j=1}^{N} d_{ij}, \quad (9)$$

the final value at the root is the aggregate of all sensor data, f, by Equations (8) and (9).
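Putting the three steps together, here is an end-to-end sketch of SMART (our illustration only: encryption, key management, and tree routing are omitted, and the node set, readings, and J are invented):

```python
# End-to-end sketch (ours) of SMART's slice/mix/aggregate steps.
import random

random.seed(7)
readings = {0: 12, 1: 5, 2: 9, 3: 20, 4: 3, 5: 8, 6: 11}   # N = 7
N, J = len(readings), 3

# Step 1 (Slicing): node i keeps one piece and sends J-1 pieces to a
# random set S_i of other nodes (the h-hop restriction is ignored here).
received = {i: [] for i in readings}
for i, d in readings.items():
    cuts = [random.randint(-50, 50) for _ in range(J - 1)]
    pieces = cuts + [d - sum(cuts)]        # J pieces that sum to d
    received[i].append(pieces[-1])         # d_ii kept locally
    S_i = random.sample([j for j in readings if j != i], J - 1)
    for j, piece in zip(S_i, pieces[:-1]):
        received[j].append(piece)          # Enc/Dec with pairwise key omitted

# Step 2 (Mixing): each node j sums what it holds: r_j = sum_i d_ij.
r = {j: sum(pieces) for j, pieces in received.items()}

# Step 3 (Aggregation): the routing tree forwards partial sums upward;
# the root ends up with sum_j r_j, which equals sum_i d_i.
print(sum(r.values()))                     # 68 = 12+5+9+20+3+8+11
```

No matter how the random cuts fall, the slice sums telescope back to the true total, which is exactly the identity in Equation (9).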
Figure 4 illustrates the three-step scheme of the SMART protocol for a sensor network with network size N = 7, slicing size J = 3, and hop length h = 1. In step 1 of SMART, sliced data should be encrypted, as in CPDA.
V. EVALUATION
In this section we evaluate the privacy-preserving data aggregation schemes presented in this paper, examining how they perform in terms of privacy preservation, efficiency, and aggregation accuracy. We use TAG [4], a typical data aggregation scheme, as the baseline. Since the design of TAG does not take privacy into consideration, it provides no data privacy protection; we use it only to evaluate efficiency and aggregation accuracy relative to our proposed schemes.
A. Privacy-preservation Efficacy
In order to evaluate the performance of privacy preservation, we first define the privacy metric. In wireless sensor networks, the private data of a sensor node s may be disclosed to others when attackers can eavesdrop on communication and/or collude.
(a) Slicing (J = 3, h = 1): d_ij (i != j) is encrypted and transmitted from node i to node j, where j is in S_i; d_ii is the data piece kept at node i. (b) Mixing: each node i decrypts all data pieces received and sums them, including the piece kept at itself (d_ii), as r_i. (c) Aggregation: no encryption is needed.
Fig. 4. Illustration of the three steps in SMART
That is, there are two cases that may lead to privacy violation: (1) an unauthorized sensor node holds a communication key and is able to decrypt messages sent from s; under our key distribution mechanism, the probability that an eavesdropper has the communication key used by s and one of its neighbors is p_overhear (Equation (2)). (2) Multiple neighbors of s collude to steal the private data collected by s; we assume the probability that any two nodes collude is p_collude.
For simplicity of derivation, let us define p_overhear = p_collude ≜ q, where q is interpreted as the probability that link-level privacy is broken. The privacy metric P(q) is defined as the probability that the private data of node s is disclosed, for a given q, under either of the conditions above. P(q) measures the privacy-preservation performance of a private data aggregation scheme.
1) Privacy-preservation Analysis of CPDA: In the CPDA scheme, private data may be disclosed to neighbors only when the sensor nodes exchange messages within the same cluster. Given a cluster of size m, a node sends m - 1 encrypted messages to the other m - 1 members of the cluster. Only if a node knows all m - 1 keys can it crack all the other members' private data. Consequently, P(q) is estimated as

$$\mathcal{P}(q) = \sum_{k=m_c}^{d_{max}} P(m = k)\left(1 - \left(1 - q^{\frac{(k-1)(k-2)}{2}}\right)^{k}\right), \quad (10)$$

where d_max is the maximum cluster size, m_c is the required minimum cluster size, and P(m = k) is the probability that a cluster has size k. As Figure 5 illustrates, an eavesdropper has to break all the dashed links to steal the other members' private data: in a cluster, either all or none of the private data becomes known to the eavesdropper. Assuming the probability that an eavesdropper breaks one dashed link is q, then q^((k-1)(k-2)/2) is the probability that a node can overhear all encrypted messages to the other members in a cluster of size k, and thus learn their private data.
Fig. 5. An eavesdropper has to break all the dashed links to steal all private data in a cluster; otherwise no private data is disclosed
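Equation (10) can be evaluated numerically. The sketch below is ours: it substitutes a hypothetical cluster-size distribution, a truncated and renormalized version of Equation (6), for the paper's simulated one, so the exact percentages are illustrative only:

```python
# Numeric evaluation (ours) of Equation (10) for CPDA, using a
# truncated/renormalized binomial cluster-size distribution from
# Equation (6) as a stand-in for the simulated distribution.
from math import comb

def cluster_size_pmf(d, p_c, m_c, d_max):
    p_i = (1 - p_c) / (d * p_c)                     # Equation (5)
    raw = {k: comb(d, k - 1) * p_i**(k - 1) * (1 - p_i)**(d - k + 1)
           for k in range(m_c, d_max + 1)}          # Equation (6)
    norm = sum(raw.values())
    return {k: v / norm for k, v in raw.items()}    # condition on m >= m_c

def cpda_disclosure(q, d=16, p_c=0.16, m_c=3, d_max=12):
    pmf = cluster_size_pmf(d, p_c, m_c, d_max)
    return sum(p * (1 - (1 - q**((k - 1) * (k - 2) / 2))**k)
               for k, p in pmf.items())             # Equation (10)

for q in (0.01, 0.05, 0.1):
    print(f"q = {q}: P(q) = {cpda_disclosure(q):.4%}")
```

As expected, the disclosure probability grows with q and shrinks as clusters get larger (smaller p_c).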
2) Privacy-preservation Analysis of SMART: In the SMART scheme, a sensor node s slices its private data into J pieces and then encrypts and sends J - 1 pieces to its neighbors, keeping one piece for itself. As a result, the out-degree of s is J - 1, and the in-degree of s is the number of neighbors that encrypt and send data pieces to s. Only if an eavesdropper breaks the J - 1 outgoing links and all incoming links of a node s will it be able to crack the private data held by s. Therefore, P(q) can be approximated by

$$\mathcal{P}(q) = q^{J-1} \sum_{k=0}^{d_{max}} P(\text{in-degree} = k)\, q^{k}, \quad (11)$$

where d_max is the maximum in-degree in the network and P(in-degree = k) is the probability that the in-degree of a node is k.
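Equation (11) can likewise be evaluated under an assumed in-degree distribution. The binomial model below, in which each of the other N - 1 nodes picks its J - 1 slice recipients uniformly at random and the h-hop restriction is ignored, is our simplification, not the paper's:

```python
# Numeric evaluation (ours) of Equation (11) for SMART, under an
# assumed binomial in-degree: each of the other N-1 nodes chooses
# J-1 recipients, so a node is picked with probability (J-1)/(N-1).
from math import comb

def smart_disclosure(q, J, N=1000, d_max=20):
    p = (J - 1) / (N - 1)                   # per-sender pick probability
    def in_deg(k):                          # binomial in-degree pmf
        return comb(N - 1, k) * p**k * (1 - p)**(N - 1 - k)
    return q**(J - 1) * sum(in_deg(k) * q**k for k in range(d_max + 1))

for J in (2, 3, 4):
    print(f"J = {J}: P(0.1) = {smart_disclosure(0.1, J):.6%}")
```

The q^(J-1) factor dominates: each extra slice multiplies the attacker's required luck by another factor of q, which is why larger J improves privacy in Figure 6(b).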
Figure 6 compares the privacy-preservation performance of CPDA and SMART via simulation of a 1000-node random network with an average node degree of 16. As we can see from Figure 6, for CPDA, the smaller the value of p_c (the probability of a node independently becoming a cluster leader), the larger the average cluster size, and hence the better the privacy-preservation performance. However, a larger cluster size also means a larger computational overhead for computing the intermediate aggregate value by Equation (4). In SMART, the larger the value of J (the number of slices into which each node decomposes its private data), the better the privacy achieved. However, a larger J also yields larger communication overhead. For both CPDA and SMART, there is a design tradeoff between privacy protection and computation/communication efficiency.
Fig. 6. P(q) under (a) CPDA (p_c = 0.1, 0.16, 0.2) and (b) SMART (J = 2, 3, 4); x-axis: q, the probability that link-level privacy is broken; y-axis: percentage of private data disclosed.
B. Communication Overhead
CPDA and SMART use data-hiding techniques and encrypted communication to protect data privacy, which introduces some communication overhead. In order to investigate the bandwidth efficiency of these schemes, we implemented CPDA and SMART in ns-2 on top of the data aggregation component of TAG. We ran extensive simulations and collected results to compare these two schemes together with TAG (which has no privacy protection). In our experiments, we consider networks with 600 sensor nodes, randomly deployed over a 400 m x 400 m area. The transmission range of a sensor node is 50 meters and the data rate is 1 Mbps.
At the beginning of each simulation, a query is delivered from the query server to the sensor nodes. As in TAG [4], the query specifies an epoch duration E, the amount of time allowed for the data aggregation procedure to finish. Upon receiving such a query, a parent node on the aggregation tree subdivides the epoch such that its children are required to deliver their data (protected data in CPDA and SMART, unprotected data in TAG) within this parent-defined time interval. Figure 7(a) shows the communication overhead of TAG, CPDA with p_c = 0.3, and SMART with J = 3 under different epoch durations. We use the total number of bytes of all packets communicated during the aggregation as the metric. Each point in the figure is the average of 50 simulation runs, each using a different randomly generated sensor network topology. The vertical line at each data point represents the 95% confidence interval of the collected data.
The simulation results can be explained by analyzing the number of messages exchanged in each scheme. In TAG, each node needs to send 2 messages for data aggregation: one Hello message to form the aggregation tree, and one message for data aggregation. In our implementation of CPDA, a cluster leader sends roughly 4 messages and each cluster member sends 3 messages for private data aggregation. Accordingly,
Fig. 7. Communication overhead (bytes) vs. epoch duration (seconds): (a) comparison of TAG, CPDA (p_c = 0.3), and SMART (J = 3); (b) communication overhead of CPDA with respect to p_c (p_c = 0.1, 0.2, 0.3); (c) communication overhead of SMART with respect to J (J = 2, 3, 4).
Fig. 8. Accuracy under collision and packet loss vs. epoch duration (seconds): (a) accuracy comparison of TAG, CPDA (p_c = 0.3), and SMART (J = 3); (b) accuracy of CPDA with respect to p_c (p_c = 0.1, 0.2, 0.3); (c) accuracy of SMART with respect to J (J = 2, 3, 4).
4pc + 3(1 − pc) = 3 + pc is the average number of messages sent by a node in CPDA. Thus, the message overhead of CPDA is less than twice that of TAG. SMART, with J = 3, needs to exchange 2 messages during the slicing step and 2 messages for data aggregation (the same as TAG). Hence, each node needs 4 messages for private data aggregation, so the overhead of SMART is double that of TAG.
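As a sanity check on the counting argument above, the per-node message counts can be written out directly. This is a small illustrative script, not part of the paper's simulation; the expressions 3 + pc for CPDA and J + 1 for SMART are taken from the surrounding text:

```python
TAG_MSGS = 2  # per node: one Hello message plus one aggregation message

def cpda_avg_msgs(pc: float) -> float:
    # Leaders send ~4 messages (a fraction pc of nodes),
    # members send ~3 (the remaining 1 - pc); simplifies to 3 + pc.
    return 4 * pc + 3 * (1 - pc)

def smart_msgs(J: int) -> int:
    # (J - 1) slicing messages, plus one tree-formation
    # and one aggregation message: roughly J + 1 in total.
    return (J - 1) + 2

print(cpda_avg_msgs(0.3))  # about 3.3, less than twice TAG's 2 messages
print(smart_msgs(3))       # 4, i.e. exactly double TAG
```

The script confirms the two claims in the text: CPDA's overhead stays below 2× TAG for any pc < 1, and SMART with J = 3 is exactly 2× TAG.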
Now let us further study the effect of pc on the communication overhead in CPDA. Figure 7(b) shows the results with pc = 0.1, 0.2, and 0.3, respectively. As we can see, the larger the pc value, the larger the communication overhead. It is interesting to notice that when pc = 0.1, the communication overhead is much lower than that of TAG. This is because when pc is too small, many nodes cannot be covered due to an insufficient number of cluster leaders. This also explains why the accuracy is very low when pc = 0.1 (see Section V-C).
Finally, let us study the effect of J on the communication overhead in SMART. Figure 7(c) shows the results with J = 2, 3, and 4, respectively. As we can see, the larger the J value, the larger the communication overhead. This is because J is the number of slices into which each node decomposes its private data. In the slicing phase of SMART, each node sends J − 1 pieces of sliced data to its selected neighbors; including one message for tree formation and one for aggregation, the total number of messages exchanged is roughly proportional to J + 1. Hence, the larger the value of J, the larger the communication overhead.
C. Accuracy
In ideal situations when there is no data loss in the network², both CPDA and SMART should get 100% accurate aggregation results. However, in wireless sensor networks, messages may get lost or delayed due to collisions over wireless channels and processing delays, so the aggregation accuracy is affected. We define the accuracy metric as the ratio between the sum collected by the data aggregation scheme and the real sum over all individual sensor nodes. A higher accuracy value means the collected sum is closer to the true sum; an accuracy value of 1.0 represents the ideal situation.
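The accuracy metric just defined is simple to compute; assuming the collected and true sums are known, a one-line illustration (the function name and example numbers are mine, not from the paper):

```python
def accuracy(collected_sum: float, true_sum: float) -> float:
    """Ratio of the sum collected by the aggregation scheme
    to the real sum of all individual sensor readings."""
    return collected_sum / true_sum

# E.g. if 3 of 20 readings of value 1.0 are lost to collisions,
# the collected sum is 17 and accuracy drops to 0.85:
print(accuracy(17.0, 20.0))  # 0.85
```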
Figure 8(a) shows the accuracy of TAG, CPDA (with pc=
0.3) and SMART (with J=3) from our simulation. Here we
have two observations. First, the accuracy increases as the epoch duration increases. Two reasons contribute to this: 1) With a larger epoch duration, the data packets to be sent within this duration
will have less chance to collide due to the increased average packet sending intervals; 2) With a larger epoch duration, the data packets will have a better chance of being delivered within the
deadline. The second observation is that TAG has better accuracy than CPDA and SMART. That is because, without the communication overhead introduced by privacy preservation, there is less data loss from collisions in TAG.
Figure 8(b) shows the aggregation accuracy of CPDA with respect to the selection of pc. First, we see that, when using the
(Footnote 2: Data loss may be caused by collision in wireless channels, deadline ...)
same pc, a larger epoch duration gives better accuracy. This is due to the fact that a larger epoch duration lets the data packets have a better chance of being delivered before the timeout. Second,
we see that CPDA is sensitive to the pc value: the larger the pc value, the higher the aggregation accuracy. This is because: (1) the larger the pc value, the smaller the portion of nodes disconnected from the query server through the aggregation tree; nodes not covered by the aggregation tree cannot contribute their values to the aggregation. (2) A larger pc usually yields a smaller cluster size, which causes fewer collisions within a cluster under the same epoch duration. Therefore, we recommend 0.2 ≤ pc ≤ 0.3 in the CPDA protocol.
Figure 8(c) illustrates the aggregation accuracy of SMART with respect to the selection of J. The accuracy of SMART is not sensitive to J. However, there is a slight difference between different J values: the larger the value of J, the lower the aggregation accuracy. This is because when the private data held by a node is sliced into more pieces, more messages are needed to send all J − 1 pieces to neighboring nodes. Hence, more collisions occur, which reduces the aggregation accuracy. We recommend J = 3 in the SMART protocol.
Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks. Many civilian applications require privacy, without which individual parties are reluctant to participate in data collection. In this paper, we propose two privacy-preserving data aggregation schemes, CPDA and SMART, focusing on additive data aggregation functions. Table I summarizes these two schemes in terms of privacy-preservation efficacy, communication overhead, aggregation accuracy, and computational overhead.
TABLE I
                                CPDA                         SMART
Privacy-preservation efficacy   Good (0.2 ≤ pc ≤ 0.3)        Excellent (J ≥ 3)
Communication overhead          Fair                         Large
Aggregation accuracy            Good (but sensitive to pc)   Good (not sensitive to J)
Computational overhead          Fair                         Small
We compare the performance of our presented schemes to a typical data aggregation scheme, TAG. Simulation results and theoretical analysis show the efficacy of our two schemes. Our future work includes designing privacy-preserving data aggregation schemes for general aggregation functions. We are also investigating robust privacy-preserving data aggregation schemes under malicious attacks.
[1] D. Culler, D. Estrin, and M. Srivastava, “Overview of Sensor Networks,”
IEEE Computer, August 2004.
[2] N. Xu, S. Rangwala, K. Chintalapudi, D. Ganesan, A. Broad, R. Govindan, and D. Estrin, “A Wireless Sensor Network for Structural Monitoring,” Proceedings of the ACM Conference on Embedded Sensor Systems, Baltimore, MD, November 2004.
[3] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson, “Wireless Sensor Networks for Habitat Monitoring,” WSNA’02, Atlanta,
Georgia, September 2002.
[4] S. Madden, M. J. Franklin, and J. M. Hellerstein, “TAG: A Tiny AGgregation Service for Ad-Hoc Sensor Networks,” OSDI, 2002. [5] C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed Diffusion:
Scalable and Robust Communication Paradigm for Sensor Networks,”
MobiCom, 2002.
[6] C. Intanagonwiwat, D. Estrin, R. Govindan, and J. Heidemann, “Impact of Network Density on Data Aggregation in Wireless Sensor Networks,”
In Proceedings of the 22nd International Conference on Distributed Computing Systems, 2002.
[7] A. Deshpande, S. Nath, P. B. Gibbons, and S. Seshan, “Cache-and-query for wide area sensor databases,” SIGMOD, 2003.
[8] I. Solis and K. Obraczka, “The impact of timing in data aggregation for sensor networks,” ICC, 2004.
[9] X. Tang and J. Xu, “Extending network lifetime for precision-constrained data aggregation in wireless sensor networks,” INFOCOM, 2006.
[10] B. Przydatek, D. Song, and A. Perrig, “SIA: Secure Information Aggregation in Sensor Networks,” In Proc. of ACM SenSys, 2003.
[11] Y. Yang, X. Wang, S. Zhu, and G. Cao, “SDAP: A Secure Hop-by-Hop Data Aggregation Protocol for Sensor Networks,” ACM MobiHoc, 2006. [12] D. Wagner, “Resilient Aggregation in Sensor Networks,” in Proceedings of the 2nd ACM Workshop on Security of Ad Hoc and Sensor Networks,
[13] L. Eschenauer and V. D. Gligor, “A key-management scheme for distributed sensor networks,” in Proceedings of the 9th ACM Conference
on Computer and Communications Security, November 2002, pp. 41–47.
[14] D. Liu and P. Ning, “Establishing pairwise keys in distributed sensor networks,” in Proceedings of 10th ACM Conference on Computer and
Communications Security (CCS03), October 2003, pp. 52–61.
[15] S. Zhu, S. Setia, and S. Jajodia, “LEAP: Efficient security mechanisms for large-scale distributed sensor networks,” in Proceedings of 10th
ACM Conference on Computer and Communications Security (CCS03),
October 2003, pp. 62–72.
[16] J. Girao, D. Westhoff, and M. Schneider, “CDA: Concealed Data Aggregation for Reverse Multicast Traffic in Wireless Sensor Networks,” in 40th International Conference on Communications, IEEE
ICC, May 2005.
[17] C. Castelluccia, E. Mykletun, and G. Tsudik, “Efficient Aggregation of Encrypted Data in Wireless Sensor Networks,” Mobiquitous, 2005. [18] R. Agrawal and R. Srikant, “Privacy preserving data
mining,” in ACM
SIGMOD Conf. Management of Data, 2000, pp. 439–450.
[19] A. Evfimievski, R. Srikant, R. Agrawal, and J. Gehrke, “Privacy Preserving Mining of Association Rules,” in Proceedings of The 8th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, July 2002.
[20] H. Kargupta, Q. W. S. Datta, and K. Sivakumar, “On The Privacy Preserving Properties of Random Data Perturbation Techniques,” in the
IEEE International Conference on Data Mining, November 2003.
[21] Z. Huang, W. Du, and B. Chen, “Deriving Private Information from Randomized Data,” in Proceedings of the ACM SIGMOD Conference, June 2005.
[22] B. Pinkas, “Cryptographic techniques for privacy preserving data mining,” SIGKDD Explorations, vol. 4, no. 2, pp. 12–19, 2002.
[23] W. Du and M. J. Atallah, “Secure multi-party computation problems and their applications: A review and open problems,” in Proceedings of the
2001 Workshop on New Security Paradigms. Cloudcroft, NM: ACM
Press, September 2001, pp. 13–22.
[24] M. Kantarcioglu and C. Clifton, “Privacy-preserving distributed mining of association rules on horizontally partitioned data,” IEEE Transactions
on Knowledge and Data Engineering, vol. 16, no. 9, pp. 1026–1037,
[25] A. C. Yao, “Protocols for secure computations,” in 23rd IEEE
Symposium on the Foundations of Computer Science (FOCS), 1982.
[26] R. Cramer, I. Damgård, and S. Dziembowski, “On the Complexity of Verifiable Secret Sharing and Multiparty Computation,” in Proceedings
of the thirty-second annual ACM symposium on Theory of computing,
2000, pp. 325–334.
[27] J. Halpern and V. Teague, “Rational Secret Sharing and Multiparty Computation,” in Proceedings of the thirty-sixth annual ACM symposium
IF(CONTAINS(OR( Help
Hi, I am trying to get the following to work:
I have a Sales Order Column where I have 3 possible entries: A date (Ex: 31Aug2019), a 2DP Order (Ex: 11111-1), and a QC Order (Ex: 11111-1 QC). I have two hidden checkboxes that will give a check if
an order is QC or 2DP.
For QC, I just have =IF(CONTAINS("QC", [Sales Order]@row), 1, 0)
For 2DP, if I do =IF(CONTAINS("QC", [Sales Order]@row), 0, 1) I inevitably get the columns with Dates checked as well, which I do not want.
I'm trying to do a formula like this to solve it, but it's not working. Any help? Thanks!
=IF(CONTAINS(OR("QC", "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")), [Sales Order]@row), 0, 1)
• Here is what I would recommend...
=IF(AND(CONTAINS("-", [Sales Order]@row), FIND("QC", [Sales Order]@row) = 0), 1)
This will check the box if a dash is found but "QC" is not. Putting this in your 2DP checkbox column should do the trick.
• Ohhh I see, very clever solution. So it capitalizes on the fact that every single Sales Order has a "-" in it while a date never will, and if its not QC, then it has to be 2DP. Very nice, thank
you, I haven't used the FIND function at all, seems very useful.
• Ohhh I see, very clever solution.
Thank you!
So it capitalizes on the fact that every single Sales Order has a "-" in it while a date never will
Absolutely correct.
, and if its not QC, then it has to be 2DP.
Also correct.
Very nice, thank you,
Happy to help!
I haven't used the FIND function at all, seems very useful.
It is extremely useful. A lot of my own uses have been replaced with the CONTAINS function, but there are times where the FIND is just a little more efficient. The way it works is it produces a
numerical value based on the position of the specified text within the text string. If more than one letter or character is specified to be searched for, it will return the position number of the
first character where the string is found. It is also one of the few case sensitive functions.
For example...
[Target Cell]@row = "ABCDEFGabcdefg"
=FIND("A", [Target Cell]@row) will produce a 1.
=FIND("a", [Target Cell]@row) will produce an 8.
=FIND("Z", [Target Cell]@row) will produce a 0 (zero) because it was not found.
=FIND("Gab", [Target Cell]@row) produces a 7 because that is where the first character of the specified string is found.
Using those examples above, we can say that if the FIND function when used to look for QC returns a 0 (zero) which means that QC was not found in the target cell, then by default it must be 2DP
because the target cell also contains a dash (which is where the AND function comes in).
Another way of writing
FIND("QC", [Target Cell]@row) = 0
would be
CONTAINS("QC", [Target Cell]@row) = false
but that takes a few more key strokes, and honestly I am still learning to trust using the CONTAINS function that way. It is VERY new compared to the FIND function, so it is taking some getting
used to. Haha
• You could have also used
AND(CONTAINS(................), CONTAINS("2DP", [Target Cell]@row))
AND(CONTAINS(................), FIND("2DP", [Target Cell]@row) > 0)
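For anyone who wants to sanity-check the thread's logic outside of Smartsheet, here is a rough Python analogue (illustrative only; the real formulas above run inside Smartsheet, and Python's str.find is 0-indexed where Smartsheet's FIND is 1-indexed):

```python
def smartsheet_find(search_for: str, text: str) -> int:
    """Emulate Smartsheet's FIND: 1-based position of the first
    occurrence, or 0 if the (case-sensitive) search string is absent."""
    return text.find(search_for) + 1  # str.find gives -1 when absent, so -1 + 1 = 0

def is_2dp(sales_order: str) -> bool:
    """The accepted answer's logic: a dash marks an order number
    (dates never contain one); 'QC' anywhere makes it a QC order."""
    return "-" in sales_order and smartsheet_find("QC", sales_order) == 0

target = "ABCDEFGabcdefg"
print(smartsheet_find("A", target))    # 1
print(smartsheet_find("a", target))    # 8
print(smartsheet_find("Z", target))    # 0
print(smartsheet_find("Gab", target))  # 7

print(is_2dp("11111-1"))     # True  (a 2DP order)
print(is_2dp("11111-1 QC"))  # False (a QC order)
print(is_2dp("31Aug2019"))   # False (a date)
```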
Appendix Table (Z-Score) Converter
Z-Score Converter introduction:
The Z-Score conversion tool will help you find the Z-Score more quickly.
What is the appendix table (Z-Score)?
It is calculated by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This process of converting a raw score into a
standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see Normalization for more).
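The standardization described above is a one-line formula; a minimal sketch (the example numbers are hypothetical, not from the site):

```python
def z_score(x: float, mean: float, std_dev: float) -> float:
    """Standardize a raw score: subtract the population mean,
    then divide by the population standard deviation."""
    return (x - mean) / std_dev

# A raw score of 130 on a scale with mean 100 and standard deviation 15:
print(z_score(130, 100, 15))  # 2.0
```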
SEMinR allows users to easily create and modify structural equation models (SEM). It allows estimation using either covariance-based SEM (CBSEM, such as LISREL/Lavaan), or Partial Least Squares Path
Modeling (PLS-PM, such as SmartPLS/semPLS).
First Look
Main features of using SEMinR:
• A natural-feeling, domain-specific language to build and estimate SEMs in R
• High-level functions to quickly specify interactions and complicated structural models
• Modular design of models that promotes reuse of model components
• Encourages best practices by use of smart defaults and warnings
Take a look at the easy syntax and modular design:
# Define measurements with familiar terms: reflective, composite, multi-item constructs, etc.
measurements <- constructs(
reflective("Image", multi_items("IMAG", 1:5)),
composite("Expectation", multi_items("CUEX", 1:3)),
composite("Loyalty", multi_items("CUSL", 1:3), weights = mode_B),
composite("Complaints", single_item("CUSCO"))
)
# Create four relationships (two regressions) in one line!
structure <- relationships(
paths(from = c("Image", "Expectation"), to = c("Complaints", "Loyalty"))
)
# Estimate the model using PLS estimation scheme (Consistent PLS for reflectives)
pls_model <- estimate_pls(data = mobi, measurements, structure)
#> Generating the seminr model
#> All 250 observations are valid.
# Re-estimate the model as a purely reflective model using CBSEM
cbsem_model <- estimate_cbsem(data = mobi, as.reflective(measurements), structure)
#> Generating the seminr model for CBSEM
SEMinR can plot models using the semPlot (for CBSEM models) or DiagrammeR (for PLS models) packages with a simple plot method.
SEMinR allows various estimation methods for constructs and SEMs:
• Covariance-based Structural Equation Modeling (CBSEM)
□ Covariance-based estimation of SEM using the popular Lavaan package
□ Currently supports mediation and moderation models with constructs
□ Easily specify interactions between constructs
□ Adds ten Berge factor score extraction to get same correlation patterns as latent factors
□ Adds VIF and other validity assessments
• Confirmatory Factor Analysis (CFA) using Lavaan
□ Uses Lavaan package and returns results and syntax
□ Adds ten Berge factor score extraction to get same correlation patterns as latent factors
• Partial Least Squares Path Modeling (PLS-PM)
□ Uses non-parametric variance-based estimation to construct composites and common-factors
□ Automatically estimates using Consistent PLS (PLSc) when emulating reflective common factors
□ Adjusts for known biases in interaction terms in PLS models
□ Continuously tested against leading PLS-PM software to ensure parity of outcomes: SmartPLS, ADANCO, semPLS, and matrixpls
□ High performance, multi-core bootstrapping function
Researchers can now create a SEM and estimate it using different techniques (CBSEM, PLS-PM).
You can install SEMinR from R with:
install.packages("seminr")
Usage and Examples
Load the SEMinR package:
library(seminr)
Describe measurement and structural models and then estimate them. See the various examples below for different use cases:
CFA + CBSEM Example with Common Factors
Note that CBSEM models reflective common-factor constructs, not composites. SEMinR uses the powerful Lavaan package to estimate CBSEM models – you can even inspect the more complicated Lavaan syntax
that is produced.
Describe reflective constructs and interactions:
# Distinguish and mix composite or reflective (common-factor) measurement models
# - composite measurements will have to be converted into reflective ones for CBSEM (see below)
measurements <- constructs(
reflective("Image", multi_items("IMAG", 1:5)),
reflective("Expectation", multi_items("CUEX", 1:3)),
interaction_term(iv = "Image", moderator = "Expectation", method = two_stage),
reflective("Loyalty", multi_items("CUSL", 1:3)),
reflective("Complaints", single_item("CUSCO"))
)
Describe the causal relationships between constructs and interactions:
# Quickly create multiple paths "from" and "to" sets of constructs
structure <- relationships(
paths(from = c("Image", "Expectation", "Image*Expectation"), to = "Loyalty"),
paths(from = "Image", to = c("Complaints"))
)
Put the above elements together to estimate the model using Lavaan:
# Evaluate only the measurement model using Confirmatory Factor Analysis (CFA)
cfa_model <- estimate_cfa(data = mobi, measurements)
# Dynamically compose full SEM models from individual parts
# - if measurement model includes composites, convert all constructs to reflective using:
# as.reflective(measurements)
cbsem_model <- estimate_cbsem(data = mobi, measurements, structure)
sum_cbsem_model <- summary(cbsem_model)
sum_cbsem_model$meta$syntax # See the Lavaan syntax if you wish
Consistent-PLS (PLSc) Example with Common Factors
Models with reflective common-factor constructs can also be estimated in PLS-PM, using Consistent-PLS (PLSc). Note that the popular SmartPLS software models constructs as composites rather than
common-factors (see below) but can also do PLSc as a special option.
We will reuse the measurement and structural models from earlier:
Estimate full model using Consistent-PLS and bootstrap it for confidence intervals:
# Models with reflective constructs are automatically estimated using PLSc
pls_model <- estimate_pls(data = mobi, measurements, structure)
# Use multi-core parallel processing to speed up bootstraps
boot_estimates <- bootstrap_model(pls_model, nboot = 1000, cores = 2)
PLS-PM Example with Composites
PLS-PM typically models composites (constructs that are weighted averages of items) rather than common factors. Popular software like SmartPLS models composites by default, either as Mode A (correlation weights) or Mode B (regression weights). We also support both modes as well as second-order composites.
Describe measurement model for each composite, interaction, or higher order composite:
# Composites are Mode A (correlation) weighted by default
mobi_mm <- constructs(
composite("Image", multi_items("IMAG", 1:5)),
composite("Value", multi_items("PERV", 1:2)),
higher_composite("Satisfaction", dimensions = c("Image","Value"), method = two_stage),
composite("Expectation", multi_items("CUEX", 1:3)),
composite("Quality", multi_items("PERQ", 1:7), weights = mode_B),
composite("Complaints", single_item("CUSCO")),
composite("Loyalty", multi_items("CUSL", 1:3), weights = mode_B)
)
Define a structural (inner) model for our PLS-PM:
mobi_sm <- relationships(
paths(from = c("Expectation","Quality"), to = "Satisfaction"),
paths(from = "Satisfaction", to = c("Complaints", "Loyalty"))
)
Estimate full model using PLS-PM and bootstrap it for confidence intervals:
pls_model <- estimate_pls(
data = mobi,
measurement_model = mobi_mm,
structural_model = mobi_sm
)
# Use multi-core parallel processing to speed up bootstraps
boot_estimates <- bootstrap_model(pls_model, nboot = 1000, cores = 2)
Plotting the model results
SEMinR can plot all supported models using the dot language and the graphViz.js widget from the DiagrammeR package.
# generate a small model for creating the plot
mobi_mm <- constructs(
composite("Image", multi_items("IMAG", 1:3)),
composite("Value", multi_items("PERV", 1:2)),
higher_composite("Satisfaction", dimensions = c("Image","Value"), method = two_stage),
composite("Quality", multi_items("PERQ", 1:3), weights = mode_B),
composite("Complaints", single_item("CUSCO")),
reflective("Loyalty", multi_items("CUSL", 1:3))
)
mobi_sm <- relationships(
paths(from = c("Quality"), to = "Satisfaction"),
paths(from = "Satisfaction", to = c("Complaints", "Loyalty"))
)
pls_model <- estimate_pls(
data = mobi,
measurement_model = mobi_mm,
structural_model = mobi_sm
)
#> Generating the seminr model
#> Generating the seminr model
#> All 250 observations are valid.
#> All 250 observations are valid.
boot_estimates <- bootstrap_model(pls_model, nboot = 100, cores = 1)
#> Bootstrapping model using seminr...
#> SEMinR Model successfully bootstrapped
When we have a model, we can plot it and save the plot to files.
We can customize the plot using an elaborate theme. Themes can be used for individual plots as a parameter or set as a default. Using the seminr_theme_create() function allows you to define different themes.
# Tip: auto complete is your friend in finding all possible themeing options.
thm <- seminr_theme_create(plot.rounding = 2, plot.adj = FALSE,
sm.node.fill = "cadetblue1",
mm.node.fill = "lightgray")
# change new default theme - valid until R is restarted
# the new plot
Comparing CBSEM and PLS-PM Example
We can re-estimate a composite PLS-PM model as a common-factor CBSEM. Such a comparison might interest researchers seeking to evaluate how their constructs behave when modeled as composites versus common factors.
# Define measurements with familiar terms: reflective, multi-item constructs, etc.
measurements <- constructs(
composite("Image", multi_items("IMAG", 1:5)),
composite("Expectation", multi_items("CUEX", 1:3)),
composite("Loyalty", multi_items("CUSL", 1:3)),
composite("Complaints", single_item("CUSCO"))
)
# Create four relationships (two regressions) in one line!
structure <- relationships(
paths(from = c("Image", "Expectation"), to = c("Complaints", "Loyalty"))
)
# First, estimate the model using PLS
pls_model <- estimate_pls(data = mobi, measurements, structure)
# Reusable parts of the model to estimate CBSEM results
# note: we are using the `as.reflective()` function to convert composites to common factors
cbsem_model <- estimate_cbsem(data = mobi, as.reflective(measurements), structure)
# Re-estimate the model using common factors in Consistent PLS (PLSc)
pls_model <- estimate_pls(data = mobi, as.reflective(measurements), structure)
The vignette for SEMinR can be found on CRAN or by running the vignette("SEMinR") command after installation.
Demo code for various use cases with SEMinR can be found in the seminr/demo/ folder or by running commands such as demo("seminr-contained") after installation.
Model Specification:
Model Visualization:
Syntax Style:
Sister Projects
• seminrstudio: A set of addins for RStudio to simplify using SEMinR.
Partner Projects
We communicate and collaborate with several other open-source projects on SEM related issues.
• plspm package for R: an early and limited PLS path modeling package for R that inspired the development of SEMinR, among others; it is no longer maintained.
• plspm package for Python: a well-maintained PLS modeling pakage for Python; it is tested against SEMinR and borrows some syntactic ideas from SEMinR.
• cSEM: a well-maintained and comprehensive composite analysis project implementing PLS and GSCA for R, using Lavaan style syntax
Facebook Group: https://www.facebook.com/groups/seminr
You will find the developers and other users here who might also be able to help or discuss.
Issue Tracker: https://github.com/sem-in-r/seminr/issues
This is the official place to submit potential bugs or request new features for consideration.
About Us
Primary Authors:
Key Contributors:
And many thanks to the growing number of folks who have reached out with feature requests, bug reports, and encouragement. You keep us going!
Asset pricing theories - Every Daily News
Asset pricing theories
This article compares two leading asset pricing models: Capital Asset Pricing Model (CAPM) and Arbitrage Pricing Theory (APT): I argue that while APT is consistent with the data available to test
asset pricing theories, CAPM is not. In arriving at this conclusion, emphasis is placed on distinguishing between the unconditional (relatively incomplete) information that economists must use to
estimate asset pricing models and the conditional (complete) information that investors use in making portfolio decisions that determine asset prices. The empirical work so far indicates that APT is
unlikely to produce a simple equation that explains the differences in risk premium well with some parameters. If CAPM is true, it will provide such an equation.
Financial Asset Pricing Theory provides ‘a comprehensive overview of classic and current research in theoretical asset pricing. Asset pricing is developed around the concept of a state-price deflator, which links the price of an asset to its future (risky) payoff and thus incorporates how to adjust for both time and risk in asset valuation.’ The willingness of a utility-maximizing investor to shift consumption over time determines a state-price deflator that provides a link between optimal consumption and asset prices, which leads to the consumption-based capital asset pricing model (CCAPM).
A simple version of the CCAPM cannot explain various asset pricing facts, but the asset pricing “puzzles” can be resolved through a number of recent extensions, including habit formation, recursive utility, multiple consumption goods, and long-run consumption risk. Valuation techniques and other modeling methods (such as factor models, term structure models, risk-neutral valuation, and option pricing models) are explained and related to the state-price deflator.
Arbitrage Pricing Theory (APT)
What Is the Arbitrage Pricing Theory (APT)?
Arbitrage pricing theory (APT) is a multi-factor asset pricing model based on the idea that an asset’s returns can be predicted using the linear relationship between the asset’s expected return and a
number of macroeconomic variables that capture systematic risk. It is a useful tool for analyzing portfolios from a value investing perspective, in order to identify securities that may be
temporarily mispriced.
The Formula for the Arbitrage Pricing Theory Model Is
E(R)i = E(R)z + (E(I) − E(R)z) × βn

where:
E(R)i = Expected return on the asset
Rz = Risk-free rate of return
βn = Sensitivity of the asset price to macroeconomic factor n
Ei = Risk premium associated with factor i
The beta coefficients in the APT model are estimated by using linear regression. In general, historical securities returns are regressed on the factor to estimate its beta.
How the Arbitrage Pricing Theory Works
The arbitrage pricing theory was developed by the economist Stephen Ross in 1976, as an alternative to the capital asset pricing model (CAPM). Unlike the CAPM, which assumes markets are perfectly
efficient, APT assumes markets sometimes misprice securities, before the market eventually corrects and securities move back to fair value. Using APT, arbitrageurs hope to take advantage of any
deviations from fair market value.
However, this is not a risk-free operation in the classic sense of arbitrage, because investors are assuming that the model is correct and making directional trades—rather than locking in risk-free
Mathematical Model for the APT
While APT is more flexible than the CAPM, it is more complex. The CAPM only takes into account one factor—market risk—while the APT formula has multiple factors. And it takes a considerable amount of
research to determine how sensitive a security is to various macroeconomic risks.
The factors as well as how many of them are used are subjective choices, which means investors will have varying results depending on their choice. However, four or five factors will usually explain
most of a security’s return. (For more on the differences between the CAPM and APT, read more about how CAPM and arbitrage pricing theory differ.)
APT factors are the systematic risk that cannot be reduced by the diversification of an investment portfolio. The macroeconomic factors that have proven most reliable as price predictors include
unexpected changes in inflation, gross national product (GNP), corporate bond spreads and shifts in the yield curve. Other commonly used factors are gross domestic product (GDP), commodities prices,
market indices, and exchange rates.
Key Takeaways
Arbitrage pricing theory (APT) is a multi-factor asset pricing model based on the idea that an asset’s returns can be predicted using the linear relationship between the asset’s expected return and a
number of macroeconomic variables that capture systematic risk.
Unlike the CAPM, which assumes markets are perfectly efficient, APT assumes markets sometimes misprice securities before the market eventually corrects and securities move back to fair value.
Using APT, arbitrageurs hope to take advantage of any deviations from fair market value.
Example of How Arbitrage Pricing Theory Is Used
For example, suppose the following four factors have been identified as explaining a stock's return, and the stock's sensitivity to each factor and the risk premium associated with each factor have been calculated:
Gross domestic product (GDP) growth: ß = 0.6, RP = 4%
Inflation rate: ß = 0.8, RP = 2%
Gold prices: ß = -0.7, RP = 5%
Standard and Poor’s 500 index return: ß = 1.3, RP = 9%
The risk-free rate is 3%.
Using the APT formula, the expected return is calculated as:
Expected return = 3% + (0.6 x 4%) + (0.8 x 2%) + (-0.7 x 5%) + (1.3 x 9%) = 15.2%
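As a sanity check, the same arithmetic in a few lines of Python (the betas and risk premia are the hypothetical values from the example above):

```python
# APT expected return: risk-free rate plus the sum of beta * risk premium
# over each factor, using the example's hypothetical values.
risk_free = 0.03
factors = [
    (0.6, 0.04),   # GDP growth
    (0.8, 0.02),   # inflation rate
    (-0.7, 0.05),  # gold prices
    (1.3, 0.09),   # S&P 500 index return
]
expected_return = risk_free + sum(beta * rp for beta, rp in factors)
print(f"{expected_return:.1%}")  # 15.2%
```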
Polynomial Models: Open Box
Given: You have a cardboard sheet that is 23 inches long by 10 inches wide.
Problem: Congruent squares (x inches by x inches, shown in red in the diagram above) are cut from the corners of the cardboard. Once cut, the sides are folded upward to form an open box.
• What is the length of the squares that must be cut in order to maximize the volume of the open box?
• What is the maximum volume of the open box?
Solution: x = inches [Report responses to the nearest tenth.]
V[max] = cubic inches
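As a sketch only (the quiz expects you to work it out yourself), the volume function V(x) = x(23 - 2x)(10 - 2x) can be scanned numerically; a calculus solution would instead set dV/dx = 0:

```python
# Open-box volume: V(x) = x * (23 - 2x) * (10 - 2x), with 0 < x < 5
# (the cut must be smaller than half the shorter side).
def volume(x):
    return x * (23 - 2 * x) * (10 - 2 * x)

# Coarse numeric scan over the feasible cut sizes.
best_x = max((i / 10000 for i in range(1, 50000)), key=volume)
print(round(best_x, 1), round(volume(best_x), 1))
```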
Extended Euclidean Algorithm (The Pulveriser) - Explanation and Implementation
A little Background
The Extended Euclidean Algorithm is, as the name suggests, an extension of the Euclidean algorithm: in addition to the gcd, it also computes the coefficients of Bézout's identity. The algorithm has
roots dating back to ancient India, where it was called Kuṭṭaka, meaning The Pulveriser, considered a precursor to what is today known as the Extended Euclidean Algorithm.
One of the most common uses of the Extended Euclidean Algorithm is to solve linear Diophantine equations.
It is also used extensively in the field of cryptography: the Extended Euclidean Algorithm forms the basis of computing the modular multiplicative inverse, which is a key step in the famous RSA
public-key cryptographic algorithm for deriving key pairs.
Bézout's Identity
Bézout's Identity is the following theorem in Number Theory (from Wikipedia):
Let a and b be integers with greatest common divisor d. Then there exist integers x and y such that ax + by = d. More generally, the integers of the form ax + by are exactly the multiples of d.
or informally
gcd of two integers a and b can be expressed as an integer linear combination of a and b.
The Algorithm
Euclid's Algorithm
According to Euclid's algorithm
gcd(a,b) = gcd(b, a%b)
this essentially means that the gcd of a and b is the same as the gcd of b and the remainder of a/b. If we compute this recursively until b becomes 0 (this also happens to be the base case), then we
are left with a, and this final value of a is the gcd of the two original numbers a and b.
Here are the high-level steps of Euclid's Algorithm
Step 1(base case): if b is 0, then a is the gcd
Step 2: else replace (a, b) with (b, a mod b).
Step 3: Repeat Step 2 until we arrive at the base case.
Here's a Java implementation of Euclid's Algorithm.
public int gcd(int a, int b){
    if (b == 0)
        return a;
    return gcd(b, a % b);
}
Extended Algorithm
One of the key concepts that we'll be using is the basic Division Algorithm, i.e.,
For any two positive integers x and y, there exist unique integers q and r that satisfy the equation
x = qy + r where 0 <= r < y
x -> Dividend, y -> Divisor
q -> Quotient, r -> remainder
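In Python, the quotient and remainder of the division algorithm come directly from divmod; here it is applied to the same pair of numbers used in the worked example below:

```python
# Division algorithm: x = q*y + r with 0 <= r < y (for positive x, y).
x, y = 785646, 252
q, r = divmod(x, y)
assert x == q * y + r and 0 <= r < y
print(q, r)  # 3117 162
```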
If you think about the previously shown Euclid's Algorithm again, you'll realize that gcd is actually the last non-zero remainder from all of the performed steps.
And, from the Division Algorithm we just saw, we have a way to represent the remainder in terms of a linear combination of the quotient, dividend, and divisor, all we need to do is rearrange the
equation the way we want it.
x = qy + r can also be written as r = x - qy
As part of the Extended Euclidean Algorithm, we just need to maintain some extra information at every step of the basic Euclid's Algorithm.
So, we now have the following important observations/tools that will help us compute Bézout's coefficients.
1. gcd(a,b) = gcd(b, a mod b)
2. gcd(a,b) is the last non-zero remainder.
3. r = x - qy
Now, all we need to do is represent the remainder r as an integer linear combination of a and b at every step of Euclid's algorithm, keeping track of the coefficients and updating them as we go. Once
we are at the final step of the algorithm, we will have the last non-zero remainder expressed as the linear combination; that remainder is also the gcd, which is exactly what we need.
It will be easier to understand with an example.
Let's take two integers a=785646, b=252
Step 1)
785646 = (3117 * 252) + 162
// can also be written as
162 = 785646 - (3117 * 252)
//replace values with a and b
162 = 1*a - 3117*b
Step 2) We now have a=252, b=162
252 = (1 * 162) + 90
// or
90 = 252 - (1 * 162)
// from previous step we know that
162 = 1*a -3117*b
//after substituting these values we get
90 = b - (a - 3117b)
90 = -a + 3118b
Step 3) a=162, b=90
162 = 90 + 72
// or
72 = 162 - 90
// from previous steps
162 = 1*a -3117*b
90 = -a + 3118b
// substitute values
72 = (a - 3117b) - (-a + 3118b)
72 = 2a - 6235b
After repeating the steps a few more times you'll finally end up with the following result.
Step a b r r as a linear combination
1. 785646 252 162 a - 3117b
2. 252 162 90 -a + 3118b
3. 162 90 72 2a - 6235b
4. 90 72 18 -3a + 9353b
5. 72 18 0
At Step 5 we get the remainder as 0, this means the gcd of a and b is 18 and from Step 4 we have 18 expressed as the linear combination of a and b.
So here we have it, gcd expressed as the linear combination -3a + 9353b
Bézout's coefficients:
s = -3
t = 9353
gcd(785646,252) = (-3 * 785646) + (9353 * 252) = 18
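These coefficients are easy to sanity-check against Python's built-in gcd:

```python
from math import gcd

a, b = 785646, 252
s, t = -3, 9353
# Bezout's identity: s*a + t*b should equal gcd(a, b).
assert s * a + t * b == gcd(a, b) == 18
print(s * a + t * b)  # 18
```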
Implementation in Java
public static void extendedEuclidean(long a, long b){
    long oldR = a, r = b;
    long oldS = 1, s = 0;
    long oldT = 0, t = 1;
    while(r != 0){
        var quotient = oldR / r;

        var rTemp = r;
        r = oldR - quotient * r; oldR = rTemp;

        var sTemp = s;
        s = oldS - quotient * s; oldS = sTemp;

        var tTemp = t;
        t = oldT - quotient * t; oldT = tTemp;
    }
    // oldR now holds gcd(a, b); oldS and oldT are Bezout's coefficients.
    System.out.printf("Coefficients: s:%s, t:%s%n", oldS, oldT);
    System.out.printf("GCD:%s%n", oldR);
}
Tough GMAT Problem Solving Practice Questions | 700 800 level GMAT Math Practice | Q-51 by Wizako
1. GMAT 700 Level Sample Question in Algebra / Inequalities
If x is a positive integer such that (x-1)(x-3)(x-5)....(x-93) < 0, how many values can x take?
2. 700 800 Level GMAT Sample Question | Probability
If two distinct integers a and b are picked from {1,2,3,4....100} and multiplied, what is the probability that the resulting number has EXACTLY 3 factors?
3. 700 800 Level GMAT Word Problem | Rates Problem Solving Question
Working alone, A can complete a task in 'a' days and B in 'b' days. They take turns in doing the task with each working 2 days at a time. If A starts they finish the task in exactly 10 days. If B
starts, they take half a day more. How long does it take to complete the task if they both work together?
1. 46/9
2. 50/9
3. 50/11
4. 36/7
5. 210/41
4. GMAT 700 800 Level Question | Geometry | Area of Circles
In the figure given below, ABC and CDE are two identical semi-circles of radius 2 units. B and D are the mid points of the arc ABC and CDE respectively. What is the area of the shaded region?
5. GMAT Hard Math Questions | Algebra | Absolute Values Sample Question
If a, b, and c are not equal to zero, what is the difference between the maximum and minimum value of S?
S = 1 + |a|/a + 2|b|/b + 3|ab|/(ab) - 4|c|/c
6. GMAT 700 Level Sample Question in Statistics
Consider a set S = {2, 4, 6, 8, x, y} with distinct elements. If x and y are both prime numbers and 0 < x < 40 and 0 < y < 40, which of the following MUST be true?
I. The maximum possible range of the set is greater than 33.
II. The median can never be an even number.
III. If y = 37, the average of the set will be greater than the median.
7. GMAT Hard Math Algebra - Absolute Values | GMAT Problem Solving
If x and y are integers and |x - y| = 12, what is the minimum possible value of xy?
8. GMAT Challenging Question | Statistics Problem Solving
Three positive integers a, b, and c are such that their average is 20 and a ≤ b ≤ c. If the median is (a + 11), what is the least possible value of c?
9. GMAT Hard Math | Arithmetic | Permutation Combination Sample Question
How many four-digit positive integers exist that contain the block 25 and are divisible by 75? (2250 and 2025 are two such numbers.)
10. 700 800 Level GMAT Word Problem | Number Properties & Equations Problem Solving Question
A movie hall sold tickets to one of its shows in two denominations, $11 and $7. A fourth of all those who bought a ticket also spent $4 each on refreshments at the movie hall. If the total
collections from tickets and refreshments for the show was $124, how many $7 tickets were sold? Note: The number of $11 tickets sold is different from the number of $7 tickets sold.
11. GMAT 700 800 Level Question | Coordinate Geometry & Permutation Combination
Rectangle ABCD is constructed in the xy-plane so that sides AB and CD are parallel to the x-axis. Both the x and y coordinates of all four vertices of the rectangle are integers. How many
rectangles can be constructed if x and y coordinates satisfy the inequality 11 < x < 29 and 5 ≤ y ≤ 13?
12. GMAT Hard Math Question | Algebra | Difficult Equations Question
Susan invited 13 of her friends for her birthday party and created return gift hampers comprising one each of $3, $4, and $5 gift certificates. One of her friends did not turn up and Susan
decided to rework her gift hampers such that each of the 12 friends who turned up got $13 worth of gift certificates. How many gift hampers did not contain $5 gift certificates in the new arrangement?
13. GMAT 700 Level Sample Question in Counting Methods
149 is a 3-digit positive integer, product of whose digits is 1 × 4 × 9 = 36. How many 3-digit positive integers exist, product of whose digits is 36?
14. GMAT Hard Math Arithmetic | Permutation Combination - Selections | GMAT Problem Solving
A student is required to solve 6 out of the 10 questions in a test. The questions are divided into two sections of 5 questions each. In how many ways can the student select the questions to solve
if not more than 4 questions can be chosen from either section?
15. GMAT 700 800 Level Question | Arithmetic | Counting Methods Problem Solving
How many 6-digit numbers can be formed using the digits {1, 2, 3, ... 9} such that any digit that appears in such a number appears at least twice?
16. GMAT Hard Math | Arithmetic | Number Properties Sample Question
If y is the highest power of a number 'x' that can divide 101! without leaving a remainder, then for which among the following values of x will y be the highest?
17. GMAT 650 Level Algebra Question | Polynomials
If a, b, .. , j are real numbers such that (a - 1)^2 + (b - 2)^4 + (c - 3)^6 + ... + (j - 10)^20 = 0, what is the value of b × d × f × h × j?
18. GMAT Hard Math Arithmetic - Counting Methods | GMAT Problem Solving
What is the sum of all 3-digit positive integers such that all the digits of each of the number is even?
19. GMAT Challenging Math Question | Arithmetic | Number Properties - Remainders
What is the least number that when divided by 44 leaves a remainder 31, when divided by 56 leaves a remainder 43, and when divided by 32 leaves a remainder 19?
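For a question like number 19, a couple of lines of Python (not part of the original question set) confirm the standard trick:

```python
from math import lcm

# Each remainder is exactly 13 less than its divisor (31 = 44 - 13,
# 43 = 56 - 13, 19 = 32 - 13), so n + 13 must be a common multiple of
# 44, 56, and 32; the least such n comes from the lcm.
n = lcm(44, 56, 32) - 13
assert n % 44 == 31 and n % 56 == 43 and n % 32 == 19
print(n)  # 2451
```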
20. GMAT Number Properties Practice | 650 Level Question | Factors Problem Solving
What is the product of all the factors of the cube of a positive integer 'n' if the product of all the factors of square of n is n^3?
21. GMAT Arithmetic Question | Numbers Counting Methods Problem Solving
How many even 3-digit positive integers with distinct digits are there?
22. GMAT Algebra Practice Question | Equations & Numbers PS
If x and y are non-negative integers such that 4x + 7y = 68, how many values are possible for (x + y)?
Straight Line Motion with a Stomper
Ann Brandon Joliet West High School
401 North Larkin Ave
Joliet IL 60435
(815) 727-6950
After this experience, the student should
1. be able to define average speed (distance traveled/time)
2. be able to graph distance vs time and velocity vs time
3. be able to find the velocity from the distance time graph (slope)
4. be able to find the acceleration from the velocity time graph (slope)
5. recognize constant velocity, constant acceleration and changing
acceleration from the shape of the distance-time and velocity-time graphs
Materials needed:
(for each group)
Stopwatch, meter stick, Stomper (battery powered toy car), large washer or 100 g
mass, tape, paper tape (ticker tape) and a recording (ticker tape) timer,
Tricycle (optional).
Part I: Begin with the Stomper. Show it going in a straight line along
the counter top. Ask: How fast is it going? What do we need to know to find
out? (You need a distance traveled and a time.) Give each group a Stomper,
stopwatch and meter stick. They should determine the average speed of their
Stomper. It should take 10 or 15 minutes for them to do several trials.
Part II: How fast was the Stomper going as it moved along? Was it going
the same speed everywhere or did it speed up or slow down? Point out: The
stopwatch is only good for substantial times, several seconds. It will not help
us answer this question. Demonstrate the recording timer by pulling a meter or
so of tape through it. Explain why the time intervals are equal and ask why the
spaces are not even. (The spaces are small where the speed was small.)
Each group should do two runs with the Stomper pulling a length of ticker
tape behind it. Mark off each tape in groups of six dots. (Each six is one
tenth of a second.) The first tape will become a distance-time graph. Rip each
six dot section and glue it to a piece of graph paper. The first section goes
on next to the origin. The second section goes over one width of tape, but
starts higher, so that its bottom is next to the first tape's top. The third goes
over one more, and its bottom lines up with the top of the second section, etc. Thus
the vertical axis is the distance traveled in cm and the horizontal axis is
the time in tenths of a second. This graph will have a constant slope equal to
the average speed found with the stopwatch. Demonstrate how to find slope.
The second tape will become a velocity-time graph. Each six dot section is
the distance traveled per tenth of a second, so it is the average velocity for
that tenth of a second. For this graph the vertical axis is the velocity in cm
per tenth of a second, while the horizontal axis is again the time in tenths of
a second. For this graph, each tape has its bottom on the horizontal axis. The
tapes go next to each other in order. This graph will have a slope very close
to zero because the speed is very close to being constant. How did this
velocity compare to the slope of the first graph? How did it compare to the
speed found with the stopwatch?
Be certain that you ask the students to describe each of these two graphs
and to compare them to each other.
This is probably as much as you can expect for one day. I would then spend
a day or so doing constant velocity problems to reinforce this concept.
Part III: Constant Acceleration
Drop a mass. (Catch it!) Ask if this is constant velocity. Ask how you
can find out. Using the recording timers, attach the tape to the mass and drop
the mass over the side of the table. (It pays to protect the floor with a book,
or a newspaper.) Run two tapes. Mark every sixth dot and create a distance-
time and a velocity-time graph. Look at them! The distance graph will not be a
straight line this time. It should look like a parabola, because the speed is
increasing, the slope will increase. The velocity graph will be a straight
line, but will not be horizontal, because the speed is increasing at a constant
rate. The slope of this line is the acceleration. (The change in velocity/the
time it took (dV/dt).)
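The slope calculations described above can be sketched numerically. The positions below are idealized free-fall readings (d = 490 t² cm, taking g = 980 cm/s²) at 0.1 s intervals, standing in for real ticker-tape data:

```python
# Hypothetical cumulative distances (cm) of a dropped mass every 0.1 s;
# real tapes would supply these numbers.
dt = 0.1
positions = [0.0, 4.9, 19.6, 44.1, 78.4, 122.5]

# Slope of the distance-time graph over each interval = average velocity.
velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]

# Slope of the velocity-time graph over each interval = acceleration.
accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]

print(velocities)      # increasing linearly: constant acceleration
print(accelerations)   # each close to 980 cm/s^2, i.e., g
```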
Again, it is very important to ask what these graphs look like and to
compare them to each other and to the graphs from Part II.
This is probably a second day's work. It should be followed by problems
with distance, velocity and constant acceleration.
Part IV: Constantly Increasing Acceleration
Lay a chain on the counter top. Push it over the edge one link at a time,
until it goes by itself. Ask what kind of motion this is. Again, we can
analyze this motion if we attach a tape to it. Mark every sixth dot. The
distance time graph is optional, be sure to do a velocity time graph. This will
not be a straight line because the velocity increases at an increasing rate.
(The acceleration (slope) increases at a constant rate, thus we get a parabola.)
It is once more imperative that you ask the students to describe the shape
of the graph(s) and compare them to the previous graphs.
Optional: Have Tricycle races. Have someone ride a tricycle pulling ticker
tape. The velocity-time graph is the most interesting. Be sure to ask about
the accelerations. They will be positive and negative!
Velocity is the distance traveled/time it took (dD/dt)
Velocity is the slope of a distance-time graph
Acceleration is the change in velocity/time it took (dV/dt)
Acceleration is the slope of a velocity-time graph
Constant velocity gives straight line graphs for both d-t and v-t
Constant acceleration gives parabolas for d-t, but straight lines for v-t
Changing acceleration gives v-t graphs with changing slopes
Performance Assessment:
Using TWO SPEED Racer (a friction car, about $2.75 at "Toys R Us"): Show it
to the students. Placing it on a flat surface, where all of them can watch it,
pull it back until you hear a click. Let it go. It will go in a straight line,
and suddenly increase its speed.
Ask the students to sketch two graphs. A Distance vs Time and a Velocity vs
Time graph.
5 pts Graphs are correctly labeled on each axis. Distance-time graph shows as
short straight line, sloping up followed by a steeper straight line; also
sloping up.
Velocity-time graph shows a short, horizontal line, followed by a higher
horizontal line.
4 pts Graphs not correctly labeled, but show the proper shapes.
3 pts Graphs not correctly labeled, and only one of the graphs shows the proper shape.
2 pts Graphs properly labeled, but neither graph is the proper shape.
0 pt No labels, and wrong shapes.
Multicultural Applications:
There are many applications of distance vs time in life including the Summer
Olympics track competitions and the speed skating and cross country skiing in
the Winter Olympics.
Permutation Tensor -- from Wolfram MathWorld
The permutation tensor, also called the Levi-Civita tensor or isotropic tensor of rank 3 (Goldstein 1980, p. 172), is a pseudotensor which is antisymmetric under the interchange of any two slots.
Recalling the definition of the permutation symbol in terms of a scalar triple product of the Cartesian unit vectors,

\epsilon_{ijk} = \hat{x}_i \cdot (\hat{x}_j \times \hat{x}_k),

the pseudotensor is a generalization to an arbitrary basis defined by

\epsilon_{ijk} = \sqrt{g}\,[ijk],

where [ijk] is the permutation symbol and g is the determinant of the metric tensor, which is nonzero iff the basis vectors are linearly independent.
When viewed as a tensor, the permutation symbol is sometimes known as the Levi-Civita tensor (Weinberg 1972, p. 38). The rank four permutation tensor satisfies an analogous identity.
Andrej Bauer: Impredicative encodings
Date of publication: 12. 5. 2014
Mathematics and theoretical computing seminar
Tuesday, 13 May 2014, from 12:00 to 14:00, Plemelj seminar room (Plemljev seminar), Jadranska 19
Abstract: Impredicative encodings are a logician's trick that allows them to encode logical connectives and quantifiers using just ⇒ and ∀. In computer science it can be used to represent datatypes
in terms of polymorphic functions. In homotopy type theory we can use impredicative encodings to encode certain higher-inductive types, such as the circle. We thus obtain a completely logical
construction of the circle which circumvents even the higher-inductive types.
Extension of homeomorphisms and vector fields of the circle: From Anti-de Sitter to Minkowski geometry.
Geometry Topology Seminar
Monday, May 1, 2023 - 2:00pm for 1 hour (actually 50 minutes)
Farid Diaf – Université Grenoble Alpes – diaffarid97@gmail.com
In 1990, Mess gave a proof of Thurston's earthquake theorem using the Anti-de Sitter geometry. Since then, several of Mess's ideas have been used to investigate the correspondence between surfaces in
3-dimensional Anti de Sitter space and Teichmüller theory.
In this spirit, we investigate the problem of the existence of vector fields giving infinitesimal earthquakes on the hyperbolic plane, using the so-called Half-pipe geometry which is the dual of
Minkowski geometry in a suitable sense. In particular, we recover Gardiner's theorem, which states that any Zygmund vector field on the circle can be represented as an infinitesimal earthquake. Our
findings suggest a connection between vector fields on the hyperbolic plane and surfaces in 3-dimensional Half-pipe space, which may be suggestive of a bigger picture.
6.3. Module Documentation
The specification of functions provided by a module can be found in its interface, which is what clients will consult. But what about internal documentation, which is relevant to those who implement
and maintain a module? The purpose of such implementation comments is to explain to the reader how the implementation correctly implements its interface.
It is inappropriate to copy the specifications of functions found in the module interface into the module implementation. Copying runs the risk of introducing inconsistency as the program evolves,
because programmers don’t keep the copies in sync. Copying code and specifications is a major source (if not the major source) of program bugs. In any case, implementers can always look at the
interface for the specification.
Implementation comments fall into two categories. The first category arises because a module implementation may define new types and functions that are purely internal to the module. If their
significance is not obvious, these types and functions should be documented in much the same style that we have suggested for documenting interfaces. Often, as the code is written, it becomes
apparent that the new types and functions defined in the module form an internal data abstraction or at least a collection of functionality that makes sense as a module in its own right. This is a
signal that the internal data abstraction might be moved to a separate module and manipulated only through its operations.
The second category of implementation comments is associated with the use of data abstraction. Suppose we are implementing an abstraction for a set of items of type 'a. The interface might look
something like this:
(** A set is an unordered collection in which multiplicity is ignored. *)
module type Set = sig
(** ['a t] represents a set whose elements are of type ['a]. *)
type 'a t
(** [empty] is the set containing no elements. *)
val empty : 'a t
(** [mem x s] is whether [x] is a member of set [s]. *)
val mem : 'a -> 'a t -> bool
(** [add x s] is the set containing all the elements of [s]
as well as [x]. *)
val add : 'a -> 'a t -> 'a t
(** [rem x s] is the set containing all the elements of [s],
minus [x]. *)
val rem : 'a -> 'a t -> 'a t
(** [size s] is the cardinality of [s]. *)
val size: 'a t -> int
(** [union s1 s2] is the set containing all the elements that
are in either [s1] or [s2]. *)
val union: 'a t -> 'a t -> 'a t
(** [inter s1 s2] is the set containing all the elements that
are in both [s1] and [s2]. *)
val inter: 'a t -> 'a t -> 'a t
module type Set =
type 'a t
val empty : 'a t
val mem : 'a -> 'a t -> bool
val add : 'a -> 'a t -> 'a t
val rem : 'a -> 'a t -> 'a t
val size : 'a t -> int
val union : 'a t -> 'a t -> 'a t
val inter : 'a t -> 'a t -> 'a t
In a real signature for sets, we’d want operations such as map and fold as well, but let’s omit these for now for simplicity. There are many ways to implement this abstraction.
As we’ve seen before, one easy way is as a list:
(** Implementation of sets as lists with duplicates. *)
module ListSet : Set = struct
type 'a t = 'a list
let empty = []
let mem = List.mem
let add = List.cons
let rem x = List.filter (( <> ) x)
let size lst = List.(lst |> sort_uniq Stdlib.compare |> length)
let union lst1 lst2 = lst1 @ lst2
let inter lst1 lst2 = List.filter (fun h -> mem h lst2) lst1
This implementation has the advantage of simplicity. For small sets that tend not to have duplicate elements, it will be a fine choice. Its performance will be poor for large sets or applications
with many duplicates but for some applications that’s not an issue.
Notice that the types of the functions do not need to be written down in the implementation. They aren’t needed because they’re already present in the signature, just like the specifications that are
also in the signature don’t need to be replicated in the structure.
Here is another implementation of Set that also uses 'a list but requires the lists to contain no duplicates. This implementation is also correct (and also slow for large sets). Notice that we are
using the same representation type, yet some important aspects of the implementation (add, size, union) are quite different.
(** Implementation of sets as lists without duplicates. *)
module UniqListSet : Set = struct
type 'a t = 'a list
let empty = []
let mem = List.mem
let add x lst = if mem x lst then lst else x :: lst
let rem x = List.filter (( <> ) x)
let size = List.length
let union lst1 lst2 = lst1 @ lst2 |> List.sort_uniq Stdlib.compare
let inter lst1 lst2 = List.filter (fun h -> mem h lst2) lst1
An important reason why we introduced the writing of function specifications was to enable local reasoning: once a function has a spec, we can judge whether the function does what it is supposed to
without looking at the rest of the program. We can also judge whether the rest of the program works without looking at the code of the function. However, we cannot reason locally about the individual
functions in the two module implementations just given. The problem is that we don't have enough information about the relationship between the concrete type ('a list) and the corresponding
abstract type (set). This lack of information can be addressed by adding two new kinds of comments to the implementation: the abstraction function and the representation invariant for the abstract
data type. We turn to discussion of those, next.
6.3.1. Abstraction Functions
The client of any Set implementation should not be able to distinguish it from any other implementation based on its functional behavior. As far as the client can tell, the operations act like
operations on the mathematical ideal of a set. In the first implementation, the lists [3; 1], [1; 3], and [1; 1; 3] are distinguishable to the implementer, but not to the client. To the client, they
all represent the abstract set {1, 3} and cannot be distinguished by any of the operations of the Set signature. From the point of view of the client, the abstract data type describes a set of
abstract values and associated operations. The implementer knows that these abstract values are represented by concrete values that may contain additional information invisible from the client’s
view. This loss of information is described by the abstraction function, which is a mapping from the space of concrete values to the abstract space. The abstraction function for the implementation
ListSet maps each list to the set of its elements; for example, the lists [3; 1], [1; 3], and [1; 1; 3] from above all map to the abstract set {1, 3}.
Notice that several concrete values may map to a single abstract value; that is, the abstraction function may be many-to-one. It is also possible that some concrete values do not map to any abstract
value; the abstraction function may be partial. That is not the case with ListSet, but it might be with other implementations.
The abstraction function is important for deciding whether an implementation is correct, therefore it belongs as a comment in the implementation of any abstract data type. For example, in the ListSet
module, we could document the abstraction function as follows:
module ListSet : Set = struct
(** Abstraction function: The list [[a1; ...; an]] represents the
set [{b1, ..., bm}], where [[b1; ...; bm]] is the same list as
[[a1; ...; an]] but with any duplicates removed. The empty list
[[]] represents the empty set [{}]. *)
type 'a t = 'a list
This comment explicitly points out that the list may contain duplicates, which is helpful as a reinforcement of the first sentence. Similarly, the case of an empty list is mentioned explicitly for
clarity, although some might consider it to be redundant.
The abstraction function for the second implementation, which does not allow duplicates, hints at an important difference. We can write the abstraction function for this second representation a bit
more simply because we know that the elements are distinct.
module UniqListSet : Set = struct
(** Abstraction function: The list [[a1; ...; an]] represents the set
[{a1, ..., an}]. The empty list [[]] represents the empty set [{}]. *)
type 'a t = 'a list
6.3.2. Implementing the Abstraction Function
What would it mean to implement the abstraction function for ListSet? We’d want a function that took an input of type 'a ListSet.t. But what should its output type be? The abstract values are
mathematical sets, not OCaml types. If we did hypothetically have a type 'a set that our abstraction function could return, there would have been little point in developing ListSet; we could have
just used that 'a set type without doing any work of our own.
On the other hand, we might implement something close to the abstraction function by converting an input of type 'a ListSet.t to a built-in OCaml type or standard library type:
• We could convert to a string. That would have the advantage of being easily readable by humans in the toplevel or in debug output. Java programmers use toString() for similar purposes.
• We could convert to 'a list. (Actually there’s little conversion to be done). For data collections this is a convenient choice, since lists can at least approximately represent many data
structures: stacks, queues, dictionaries, sets, heaps, etc.
The following functions implement those ideas. Note that to_string has to take an additional argument string_of_val from the client to convert 'a to string.
module ListSet : Set = struct
let uniq lst = List.sort_uniq Stdlib.compare lst
let to_string string_of_val lst =
let interior =
lst |> uniq |> List.map string_of_val |> String.concat ", "
in
"{" ^ interior ^ "}"
let to_list = uniq
Installing a custom formatter, as discussed in the section on encapsulation, could also be understood as implementing the abstraction function. But in that case the result is usable only by humans at the toplevel, not programmatically by other code.
6.3.3. Commutative Diagrams
Using the abstraction function, we can now talk about what it means for an implementation of an abstraction to be correct. It is correct exactly when every operation that takes place in the concrete
space makes sense when mapped by the abstraction function into the abstract space. This can be visualized as a commutative diagram:
A commutative diagram means that if we take the two paths around the diagram, we have to get to the same place. Suppose that we start from a concrete value and apply the actual implementation of some
operation to it to obtain a new concrete value or values. When viewed abstractly, a concrete result should be an abstract value that is a possible result of applying the function as described in its
specification to the abstract view of the actual inputs. For example, consider the union function from the implementation of sets as lists with repeated elements covered last time. When this function
is applied to the concrete pair [1; 3], [2; 2], it corresponds to the lower-left corner of the diagram. The result of this operation is the list [2; 2; 1; 3], whose corresponding abstract value is
the set {1, 2, 3}. Note that if we apply the abstraction function AF to the input lists [1; 3] and [2; 2], we have the sets {1, 3} and {2}. The commutative diagram requires that in this instance the
union of {1, 3} and {2} is {1, 2, 3}, which is of course true.
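This instance of the diagram can be checked directly. The sketch below assumes a duplicate-allowing union that simply appends lists, and uses sorted deduplication as a stand-in for the abstraction function; neither this `af` nor this `union` is part of the Set signature, they are illustrative helpers only.

```ocaml
(* Sketch: check one instance of the commutative diagram, assuming
   union on the duplicate-allowing representation is list append,
   and using sort_uniq as a stand-in for the abstraction function. *)
let af lst = List.sort_uniq Stdlib.compare lst
let union lst1 lst2 = lst1 @ lst2

let () =
  (* Lower path: concrete union first, then abstract the result. *)
  let lower = af (union [1; 3] [2; 2]) in
  (* Upper path: abstract both inputs, then take their union. *)
  let upper = af (af [1; 3] @ af [2; 2]) in
  assert (lower = upper);
  assert (lower = [1; 2; 3])
```

Both paths land on the abstract set {1, 2, 3}, as the diagram requires.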
6.3.4. Representation Invariants
The abstraction function explains how information within the module is viewed abstractly by module clients. But that is not all we need to know to ensure correctness of the implementation. Consider
the size function in each of the two implementations. For ListSet, which allows duplicates, we need to be sure not to double-count duplicate elements:
let size lst = List.(lst |> sort_uniq Stdlib.compare |> length)
But for UniqListSet, in which the lists have no duplicates, the size is just the length of the list:
let size = List.length
How do we know that latter implementation is correct? That is, how do we know that “lists have no duplicates”? It’s hinted at by the name of the module, and it can be deduced from the implementation
of add, but we’ve never carefully documented it. Right now, the code does not explicitly say that there are no duplicates.
In the UniqListSet representation, not all concrete data items represent abstract data items. That is, the domain of the abstraction function does not include all possible lists. There are some
lists, such as [1; 1; 2], that contain duplicates and must never occur in the representation of a set in the UniqListSet implementation; the abstraction function is undefined on such lists. We need
to include a second piece of information, the representation invariant (or rep invariant, or RI), to determine which concrete data items are valid representations of abstract data items. For sets
represented as lists without duplicates, we write this as part of the comment together with the abstraction function:
module UniqListSet : Set = struct
(** Abstraction function: the list [[a1; ...; an]] represents the set
[{a1, ..., an}]. The empty list [[]] represents the empty set [{}].
Representation invariant: the list contains no duplicates. *)
type 'a t = 'a list
If we think about this issue in terms of the commutative diagram, we see that there is a crucial property that is necessary to ensure correctness: namely, that all concrete operations preserve the
representation invariant. If this constraint is broken, functions such as size will not return the correct answer. The relationship between the representation invariant and the abstraction function
is depicted in this figure:
We can use the rep invariant and abstraction function to judge whether the implementation of a single operation is correct in isolation from the rest of the functions in the module. A function is
correct if these conditions:
1. The function’s preconditions hold of the argument values.
2. The concrete representations of the arguments satisfy the rep invariant.
imply these conditions:
1. All new representation values created satisfy the rep invariant.
2. The commutative diagram holds.
The rep invariant makes it easier to write code that is provably correct, because it means that we don’t have to write code that works for all possible incoming concrete representations—only those
that satisfy the rep invariant. For example, in the implementation UniqListSet, we do not care what the code does on lists that contain duplicate elements. However, we do need to be concerned that on
return, we only produce values that satisfy the rep invariant. As suggested in the figure above, if the rep invariant holds for the input values, then it should hold for the output values, which is
why we call it an invariant.
6.3.5. Implementing the Representation Invariant
When implementing a complex abstract data type, it is often helpful to write an internal function that can be used to check that the rep invariant holds of a given data item. By convention, we will
call this function rep_ok. If the module accepts values of the abstract type that are created outside the module, say by exposing the implementation of the type in the signature, then rep_ok should
be applied to these to ensure the representation invariant is satisfied. In addition, if the implementation creates any new values of the abstract type, rep_ok can be applied to them as a sanity
check. With this approach, bugs are caught early, and a bug in one function is less likely to create the appearance of a bug in another.
A convenient way to write rep_ok is to make it an identity function that just returns the input value if the rep invariant holds and raises an exception if it fails.
(* Checks whether x satisfies the representation invariant. *)
let rep_ok x =
if (* check the RI holds of x *) then x else failwith "RI violated"
Here is an implementation of Set that uses the same data representation as UniqListSet, but includes copious rep_ok checks. Note that rep_ok is applied to all input sets and to any set that is ever
created. This ensures that if a bad set representation is created, it will be detected immediately. In case we somehow missed a check on creation, we also apply rep_ok to incoming set arguments. If
there is a bug, these checks will help us quickly figure out where the rep invariant is being broken.
(** Implementation of sets as lists without duplicates. *)
module UniqListSet : Set = struct
(** Abstraction function: The list [[a1; ...; an]] represents the
set [{a1, ..., an}]. The empty list [[]] represents the empty set [{}].
Representation invariant: the list contains no duplicates. *)
type 'a t = 'a list
let rep_ok lst =
let u = List.sort_uniq Stdlib.compare lst in
match List.compare_lengths lst u with 0 -> lst | _ -> failwith "RI"
let empty = []
let mem x lst = List.mem x (rep_ok lst)
let add x lst = rep_ok (if mem x (rep_ok lst) then lst else x :: lst)
let rem x lst = rep_ok (List.filter (( <> ) x) (rep_ok lst))
let size lst = List.length (rep_ok lst)
let union lst1 lst2 =
rep_ok
(List.fold_left
(fun u x -> if mem x lst2 then u else x :: u)
(rep_ok lst2) (rep_ok lst1))
let inter lst1 lst2 = rep_ok (List.filter (fun h -> mem h lst2) (rep_ok lst1))
Calling rep_ok on every argument can be too expensive for the production version of a program. The rep_ok above, for example, requires linearithmic time, which destroys the efficiency of all the
previously constant time or linear time operations. For production code, it may be more appropriate to use a version of rep_ok that only checks the parts of the rep invariant that are cheap to check.
When there is a requirement that there be no run-time cost, rep_ok can be changed to an identity function (or macro) so the compiler optimizes away the calls to it. However, it is a good idea to keep
around the full code of rep_ok so it can be easily reinstated during future debugging:
let rep_ok lst = lst

let rep_ok_expensive lst =
let u = List.sort_uniq Stdlib.compare lst in
match List.compare_lengths lst u with 0 -> lst | _ -> failwith "RI"
Some languages provide support for conditional compilation, which allows some parts of the codebase to be compiled while others are omitted. The OCaml compiler supports a flag noassert that disables assertion checking. So you could implement rep invariant checking with assert, and turn it off with noassert. The problem with that is that some portions of your codebase might require assertion checking to be turned on to work correctly.
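A sketch of that approach, using a hypothetical boolean helper `rep_ok_bool` (the name is ours, not from the module above): checks written with `assert` are compiled away when assertion checking is disabled.

```ocaml
(* Sketch: a boolean rep invariant check used with [assert], so that
   compiling with the noassert flag removes the checks entirely. *)
let rep_ok_bool lst =
  List.compare_lengths lst (List.sort_uniq Stdlib.compare lst) = 0

let add x lst =
  assert (rep_ok_bool lst);
  let lst' = if List.mem x lst then lst else x :: lst in
  assert (rep_ok_bool lst');
  lst'
```

Because disabling assertions removes these checks at compile time, the surrounding code must not depend on them for its observable behavior.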
Understanding Discrete Algorithms - Do My GRE Exam
Discrete Mathematics is the study of very fundamental mathematical structures that are fundamentally discrete and not continuous. It is often used in the field of math and science, but is also used
in many other disciplines such as architecture, finance, business and other related fields. There are a few different types of discrete mathematics.
Discrete equations are equations defined over discrete domains rather than over the continuum. These include difference equations, the discrete analogue of differential equations, as well as Diophantine equations, whose solutions are restricted to integers. Such equations can be hard to solve because many complex factors interact, which is why it is so hard to profit from these formulas in real life, and why they are studied so much in academia.
Discrete games are games played in distinct moves over a finite set of positions; Monopoly is a familiar example. It is a very basic example, but it shows how complex these problems can be. In order to get a truly good grasp of them you need to look into them more closely. They are extremely important in education and in the field of business.
Discrete algorithms are step-by-step mathematical procedures. They can be very difficult to design, yet every step must be carried out correctly for the algorithm to be effective. This is often the case with mathematical algorithms which have multiple steps and require solving each one in order to work.
Discrete fractals are mathematical objects that are normally fractal in nature, and thus there is no true solution to the object. However, by applying a mathematical method known as geometric
reasoning, we can make a very accurate visualization of the object.
Discrete algorithms are all about producing a unique and reliable output. The best example of this would be an algorithm which calculates Fibonacci numbers. It applies the same rule at every step, so it is deterministic: run it again on the same input and it will continue to produce the same outputs every time.
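A minimal sketch of such a deterministic procedure (the essay names no specific formula, so this uses the standard iterative definition):

```python
def fib(n):
    """Compute the n-th Fibonacci number iteratively.

    The same rule is applied at every step, so the procedure is
    deterministic: the same input always yields the same output.
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```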
Discrete graphing involves the use of mathematics based on curves in a mathematical form. These curves are generally found in nature, for example in the shape of a car's wing or a leaf. Given a curve describing a car's motion, for example, you can figure out how fast the car is going and how long it will take to reach the next stop.
This is actually the main purpose of the theory behind discrete graphing in mathematics. A mathematical formula is used to find something called an “Euler” problem, which is a type of a “Theory of
Optimal Solving”.
Using geometric reasoning, you can figure out where the curve is going to show, and from that point can figure out what the curve would look like at a different point in space. After doing this, you
can then figure out if you can change the point in the space in order to affect the curve. By doing this, you can figure out whether or not it is possible to alter the path to alter the curves in
your visualizations.
Discrete fractals are also called discrete geometric objects. In this case, you have a mathematical object such as a fractal image, and when you use geometric reasoning you can see whether or not the
shape is stable. or not, which can then help you figure out how the object can change over time.
Discrete geometry is used in order to find the relationship between two geometrical objects. This relationship is then used to determine if the objects are congruent, which is one of the main uses
for discrete geometry. If the objects are congruent, then they will appear to be the same shape, and if they are not then they will appear to be different.
Discrete algorithms are the most used type of algorithms in mathematics. These algorithms work to find out what the relationships between two or more mathematical objects are. They work to determine
if they are congruent or not, and what their differences are.
21.7 The Complex Spiral of the Imaginary World
Home   Science Page   Data Stream Momentum   Directionals   Root Beings   The Experiment
21.7 The Complex Spiral of the Imaginary World
Now that we have discovered the Slope of our Line, let us generate the Equation for the Line itself. But first let us look at a computer experiment that solidifies our discoveries and simplifies the derivation.
A. A Computer Experiment
To better illustrate the algebraic discoveries of this past session, let us examine a computer experiment, which verifies the necessary results, and will lead us to the appropriate equation.
The Square Root Series and the Divine Ratio
The chart is labeled The Square Root Series and the Divine Ratio. In the box at the upper left is the Root Number whose Square Root we are generating. The 1st column represents the number of iterations, N. In this computer experiment we did 100 iterations. In the 2nd column is the iterated D Series, d(N). Its standard equation for all square roots is listed below under d(N). In the 3rd column is the Square Root Ratio, which equals the famous Fraction Series plus one, F(N) + 1. This is generated from the ratio of the consecutive members of the D Series. The formula used is listed below under F(N). The terms Num and Den in these equations refer to the Num and Den of the little box on the upper left. In the 4th column is the difference between the Root Ratio of column 3 and the computer generated Root, which we will call the Real Root, knowing full well that the computer only goes up to 16 places of accuracy. This difference between the iterated Root Ratio and the Real Root we called Δ(N). In column 5 we have the ratio of consecutive members of column 4, the Difference Series. This Difference Ratio, Δ(N−1)/Δ(N), is theoretically supposed to approach the Divine Ratio. Thus in column 6 we subtract the Difference Ratio from the Divine Ratio, Δ(N−1)/Δ(N) − d. The formula for the Divine Ratio is listed below. This number should approach zero, but doesn't due to built-in computer inaccuracies, of merely 16-place precision. In the 7th column we have the Log of the Difference Series of column 4, Log(Δ(N)).
Comments on computer accuracy
The second column, d(N), gets so large so quickly that it probably exceeds computer accuracy after less than 25 iterations. By the time it reaches 100 iterations, the number is on the order of the 86th power of ten, supposedly a number with 86 whole number places, while the computer truncates it to 16. Thus the accuracy of the rest of the table is flawed a bit because of this problem, i.e. only 16-place
accuracy. Notice however that in looking at the Difference Series of column 4 that even with the inaccuracies that it has still reached 13 place accuracy after 100 iterations. Further we can see from
column 6 that the difference between the divine ratio and Difference ratio is very slight. After 100 iterations it is within 8 thousandths of the Divine Ratio.
Experimental results
There are quite a few significant results from this computer experiment. First it confirms that the D series that generates the F Series is an accurate way of generating square roots. This had
already been confirmed in many other contexts. Second it confirms that the iterated Difference Ratio is very close to the Divine Ratio, from column 6. However the most exciting result is from column 7.
Correlations and slopes
On the left below our 100 iterations are all of the equations and constants that are used in this table of numbers. In the middle and the right bottom are the correlations and slopes of the
appropriate columns, i.e. those just above, with the iteration column, N, i.e. column 1. Thus column four, the Difference Series is correlated about 34% with the number of iterations with a best-fit
slope of .02. Judging from the sharp curving exponential graph, all the correlation between these two sets of data is based upon the flat area of the curve when for all practical purposes the F
Series is the Root. The correlation of the Difference Ratio Column with the Iteration Column is about 16% with a slope of about .002. All this says is that the Divine Ratio and the Difference Ratio
are nearly equal, determined by the flat best-fit slope.
Most significant result
In column 7, the Log of the Difference Series, there is a 100.00% correlation, (or 99.995% to show that it is not quite exact) with the number of iterations, N, from column 1. Looking at the erratic
nature of the numbers that are used to generate this series, this is an incredible result. This means that there is a nearly perfect linear correlation between the number of iterations and the
logarithm of the Difference Series. Further the slope of the best-fit line is within 2 ten thousandths of the negative of the log of the Divine Ratio. Thus it is abundantly clear that the negative of
the log of the Divine Ratio is the slope of this logarithmic line that we have seen graphed many times. This result generates the line equation that we have been searching so long for.
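The experiment is easy to reproduce in miniature. The original d(N) and F(N) formulas are images not reproduced above, so the sketch below assumes the standard continued-fraction step F ← (x − 1)/(2 + F), which converges to √x − 1; under that assumption the Difference Ratio settles near the negative constant −(√x + 1)/(√x − 1), which plays the role of the Divine Ratio described here.

```python
import math

def difference_series(x, n_iters=12):
    """Iterate F <- (x - 1)/(2 + F), which converges to sqrt(x) - 1,
    and record the Difference Series Delta(N) = F(N) - (sqrt(x) - 1)."""
    limit = math.sqrt(x) - 1
    f = 1.0  # arbitrary positive starting value
    deltas = []
    for _ in range(n_iters):
        f = (x - 1) / (2 + f)
        deltas.append(f - limit)
    return deltas

deltas = difference_series(2)
# Consecutive ratios Delta(N-1)/Delta(N) settle near the negative
# constant -(sqrt(2) + 1)/(sqrt(2) - 1), approximately -5.828.
print(deltas[8] / deltas[9])
```

The negative ratio confirms that the differences alternate sign, bouncing above and below zero as the text describes.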
B. Generating the Equation for the Line
Before moving into the development of our equation, let us review some basic rules. Because we have been speaking about logarithms and exponentiation let us first just review a few of the rules that
govern their behavior.
Rules of Exponentiation
Let us start with exponentiation.
Let us briefly note that the commutative property does not apply to the property of exponentiation. This is shown clearly in the first three equations. Thus the placing of the parentheses in
exponentiation is crucial.
Rules for Logarithms
Below are some of the definitional rules of logarithms.
Note that a negative logarithm represents an inversion.
Classic Equation for a line
Now that we have reviewed a few of these basic relations we can move on to the classic representation of the line equation, y = mx + b.
m, the slope, the negative log of the Divine Ratio
While x & y are variables, m and b are constants. The constant m is called the slope of the line. The slope of our logarithmic line is the negative of the Log of the Divine Ratio, m -> -log (d2).
This was illustrated clearly in the just mentioned computer experiment. The Divine Ratio we shall call d2, for reasons that will become apparent later on.
The variable x = N, the number of iterations
This means that for each iteration N, our value for F(N) becomes closer to the Limit, which is the Square Root minus one. It becomes closer by a distinct increment, which is the log of the Divine Ratio, -log(d2). For each iteration our Root Being takes a logarithmic step closer to the Limit. Thus N, the number of iterations, is the variable that drives the equation and so would be the x in the above equation, x -> N.
The variable y is actually log y
Turning the y-axis logarithmic was what straightened out our line, thus the y value is represented as a logarithm. Thus we can represent the y in the above equation as y -> log y. The value of the
y-axis, which corresponds with each iteration, represents logarithmically how close our iterated function is to the Limit. Thus as our graph moves to the right, the negative exponent that represents
how close we are to the limit gets larger and larger. Remember that as N approaches infinity that log y approaches negative infinity. This means that the limit of y is simply zero, because this is
the Difference between the F Series and the Limit. In crude terms: When N is infinity the F Series equals the Limit. Thus their difference, i.e. y, equals zero. But for our graph, we are talking
about the logarithm of this difference, it becomes a very big negative number, eventually approaching negative infinity.
The constant b doesn't really matter.
The constant b is called the y intercept, because it represents the value of y when x or N, the number of iterations, equals zero. This point doesn't really exist, although it could be projected. However this point doesn't really matter. Because we are getting closer and closer to the Limit with each iteration, it doesn't really matter where we start. Because of the nature of our iterative expressions the beginning doesn't matter. We become logarithmically closer with each iteration in a linear fashion. Thus although the starting point could definitely slow us down in our Quest, it could certainly not deter us. No matter where we start, we will take these consistent baby steps towards our goal. Thus in the derivation that follows we will let b = 0, because the value has no real meaning anyway and this makes our equation of a line the simplest.
These substitutions are shown all together below.
Our expression for the logarithmic line
Now let us drop these expressions into the equation for a line and see what we've got:
log y = -N · log(d2)
This equation states that the log of the Difference between the iterated function and its limit is equal to a linear function of the number of Iterations, N, where the slope is -log d2.
Cleaning our the Logarithms
Of course like any good mathematician, we need to raise all those logarithms to the 10th power to clean them out.
First, 10 raised to the log y power equals y:
y = 10^(-N · log(d2))
2nd: The negative in the exponent on the right means to invert. Therefore we place the expression in the denominator of a fraction, simultaneously eliminating the negative sign in the exponent. Further we invert the exponential product for clarification:
y = 1 / 10^(N · log(d2)) = 1 / d2^N
This expression converts easily to the simple expression y = d2^(-N).
Equation for real points of the Difference Series bouncing back and forth around zero
Now that we are out of logarithms, we can remove the absolute value signs that should have surrounded d2. The real d2 comes complete with a negative sign, giving
y ≈ (d2)^(-N), with d2 negative.
This negative sign is very helpful for expressing our true graph because now the function bounces back and forth between a little above zero and a little below zero. Remember that y equals the difference between the iterated F Series and the Root minus one, of any square root. As N, the number of iterations, approaches infinity, y, the difference, approaches zero. The equation above approximates our Difference Series, i.e. the difference between the F Series and the Root minus one. Remember that the Divine Ratio between the parentheses is only an approximation of the slope as N approaches infinity. But we saw that it was a pretty good approximation.
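Numerically, the log-linear behavior is easy to check. Again assuming the continued-fraction step F ← (x − 1)/(2 + F) for √x (the original formulas are images not reproduced above), the slope of log₁₀|Δ(N)| should match −log₁₀|d2|, with d2 = −(√x + 1)/(√x − 1):

```python
import math

x = 2.0
limit = math.sqrt(x) - 1  # the Limit: sqrt(x) minus one
f, logs = 1.0, []
for _ in range(12):
    f = (x - 1) / (2 + f)
    logs.append(math.log10(abs(f - limit)))

# Successive differences of log10|Delta(N)| approximate the slope
# of the logarithmic line.
slopes = [b - a for a, b in zip(logs, logs[1:])]
d2 = -(math.sqrt(x) + 1) / (math.sqrt(x) - 1)
print(slopes[5], -math.log10(abs(d2)))
```

Each iteration takes one logarithmic step of the same size toward the Limit, which is exactly the straight line described above.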
Thus far our equation from step 6 only describes the behavior of our function in the realm of real numbers. Now let us move into the imaginary world.
C. Equation for a Complex Spiral
In the above equation, N is restricted to being a positive integer because N represents distinct iterations. Thus y, which represents the Difference Series, is only a set of points, not a continuous function.
F Series, only a set of points
Indeed the Fraction Series, which the Difference Series is based upon, is itself only a set of points that approaches the Root minus one as a Limit. Indeed some of the points are greater than the
Limit and some are less than the Limit. When we graph the Difference Series, which is the difference between the F Series and its Limit, i.e. the Root minus one, the individual points converge
rapidly, bouncing from over to under zero. If however we graph this difference from a logarithmic perspective, we see a line of points. The visuals for this phenomenon were seen at the beginning of
the last section.
Points of the Real connected by the Imaginary
What happens to our Difference Series between these iterative points? We are going to postulate the existence of a continuous function that moves in and out of the real world from the imaginary
world. The points that appear in the real world exist only because the imaginary component equals zero.
Common mathematical phenomenon
This is a common mathematical phenomenon. Indeed Stephen Hawking postulates in his book, A Brief History of Time, that the real universe is only expanding into the imaginary universe, but that the combination of the two is a constant. When the real universe starts shrinking then the imaginary universe will begin growing. The Big Bang occurred when the real component of the Universe was nearly zero. Anyway, this section is looking for the equation that will represent our distinct, discrete F Series of real points from the continuous perspective of the Complex world.
Common phenomenon of consciousness
This is also a common phenomenon behind consciousness also. We are truly aware of the Present in discontinuous segments, fragmentary at best. These experiences of Being are separated from each other
by the imaginary world of thoughts. These thoughts create an incredible imaginary world of desires, fears, pain and pleasure, questing and dissipation. Every once in a while the world of thoughts is
forced back into the Now for brief moments to deal with Reality. Over all, however, the thread of our connection with the Now of the real world is connected by imaginary thought projection. Thought
projection is not evil, or even something that needs to be resisted. It just needs to be recognized as the false. It just needs to be differentiated from the real of Being. Cultivate the real and the
false will naturally melt away, exposed to the light of consciousness.
Difference Series for square Roots could be seen as a complex spiral
Our Difference Series for Square Roots could be seen as representing the points of real between the imaginary. Thus the Difference Series could easily be represented by a simple complex spiral, whose
ultimate limit is zero. Thus it spirals from a little above zero, to a little below zero, to a little closer above zero, to a little closer below zero. As mentioned the continuity is furnished by
imaginary components. When the imaginary components equal zero then this proposed continuous function approximates our Difference Series values.
Moving into the imaginary realm
In the previous discussion N represented the number of iterations and thus was a distinct whole number. Let us allow N to move continuously instead of in distinct steps. We will assign it a new variable, q. Further we will let it move in and out of the real realm by assigning the following value to N:
N = q (cos²(πq) + i sin²(πq))
With this substitution N is only completely real when q is an integer. When q is not an integer, the imaginary component is not equal to zero, i sin(πq) ≠ 0, but when q is an integer the imaginary component, i sin(πq) = 0, equals zero and disappears. Further, because N is always positive, we must square our trigonometric components, i.e. the cosine and sine functions. When q is an even integer, cos(πq) always equals one and sin(πq) always equals zero, neutralizing the imaginary component. However, when q is an odd integer, cos(πq) equals −1 while sin(πq) equals zero. We don't want our function to be negative, so we square our cos and sin functions.
A continuous function assigned to the step function N, or letting N = N(q)
We have assigned a continuous function, N(q), to the number of iterations, N. We show this substitution into equation 6 in equation 8 below, where y is a function of q rather than a function of N.
In step 9, we merely show the expanded version of the same N(q), defined in step 7.
In step 10 we apply the law of exponents again to get our final representation of the complex spiral that we were aiming for.
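The substitution can be explored numerically. The text's equation 7 does not appear above, so this sketch assumes the form N(q) = q(cos²(πq) + i·sin²(πq)), which matches the even/odd integer behavior described:

```python
import math

def n_of_q(q):
    """Assumed continuous extension of the iteration count:
    N(q) = q * (cos^2(pi q) + i * sin^2(pi q)).
    Purely real exactly when q is an integer, since sin(pi q) = 0 there.
    """
    c = math.cos(math.pi * q)
    s = math.sin(math.pi * q)
    return complex(q * c * c, q * s * s)

print(n_of_q(3))    # integer q: imaginary part vanishes, N equals q
print(n_of_q(2.5))  # non-integer q: nonzero imaginary component
```

Between integers the value swings out into the imaginary realm and returns to the real line exactly at each iteration count, which is the in-and-out motion the spiral picture describes.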
D. What does our equation look like?
What does our equation of the Complex spiral do? It spirals ever closer to zero in geometric increments. What does it look like?
Normal perspective, real and complex
From the real perspective, our equation for y above would only be a few points that quickly reach zero. (We saw this graph at the beginning of Section 3, entitled The Slope.) From a complex
perspective the graph looks like a spiral on a funnel, which quickly reaches zero. Remember this zero is the Difference between the F Series and its Limit, the Root minus One. The bottom and top most
points of our spiral both form sharp curves like a roller coaster, which start sharply down or up and then quickly level out. When our top and bottom curves level out they intersect at zero.
Logarithmic perspective, real and complex
From a logarithmic perspective of the real world, again we have two series of points, both suggesting lines, which intersect at zero. From a logarithmic perspective of the complex world our graph is
of a spiral that gets progressively closer to zero. This perspective is shown below.
Note that while the spiral seems to be moving quickly towards zero, it actually never reaches it. We have truncated the logarithmic region between 1.0 × 10^-16 and zero for purposes of display.
Complex Spiral with Implied Lines
In the next graph we show the Implied Lines of the Complex Spiral. The points where the Lines intersect the Spiral are the F Series iterations; the rest of the graph is only the imaginary projected
upon the real for understanding.
Our logarithmic graph is shown from a complex perspective. The points are real. The curves from the points move into the imaginary realm. The lines, except for the points, while existing in the real
plane, are pure projection. Remember that the complex perspective includes the imaginary world, while the real world does not.
The implied lines never intersect and don't exist
The two implied curves of an ordinary perspective intersect rapidly at zero. From a logarithmic perspective these curves turn into two implied lines, which never intersect, no matter how close they seem to get. It is a spiral that moves continuously inwards. The top and bottom points of this spiral form two lines that point towards zero but never actually reach it. Their imaginary point of intersection is emptiness, nothing, zero. Note that the implied lines of the real world don't even exist in the complex world. All the motion from one point to the other happens in a spiral fashion through the imaginary world rather than in a linear fashion in the real world.
Our simple number has become an inward spiral with an unreal Limit
Taking a step back, note that our simple number has now become a bounded spiral, moving through the complex plane, in and out of the real plane. Each time it reaches the real plane, the F Series for square roots, which is itself a function of a D Series, can characterize it. Our number doesn't really have a Limit in the ordinary plane. The spiral moves continually inwards, never reaching the Limit.
E. Philosophy: Perspective is everything
Now that we have explored the physical attributes of our Complex Spiral, let us explore the metaphorical and metaphysical implications of our equation.
Thoughts break world into discontinuous parts
What appears as two sets of discontinuous points from the real perspective could be described as a single spiral from the complex perspective, which combines the real and imaginary worlds in a single plane. It is often so with our mind. Our thoughts break the world into many seemingly discontinuous parts. Further, because our rational mind can only perceive the real world and not the imaginary, it firmly believes that the world is truly broken into an infinity of different and conflicting pieces, which align themselves in a chaotic and seemingly random pattern.
Continuity through Transcendence of thought
However, when thought is transcended in the perception of the world, the imaginary realm connects the physical world together in the complex plane, which combines the real and the imaginary. Suddenly all that made no sense to the mind, because it was broken into real parts, becomes connected by our imaginary spiral in the complex plane. Further, our logical mind cannot even begin to perceive what this imaginary world consists of. Our minds cannot hope to perceive what it means to be the square root of negative one. All we know is that the imaginary plane bonds the real plane into the complex world in a continuous fashion. Thus the first metaphorical lesson to be learned is that thoughts limit direct perception.
Ordinary vs. special perspective: Polarities vs. continuous
A second lesson has to do with the depth of perception influencing the appearance of reality. Sometimes reality will even be perceived in diametrically opposite ways. Those seeing our graph from an
ordinary perspective see a bunch of points that converge quickly. Those who are able to see into the complex plane logarithmically see a Complex Spiral moving ever inwards. The ordinary perspective
turns the world into positive and negative polarities, while the special perspective sees an unbroken continuity of perspectives. In this ordinary worldview distinct goals are stressed over the
continuity of process.
Logarithmic vs. linear perspective
Another flaw of perception that is revealed in our equation and graph has to do with the concentration of perspective. Many have only cultivated a linear perspective. Under this perspective, our
graph reaches its goal quickly, while under the more concentrated perspective, the goal is never reached. From a linear perspective the iterations quickly reach the goal, while from the more
concentrated logarithmic perspective, the goal is always infinitely far away. From the linear perspective, enlightenment is the end goal to be reached, while from the logarithmic perspective, it is
merely a step along the path. From the linear perspective the student's movements fairly quickly intersect the line of the Master, while from the more subtle logarithmic perspective, the Master's movements are incomprehensibly subtler. From the perspective of the novice student, the Master is enlightened, while from the Master's perspective he has a long way to go.
Increasing perception, decreases self-confidence and increases humility
Initially our student/observer's perceptions are very blunt because they have not been cultivated. He might even think that his movements converge with the Master's. He feels a sense of false confidence. When our student increases the depth of his perception, two things happen in this metaphor. Huge gaps appear between the end points of his mini-goals, and the end suddenly seems increasingly further away. The chasm of uncertainty opens up, which eventually leads to a deeper understanding. Thus a blunt self-confidence has been replaced by a more subtle humility. This is called going backwards to go forwards. Pay attention to this mechanism and cultivate its perception.
Transcendence and emergence
When the student cultivates his perceptions and reaches the above enlightenment, he sees something radically different from what he saw before. With his goal-oriented mind, he saw only two quickly
converging curves. Now with increased perceptions he sees that his curves are only a series of distinct points. Further a chasm has opened up between the end points of his curves, which have turned
straight. Thus the truth of two converging curves has been replaced by the truth of two straight lines, which are not continuous at all but consist merely of points. Further these point lines never
converge. Thus the phenomenon is the same, but the level of perception yields up two contradictory notions of truth. The lesson here is that an increase in perceptions can lead to transcendence of
the ordinary realm. The difference is not just a matter of degree, but is an emergent property of perception.
Hopefully humility not pride
With this enlightenment based upon increased perception, the student comes to realize that his notion of truth has transcended that which went before, while still including it. Hopefully the student
gains humility from this transition, not pride. If he holds onto his emergence as a new truth that he has gained, he will probably feel pride. However, if he realizes that this emergence will only be followed by more emergent truths as his perceptions increase, then he feels a great humility before the awesomeness of it all. Now our student might be able to perceive that, while he is on the same Path as the Master, the Master's refinement far surpasses what he himself has achieved. At this point he has made a huge step forward and is able to comprehend refinement.
The emergence of experiential reality is spontaneous
Our Student has increased his perception from a linear to the more concentrated logarithmic perspective, but has still not increased the depth of his perception to include the imaginary world. While
our student has increased perception, he has not yet let go of thought as his mode of apprehension. Thought based in polarity does not include the continuity of the complex realm. As the Student's perception is focused inwards, thoughts are dissolved and experiential reality emerges. Because experiential reality is so spontaneous, the inner subtle circles might manifest unexpectedly and possibly even erratically in the exterior world. Thus we arrive at the converse. Those who haven't developed their perception thoroughly, and can only see through their eyes rather than their intuitions, might see internal continuity as external discontinuity. Of course the Master disguises the internal spontaneity as external continuous movement, further tricking those who can't see.
Advanced students might be seen as raw beginners, while intermediate students might be perceived as next in line to the master.
Integration of body and mind happens in the imaginary experiential world beyond thought
Thus the perception must increase to apprehend the imaginary world of internal movement in order to integrate mind and body on the highest levels. Remember however that the perception of this
imaginary realm must be sensed intuitively. It cannot be directly perceived by thoughts and analysis. This is simply too slow and discontinuous. Thus while thoughts might act as a bridge to the
experiential level of internal movement, they must be abandoned when you get there.
Thought must be purged
Further, the mind must be continually purged of thoughts in order to stay in the experiential realm. Thoughts, because they break things into pieces and separate observer and observed, experiencer and experience, are necessarily discontinuous. Further, because of the back and forth between mind and experience, thought is far too slow, blocking direct perception of reality. The mind, while a great signpost, is incredibly limited in the perception of experiential reality and must be abandoned.
A new level of emergent truth which transcends and includes that which went before
Now that thought has been purged and our student is able to perceive reality from the complex plane, he now sees the two lines of points as an infinite spiral moving continuously in the complex
realm. Therefore another level of truth has emerged which is seemingly contradictory to the level of truth that went before, but which transcends and includes it. Increasing the level of non-verbal
perception is essential for this next step.
F. This Complex Spiral only a metaphor
Before getting carried away let us remember that this whole section on the complex spiral is metaphorical in nature.
Only an attempt to emulate the real data of Difference Series
The F Series data is real. The differences between the F Series and its Limit, the iterated Difference Series, are also real. Each iteration drives the function closer and closer to the Root in a
logarithmic fashion. While these points are real and exact, the visual line that connects these iterative points is imaginary and approximate. While our line has a 99.995% correlation with the data,
it is not exact. Thus our line is not the real line. It only represents a best-fit line, as in statistics. That is why we compared it with the best-fit line from statistics in our earlier analysis of
the chart. Therefore the only thrust of this section was to write an approximate equation that would describe our Difference Series for square roots in complex terms.
Can't use ordinary equations to describe iterative equations
This leads us into the topic of the differences between ordinary equations and iterative equations. Ordinary equations are either right or wrong. Using ordinary equations to describe iterative equations is a hopeless task. They can be more or less right but can never be completely right, because even at the most fundamental levels the slope of the iterative line is only approximate. That these iterative equations generate lines at all is miraculous; to expect an ordinary equation to describe this linear relation perfectly is a little much to ask.
The algebra of imperfection
To be a little more specific, let us look at some algebra. Remember what y represents. It is supposed to emulate the Difference series, (N), just as a physics equation attempts to emulate the real
world. Note that the Difference Series is the difference of the F Series and the Limit, which contains the Root.
To simplify let us look at our equation for y without the imaginary components, equation 6. If y equals this exponential ratio then it only approaches the Difference Series as a limit because the
slope is only based upon a limit also.
Using some elementary algebra we rewrite the first equation in terms of (N).
We remember that the limit of y as N approaches infinity is zero. Thus the relation between the Root and the F Series is independent of the components of y.
The Complex Spiral, y, doesn't help determine the Root or the F Series
With this representation it is obvious that while y contains both (N) and R, the Root Number, it in no way assists us in the computation of (N) or the Root Number, R. It just approaches zero as N approaches infinity. But it gives us no clue as to the nature of its components. The ordinary equation, i.e. the one with y, must base its result upon the answer, i.e. the Root, while the F Series is independent of the Root.
F Series moves inexorably towards the Root
Thus our complex spiral will not help compute the Root, while the F Series heads inexorably towards the Root no matter what nonsense is fed into it. Therefore the F Series never gets lost, while the ordinary equation must align itself with the master symbolized by the Root. The F Series is guided by an internal pattern and doesn't need the Root to find its way. It can never get lost. Conversely, the ordinary equation for y needs the Master Root for guidance.
Ordinary equations content-based; iterative context-based
In this case an ordinary equation is a content-based equation, while our iterative feedback equations are context-based. An ordinary equation yields the same instantaneous answer after the numbers are plugged in for the variables, while with iterative equations one must ask how many times they have been iterated to get the right result. One can ask where an iterative equation is going, while ordinary equations are already there. Therefore iterative equations grow, while ordinary equations are static. Thus ordinary equations are ideal for describing dead phenomena, i.e. those based purely in the physical realm. Contrarily, they are hopelessly inadequate for describing live phenomena. On the other hand, the context-based iterative equations are unnecessarily animistic when describing inanimate objects, while their growth patterns are ideal for mimicking and describing context-based situations, where the result is determined by the situation rather than some absolute standard.
Ordinary equation only metaphor, not reality
Thus bear in mind that in this section we are attempting to use an ordinary equation to describe an iterative phenomenon. Our search is ultimately futile except as a metaphor. However the result that
we achieved was so beautiful that it is tempting to assign reality to it.
Complex Spiral Theory, while facilitating understanding is not true
This is the last level of metaphor. Basically the complex spiral metaphor, which seems so beautiful, which reflects the reality of the F Series so accurately, is only an imperfect reflection. It is
the same with words or thoughts. No matter how closely words or thoughts reflect reality, they are only a metaphor, which is not quite reality. Our complex spiral with all its metaphorical
significance is only a pale and dependent reflection of the reality of the F Series. The F Series really approaches its Limit, while the complex spiral only emulates the approach. The beauty and
regularity of the complex spiral is attractive, safe and secure, and probably most importantly, understandable. It is a content-based equation, which is static. Our real context based equation, the
iterative F Series, is based upon ambiguous shadings and half steps. It doesn't need to focus upon its goal. Through a constant balancing it proceeds inexorably towards its Limit, independent of input. Thus the complex reality of this iterative process is made more understandable by the complex spiral. But the understanding, while assisting our conception, is not really true. In this metaphor,
the F Series represents Reality, while the Complex Spiral is but a beautiful reflection.
G. Review: Special Effects of the Infinite Continued Fraction Family
Let us review a few of our discoveries concerning the nature of the F&D Series.
An Infinity of numbers between each number
In looking at the family of numbers generated by the infinite continued fractions, some interesting conclusions can be reached. First, there is an infinity of points between any two numbers. As mentioned earlier, the odd functions are below while the even functions are above the square root. Hence the square root of any positive rational number exists only between even and odd. Because nothing is there, there is always an infinity of points between a little more and a little less. Each number is only an approximation. One can become as precise as necessary, but one can never reach between even and odd. Our ideal number behaves as a practical magnitude. We are back where we started.
Irrationally rational
We discovered in this Notebook that: Given any rational number, X, a series can easily be written, such that the ratio of consecutive terms approaches the square root of that rational number. Written
another way, any square root can be written as the limit of the ratio of consecutive members of an integer series. The irrational square root can be written as the limit of a ratio. This is just the
The Pattern Reigns supreme
This integer series, which we call the Denominator Series or D Series, is generated by self-referencing the last two members of the series in a specific way. However the pattern is so dominant that
the first two members of the series are inconsequential to the inevitable result. Hence the pattern becomes more important than the numbers used to begin the series. Number fades in significance to
Pattern as determining the identity of the approximate number. Because of the supremacy of the pattern of the D Series for determining square roots, we decided to look a little more closely at the
pattern. In looking at these patterns, we found an incredible underlying structure, where coefficients became elements of the D series.
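The review states the self-referencing recurrence only in words. As a hedged sketch, one standard recurrence with exactly this property is D(n) = 2·D(n−1) + (X−1)·D(n−2), whose characteristic roots are 1 ± √X, so the ratio of consecutive terms tends to 1 + √X no matter which two seed values start the series:

```python
import math

def d_series_ratio(x, seeds=(1, 1), iterations=40):
    """Iterate D(n) = 2*D(n-1) + (x-1)*D(n-2) and return the
    ratio of consecutive terms, minus 1 (which tends to sqrt(x))."""
    a, b = seeds
    for _ in range(iterations):
        a, b = b, 2 * b + (x - 1) * a
    return b / a - 1

# The seed values are inconsequential; the pattern dominates.
print(d_series_ratio(2, seeds=(1, 1)))    # ≈ 1.4142135624 (sqrt 2)
print(d_series_ratio(2, seeds=(17, -3)))  # same limit, different seeds
```

The assumed recurrence illustrates the general claim; it is not necessarily the exact D Series defined earlier in the Notebook.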
Onward to the Higher Roots!
Because of all these marvelous connections and discoveries, we are craving an exploration of the Higher Roots through the glasses of the Infinite Continued Fraction Series and its derivative, the Denominator Series, i.e. the F&D Series. We don't know if there is any connection, or, if there is, what it is. Onward to the next Notebook, Higher Roots, to find out.
Simply Logical
3.4. Other uses of cut
Consider the following propositional program:
This inefficiency can be avoided by putting s,! at the beginning of the body of the first clause. However, in full clausal logic the goals preceding s might supply necessary variable bindings, which
requires them to be called first. A possible solution would be the introduction of an extra proposition symbol:
Show that the query ?-p succeeds, but that q and r are tried twice.
Show that q and r are now tried only once.
Just as we did with not, we can rewrite this new proposition symbol to a generally applicable meta-predicate:
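The definition itself is not reproduced in this extract; a sketch of the usual cut-based formulation (assuming the standard two-clause definition) is:

```prolog
% if P succeeds, commit to it and prove Q; otherwise prove R
if_then_else(P,Q,R):- P,!,Q.
if_then_else(P,Q,R):- R.
```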
Note that we can nest applications of if_then_else, for instance
Unfolding the definition of if_then_else yields
which clearly shows the meaning of the predicate: ‘if \(P\) then \(Q\) else if \(R\) then \(S\) else \(T\)’. This resembles the CASE-statement of procedural languages, only the above notation is much
more clumsy. Most Prolog interpreters provide the notation P->Q;R for if-then-else; the nested variant then becomes P->Q;(R->S;T). The parentheses are not strictly necessary, but in general the
outermost if-then-else literal should be enclosed in parentheses. A useful lay-out is shown by the following program:
    ( T=<37     -> blood_pressure(Patient,Condition)
    ; T>37,T<38 -> Condition=ok
    ; otherwise -> diagnose_fever(Patient,Condition)
    ).
otherwise is always assigned the truth-value true, so the last rule applies if all the others fail.
not and if-then-else show that many uses of cut can be replaced by higher-level constructs, which are easier to understand. However, this is not true for every use of cut. For instance, consider the
following program:
This program plays a game by recursively looking for best moves. Suppose one game has been finished; that is, the query ?-play(Start,First) (with appropriate instantiations of the variables) has
succeeded. As usual, we can ask Prolog whether there are any alternative solutions. Prolog will start backtracking, looking for alternatives for the most recent move, then for the move before that
one, and so on. That is, Prolog has maintained all previous board situations, and every move made can be undone. Although this seems a desirable feature, in reality it is totally unpractical because
of the memory requirements: after a few moves you would get a stack overflow. In such cases, we tell Prolog not to reconsider any previous moves, by placing a cut just before the recursive call. This
way, we pop the remaining choice points from the stack before entering the next recursion. In fact, this technique results in a use of memory similar to that of iterative loops in procedural languages.
Note that this only works if the recursive call is the last call in the body. In general, it is advisable to write your recursive predicates like play above: the non-recursive clause before the
recursive one, and the recursive call at the end of the body. A recursive predicate written this way is said to be tail recursive. If in addition the literals before the recursive call are
deterministic (yield only one solution), some Prolog interpreters may recognise this and change recursion into iteration. This process is called tail recursion optimisation. As illustrated above, you
can force this optimisation by placing a cut before the recursive call.
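The game-playing program is referred to but not shown in this extract; a hedged sketch of the tail-recursive shape being described (the predicate and helper names are illustrative) is:

```prolog
play(Board,Player):-
    lost(Board,Player).                 % non-recursive clause first
play(Board,Player):-
    find_best_move(Board,Player,Move),  % assumed deterministic helpers
    make_move(Board,Move,NewBoard),
    next_player(Player,Next),
    !,                                  % pop choice points: previous boards
    play(NewBoard,Next).                % cannot be reconsidered
```

Because the cut sits directly before the last-call recursion, each level of the game keeps only the current board on the stack, giving the iterative memory behaviour described above.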
Introduction to Univariate, Bivariate, and Multivariate Analysis
When it comes to the level of analysis in statistics, there are three different data analysis techniques that exist:
Univariate: When there is just one variable in the data and no mention of causes, effects, or causal links. For instance, a researcher may want to count the number of boys and girls in a school when conducting a survey. The data in this case would consist of just one variable and its count. The main goal of univariate analysis is to describe the data using the mean, median, variance, mode, dispersion, range, standard deviation, etc., to identify patterns within the data. Univariate data can be analyzed with the help of the following:
• Frequency Distribution Tables
• Histograms
• Frequency Polygons
• Pie Charts
• Bar Charts
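As a concrete sketch of the univariate measures listed above, using only Python's standard library (the marks data is invented for illustration):

```python
from collections import Counter
from statistics import mean, median, mode, pstdev, pvariance

marks = [72, 85, 91, 85, 67, 78, 85, 90, 61, 78]  # a single variable

# Descriptive statistics for one variable
print("mean:    ", mean(marks))
print("median:  ", median(marks))
print("mode:    ", mode(marks))
print("range:   ", max(marks) - min(marks))
print("variance:", pvariance(marks))
print("std dev: ", round(pstdev(marks), 2))

# A text-mode frequency distribution table
for value, count in sorted(Counter(marks).items()):
    print(value, "*" * count)
```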
Bivariate: When the dataset contains two variables and researchers aim to compare the two, bivariate analysis is the right technique. For instance, in a survey of a classroom, the researcher may want to analyze the ratio of students who scored above 85% with respect to their genders. In this case, there are two variables: gender (independent variable) and marks (dependent variable). A bivariate analysis will measure the correlation between the two variables. Bivariate data can be analyzed with the help of the following:
• Correlation Coefficients
• Regression Analysis
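A minimal standard-library sketch of both bivariate techniques above, Pearson's correlation coefficient and a least-squares regression line (the study-hours/marks data is invented):

```python
from math import sqrt

hours = [1, 2, 3, 4, 5, 6]        # independent variable
marks = [52, 60, 63, 71, 80, 84]  # dependent variable

n = len(hours)
mx, my = sum(hours) / n, sum(marks) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(hours, marks))
sxx = sum((x - mx) ** 2 for x in hours)
syy = sum((y - my) ** 2 for y in marks)

r = sxy / sqrt(sxx * syy)   # correlation coefficient
slope = sxy / sxx           # regression line: marks = slope*hours + intercept
intercept = my - slope * mx

print(f"r = {r:.3f}; marks = {slope:.2f}*hours + {intercept:.2f}")
```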
Multivariate: When there are more than two variables in the dataset, a more complicated statistical analysis method called multivariate analysis is utilized. For instance, suppose a doctor has gathered information on weight, blood pressure, and cholesterol. Additionally, she has gathered information about the individuals' dietary habits. She wants to look at how eating habits and the three health metrics are related. In this case, multivariate analysis would be necessary to understand how each variable relates to the others. Multivariate data can be analyzed with the help of the following:
• Factor Analysis
• Cluster Analysis
• Variance Analysis
• Discriminant Analysis
• Multidimensional Scaling
• Principal Component Analysis
• Redundancy Analysis
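As a sketch of the multivariate case described above (weight, blood pressure, cholesterol), a pairwise correlation matrix is a common first step; the measurements below are invented:

```python
from math import sqrt

# Invented measurements for five patients
data = {
    "weight":      [70, 82, 95, 60, 88],       # kg
    "bp":          [120, 130, 145, 110, 140],  # mmHg
    "cholesterol": [180, 210, 240, 160, 230],  # mg/dL
}

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

names = list(data)
for a in names:
    row = " ".join(f"{pearson(data[a], data[b]):+.2f}" for b in names)
    print(f"{a:>11}: {row}")
```

Techniques such as factor analysis or principal component analysis go further, but they start from exactly this kind of correlation/covariance structure.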
Probabilistic Inference
The most common and useful probabilistic inference task is to compute the posterior distribution of a query variable or variables given some evidence. Unfortunately, even the problem of estimating
the posterior probability in a belief network within an absolute error (of less than $0.5$), or within a constant multiplicative factor, is NP-hard, so general efficient implementations will not be
available. Computing the posterior probability of a variable is in a complexity class called $\#NP$ (pronounced “sharp-NP”). $NP$ is the complexity of determining whether there is a solution to a
decision problem where solutions can be verified in polynomial time. $\#NP$ is the complexity of counting the number of solutions. These are worst-case results; however, there is often structure that can be exploited, such as conditional independence.
The main approaches for probabilistic inference in belief networks are:
Exact inference
where the probabilities are computed exactly. A naive way is to enumerate the worlds that are consistent with the evidence, but this algorithm takes time exponential in the number of variables.
It is possible to do much better than this by exploiting the structure of the network. The recursive conditioning and variable elimination algorithms (below) are exact algorithms that exploit
conditional independence so that they can be much more efficient for networks that are not highly interconnected.
Approximate inference
where probabilities are approximated. Such methods are characterized by the guarantees they provide:
• They could produce guaranteed bounds on the probabilities. That is, they return a range $[l,u]$ where the exact probability $p$ is guaranteed to have $l\leq p\leq u$. An anytime algorithm may guarantee that $l$ and $u$ get closer to each other as computation time (and perhaps space) increases.
• They could produce probabilistic bounds on the error. Such algorithms might guarantee that the error, for example, is within 0.1 of the correct answer 95% of the time. They might also have guarantees that, as time increases, probability estimates will converge to the exact probability. Some even have guarantees of the rates of convergence. Stochastic simulation is a class of algorithms, many of which have such guarantees.
• They could make a best effort to produce an approximation that may be good enough, even though there may be cases where they do not work very well. One such class of techniques is called variational inference, where the idea is to find an approximation to the problem that is easy to compute. First choose a class of representations that are easy to compute. This class could be as simple as the set of disconnected belief networks (with no arcs). Next try to find a member of the class that is closest to the original problem. That is, find an easy-to-compute distribution that is as close as possible to the posterior distribution to be computed. Thus, the problem reduces to an optimization problem of minimizing the error, followed by a simple inference problem.
This book presents some exact methods and some stochastic simulation methods.
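To make the naive enumeration approach concrete, here is a minimal sketch for a two-variable network Rain → Wet (the probabilities are invented; real networks have many variables, which is where the exponential cost bites):

```python
# Tiny belief network: P(Rain) and P(Wet | Rain)
p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}

def posterior_rain_given_wet():
    """P(Rain=true | Wet=true), by enumerating every world
    consistent with the evidence and normalizing."""
    joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given[r]
             for r in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(posterior_rain_given_wet())  # 0.18 / (0.18 + 0.08) ≈ 0.692
```

Recursive conditioning and variable elimination avoid this brute-force enumeration by exploiting conditional independence.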
Creating Labor Estimate Curves
Submitted by Chris Barber
Originally posted 11/17/09
This information comes our way from Chris Barber at Fine Art Shipping.
Read it to find out more about dealing in a very systematic way with an age-old issue. This article offers step by step
instruction in how to use basic information technology to create a tool to increase your efficiency and effectiveness in managing projects.
Depending on how comfortable you are with creating and managing a partially automated system, a custom estimate and cut-list program can be a ridiculous time saver for your crating department. My
“crating engine” uses mostly simple math functions in a simple database application. With it, I can estimate the cost and dimensions of a crate and have a formatted cut list ready to print for the
craters in as little as fifteen seconds. Unusual crating scenarios often require only a few extra minutes of data entry before the results can be sent to a client services representative or printed
for execution. Once all specs are chosen from menus, every square inch of building material is automatically priced and calculated for weight, both for estimates and for the actual price of the built crate.
But whether you have your own crating program, or whether you do all of your math with pencil and paper, the big unknown for crating estimates is labor. Any given builder will have good days and bad
days. Averaging their past performance won’t always give a perfect estimate, but it will take their history into account and mitigate guesswork based on misleading examples. Naturally, the more
examples of past performance you record, the more likely you are to approach a good reliable mean. The other sticking point in estimating labor is the duration/volume ratio. For obvious reasons, this
ratio is not a straight line, but a curve. The smaller the cubic footage of any style of crate, the more minutes it will take to build per cubic foot, at a disproportionate rate of increase.
Likewise, the same curve levels off to nearly flat in the upper size range. I’ve plotted these curves for my lead crater so that I can make a reliable prediction of his performance on any style of
crate, regardless of the size job. Even if you do everything else in your head, an accurate time curve is an elegant alternative to guesswork. Of course, this isn’t limited to crating. It can be
applied to any production task with a similarly predictable set of actions. Here’s how to make your own:
Step 1. The first thing you will need is the raw data. Start recording exactly how long it takes you or your staff to build crates. Start a separate log for each crater, and each style of crate that
crater produces. Every log should include a series for minutes and a series for cubic feet. Then make a third series, dividing minutes by cubic feet. I put these series in columns; so if cell A3 =
minutes, and cell B3 = cubic feet, cell C3 = A3/B3. You will only use the second and third columns in the next step - cubic feet & minutes/cubic foot. Here’s an example log for “B-crates” with two
hypothetical craters, one a faster builder than the other:
Soon you should have enough data in those series to get reasonable estimates. The data collection is an ongoing process, however, and your logs should be updated regularly. Older numbers could be
dropped eventually to account for your crater’s growing experience and speed, but the aim is to collect as much information on each builder as possible. This is not to spy on your crew. It is to
accurately predict the time it will likely take this person or that to build the next crate.
There are two ways you can process your database into functional labor estimate curves. First I’ll show the quick way, and then I’ll explain what these numbers mean by showing the chart method.
Step 2a. Find the “power trendline” of each crating log you have made, and multiply it by the estimated cubic feet. I’ll explain what the power trendline is in some depth below, but for now you can
just treat it like a magic spell. If you aren’t a math geek and don’t care how, why or whether this really works, you can stop reading at the end of this step.
The fastest and most efficient way to process a given crater’s average curve on a given style of crate can be done in five math functions, and will fit on a spreadsheet the size of a postage stamp.
Cells Functions Descriptions
A1 =[length]*[width]*[height]*1/1728 estimated cubic feet
A2 =EXP(INDEX(LINEST(LN(y),LN(x),,),1,2)) coefficient A
A3 =INDEX(LINEST(LN(y),LN(x),,),1) coefficient b
A4 =A*(x^(b)) trendline equation
A5 =[cell A1]*[cell A4]*1/60 labor estimate
A1) The first cell should simply display the cubic footage of the crate being estimated. The least fussy way is to link this function to three blank cells somewhere else where you enter the crate’s
L, W, & H. Those same three blank cells can be linked to every curve you make (since you need a separate curve for each crater on each style of crate).
A2) The second cell should return the value of A to be used in the equation in cell 4. This cell should contain the exact function shown, but in place of x, link to the whole cubic feet series in
your crater’s log (B3:B14, to use the slower crater shown above as an example). Likewise, y must be linked to the whole series of data in the minutes/cubic foot column of your crater’s log (In this
example; C3:C14).
A3) The third cell should return the value of b for the equation in cell 4. Treat series variables x & y the same way here as you did in cell 2.
A4) The fourth cell should contain the function shown, but replacing x, A, b with the results of cells 1-3 respectively. Caution: in this equation, x refers only to the cubic footage of the crate
being estimated, because it is graphed on the x-axis. It is not the same 'x' variable as in cells 2 & 3.
A5) The fifth cell is the product of the values returned in cell A1 and cell A4, then divided by 60 to convert minutes into hours.
You can use these five steps to bypass the charting step described below and get your trendline equations straight from your database. But the chart actually shows what these numbers mean, and I
prefer to see graphic representations of the curves anyway.
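The five-cell recipe above is just an ordinary least-squares line fitted to the logged data. A rough stdlib-Python sketch of the same computation follows; the log entries below are invented for illustration, not taken from the article's example logs:

```python
import math

def power_trendline(x, y):
    """Fit y = A * x**b by least squares on (ln x, ln y).

    Mirrors the spreadsheet recipe: b is the slope of the log-log
    fit (cell A3) and A = exp(intercept) (cell A2).
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    A = math.exp(my - b * mx)
    return A, b

def labor_hours(A, b, cubic_feet):
    """Cells A4-A5: minutes/cu ft from the trendline, times volume, over 60."""
    return cubic_feet * (A * cubic_feet ** b) / 60

# Hypothetical crater log: cubic feet vs. minutes per cubic foot.
cuft = [4, 8, 16, 32, 64, 128]
min_per_cuft = [30, 21, 15, 10.5, 7.4, 5.2]
A, b = power_trendline(cuft, min_per_cuft)
```

With this made-up log the fit comes out near y = 60·x^-0.5, and a 100 cu ft crate estimates at roughly ten hours of build time.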
Step 2b. If the step described above seems too cryptic, the numbers involved can be more readily understood by graphing them. The program I use allows me to insert a visual chart into my spreadsheet,
define the x & y parameters and link them to the two relevant series of data. This is pretty basic, and I’m sure that it’s a universal feature in spreadsheet applications. The type of graph you want
is an x-y scatter chart. Your chart’s values are simply: x = cubic feet, and y = minutes/cubic foot. Once your graph is linked to those two series, you will see points plotted in the field – each
point representing the crater’s total time spent on a specific crate.
The more information you have (and the more consistent your crater is), the more it should suggest the hint of a curve starting in the top left corner and ending in the bottom right. Now you can give
the graph a trendline. The trendline extrapolates an average curve from your unwieldy cloud of points, in a visible line. You may need to choose from several types of trendline. I prefer what my
application calls the “power” type, which appears to produce the most realistic curve, leveling off dramatically as it approaches zero on each axis. The “exponential” and “logarithmic” types both
trace the trendline right off the chart at each end, and there’s no way a large crate will ever take negative minutes to build. Nor will a small crate ever have negative dimensions. The “linear” type
overrides the curve that I believe is there. The “moving average” type defeats our purpose entirely. The “polynomial” type creates a dip in the middle ground that doesn’t make sense to me. Even if I
wanted to address the handling logistics of larger crates, this potential issue is completely unrelated to the polynomial formula.
As you can see above, there is less data from the faster builder, and the blue curve is barely visible. This makes the blue trendline less reliable in the extreme size ranges; particularly the
smaller sizes. This problem can be addressed quickly by giving that crater a very small crate to build and a very large one. Getting just a few points plotted past the margins of that crating history
will give the blue trendline a wider range of accurate predictions.
Step 2c. Once you have your trendline plotted, tell your graph to show the trendline’s equation (which is hidden by default). Each trendline is described by a math equation reflecting the moving
average of your plotted data. The power trendline equation should look like this:
y = Ax^b
The values of x and y are still cubic feet & minutes per cubic foot respectively, as the chart suggests. The coefficients “A” and “b” come directly from the trendline, which in turn is a biased
average of the data your chart illustrates. *
Step 2d. Now here’s the nice part: Your trendline equation can be fed back into the spreadsheet for the purpose of estimating labor. Once you estimate the cubic footage of your prospective
crate, you can simply multiply it by the trendline to get the most accurate possible labor estimate for any given crater. The spreadsheet function for this looks a little tricky, but here it is using
the same variables, A & b, as my example of the trendline equation above:
So if your trendline shows the equation: y = 35.956x^-0.789
…the spreadsheet cell representing it should say: =35.956*(x^(-0.789)).
If your trendline shows the equation: y = 5.5678x^-0.2912
…the spreadsheet cell representing it should say: =5.5678*(x^(-0.2912)).
Note that to make either of these examples functional, x must refer to the cell that displays the crate’s estimated cubic feet. The current value of x must be folded into the trendline equation
before it can return a relative unit of duration/volume adjusted by the crate’s size. While the trendline equation merely displays the coefficients A & b, the spreadsheet cell as typed above will
return the actual value of y -- as long as x points to the cell displaying the current value of x and the function begins with the equal sign. Once you have a spreadsheet cell representing the
trendline linked to the variable cubic footage cell, all you need do is multiply the two cells. Keep in mind that this will result in minutes; so if you prefer estimated hours, just divide the result
by 60.
So to mentally separate this step from the raw database illustrated above, let’s skip over (arbitrarily) to column H on our example spreadsheet.
The blue and orange numbers in this screenshot represent the faster and slower craters, like in the curve chart. The top number in each set is the cubic footage of the crate currently being
estimated. This cell changes with every estimate, as it is the product of the crate’s length, width & height, divided by 1728 to convert from inches to feet. Let’s say for the sake of argument that
the cell displaying orange cubic footage is in position H4 on the spreadsheet. The next cell down, H5, is the trendline equation for that crater, with the current cubic footage plugged into it. So in
place of “x” in =A*(x^(b)), the function says H4. And in place of “A” and “b”, the function shows the actual trendline coefficients. In this case what I actually typed into cell H5 is: =70.254*(H4^
(-0.656)). Refer to the orange trendline on the chart to see how I got A and b. This is a functional version of the trendline equation, responding automatically to the cubic footage displayed above
it. If the cubic footage dropped, the result displayed in cell H5 would rise appropriately for the crater in question. The next cell down, H6, is the product of the first two cells, divided by 60 to
convert from minutes to hours. This is the estimated hours it will likely take that crater to build that style of crate at that particular size.
Step 3. Update and fine-tune your logs. Some spikes may occur that throw the whole curve out of whack. They are usually in the negative direction – like when a crater made a big mistake and spent a
lot of extra time correcting it. I toss the worst spikes. I would rather take the hit when random problems happen than let them affect every estimate. Such large spikes are very rare, and I’ve only
eliminated about four crates from my whole database for that reason.
Packing estimates: Of course, packing a crate involves many more variables than building it, so you should keep building time and packing time separate in your database, charts and equations. I don’t
even use packing curves myself. I use a flat time for each type of flatwork, and estimate dimensional items in my bean.
There are many different ways you can approach the problem of labor in estimates, depending on how tight you want your estimates to be. Plotting curves is admittedly a bit anal, but quite easy to set
up. And it only improves over time as you add more information.
Chris Barber
Fine Art Shipping
An approach for constructing loop algebra via exterior algebra and its applications
With the help of some properties of exterior algebra defined by us, a general approach for constructing multi-component matrix loop algebra is proposed. By making use of the approach, a new 3M loop
algebra X is constructed. This algebra can easily be reduced to the existing multi-component loop algebras. Another new extended loop algebra Y is also presented. As applicable examples, a generalized multi-component AKNS hierarchy with arbitrary smooth functions and a generalized multi-component KN hierarchy are worked out. As reduction cases of the first hierarchy, the standard multi-component heat-conduction equation and a coupled generalized multi-component Burgers equation are given. The approach presented in the paper applies quite generally.
Scopus Subject Areas
• Statistical and Nonlinear Physics
• Mathematics(all)
• Physics and Astronomy(all)
• Applied Mathematics
Area of a Square Worksheets
Buckle up for practice with our area of a square worksheets, packed with engaging exercises involving squares of varying dimensions, measured in customary or metric units. Watch children skillfully
navigate through integers, decimals, and fractional side lengths as they calculate the area by squaring the measure of the side length, or determine missing side length by finding the square root of
the area provided. The answer keys are a tremendous help to check the answers instantly. Our free worksheets are invaluable tools for quick review.
Select the Measurement Units
Find the area of the squares with 1-digit side lengths in the first half. Read each word problem, identify the side length, and apply the formula A = s x s, where s is the side length, to find the area.
Area of Squares - Integers | Easy
Children in grade 3 and grade 4 practice finding the product of the side lengths and compute the area of each square in our area of a square worksheets focussing on integer side lengths up to 20.
Area of Squares - Integers | Moderate
The moderate difficulty level presented through integers < 100 units as side dimensions in these printables work as an ideal practice resource for kids.
Finding the Side Length of the Square from Area
Each of our finding the side length of a square worksheet guides grade 7, grade 8, and high school students to determine the precise side lengths by finding the square root of the area.
Obtain the product of two fractional side measures of the square and simplify their answers as fractions or mixed numbers.
Multiply the decimal measure of the length of one side of the square by itself to determine the area in this PDF resource for grade 5 and grade 6 children.
Obtain the side length by dividing the perimeter by 4, then use the side length to find the area. To find the perimeter, find the square root of the area to get the side length, then multiply it by 4
in this section from our area of squares worksheets.
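The three relationships these worksheets drill (area from the side, side from the area, and area via the perimeter) can be sketched in a few lines of Python; the function names are ours, not the worksheets':

```python
import math

def area_from_side(s):
    return s * s               # A = s x s

def side_from_area(a):
    return math.sqrt(a)        # s = square root of the area

def area_from_perimeter(p):
    s = p / 4                  # the perimeter of a square is 4 * side
    return s * s
```

For example, a square with side 7 has area 49, and a square with perimeter 28 has side 7 and hence the same area.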
Self-study of mathematics with Cubens
Cubens is the best educational portal for independent and remote study of mathematics. Cubens contains convenient mathematical handbook, where all the topics are organized into sections, which
include elementary, school and higher mathematics.
Distance learning
Training materials on Cubens will be useful for pupils, students and teachers. Cubens created to assist in solving mathematical problems and study, contains all the necessary theory, calculators,
tables and formulas for training in universities, academies and colleges online.
Background and theoretical material on Cubens maximally systematic and contain simple and clear examples for remote learning and training math.
Mathematics and education
Mathematics emerged as one of the areas of the search for truth, driven by practical human needs: to calculate, to measure, to explore. Mathematics sometimes operates with fairly complex concepts that are not always clear, so with Cubens you can forget about the question: "How do I study math?".
The school studied elementary mathematics — arithmetic, functions, algebra. Universities — higher mathematics: differential, integral calculus, topology, theory of operators and everything else that
is not included in elementary mathematics. Cubens includes areas of elementary and higher mathematics topics: numbers and expressions, equations and inequalities, geometry, trigonometry, functions
and graphs, algebra and beginning analysis, combinatorics, and others.
Together with Cubens
With us you can get any educational information you need to achieve results in school and, later, success in life. Life does not come with ready-made homework. What will you do after finishing school? So do not waste your time; devote it to learning.
In the near future, Cubens will cover all subjects, such as biology, geography, history, chemistry, physics, and literature. You will no longer have to search for information on terrible, uncomfortable websites; you will know where to go if something is forgotten, and we will always be in your bookmarks.
And always remember: "Knowledge is power". Be strong with Cubens!
ball mill calculation xls
The milling process definitions: Cutting speed, v_c, indicates the surface speed at which the cutting edge machines the workpiece. Effective or true cutting speed, v_e, indicates the surface speed at the effective diameter (DC_ap). This value is necessary for determining the true cutting data at the actual depth of cut (a_p). This is a particularly important value when using round insert cutters ...
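The excerpt does not spell the formula out, but the standard relation behind these definitions is v = π · D · n / 1000, with D in mm, spindle speed n in rpm, and v in m/min; the effective (true) cutting speed uses the effective diameter DC_ap instead of the nominal one. A sketch, with both diameters as assumed example values:

```python
import math

def cutting_speed(d_mm, n_rpm):
    """Surface speed in m/min: v = pi * D * n / 1000 (D in mm, n in rpm)."""
    return math.pi * d_mm * n_rpm / 1000

# Nominal vs. effective cutting speed for a round insert cutter
# (both diameters below are made-up illustration values).
v_c = cutting_speed(20, 3000)   # nominal tool diameter
v_e = cutting_speed(16, 3000)   # effective diameter DC_ap at the depth of cut
```

The effective speed is lower than the nominal one whenever the depth of cut puts the cutting edge on a smaller working diameter.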
The Stacks project
Lemma 85.12.8. In Situation 85.3.3.
1. An object $K$ of $D(\mathcal{C}_{total})$ is cartesian if and only if $H^ q(K)$ is a cartesian abelian sheaf for all $q$.
2. Let $\mathcal{O}$ be a sheaf of rings on $\mathcal{C}_{total}$ such that the morphisms $f_{\delta ^ n_ j} : (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}_ n), \mathcal{O}_ n) \to (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}_{n - 1}), \mathcal{O}_{n - 1})$ are flat. Then an object $K$ of $D(\mathcal{O})$ is cartesian if and only if $H^ q(K)$ is a cartesian $\mathcal{O}$-module for all $q$.
Power Calculation | HyperX Pad
Examples of Power Calculation
The first user staked 200,000 HGPT in a 360-day pool.
Due to the number of HGPT staked, this user is classified as Tier5-Titanium, with a pool weight of 4.00. Thus, they have a pool weight of 200,000 x 4.00 = 800,000.
Since they participated in the 360-day pool, a multiplier of 2.5 is applied, resulting in a total power of 800,000 x 2.5 = 2,000,000.
The second user staked 10,000 HGPT in a 180-day pool.
This user is classified as Tier2-Bronze, with a pool weight of 1.40. Therefore, they have a pool weight of 10,000 x 1.4 = 14,000.
Since they participated in the 180-day pool, a multiplier of 1.4 is applied, resulting in a total power of 14,000 x 1.4 = 19,600.
Allocation Calculation
As described in the example calculations, the scores of all tier holders are computed to determine the total pool score. For instance, let's assume the total power amounts to 100,000,000.
The first user has 2,000,000 / 100,000,000 = 2%, which means they receive an allocation equivalent to 2% of any new sale pool. For a pool worth 100,000 USDT, they gain a guaranteed participation
right of 2,000 USDT.
The second user has 19,600 / 100,000,000 = 0.0196%, which means they receive an allocation equivalent to 0.0196% of any new sale pool. For a pool worth 100,000 USDT, they gain a guaranteed
participation right of 19.6 USDT.
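The two worked examples can be reproduced with a short sketch; the tier weights and duration multipliers below are taken from the examples themselves, and the 100,000,000 total power is the text's own assumption:

```python
def staking_power(amount, tier_weight, duration_multiplier):
    # power = staked HGPT * tier pool weight * lock-duration multiplier
    return amount * tier_weight * duration_multiplier

def allocation(power, total_power, pool_usdt):
    # guaranteed participation right, proportional to the share of total power
    return power / total_power * pool_usdt

p1 = staking_power(200_000, 4.00, 2.5)   # Tier5-Titanium, 360-day pool
p2 = staking_power(10_000, 1.40, 1.4)    # Tier2-Bronze, 180-day pool
total = 100_000_000                      # assumed total pool power
```

For a 100,000 USDT sale pool these give the 2,000 USDT and 19.6 USDT guaranteed allocations computed above.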
What Is 3 1/2 as a Decimal + Solution With Free Steps
The fraction 3 1/2 as a decimal is equal to 3.5.
When two whole numbers are written in the form of a ratio p/q, the expression is referred to as a Fraction. Here p represents the Numerator and q represents the Denominator of the fraction. Any number written in the form p/q is called a fraction.
A Mixed Fraction is a certain type of fraction which is formed by the combination of a proper fraction and a whole number. It is obtained from an improper fraction by writing its quotient as the
whole number and remainder as the numerator.
Usually, Decimal Numbers are preferred to be used in mathematical computations because they are easy to comprehend. The most common method used for converting fractions into decimals is the Long
Division method.
In this example, we have a fraction of 3 1/2, which will be converted into a decimal using the Long Division.
A mixed fraction must first be transformed into an improper fraction before being converted into a decimal number. For this conversion, the denominator of the mixed fraction must be multiplied by its
whole number and the result thus obtained is added to the numerator.
The outcome we get is the numerator of the improper fraction and there will be no change in the denominator.
In the fraction given to solve, 2 is multiplied by 3, and the product is added to 1, which gives 7 as the numerator of the improper fraction, while its denominator is 2. Thus, our desired fraction is 7/2.
Thus, to get the decimal value of 3 1/2, 7 is divided by 2. Thus, we have:
Dividend = 7
Divisor = 2
Our ultimate result, known as the quotient, is obtained by dividing this fraction.
Quotient = Dividend $\div$ Divisor = 7 $\div$ 2
Sometimes, we are left with some remaining quantity because the division process is not completed. This left-over quantity is given the name Remainder.
Figure 1
3 1/2 Long Division Method
The fraction to be solved is:
7 $\div$ 2
Whenever the dividend is smaller than the divisor, a decimal point is required. On the other hand, we can proceed without a decimal point if the fraction has a larger dividend.
7 $\div$ 2 $\approx$ 3
2 x 3 = 6
When we subtract 6 from 7, we get the remaining value as:
7 – 6 =1
We are now unable to continue without a decimal point because the value of the remainder 1 is less than 2, the divisor. Thus, we multiply 1 by 10 to get a decimal point.
10 $\div$ 2 $\approx$ 5
2 x 5 = 10
We don’t have any remainder now.
Therefore 10 – 10 =0.
Therefore, we conclude that the fraction of 3 1/2 can be solved completely, and the Quotient’s value is 3.5 without any remainder.
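The long-division steps above can be expressed as a short routine that emits one decimal digit per "multiply the remainder by 10" step; a sketch (the helper name is ours):

```python
def mixed_to_decimal(whole, num, den, places=6):
    """Convert a mixed fraction to a decimal string by long division."""
    dividend = whole * den + num        # 3 1/2 -> (3*2 + 1)/2 = 7/2
    q, r = divmod(dividend, den)        # whole-number part and remainder
    digits = []
    while r and len(digits) < places:
        r *= 10                         # bring a zero past the decimal point
        d, r = divmod(r, den)
        digits.append(str(d))
    return str(q) + ('.' + ''.join(digits) if digits else '')
```

Calling mixed_to_decimal(3, 1, 2) reproduces the 3.5 worked out above.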
Images/mathematical drawings are created with GeoGebra.
Question 11 Review Exercise 6
Solutions of Question 11 of Review Exercise 6 of Unit 06: Permutation, Combination and Probability. This unit of A Textbook of Mathematics for Grade XI is published by Khyber Pakhtunkhwa Textbook Board (KPTB or KPTBB) Peshawar, Pakistan.
Question 11
Given the following spinner, determine the probability.
Total number of colors: $$n(S)=4$$
P(orange): The orange color covers one fourth $\dfrac{1}{4}$ of the spinner, thus the probability is: $$P(\text{orange})=\dfrac{1}{4}$$
P(Red or Green)
Red color covers one fourth $\dfrac{1}{4}$ and green color covers one fourth $\dfrac{1}{4}$ of the spinner.
Therefore, $$P(\text{Red})=\dfrac{1}{4}, \qquad P(\text{Green})=\dfrac{1}{4}$$
Also, these two are mutually exclusive events, so $R \cap G=\phi$ and $P(R \cap G)=0$, where $R$ stands for the red event and $G$ stands for the green event.
By the addition law of probability, we have $$P(\text{Red or Green})=P(\text{Red})+P(\text{Green})-P(\text{Red and Green})$$
$$\Rightarrow P(\text{Red or Green})=\dfrac{1}{4}+\dfrac{1}{4}-0=\dfrac{1}{2}$$
P(Not Red): The probability of red is $$P(\text{Red})=\dfrac{1}{4}$$
Then by the complementary event theorem: $$P(\text{not Red})=1-P(\text{Red})=1-\dfrac{1}{4}=\dfrac{3}{4}$$
P(pink): Since pink color covers one fourth $\dfrac{1}{4}$ of the spinner, the probability is: $$P(\text{pink})=\dfrac{1}{4}$$
Hence the answers are $$\dfrac{1}{4},\ \dfrac{1}{2},\ \dfrac{3}{4},\ \dfrac{1}{4}$$
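The four answers can be checked with exact fractions; the sketch below just encodes the four equal sectors the question assumes:

```python
from fractions import Fraction

sector = Fraction(1, 4)                   # each color covers one fourth

p_orange = sector
p_red_or_green = sector + sector          # mutually exclusive: P(R and G) = 0
p_not_red = 1 - sector                    # complementary event theorem
p_pink = sector
```

Exact Fraction arithmetic avoids any floating-point rounding in the intermediate sums.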
how to get coefficient of discharge from qh log graph calculation for Calculations
23 Mar 2024
Popularity: ⭐⭐⭐
Coefficient of Discharge from Q-H Log Graph
This calculator provides the calculation of coefficient of discharge from Q-H log graph for Calculations.
Calculation Example: The coefficient of discharge is a dimensionless quantity that represents the efficiency of a flow constriction. It is defined as the ratio of the actual flow rate to the
theoretical flow rate that would occur if the constriction were not present. The coefficient of discharge can be determined from a Q-H log graph, which is a plot of the flow rate (Q) versus the head
(H) across the constriction.
Related Questions
Q: What is the significance of the coefficient of discharge in fluid mechanics?
A: The coefficient of discharge is a crucial parameter in fluid mechanics as it provides insights into the efficiency of flow constrictions. It helps engineers design and optimize systems involving
fluid flow, such as pipelines, valves, and nozzles.
Q: How can the coefficient of discharge be used to improve the performance of fluid systems?
A: Understanding and optimizing the coefficient of discharge can lead to improved performance of fluid systems. By selecting appropriate materials, geometries, and operating conditions, engineers can
minimize energy losses and enhance the efficiency of fluid flow.
| Symbol | Description | Units |
| —— | —- | —- |
| g | Acceleration Due to Gravity | m/s^2 |
Calculation Expression
Coefficient of Discharge: The coefficient of discharge is given by Cd = (Q / (A * sqrt(2 * g * H))) ^ 2
(Q / (A * sqrt(2 * g * H))) ^ 2
Calculated values
Considering these as variable values: Q=0.05, D=0.2, g=9.81, H=0.1, L=100.0, the calculated value(s) are given in table below
| Variable | Value |
| —— | —- |
| Coefficient of Discharge | 0.00127420998980632/A^2 |
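For comparison, the conventional textbook definition of the discharge coefficient is the unsquared ratio Cd = Q / (A·√(2gH)); the expression above squares that ratio and leaves the flow area A symbolic. A sketch using the table's inputs, with A assumed to come from a circular opening of diameter D = 0.2 m (our assumption, not the page's):

```python
import math

def discharge_coefficient(Q, A, H, g=9.81):
    """Cd = Q_actual / Q_theoretical, with Q_theoretical = A * sqrt(2*g*H)."""
    return Q / (A * math.sqrt(2 * g * H))

A = math.pi * 0.2 ** 2 / 4      # assumed circular opening, D = 0.2 m
cd = discharge_coefficient(Q=0.05, A=A, H=0.1)
```

With these illustrative inputs cd comes out above 1, which only says the example numbers are not physically matched, not that the definition is wrong.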
Similar Calculators
Calculator Apps
Matching 3D parts for how to get coefficient of discharge from qh log graph calculation for Calculations
App in action
The video below shows the app in action.
Forecasting Using the Simple Linear Regression Model and Correlation - ppt video online download
1 Forecasting Using the Simple Linear Regression Model and Correlation
2 What is a forecast? Using a statistical method on past data to predict the future. Using experience, judgment and surveys to predict the future.
3 Why forecast? To enhance planning. To force thinking about the future. To fit corporate strategy to future conditions. To coordinate departments to the same future. To reduce corporate costs.
4 Kinds of Forecasts Causal forecasts are when changes in a variable (Y) you wish to predict are caused by changes in other variables (X's). Time series forecasts are when changes in a variable (Y)
are predicted based on prior values of itself (Y). Regression can provide both kinds of forecasts.
7 Relationships If the relationship is not linear, the forecaster often has to use math transformations to make the relationship linear.
8 Correlation Analysis Correlation measures the strength of the linear relationship between variables. It can be used to find the best predictor variables. It does not assure that there is a causal
relationship between the variables.
9 The Correlation Coefficient: Ranges between -1 and 1. The closer to -1, the stronger the negative linear relationship. The closer to 1, the stronger the positive linear relationship. The closer to 0, the weaker any linear relationship.
10 Graphs of Various Correlation (r) Values: scatter plots illustrating r = -1, r = -.6, r = 0, r = .6, and r = 1.
11 The Scatter Diagram Plot of all (Xi , Yi) pairs
12 The Scatter Diagram Is used to visualize the relationship and to assess its linearity. The scatter diagram can also be used to identify outliers.
13 Regression Analysis Regression Analysis can be used to model causality and make predictions. Terminology: The variable to be predicted is called the dependent or response variable. The variables
used in the prediction model are called independent, explanatory or predictor variables.
14 Simple Linear Regression Model: The relationship between variables is described by a linear function. A change in one variable causes the other variable to change.
15 Population Linear Regression: The population regression line is a straight line that describes the dependence of one variable on the other: Yi = β0 + β1Xi + εi, where β0 is the population Y intercept, β1 the population slope coefficient, and εi the random error. Y is the dependent (response) variable and X the independent (explanatory) variable.
16 How is the best line found? (Scatter diagram: each observed value of Y differs from the fitted line by a random error.)
17 Sample Linear Regression: The sample regression line provides an estimate of the population regression line: Yi = b0 + b1Xi + ei, where b0 is the sample Y intercept, b1 the sample slope coefficient, and ei the residual. b0 and b1 provide estimates of β0 and β1.
18 Simple Linear Regression: An Example. You wish to examine the relationship between the square footage of produce stores and their annual sales. Sample data for 7 stores were obtained (table of Store, Square Feet, and Annual Sales in $1000). Find the equation of the straight line that fits the data best.
20 The Equation for the Regression Line From Excel Printout:
22 Interpreting the Results: Yi = b0 + 1.487 Xi. The slope of 1.487 means that for each increase of one unit in X, we predict the average of Y to increase by an estimated 1.487 units. The model estimates that for each increase of 1 square foot in the size of the store, the expected annual sales are predicted to increase by $1487.
23 The Coefficient of Determination: r² = SSR / SST = regression sum of squares / total sum of squares. The Coefficient of Determination (r²) measures the proportion of variation in Y explained by the independent variable X.
24 Coefficients of Determination (R²) and Correlation (R): scatter chart with fitted line Ŷi = b0 + b1Xi.
25 Coefficients of Determination (R²) and Correlation (R) (continued): r² = .81, r = +0.9.
26 Coefficients of Determination (R²) and Correlation (R) (continued): r² = 0, r = 0.
27 Coefficients of Determination (R²) and Correlation (R) (continued): r² = 1, r = -1.
28 Correlation: The Symbols. The population correlation coefficient ρ ('rho') measures the strength of the linear relationship between two variables. The sample correlation coefficient r estimates ρ based on a set of sample data.
30 Inferences About the Slope: t Test for a Population Slope. Is there a linear relationship between X and Y? Null and alternative hypotheses: H0: β1 = 0 (no linear relationship); H1: β1 ≠ 0 (linear relationship). Test statistic: t = (b1 - β1) / Sb1, with df = n - 2.
31 Example: Produce StoresData for 7 Stores: Estimated Regression Equation: Annual Store Square Sales Feet ($000) , ,681 , ,395 , ,653 , ,543 , ,318 , ,563 , ,760 Yi = Xi The slope of this model is
Is Square Footage of the store affecting its Annual Sales?
32 Inferences About the Slope: t Test ExampleTest Statistic: Decision: Conclusion: H0: 1 = 0 H1: 1 0 .05 df = 5 Critical value(s): From Excel Printout Reject Reject Reject H0 .025 .025
There is evidence of a linear relationship. t 2.5706
33 Inferences About the Slope Using A Confidence IntervalConfidence Interval Estimate of the Slope b1 tn-2 Excel Printout for Produce Stores At 95% level of Confidence The confidence Interval for
the slope is (1.062, 1.911). Does not include 0. Conclusion: There is a significant linear relationship between annual sales and the size of the store.
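The slope test and confidence interval described on the slides can be sketched in Python. The data below are hypothetical stand-ins (the original store values did not survive extraction); only the df = 5 critical value 2.5706 is taken from the slides.

```python
import math

# Hypothetical data standing in for the slide deck's 7-store sample
# (the original square-footage/sales values are not available here).
x = [1, 2, 3, 4, 5, 6, 7]                    # e.g. size in 1,000 sq ft
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2]      # e.g. annual sales in $1,000

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Least-squares slope and intercept: b1 = Sxy / Sxx, b0 = y_bar - b1 * x_bar
s_xx = sum((xi - x_bar) ** 2 for xi in x)
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
b1 = s_xy / s_xx
b0 = y_bar - b1 * x_bar

# Standard error of the estimate and of the slope (df = n - 2)
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s_e = math.sqrt(sse / (n - 2))
s_b1 = s_e / math.sqrt(s_xx)

# t test of H0: beta1 = 0 against H1: beta1 != 0
t_stat = b1 / s_b1

# 95% confidence interval for the slope; 2.5706 is the critical t for df = 5
t_crit = 2.5706
ci = (b1 - t_crit * s_b1, b1 + t_crit * s_b1)

print(f"b1 = {b1:.4f}, t = {t_stat:.2f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# As on the slides: reject H0 when |t| > 2.5706, equivalently when the CI excludes 0.
```

With this made-up data the interval excludes 0, so the slope is significant, mirroring the (1.062, 1.911) conclusion on the slide.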
Slide 34: Residual Analysis
Residual analysis is used to evaluate the validity of the model assumptions, using numerical measures and plots.

Slide 35: Linear Regression Assumptions
1. X is linearly related to Y. 2. The variance is constant for each value of Y (homoscedasticity). 3. The residual error is normally distributed. 4. If the data are collected over time, then the errors must be independent.

Slide 36: Residual Analysis for Linearity
(Residual-versus-X plots: a curved pattern in the residuals indicates a non-linear relationship; a patternless band indicates linearity.)

Slide 37: Residual Analysis for Homoscedasticity
(Residual-versus-X plots: constant spread indicates homoscedasticity; a funnel shape indicates heteroscedasticity.)

Slide 38: Residual Analysis for Independence: The Durbin-Watson Statistic
Used when data are collected over time. It detects autocorrelation, that is, whether the residuals in one time period are related to residuals in another time period, and so measures violation of the independence assumption. Calculate D and compare it to the value in Table E.8.

Slide 40: Interval Estimates for Different Values of X
(Plot of the fitted line Ŷi = b0 + b1Xi showing a confidence interval for the mean of Y and a wider interval for an individual Yi at a given X.)

Slide 41: Estimation of Predicted Values: Confidence Interval Estimate for the Mean of Y
For the mean of Y given a particular Xi, the interval uses the standard error of the estimate and the t value with df = n − 2. The size of the interval varies according to the distance of Xi from the mean X̄.

Slide 42: Estimation of Predicted Values: Confidence Interval Estimate for an Individual Response Yi at a Particular Xi
The addition of 1 under the square root increases the width of the interval beyond that for the mean of Y.

Slide 43: Example: Produce Stores
Using the regression model obtained from the 7 stores, predict the annual sales for a store with 2,000 square feet.

Slide 44: Estimation of Predicted Values: Example
Find the 95% confidence interval for the average annual sales for stores of 2,000 square feet, using the predicted sales, t_{n−2} = t5, X̄, and S_YX from the printout (the numeric values did not survive extraction).

Slide 45: Estimation of Predicted Values: Example
Find the 95% confidence interval for the annual sales of one particular store of 2,000 square feet, using the same quantities.
2.7 Measures of the Spread of the Data
An important characteristic of any set of data is the variation in the data. In some data sets, the data values are concentrated closely near the mean; in other data sets, the data values are more
widely spread out from the mean. The most common measure of variation, or spread, is the standard deviation. The standard deviation is a number that measures how far data values are from their mean.
The Standard Deviation
The standard deviation
• provides a numerical measure of the overall amount of variation in a data set and
• can be used to determine whether a particular data value is close to or far from the mean.
The standard deviation provides a measure of the overall variation in a data set.
The standard deviation is always positive or zero. The standard deviation is small when all the data are concentrated close to the mean, exhibiting little variation or spread. The standard deviation
is larger when the data values are more spread out from the mean, exhibiting more variation.
Suppose that we are studying the amount of time customers wait in line at the checkout at Supermarket A and Supermarket B. The average wait time at both supermarkets is five minutes. At Supermarket
A, the standard deviation for the wait time is two minutes; at Supermarket B, the standard deviation for the wait time is four minutes.
Because Supermarket B has a higher standard deviation, we know that there is more variation in the wait times at Supermarket B. Overall, wait times at Supermarket B are more spread out from the
average; wait times at Supermarket A are more concentrated near the average.
The standard deviation can be used to determine whether a data value is close to or far from the mean.
Suppose that both Rosa and Binh shop at Supermarket A. Rosa waits at the checkout counter for seven minutes, and Binh waits for one minute. At Supermarket A, the mean waiting time is five minutes,
and the standard deviation is two minutes. The standard deviation can be used to determine whether a data value is close to or far from the mean. A z-score is a standardized score that lets us
compare data sets. It tells us how many standard deviations a data value is from the mean and is calculated as the ratio of the difference in a particular score and the population mean to the
population standard deviation.
We can use the given information to create the table below.
Supermarket | Population Standard Deviation, σ | Individual Score, x | Population Mean, μ
Supermarket A | 2 minutes | 7, 1 | 5 minutes
Supermarket B | 4 minutes | — | 5 minutes
Since Rosa and Binh only shop at Supermarket A, we can ignore the row for Supermarket B.
We need the values from the first row to determine the number of standard deviations above or below the mean each individual wait time is; we can do so by calculating two different z-scores.
Rosa waited for seven minutes, so the z-score representing this deviation from the population mean may be calculated as
$z = \frac{x - \mu}{\sigma} = \frac{7 - 5}{2} = 1.$
The z-score of one tells us that Rosa’s wait time is one standard deviation above the mean wait time of five minutes.
Binh waited for one minute, so the z-score representing this deviation from the population mean may be calculated as
$z = \frac{x - \mu}{\sigma} = \frac{1 - 5}{2} = -2.$
The z-score of −2 tells us that Binh’s wait time is two standard deviations below the mean wait time of five minutes.
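The two z-score calculations above can be checked with a few lines of Python; the `z_score` helper below is our own illustration, not part of the text.

```python
# Wait times at Supermarket A: mean 5 minutes, standard deviation 2 minutes.
mu, sigma = 5, 2

def z_score(x, mu, sigma):
    """Number of standard deviations x lies from the mean."""
    return (x - mu) / sigma

z_rosa = z_score(7, mu, sigma)   # Rosa waited 7 minutes
z_binh = z_score(1, mu, sigma)   # Binh waited 1 minute

print(z_rosa, z_binh)  # 1.0 and -2.0: one SD above, two SDs below the mean
```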
A data value that is two standard deviations from the average is just on the borderline for what many statisticians would consider to be far from the average. Considering data to be far from the mean
if they are more than two standard deviations away is more of an approximate rule of thumb than a rigid rule. In general, the shape of the distribution of the data affects how much of the data is
farther away than two standard deviations. You will learn more about this in later chapters.
The number line may help you understand standard deviation. If we were to put five and seven on a number line, seven is to the right of five. We say, then, that seven is one standard deviation to the
right of five because 5 + (1)(2) = 7.
If one were also part of the data set, then one is two standard deviations to the left of five because 5 + (–2)(2) = 1.
• In general, a value = mean + (#ofSTDEV)(standard deviation)
• where #ofSTDEVs = the number of standard deviations
• #ofSTDEV does not need to be an integer
• One is two standard deviations less than the mean of five because 1 = 5 + (–2)(2).
The equation value = mean + (#ofSTDEVs)(standard deviation) can be expressed for a sample and for a population as follows:
• Sample: $x = \bar{x} + (\#\text{ofSTDEV})(s)$
• Population: $x = \mu + (\#\text{ofSTDEV})(\sigma)$
The lowercase letter s represents the sample standard deviation and the Greek letter σ (lowercase) represents the population standard deviation.
The symbol $\bar{x}$ is the sample mean, and the Greek symbol $\mu$ is the population mean.
Calculating the Standard Deviation
If x is a number, then the difference x – mean is called its deviation. In a data set, there are as many deviations as there are items in the data set. The deviations are used to calculate the
standard deviation. If the numbers belong to a population, in symbols, a deviation is x – μ. For sample data, in symbols, a deviation is x – $\bar{x}$.
The procedure to calculate the standard deviation depends on whether the numbers are the entire population or are data from a sample. The calculations are similar but not identical. Therefore, the
symbol used to represent the standard deviation depends on whether it is calculated from a population or a sample. The lowercase letter s represents the sample standard deviation and the Greek letter
σ (lowercase sigma) represents the population standard deviation. If the sample has the same characteristics as the population, then s should be a good estimate of σ.
To calculate the standard deviation, we need to calculate the variance first. The variance is the average of the squares of the deviations (the x – $\bar{x}$ values for a sample or the x – μ values
for a population). The symbol σ^2 represents the population variance; the population standard deviation σ is the square root of the population variance. The symbol s^2 represents the sample variance;
the sample standard deviation s is the square root of the sample variance. You can think of the standard deviation as a special average of the deviations.
If the numbers come from a census of the entire population and not a sample, when we calculate the average of the squared deviations to find the variance, we divide by N, the number of items in the
population. If the data are from a sample rather than a population, when we calculate the average of the squared deviations, we divide by n – 1, one less than the number of items in the sample.
Formulas for the Sample Standard Deviation
• $s = \sqrt{\frac{\sum (x - \bar{x})^2}{n-1}}$ or $s = \sqrt{\frac{\sum f (x - \bar{x})^2}{n-1}}$
• For the sample standard deviation, the denominator is n − 1; that is, the sample size minus 1.
Formulas for the Population Standard Deviation
• $\sigma = \sqrt{\frac{\sum (x - \mu)^2}{N}}$ or $\sigma = \sqrt{\frac{\sum f (x - \mu)^2}{N}}$
• For the population standard deviation, the denominator is N, the number of items in the population.
In these formulas, f represents the frequency with which a value appears. For example, if a value appears once, f is one. If a value appears three times in the data set or population, f is three.
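Python's standard library distinguishes the two formulas directly: `statistics.pstdev` divides by N (population) and `statistics.stdev` divides by n − 1 (sample). A minimal sketch with made-up data:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical small data set

# Population standard deviation: squared deviations averaged over N
sigma = statistics.pstdev(data)

# Sample standard deviation: squared deviations divided by n - 1
s = statistics.stdev(data)

print(sigma, s)  # for the same data, dividing by n - 1 gives the larger value
```

For this data set the mean is 5 and the sum of squared deviations is 32, so σ = √(32/8) = 2.0 while s = √(32/7) ≈ 2.14.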
Types of Variability in Samples
When researchers study a population, they often use a sample, either for convenience or because it is not possible to access the entire population. Variability is the term used to describe the
differences that may occur in these outcomes. Common types of variability include the following:
• Observational or measurement variability
• Natural variability
• Induced variability
• Sample variability
Here are some examples to describe each type of variability.
Example 1: Measurement variability
Measurement variability occurs when there are differences in the instruments used to measure or in the people using those instruments. If we are gathering data on how long it takes for a ball to drop
from a height by having students measure the time of the drop with a stopwatch, we may experience measurement variability if the two stopwatches used were made by different manufacturers. For
example, one stopwatch measures to the nearest second, whereas the other one measures to the nearest tenth of a second. We also may experience measurement variability because two different people are
gathering the data. Their reaction times in pressing the button on the stopwatch may differ; thus, the outcomes will vary accordingly. The differences in outcomes may be affected by measurement variability.
Example 2: Natural variability
Natural variability arises from the differences that naturally occur because members of a population differ from each other. For example, if we have two identical corn plants and we expose both
plants to the same amount of water and sunlight, they may still grow at different rates simply because they are two different corn plants. The difference in outcomes may be explained by natural variability.
Example 3: Induced variability
Induced variability is the counterpart to natural variability; this occurs because we have artificially induced an element of variation that, by definition, was not present naturally. For example, we
assign people to two different groups to study memory, and we induce a variable in one group by limiting the amount of sleep they get. The difference in outcomes may be affected by induced variability.
Example 4: Sample variability
Sample variability occurs when multiple random samples are taken from the same population. For example, if I conduct four surveys of 50 people randomly selected from a given population, the
differences in outcomes may be affected by sample variability.
Sampling Variability of a Statistic
The statistic of a sampling distribution was discussed in Descriptive Statistics: Measures the Center of the Data. How much the statistic varies from one sample to another is known as the sampling
variability of a statistic. You typically measure the sampling variability of a statistic by its standard error. The standard error of the mean is an example of a standard error. The standard error
is the standard deviation of the sampling distribution. In other words, it is the average standard deviation that results from repeated sampling. You will cover the standard error of the mean in the
chapter The Central Limit Theorem (not now). The notation for the standard error of the mean is $\frac{\sigma}{\sqrt{n}}$, where σ is the standard deviation of the population and n is the size of the sample.
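As a quick illustration of the standard error formula σ/√n, with a hypothetical population standard deviation:

```python
import math

sigma = 2.0   # hypothetical population standard deviation
for n in (4, 16, 64):
    se = sigma / math.sqrt(n)   # standard error of the mean
    print(n, se)
# Quadrupling the sample size halves the standard error.
```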
In practice, USE A CALCULATOR OR COMPUTER SOFTWARE TO CALCULATE THE STANDARD DEVIATION. If you are using a TI-83, 83+, or 84+ calculator, you need to select the appropriate standard deviation σ[x] or
s[x] from the summary statistics. We will concentrate on using and interpreting the information that the standard deviation gives us. However, you should study the following step-by-step example to
help you understand how the standard deviation measures variation from the mean. The calculator instructions appear at the end of this example.
Example 2.33
In a fifth-grade class, the teacher was interested in the average age and the sample standard deviation of the ages of her students. The following data are the ages for a SAMPLE of n = 20 fifth-grade
students; the ages are rounded to the nearest half year:
9, 9.5, 9.5, 10, 10, 10, 10, 10.5, 10.5, 10.5, 10.5, 11, 11, 11, 11, 11, 11, 11.5, 11.5, 11.5
$\bar{x} = \frac{9 + 9.5(2) + 10(4) + 10.5(4) + 11(6) + 11.5(3)}{20} = 10.525$
The average age is 10.53 years, rounded to two places.
The variance may be calculated by using a table. Then the standard deviation is calculated by taking the square root of the variance. We will explain the parts of the table after calculating s.
Data, x | Frequency, f | Deviations, (x – x̄) | Deviations^2, (x – x̄)^2 | (f)(x – x̄)^2
9 | 1 | 9 – 10.525 = –1.525 | (–1.525)^2 = 2.325625 | 1 × 2.325625 = 2.325625
9.5 | 2 | 9.5 – 10.525 = –1.025 | (–1.025)^2 = 1.050625 | 2 × 1.050625 = 2.101250
10 | 4 | 10 – 10.525 = –0.525 | (–0.525)^2 = 0.275625 | 4 × 0.275625 = 1.102500
10.5 | 4 | 10.5 – 10.525 = –0.025 | (–0.025)^2 = 0.000625 | 4 × 0.000625 = 0.002500
11 | 6 | 11 – 10.525 = 0.475 | (0.475)^2 = 0.225625 | 6 × 0.225625 = 1.353750
11.5 | 3 | 11.5 – 10.525 = 0.975 | (0.975)^2 = 0.950625 | 3 × 0.950625 = 2.851875
The total is 9.7375.
The last column simply multiplies each squared deviation by the frequency for the corresponding data value.
The sample variance, s^2, is equal to the sum of the last column (9.7375) divided by the total number of data values minus one (20 – 1):
$s^2 = \frac{9.7375}{20 - 1} = 0.5125$
The sample standard deviation s is equal to the square root of the sample variance:
$s = \sqrt{0.5125} = 0.715891,$ which rounds to two decimal places as s = 0.72.
Typically, you do the calculation for the standard deviation on your calculator or computer. The intermediate results are not rounded. This is done for accuracy.
For the following problems, recall that value = mean + (#ofSTDEVs)(standard deviation); verify the mean and standard deviation on a calculator or computer:
Note that these formulas are derived by algebraically manipulating the z-score formulas, given either parameters or statistics.
• For a sample: x = $\bar{x}$ + (#ofSTDEVs)(s)
• For a population: x = μ + (#ofSTDEVs)(σ)
• For this example, use x = $\bar{x}$ + (#ofSTDEVs)(s) because the data are from a sample.
a. Verify the mean and standard deviation on your calculator or computer.
b. Find the value that is one standard deviation above the mean. Find ($\bar{x}$ + 1s).
c. Find the value that is two standard deviations below the mean. Find ($\bar{x}$ – 2s).
d. Find the values that are 1.5 standard deviations from (below and above) the mean.
Solution 2.33
a. Using the TI-83, 83+, 84, 84+ Calculator
□ Clear lists L1 and L2. Press STAT 4:ClrList. Enter 2^nd 1 for L1, the comma (,), and 2^nd 2 for L2.
□ Enter data into the list editor. Press STAT 1:EDIT. If necessary, clear the lists by arrowing up into the name. Press CLEAR and arrow down.
□ Put the data values (9, 9.5, 10, 10.5, 11, 11.5) into list L1 and the frequencies (1, 2, 4, 4, 6, 3) into list L2. Use the arrow keys to move around.
□ Press STAT and arrow to CALC. Press 1:1-VarStats and enter L1 (2^nd 1), L2 (2^nd 2). Do not forget the comma. Press ENTER.
□ $\bar{x}$ = 10.525.
□ Use Sx because this is sample data (not a population): Sx = .715891.
b. ($\bar{x}$ + 1s) = 10.53 + (1)(.72) = 11.25
c. ($\bar{x}$ – 2s) = 10.53 – (2)(.72) = 9.09
d.
□ ($\bar{x}$ – 1.5s) = 10.53 – (1.5)(.72) = 9.45
□ ($\bar{x}$ + 1.5s) = 10.53 + (1.5)(.72) = 11.61
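Instead of a TI calculator, the statistics for Example 2.33 can be verified in Python. Note that the unrounded mean and standard deviation shift the final answers slightly from the text's, which plug in the rounded values 10.53 and .72.

```python
import statistics

# The n = 20 fifth-grade ages from Example 2.33
ages = [9, 9.5, 9.5, 10, 10, 10, 10, 10.5, 10.5, 10.5, 10.5,
        11, 11, 11, 11, 11, 11, 11.5, 11.5, 11.5]

x_bar = statistics.mean(ages)
s = statistics.stdev(ages)          # sample standard deviation (divides by n - 1)

print(round(x_bar, 3), round(s, 2))  # 10.525 and 0.72

# Values k standard deviations from the mean: x = x_bar + k * s
print(round(x_bar + 1 * s, 2))       # one standard deviation above the mean
print(round(x_bar - 2 * s, 2))       # two standard deviations below the mean
print(round(x_bar - 1.5 * s, 2), round(x_bar + 1.5 * s, 2))
```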
Try It 2.33
On a baseball team, the ages of each of the players are as follows:
21, 21, 22, 23, 24, 24, 25, 25, 28, 29, 29, 31, 32, 33, 33, 34, 35, 36, 36, 36, 36, 38, 38, 38, 40
Use your calculator or computer to find the mean and standard deviation. Then find the value that is two standard deviations above the mean.
Explanation of the standard deviation calculation shown in the table
The deviations show how spread out the data are about the mean. The data value 11.5 is farther from the mean than is the data value 11, which is indicated by the deviations .97 and .47. A positive
deviation occurs when the data value is greater than the mean, whereas a negative deviation occurs when the data value is less than the mean. The deviation is –1.525 for the data value nine. If you
add the deviations, the sum is always zero. We can sum the products of the frequencies and deviations to show this: $1(-1.525) + 2(-1.025) + 4(-0.525) + 4(-0.025) + 6(0.475) + 3(0.975) = 0.$ For Example 2.33, there are n = 20 deviations, so you cannot simply add the deviations to get
the spread of the data. By squaring the deviations, you make them positive numbers, and the sum will also be positive. The variance, then, is the average squared deviation.
The variance is a squared measure and does not have the same units as the data. Taking the square root solves the problem. The standard deviation measures the spread in the same units as the data.
Notice that instead of dividing by n = 20, the calculation divided by n – 1 = 20 – 1 = 19 because the data is a sample. For the sample variance, we divide by the sample size minus one (n – 1). Why
not divide by n? The answer has to do with the population variance. The sample variance is an estimate of the population variance. Based on the theoretical mathematics that lies behind these
calculations, dividing by (n – 1) gives a better estimate of the population variance.
Your concentration should be on what the standard deviation tells us about the data. The standard deviation is a number that measures how far the data are spread from the mean. Let a calculator or
computer do the arithmetic.
The standard deviation, s or σ, is either zero or larger than zero. Describing the data with reference to the spread is called variability. The variability in data depends on the method by which the
outcomes are obtained, for example, by measuring or by random sampling. When the standard deviation is zero, there is no spread; that is, all the data values are equal to each other. The standard
deviation is small when all the data are concentrated close to the mean and larger when the data values show more variation from the mean. When the standard deviation is a lot larger than zero, the
data values are very spread out about the mean; outliers can make s or σ very large.
The standard deviation, when first presented, can seem unclear. By graphing your data, you can get a better feel for the deviations and the standard deviation. You will find that in symmetrical
distributions, the standard deviation can be very helpful, but in skewed distributions, the standard deviation may not be much help. The reason is that the two sides of a skewed distribution have
different spreads. In a skewed distribution, it is better to look at the first quartile, the median, the third quartile, the smallest value, and the largest value. Because numbers can be confusing,
always graph your data. Display your data in a histogram or a box plot.
Example 2.34
Use the following data (first exam scores) from Susan Dean's spring precalculus class.
33, 42, 49, 49, 53, 55, 55, 61, 63, 67, 68, 68, 69, 69, 72, 73, 74, 78, 80, 83, 88, 88, 88, 90, 92, 94, 94, 94, 94, 96, 100
a. Create a chart containing the data, frequencies, relative frequencies, and cumulative relative frequencies to three decimal places.
b. Calculate the following to one decimal place using a TI-83+ or TI-84 calculator:
i. The sample mean
ii. The sample standard deviation
iii. The median
iv. The first quartile
v. The third quartile
vi. IQR
c. Construct a box plot and a histogram on the same set of axes. Make comments about the box plot, the histogram, and the chart.
Solution 2.34
a. See Table 2.33.
b. Entering the data values into a list in your graphing calculator and then selecting Stat, Calc, and 1-Var Stats will produce the one-variable statistics you need.
c. The x-axis goes from 32.5 to 100.5; the y-axis goes from –2.4 to 15 for the histogram. The number of intervals is 5, so the width of an interval is (100.5 – 32.5) divided by 5, equal to 13.6.
Endpoints of the intervals are as follows:
• the starting point is 32.5, 32.5 + 13.6 = 46.1, 46.1 + 13.6 = 59.7, 59.7 + 13.6 = 73.3
• 73.3 + 13.6 = 86.9, 86.9 + 13.6 = 100.5 = the ending value
• no data values fall on an interval boundary
The long left whisker in the box plot is reflected in the left side of the histogram. The spread of the exam scores in the lower 50 percent is greater (73 – 33 = 40) than the spread in the upper 50
percent (100 – 73 = 27). The histogram, box plot, and chart all reflect this. There are a substantial number of A and B grades (80s, 90s, and 100). The histogram clearly shows this. The box plot
shows us that the middle 50 percent of the exam scores (IQR = 29) are Ds, Cs, and Bs. The box plot also shows us that the lower 25 percent of the exam scores are Ds and Fs.
Data Frequency Relative Frequency Cumulative Relative Frequency
33 1 0.032 0.032
42 1 0.032 0.064
49 2 0.065 0.129
53 1 0.032 0.161
55 2 0.065 0.226
61 1 0.032 0.258
63 1 0.032 0.29
67 1 0.032 0.322
68 2 0.065 0.387
69 2 0.065 0.452
72 1 0.032 0.484
73 1 0.032 0.516
74 1 0.032 0.548
78 1 0.032 0.580
80 1 0.032 0.612
83 1 0.032 0.644
88 3 0.097 0.741
90 1 0.032 0.773
92 1 0.032 0.805
94 4 0.129 0.934
96 1 0.032 0.966
100 1 0.032 0.998 (Why isn't this value 1?)
Try It 2.34
The following data show the different types of pet food that stores in the area carry:
6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12
Calculate the sample mean and the sample standard deviation to one decimal place using a TI-83+ or TI-84 calculator.
Standard Deviation of Grouped Frequency Tables
Recall that for grouped data we do not know individual data values, so we cannot describe the typical value of the data with precision. In other words, we cannot find the exact mean, median, or mode.
We can, however, determine the best estimate of the measures of center by finding the mean of the grouped data with the formula $\text{Mean of Frequency Table} = \frac{\sum fm}{\sum f},$
where f = interval frequencies and m = interval midpoints.
Just as we could not find the exact mean, neither can we find the exact standard deviation. Remember that standard deviation describes numerically the expected deviation a data value has from the
mean. In simple English, the standard deviation allows us to compare how unusual individual data are when compared to the mean.
Example 2.35
Find the standard deviation for the data in Table 2.34.
Class | Frequency, f | Midpoint, m | m^2 | x̄ | fm^2 | Standard Deviation
0–2 | 1 | 1 | 1 | 7.58 | 1 | 3.5
3–5 | 6 | 4 | 16 | 7.58 | 96 | 3.5
6–8 | 10 | 7 | 49 | 7.58 | 490 | 3.5
9–11 | 7 | 10 | 100 | 7.58 | 700 | 3.5
12–14 | 0 | 13 | 169 | 7.58 | 0 | 3.5
15–17 | 2 | 16 | 256 | 7.58 | 512 | 3.5
For this data set, we have the mean, $\bar{x}$ = 7.58, and the standard deviation, s[x] = 3.5. This means that a randomly selected data value would be expected to be 3.5 units from the mean. If we
look at the first class, we see that the class midpoint is equal to one. This is almost two full standard deviations from the mean since 7.58 – 3.5 – 3.5 = .58. While the formula for calculating the
standard deviation is not complicated, $s_x = \sqrt{\frac{\sum f (m - \bar{x})^2}{n-1}},$ where $s_x$ = sample standard deviation and $\bar{x}$ = sample mean, the calculations are tedious. It is
usually best to use technology when performing the calculations.
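A short Python sketch reproduces the grouped-data calculation for Table 2.34 using the midpoint formulas from this section:

```python
import math

# Grouped data from Table 2.34: class midpoints and frequencies
midpoints   = [1, 4, 7, 10, 13, 16]
frequencies = [1, 6, 10, 7, 0, 2]

n = sum(frequencies)                                            # 26 data values
mean = sum(f * m for f, m in zip(frequencies, midpoints)) / n   # sum(fm) / sum(f)

# Sample standard deviation for grouped data:
# s = sqrt( sum f*(m - mean)^2 / (n - 1) )
ss = sum(f * (m - mean) ** 2 for f, m in zip(frequencies, midpoints))
s = math.sqrt(ss / (n - 1))

print(round(mean, 2), round(s, 1))  # 7.58 and 3.5, matching the table
```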
Try It 2.35
Find the standard deviation for the data from the previous example:
Class Frequency, f
0–2 1
3–5 6
6–8 10
9–11 7
12–14 0
15–17 2
First, press the STAT key and select 1:Edit.
Input the midpoint values into L1 and the frequencies into L2.
Select STAT, CALC, and 1: 1-Var Stats.
Select 2^nd, then 1, then 2^nd, then 2, then ENTER.
You will see displayed both a population standard deviation, σ[x], and the sample standard deviation, s[x].
Comparing Values from Different Data Sets
As explained before, a z-score allows us to compare statistics from different data sets. If the data sets have different means and standard deviations, then comparing the data values directly can be misleading.
• For each data value, calculate how many standard deviations away from its mean the value is.
• In symbols, the formulas for calculating z-scores become the following:
Sample: $z = \frac{x - \bar{x}}{s}$
Population: $z = \frac{x - \mu}{\sigma}$
As shown in the table, when only a sample mean and sample standard deviation are given, the top formula is used. When the population mean and population standard deviation are given, the bottom
formula is used.
Example 2.36
Two students, John and Ali, from different high schools, wanted to find out who had the highest GPA when compared to his school. Which student had the highest GPA when compared to his school?
Student GPA School Mean GPA School Standard Deviation
John 2.85 3.0 0.7
Ali 77 80 10
Solution 2.36
For each student, determine how many standard deviations (#ofSTDEVs) his GPA is away from the average, for his school. Pay careful attention to signs when comparing and interpreting the answer.
$z = \#\text{ofSTDEVs} = \frac{\text{value} - \text{mean}}{\text{standard deviation}} = \frac{x - \mu}{\sigma}$
For John, $z = \#\text{ofSTDEVs} = \frac{2.85 - 3.0}{0.7} \approx -0.21$
For Ali, $z = \#\text{ofSTDEVs} = \frac{77 - 80}{10} = -0.3$
John has the better GPA when compared to his school because his GPA is 0.21 standard deviations below his school's mean, while Ali's GPA is 0.3 standard deviations below his school's mean.
John's z-score of –0.21 is higher than Ali's z-score of –0.3. For GPA, higher values are better, so we conclude that John has the better GPA when compared to his school. The z-score representing
John's score does not fall as far below the mean as the z-score representing Ali's score.
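The comparison in Example 2.36 is easy to verify in Python; the `z_score` helper below is our own illustration.

```python
# Each student's GPA compared with his own school's distribution
def z_score(x, mean, sd):
    return (x - mean) / sd

z_john = z_score(2.85, 3.0, 0.7)   # about 0.21 SDs below his school's mean
z_ali  = z_score(77, 80, 10)       # 0.3 SDs below his school's mean

# For GPA, the higher z-score means better relative standing
best = "John" if z_john > z_ali else "Ali"
print(round(z_john, 2), round(z_ali, 2), best)
```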
Try It 2.36
Two swimmers, Angie and Beth, from different teams, wanted to find out who had the fastest time for the 50-meter freestyle when compared to her team. Which swimmer had the fastest time when compared
to her team?
Swimmer Time (seconds) Team Mean Time Team Standard Deviation
Angie 26.2 27.2 0.8
Beth 27.3 30.1 1.4
The following lists give a few facts that provide a little more insight into what the standard deviation tells us about the distribution of the data:
For any data set, no matter what the distribution of the data is, the following are true:
• At least 75 percent of the data is within two standard deviations of the mean.
• At least 89 percent of the data is within three standard deviations of the mean.
• At least 95 percent of the data is within 4.5 standard deviations of the mean.
• This is known as Chebyshev's Rule.
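Chebyshev's Rule is just the bound 1 − 1/k² on the proportion of data within k standard deviations of the mean; a quick check of the three percentages listed above:

```python
# Chebyshev's Rule: for ANY distribution, at least 1 - 1/k^2 of the data
# lies within k standard deviations of the mean (for k > 1).
def chebyshev_bound(k):
    return 1 - 1 / k ** 2

for k in (2, 3, 4.5):
    print(k, chebyshev_bound(k))
# k = 2 gives 0.75, k = 3 gives about 0.89, k = 4.5 gives about 0.95
```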
A bell-shaped distribution is one that is normal and symmetric, meaning the curve can be folded along a line of symmetry drawn through the median, and the left and right sides of the curve would fold
on each other symmetrically. With a bell-shaped distribution, the mean, median, and mode are all located at the same place.
For data having a distribution that is bell-shaped and symmetric, the following are true:
• Approximately 68 percent of the data is within one standard deviation of the mean.
• Approximately 95 percent of the data is within two standard deviations of the mean.
• More than 99 percent of the data is within three standard deviations of the mean.
• This is known as the Empirical Rule.
• It is important to note that this rule applies only when the shape of the distribution of the data is bell-shaped and symmetric; we will learn more about this when studying the Normal or Gaussian
probability distribution in later chapters.
Class 7 Chapter 8 A Shirt in the Market MCQs - Notes In Hindi
1. Why did Swapna sell her cotton to the local trader instead of the Kurnool cotton market?
(A) The trader offered a better price
(B) She had borrowed money from the trader
(C) The market was closed
(D) She didn’t have transportation to the market
Answer: (B) She had borrowed money from the trader
2. What condition did the trader impose on Swapna when she borrowed money for seeds and fertilizers?
(A) She had to work on his farm
(B) She had to sell all her cotton to him
(C) She had to pay a lower interest rate
(D) She could only sell her cotton in the market
Answer: (B) She had to sell all her cotton to him
3. What challenge do small farmers like Swapna face during cotton cultivation?
(A) They lack the required labor
(B) They must borrow money for high-input costs
(C) They do not have access to markets
(D) They lack knowledge of modern farming techniques
Answer: (B) They must borrow money for high-input costs
4. How much money did the trader pay Swapna for her cotton after deducting the loan repayment?
(A) ₹1,500
(B) ₹3,000
(C) ₹4,500
(D) ₹6,000
Answer: (B) ₹3,000
5. Why did Swapna not argue with the trader even though she knew cotton was selling for a higher price in the market?
(A) She trusted the trader completely
(B) The trader was powerful, and she depended on him for loans
(C) She was afraid of losing her farm
(D) She did not want to sell in the market
Answer: (B) The trader was powerful, and she depended on him for loans
6. What is Erode in Tamil Nadu known for?
(A) Cotton cultivation
(B) Being one of the largest cloth markets
(C) Garment factories
(D) Weaving cotton cloth
Answer: (B) Being one of the largest cloth markets
7. Who supplies the weavers with yarn in the Erode cloth market?
(A) Government agencies
(B) Local farmers
(C) Cloth merchants
(D) Garment exporters
Answer: (C) Cloth merchants
8. How does the ‘putting-out system’ benefit the weavers in Erode?
(A) They get access to modern machinery
(B) They do not have to buy their own yarn
(C) They receive a higher payment for their work
(D) They control the selling of the finished product
Answer: (B) They do not have to buy their own yarn
9. What is a significant disadvantage of the ‘putting-out system’ for the weavers?
(A) They have to purchase their own yarn
(B) They have no control over the price of the cloth
(C) They are responsible for marketing the cloth
(D) They can only sell to international buyers
Answer: (B) They have no control over the price of the cloth
10. How many hours a day do small weavers and their families work to produce cloth?
(A) 6 hours
(B) 8 hours
(C) 12 hours
(D) 14 hours
Answer: (C) 12 hours
11. Who are the main buyers of the cloth produced in Erode?
(A) Local villagers
(B) International garment firms
(C) Government agencies
(D) Local weavers
Answer: (B) International garment firms
12. Why do foreign buyers demand lower prices from garment exporters?
(A) They want to maximize their profits
(B) They want to support local economies
(C) They face competition from other markets
(D) They produce low-quality products
Answer: (A) They want to maximize their profits
13. How do garment exporters cut costs to meet the demands of foreign buyers?
(A) By reducing the quality of the cloth
(B) By paying workers the lowest wages possible
(C) By limiting production
(D) By purchasing cheaper machinery
Answer: (B) By paying workers the lowest wages possible
14. What happens if there is a delay in delivery or defects in the garments exported to foreign buyers?
(A) The buyers are lenient
(B) The exporters are fined
(C) The workers are given bonuses
(D) The entire shipment is rejected
Answer: (B) The exporters are fined
15. Who are the highest-paid workers in the garment exporting factories?
(A) Tailors
(B) Ironing workers
(C) Helpers
(D) Thread cutters
Answer: (A) Tailors
16. How much does a tailor earn per month in the garment factory described in the text?
(A) ₹2,000
(B) ₹3,000
(C) ₹4,000
(D) ₹5,000
Answer: (B) ₹3,000
17. Why are most workers in the garment factories employed on a temporary basis?
(A) They are unskilled
(B) The factories cannot afford to hire permanent staff
(C) It allows the employer to dismiss them easily when not needed
(D) They prefer short-term contracts
Answer: (C) It allows the employer to dismiss them easily when not needed
18. What is the wage for workers involved in ironing in the garment factory?
(A) ₹1 per piece
(B) ₹1.50 per piece
(C) ₹2 per piece
(D) ₹3 per piece
Answer: (B) ₹1.50 per piece
19. What is the approximate price of a shirt sold in the United States that was made in the garment factory near Delhi?
(A) $15
(B) $20
(C) $26
(D) $30
Answer: (C) $26
20. Why do businesspersons in the United States spend so much on advertising the shirts they sell?
(A) To increase the price of the shirts
(B) To create a brand image and attract customers
(C) To pay workers higher wages
(D) To compete with local markets
Answer: (B) To create a brand image and attract customers
21. How much profit does the businessperson in the United States make on each shirt sold for $26?
(A) ₹500
(B) ₹700
(C) ₹900
(D) ₹1,000
Answer: (C) ₹900
22. At what price did the garment exporter sell each shirt to the businessperson in the United States?
(A) ₹100
(B) ₹200
(C) ₹300
(D) ₹400
Answer: (C) ₹300
23. What cost does the garment exporter incur for raw materials to make each shirt?
(A) ₹25
(B) ₹50
(C) ₹100
(D) ₹150
Answer: (C) ₹100
24. How much are the workers’ wages per shirt in the garment factory?
(A) ₹25
(B) ₹50
(C) ₹75
(D) ₹100
Answer: (A) ₹25
25. What is one reason that the businessperson is able to make huge profits in the market?
(A) The high wages paid to workers
(B) The low price paid to garment exporters
(C) The quality of the cloth
(D) The lack of competition
Answer: (B) The low price paid to garment exporters
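The cost figures quoted in questions 21 to 24 can be tallied to show where the money in the chain goes. Below is a small Python sketch; the per-shirt figures are the ones stated in the questions, while the leftover "exporter margin" is my own inference, not a number from the text:

```python
# Per-shirt figures from questions 21-24 (rupees, except the retail price).
sale_price_usd = 26    # retail price of the shirt in the United States
us_profit = 900        # profit for the US businessperson per shirt
export_price = 300     # what the exporter receives per shirt
raw_materials = 100    # exporter's raw-material cost per shirt
wages = 25             # workers' wages per shirt

# What remains with the exporter after materials and wages
# (covers other production costs plus the exporter's own profit).
exporter_margin = export_price - raw_materials - wages
print(exporter_margin)  # 175
```

The contrast between ₹25 in wages per shirt and ₹900 in retail profit is the point these questions keep returning to.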
26. What is a ginning mill responsible for in the cotton production process?
(A) Weaving cloth
(B) Removing seeds from cotton bolls
(C) Dyeing the fabric
(D) Spinning yarn
Answer: (B) Removing seeds from cotton bolls
27. How do foreign buyers impact the garment exporting factories in India?
(A) They pay high wages to workers
(B) They demand low prices and strict quality standards
(C) They encourage local market sales
(D) They invest in weaving technology
Answer: (B) They demand low prices and strict quality standards
28. What is one way the Indian government supports weaver cooperatives?
(A) By offering loans to weavers
(B) By buying cloth from cooperatives for programs like school uniforms
(C) By reducing taxes on cloth production
(D) By establishing new markets for weavers
Answer: (B) By buying cloth from cooperatives for programs like school uniforms
29. What system do weavers in Erode rely on that gives merchants significant power?
(A) Direct market sales
(B) Putting-out system
(C) Cooperative system
(D) State-run markets
Answer: (B) Putting-out system
30. How much do small weavers in the putting-out system typically earn for 12 hours of daily work?
(A) ₹1,500
(B) ₹2,000
(C) ₹3,500
(D) ₹5,000
Answer: (C) ₹3,500
31. What allows foreign buyers to make huge profits in the garment industry?
(A) High quality of raw materials
(B) Low wages paid to workers
(C) Government subsidies
(D) Tax exemptions
Answer: (B) Low wages paid to workers
32. Why do garment exporters agree to the strict demands of foreign buyers?
(A) They get paid more than local buyers
(B) They have long-term contracts
(C) Foreign buyers are powerful and set strict conditions
(D) They want to expand their business into new markets
Answer: (C) Foreign buyers are powerful and set strict conditions
33. How much does the businessperson make in profit for each shirt sold at $26 in the United States?
(A) ₹400
(B) ₹700
(C) ₹900
(D) ₹1,100
Answer: (C) ₹900
34. What is the role of women in the garment factories?
(A) They mostly work as tailors
(B) They are employed in lower-paying jobs such as buttoning and thread cutting
(C) They receive the highest wages
(D) They manage the factories
Answer: (B) They are employed in lower-paying jobs such as buttoning and thread cutting
35. What is the approximate monthly wage for a tailor in the garment factory near Delhi?
(A) ₹1,500
(B) ₹2,500
(C) ₹3,000
(D) ₹4,000
Answer: (C) ₹3,000
36. What is the primary reason why poor people are often exploited in the market?
(A) They have access to better opportunities
(B) They depend on the rich and powerful for loans, raw materials, and employment
(C) They have limited access to education
(D) They prefer to work with middlemen
Answer: (B) They depend on the rich and powerful for loans, raw materials, and employment
37. What is one way in which weavers can reduce their dependence on merchants?
(A) By working longer hours
(B) By forming weaver cooperatives
(C) By selling their cloth at a lower price
(D) By switching to another trade
Answer: (B) By forming weaver cooperatives
38. How does the Tamil Nadu government support weaver cooperatives?
(A) By giving free yarn
(B) By purchasing cloth for programs like Free School Uniforms
(C) By providing subsidies for looms
(D) By training workers in weaving
Answer: (B) By purchasing cloth for programs like Free School Uniforms
39. What is one key advantage of the putting-out system for weavers?
(A) They can sell their cloth directly to buyers
(B) They do not have to purchase yarn themselves
(C) They get to choose their work hours
(D) They receive a fixed high wage
Answer: (B) They do not have to purchase yarn themselves
40. What does the merchant provide to the weaver in the putting-out system?
(A) A finished product
(B) Yarn and instructions for the cloth to be made
(C) New weaving technology
(D) Land and financial support
Answer: (B) Yarn and instructions for the cloth to be made
41. Who benefits the most in the chain of markets described in the text?
(A) Small farmers
(B) Garment workers
(C) Foreign businesspersons
(D) Local traders
Answer: (C) Foreign businesspersons
42. How do foreign businesspersons make huge profits in the market?
(A) By paying fair wages to workers
(B) By buying directly from farmers
(C) By demanding low prices from suppliers
(D) By selling products at low prices
Answer: (C) By demanding low prices from suppliers
43. Why do workers in garment factories earn barely enough to cover their needs?
(A) They are unskilled
(B) The factories are small
(C) They work fewer hours
(D) They are paid very low wages
Answer: (D) They are paid very low wages
44. Which group in the market chain earns more than weavers but less than exporters?
(A) Farmers
(B) Local merchants or traders
(C) Tailors
(D) Foreign buyers
Answer: (B) Local merchants or traders
45. What does democracy also imply, according to the text?
(A) Getting fair wages in the market
(B) Being able to sell products internationally
(C) Working long hours for little pay
(D) Owning large shops and factories
Answer: (A) Getting fair wages in the market
46. What is the role of the ginning mill in cotton production?
(A) Weaving cotton cloth
(B) Removing seeds from cotton bolls
(C) Spinning yarn
(D) Harvesting cotton
Answer: (B) Removing seeds from cotton bolls
47. What does the term ‘exporter’ refer to in the context of this chapter?
(A) Someone who sells goods domestically
(B) Someone who buys goods locally
(C) Someone who sells goods abroad
(D) Someone who manages workers
Answer: (C) Someone who sells goods abroad
48. What is ‘profit’ according to the glossary in this chapter?
(A) The amount left after deducting all costs from earnings
(B) The total earnings from sales
(C) The price of raw materials
(D) The cost of production
Answer: (A) The amount left after deducting all costs from earnings
49. Who benefits the most from the chain of markets described in the chapter?
(A) Small farmers
(B) Garment workers
(C) Foreign businesspersons
(D) Local traders
Answer: (C) Foreign businesspersons
50. How does the market often exploit the poor, according to the text?
(A) By offering them high-interest loans
(B) By depending on the rich and powerful for loans, raw materials, and marketing
(C) By preventing them from working in garment factories
(D) By charging them more for raw materials
Answer: (B) By depending on the rich and powerful for loans, raw materials, and marketing
51. What is the role of weaver cooperatives in the textile industry?
(A) To buy yarn from foreign suppliers
(B) To reduce dependence on merchants and increase weavers’ income
(C) To replace the putting-out system completely
(D) To sell cloth only to local markets
Answer: (B) To reduce dependence on merchants and increase weavers’ income
52. How does the Tamil Nadu government support weaver cooperatives?
(A) By providing loans for buying looms
(B) By purchasing cloth for government programs like school uniforms
(C) By offering tax exemptions
(D) By training weavers in modern techniques
Answer: (B) By purchasing cloth for government programs like school uniforms
53. What problem do weavers face in the putting-out system?
(A) Excessive wages for their work
(B) Dependence on merchants for raw materials and pricing
(C) Too much freedom in choosing work hours
(D) Ability to sell cloth at higher prices
Answer: (B) Dependence on merchants for raw materials and pricing
54. What is one advantage for weavers under the putting-out system?
(A) They have to buy their own yarn
(B) They can set their own prices for the cloth
(C) They do not need to worry about marketing their finished products
(D) They have full control over production processes
Answer: (C) They do not need to worry about marketing their finished products
55. What does the term ‘putting-out system’ refer to?
(A) Weaving cloth without any external help
(B) Merchants providing raw materials and receiving finished products
(C) Workers selling cloth directly to consumers
(D) A cooperative system among weavers
Answer: (B) Merchants providing raw materials and receiving finished products
56. What is one of the main advantages of forming weaver cooperatives?
(A) They can sell cloth at higher prices directly to consumers
(B) They reduce their dependence on merchants and earn a fair price for their products
(C) They receive government grants without conditions
(D) They eliminate the need for raw materials
Answer: (B) They reduce their dependence on merchants and earn a fair price for their products
57. What type of assistance does the Tamil Nadu government provide to weaver cooperatives?
(A) Loans for purchasing looms
(B) Training for weaving techniques
(C) Purchasing cloth for government programs like school uniforms
(D) Tax reductions on raw materials
Answer: (C) Purchasing cloth for government programs like school uniforms
58. What do weavers in the putting-out system typically need to borrow money for?
(A) Marketing their finished products
(B) Buying yarn and other raw materials
(C) Paying high interest on loans
(D) Investing in new looms
Answer: (B) Buying yarn and other raw materials
59. How much do small weavers generally earn for their hard work?
(A) ₹1,500 per month
(B) ₹2,500 per month
(C) ₹3,500 per month
(D) ₹5,000 per month
Answer: (C) ₹3,500 per month
60. What is the role of merchants in the putting-out system?
(A) They control the prices paid to the weavers and supply the raw materials
(B) They are responsible for exporting the finished cloth
(C) They provide training to the weavers
(D) They directly sell the cloth to consumers
Answer: (A) They control the prices paid to the weavers and supply the raw materials
61. What is the main benefit of a weaver cooperative for its members?
(A) Increased dependence on merchants
(B) Ability to sell cloth at higher prices
(C) Shared resources and better income
(D) Reduced production time
Answer: (C) Shared resources and better income
62. Which of the following is a key advantage of cooperatives for weavers?
(A) Complete control over marketing
(B) Ability to set high prices independently
(C) Collective buying of yarn at lower costs
(D) Elimination of all external suppliers
Answer: (C) Collective buying of yarn at lower costs
63. How does the Tamil Nadu government assist weaver cooperatives?
(A) By giving them loans for purchasing machinery
(B) By buying cloth for government schemes like school uniforms
(C) By setting a fixed price for their products
(D) By providing free raw materials
Answer: (B) By buying cloth for government schemes like school uniforms
64. What problem do weavers face when they rely on merchants in the putting-out system?
(A) They earn high wages
(B) They have to purchase their own yarn
(C) They receive low prices for their cloth
(D) They have complete control over production
Answer: (C) They receive low prices for their cloth
65. In the context of cooperatives, what is a common goal of the members?
(A) To maximize individual profits
(B) To create monopolies
(C) To work together for mutual benefit
(D) To compete against each other
Answer: (C) To work together for mutual benefit
| {"url":"https://notesinhindi.online/class-7-chapter-8-a-shirt-in-the-market-mcqs/","timestamp":"2024-11-06T22:09:08Z","content_type":"text/html","content_length":"115951","record_id":"<urn:uuid:83a433c9-49ee-475c-b213-bb1e82dad0d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00605.warc.gz"}
12th physics objective questions and answers in Hindi 2021 Archives - STARK STUDY POINT
12th Board Exam Physics Chapter 2 Electric Potential and Capacitance Objective Questions (Electrostatics: Potential and Capacitance) 1. If a uniform electric field exists along the Z-axis, the equipotential surface lies in the (A) XY-plane (B) XZ-plane (C) YZ-plane (D) anywhere Answer: A 2. Eight drops, each of radius r and charge q, are merged to form a big drop. The potential energy […] | {"url":"https://starkstudypoint.com/tag/12th-physics-objective-questions-and-answers-in-hindi-2021/","timestamp":"2024-11-14T01:43:02Z","content_type":"text/html","content_length":"101267","record_id":"<urn:uuid:2f67f8d1-088d-4269-a070-47cc80fb718a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00318.warc.gz"} |
Lab 12 Dragon Fractal To practice the use of accumulators and generative recursion. - Programming Help
Accumulator Practice
Exercise (Reviewed) 1 Design the function digits->num which takes a non-empty list of digits (i.e. natural numbers between 0 and 9) and combines all the digits into a single number in base-10.
For example:
(check-expect (digits->num (list 1 2 3)) 123)
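The lab expects a design in a Racket-style student language with an accumulator; as a non-authoritative sketch of the same accumulator idea, here it is in Python (the name mirrors the lab's `digits->num`):

```python
def digits_to_num(digits):
    """Combine a non-empty list of base-10 digits into a single number.

    Mirrors the lab's accumulator pattern: the accumulator holds the
    number built from the digits seen so far.
    """
    acc = 0
    for d in digits:
        acc = acc * 10 + d  # shift the accumulated value left, append digit
    return acc

# Matches the lab's check-expect: (digits->num (list 1 2 3)) -> 123
print(digits_to_num([1, 2, 3]))  # 123
```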
Exercise (Reviewed) 2 Design the function longest-streak which takes in a list of natural numbers and computes the largest number of times a single value appears in the list in a row. (The actual
repeated value doesn’t matter; we only care how many times it repeats.) For example:
(check-expect (longest-streak (list 2 2 2)) 3)
(check-expect (longest-streak (list 1 2 1 2)) 1)
(check-expect (longest-streak (list 1 1 2 2 2 3 4 4)) 3)
(check-expect (longest-streak (list 1 2 2 2 2 3 4 4 4 4 5)) 4)
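The examples above can be reproduced with two accumulators: the length of the current run and the best run seen so far. A hedged Python sketch of that idea (the lab itself wants a Racket-style accumulator design):

```python
def longest_streak(nums):
    """Largest number of times a single value appears in a row.

    Two accumulators: `current` is the length of the run ending here,
    `best` is the longest run seen so far.
    """
    best = 0
    current = 0
    previous = object()  # sentinel that is unequal to any number
    for n in nums:
        current = current + 1 if n == previous else 1
        best = max(best, current)
        previous = n
    return best

print(longest_streak([1, 2, 2, 2, 2, 3, 4, 4, 4, 4, 5]))  # 4
```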
In this section of the lab we will create a simple line-drawing program. The program starts with a blank canvas and the user can draw lines of a fixed size by pressing the arrow keys.
Step 1: What stays the same?
For your convenience we have provided some constants to use below. You can modify them or add to them if you find that you need to keep track of something else.
Step 2: What changes?
To keep track of what changes we will keep a list of all the directional keys the user has pressed. Here is a data definition for a direction:
Exercise 3 Define the template for functions that take in a Dir.
Step 3: Which handlers do we need?
Exercise 4 Write down the signatures and purpose statements for the handler functions you need. This is your “wishlist” of functions that you will need to create. Keep in mind that we will need
to use a list of Dirs as our world state.
Step 4: Design your handlers
We will start by designing the function that draws the lines in the given directions. However, let’s break this down into some simpler functions first:
Exercise 5 Design the function move-posn that takes a Posn, a Dir, and a Number and produces that position shifted by the given amount in the given direction. For example, (move-posn (make-posn 1
2) “left” 3) would produce (make-posn -2 2).
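A sketch of `move-posn` in Python, matching the example above. One assumption to flag: big-bang screen coordinates are used here, where y grows downward, so "up" decreases y (the lab does not state its convention in this excerpt):

```python
# A Dir is one of: "up", "down", "left", "right"
# Screen-coordinate offsets (assumed): y grows downward, so "up" is -y.
OFFSETS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def move_posn(posn, direction, amount):
    """Shift an (x, y) position by `amount` in the given direction."""
    x, y = posn
    dx, dy = OFFSETS[direction]
    return (x + dx * amount, y + dy * amount)

# Matches the lab's example: (move-posn (make-posn 1 2) "left" 3) -> (make-posn -2 2)
print(move_posn((1, 2), "left", 3))  # (-2, 2)
```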
Exercise 6 Design the function draw-from-start, which takes a [List-of Dir], a Posn, and an Image. It then adds lines starting at the given position and going in each direction. Each line should
start where the last one ended. You can use the pre-defined function add-line to add lines to an image. Note: You should not use a list abstraction for this problem.
Exercise 7 Use draw-from-start to design the function for your to-draw clause.
Now we need to design the on-key handler. When a user presses an arrow key it should add a direction to the end of the list of directions.
Exercise 8 Design the function for your on-key clause. When an arrow key is pressed, add the appropriate direction to the end of the list so far. When another key is pressed, nothing should happen.
Step 5: Put it all together!
Exercise 9 Design the function etch-a-sketch which runs your line-drawing program.
Generating Fractals
For the next part of the lab you will design functions to draw (and iterate) something called the dragon curve, which is a fractal. Put simply, a fractal is an image that looks the same at various levels of detail. We will make use of the functions we wrote for our line-drawing program in order to develop this program.
Step 1: What stays the same?
We should be able to use the same constants as we had in our line-drawing program so there is nothing further we need to do for this step.
Step 2: What changes?
We will use a natural number to keep track of the number of iterations of the fractal to show. When someone presses the up arrow we will increase the number, and when someone presses the down arrow
we will decrease the number (unless it is already zero). Since a NaturalNumber is atomic data there is no need for us to design any new data.
Step 3: Which handlers do we need?
Like our line-drawing program we will need a drawing function and a function that can handle keyboard inputs so that you can increase or decrease the number of iterations by pressing the arrow keys.
Step 4: Design your handlers
We will start by designing the function that draws the fractal. Remember that all we are given is the number of iterations. Therefore the first thing we need to do is find out what lines to draw.
Exercise 10 Design the function rotate-dir which takes a Dir and rotates it 90 degrees counter-clockwise.
Exercise 11 Design the function rotate-all-dirs which takes a [List-of Dir] and rotates every Dir in the given list 90 degrees counter-clockwise.
Exercise 12 Design the function generate-fractal-dirs which takes a natural number (representing the number of iterations left to draw) and a list of Dirs and performs the following algorithm:
If the number is zero, return the list unchanged.
If the number is positive, return a new modified list as follows:
Rotate every Dir in the input list
Reverse the list of rotated directions
Append this rotated/reversed list to the end of the input list
Recursively call the function with one less iteration
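The steps above can be sketched directly. One assumption to flag: the counter-clockwise rotation mapping below ("down" → "right", "right" → "up", "up" → "left", "left" → "down") is the standard one, but the lab's exact Dir data definition is not shown in this excerpt:

```python
# Assumed 90-degree counter-clockwise rotation of each Dir.
ROTATE_CCW = {"up": "left", "left": "down", "down": "right", "right": "up"}

def rotate_dir(d):
    """Rotate a single Dir 90 degrees counter-clockwise (Exercise 10)."""
    return ROTATE_CCW[d]

def generate_fractal_dirs(n, dirs):
    """Apply the dragon-curve step n times (Exercise 12).

    Each step appends the reversed, rotated copy of the current
    direction list to the end of the list itself.
    """
    if n == 0:
        return dirs
    rotated = [rotate_dir(d) for d in dirs]  # rotate every Dir
    rotated.reverse()                        # reverse the rotated list
    return generate_fractal_dirs(n - 1, dirs + rotated)

# Starting from (list "down"), as Exercise 13 suggests:
print(generate_fractal_dirs(1, ["down"]))  # ['down', 'right']
print(generate_fractal_dirs(2, ["down"]))  # ['down', 'right', 'up', 'right']
```

Note that the list doubles on every iteration, which is why the lab warns that the image gets unwieldy after about 10 iterations (2^10 = 1024 line segments).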
Exercise 13 Design the function for the to-draw clause which takes a natural number and draws that many iterations of the dragon fractal using the generate-fractal-dirs function and the drawing
function from our line-drawing program. To start, you should give generate-fractal-dirs the initial list (list “down”).
Exercise 14 Design the function for the on-key clause. Remember that your world state is a natural number. This number should increase by one if the up arrow key is pressed and decrease by one if
the down arrow key is pressed. Otherwise nothing will happen.
Step 5: Put it all together!
Exercise 15 Design the function dragon-fractal which takes an initial number of iterations and runs the dragon fractal program. Play around with it but don’t be surprised if the program breaks
after about 10 iterations (the image becomes too large so you may get something unexpected). | {"url":"https://www.edulissy.org/product/exercise-reviewed-1-design-the-function-digits-num-which-takes-a-non-empty-list-of-digitslab-12-dragon-fractal-to-practice-the-use-of-accumulators-and-generative-recursion/","timestamp":"2024-11-03T19:14:14Z","content_type":"text/html","content_length":"215022","record_id":"<urn:uuid:f2d7473f-ab26-40df-8387-f6821e605701>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00321.warc.gz"} |
DETR: End-to-End Object Detection with Transformers
As is well known, this is the first time Transformers have been applied to object detection. The results are not yet the best, especially for small objects, which is partly due to the limited resolution of the input image: enlarging the input would impose a very heavy computational cost on the network. Even so, it gives a new direction to a field that had been plateauing. DETR is a truly end-to-end object detection algorithm. Why say so? Because the algorithms since YOLOv2 usually require anchors, non-maximum suppression, and so on, so an ordinary single-stage detector can only be considered approximately end-to-end.
DETR proposes a new approach that treats object detection as direct set prediction (the paper fixes the set at 100 predicted boxes). This streamlined detection model effectively removes the need for many hand-designed components, such as non-maximum suppression or anchor generation, which explicitly encode prior knowledge about the task. Its main ingredients are a set-based global loss, which forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations between the objects and the global image context to output the final set of predictions (the optimal boxes) directly, in parallel.
The method in the paper rests on four points: a bipartite-matching loss for set prediction, a transformer encoder-decoder architecture, parallel decoding, and the object detection task itself. Why emphasize the word "parallel"? Because a much earlier line of work used RNNs, which must decode sequentially. First, look at the main structure: DETR predicts the final set of detections directly (in parallel) by combining a generic CNN with the transformer architecture. During training, bipartite matching uniquely assigns each prediction to a ground-truth box.

DETR uses a conventional CNN backbone to learn a 2D representation of the input image. The model flattens it and supplements it with a positional encoding before passing it to the transformer encoder. The transformer decoder then takes a small fixed number of learned positional embeddings as input (100 queries, which the paper calls object queries) and additionally attends to the encoder output. Each output embedding of the decoder is passed to a shared feed-forward network (FFN) that predicts either a detection (class and bounding box) or "no object".
We can read the pseudo-code off the architecture diagram above. Starting from an image, the CNN backbone (a ResNet-50, pretrained) produces 2048×H×W features. That channel dimension is too high to feed into the transformer directly, so a 1×1 convolution reduces it; the result, together with a positional encoding, is fed into the transformer. Note that the object queries are fed to the transformer decoder (and, if I am not mistaken, the positional encoding is also added at the decoder). A linear head on the transformer decoder output then makes the predictions (classification + box regression). As we can see in the code, the final classifier has (number of categories + 1) outputs, the extra one being "no object"/background.
For the loss, DETR first uses the Hungarian algorithm to select the best prediction for each ground-truth box, that is, to compute a matching between the predictions and the ground truth, and only then evaluates the loss function. How is this matching obtained?

First, the set of real boxes is padded to 100 entries by filling the empty slots with "no object" (the paper predicts 100 boxes, so the target set is padded to the same size). Then a 100×100 cost matrix is built between the real boxes and the predicted boxes; some of the details here are best checked against the source code. So what goes into each of the 100×100 cells? The authors note that the matching must consider both the predicted category and how well the predicted box fits the real box, so they use the formula below to compute the value filled into each cell. From this matrix the best-matching prediction for each target is selected, and the loss is then computed on the matched pairs. The loss formula is somewhat different from the matching cost, because the matching cost contains no negative numbers; the authors adjust the formula, as shown in the figure below.
Formula required for matching
Formula for calculating the final value of loss
The best way to match.
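The bipartite matching step described above can be sketched in a few lines. DETR's official implementation uses the Hungarian algorithm (via `scipy.optimize.linear_sum_assignment`); the brute-force search below, over a hypothetical 3×3 cost matrix, illustrates the same minimum-cost one-to-one assignment on a tiny example:

```python
from itertools import permutations

def best_matching(cost):
    """Minimum-cost one-to-one assignment of predictions to targets.

    cost[i][j] is the matching cost of prediction i against target j
    (in DETR: a class-probability term plus box L1/GIoU terms). DETR
    uses the Hungarian algorithm; brute force is fine for a tiny demo.
    """
    n = len(cost[0])  # number of (padded) targets
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(len(cost)), n):
        # perm[j] is the prediction assigned to target j
        total = sum(cost[i][j] for j, i in enumerate(perm))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Hypothetical cost matrix: rows = predictions, columns = targets.
cost = [
    [0.9, 0.1, 0.8],
    [0.2, 0.7, 0.9],
    [0.8, 0.8, 0.1],
]
perm, total = best_matching(cost)
print(perm)  # (1, 0, 2): target 0 <- prediction 1, target 1 <- prediction 0, ...
```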
Finally, let us share some DETR results: the algorithm is very good at global reasoning, since even objects with a high degree of overlap can be detected. These are the main elements of DETR; the remaining details are left for readers to explore on their own.
Author Tony Wang
LastMod 2022-10-28 | {"url":"https://yeah366.com/2022/10/DETR-Use-Transformer-end-to-end-detection/","timestamp":"2024-11-14T00:07:14Z","content_type":"text/html","content_length":"17218","record_id":"<urn:uuid:86533e57-dca4-4ce8-8bfa-c6f0219c8a78>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00163.warc.gz"} |
Algebra 1 - Cool Math Guy
Algebra 1 - Full Course
Core topics include solving linear equations and inequalities, graphing equations and inequalities with some use of the graphing calculator, exponents, polynomials, factoring, rational expressions
and equations, systems of linear equations and inequalities, radical expressions and equations, and solving quadratic equations. Algebra I follows Prealgebra in the sequence of math courses and is
often used as a developmental course at the college level under the name Elementary Algebra.
From: $19.95 / month
Add to Cart | {"url":"https://coolmathguy.com/course/algebra-1","timestamp":"2024-11-09T01:30:29Z","content_type":"text/html","content_length":"298574","record_id":"<urn:uuid:c4d39b28-b726-4796-b24d-9dde30ccedf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00546.warc.gz"} |
I have 6 faces, 8 vertices, and 12 edges. Which figure am I? | HIX Tutor
I have 6 faces, 8 vertices, and 12 edges. Which figure am I?
Answer 1
It is a cuboid or quadrilaterally-faced hexahedron.
There is no unique formula for identifying the figure. However, according to Euler's Polyhedral Formula, in a convex polyhedron, if $V$ is the number of vertices, $F$ is the number of faces, and $E$ is the number of edges, then $V - E + F = 2$.
With $6$ faces, $8$ vertices, and $12$ edges, we get $8 - 12 + 6 = 2$, so it is a valid polyhedron.
In fact, the figure is a cuboid, or quadrilaterally-faced hexahedron, as it has exactly $6$ faces, $8$ vertices, and $12$ edges.
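The arithmetic above can be spot-checked in a couple of lines (a minimal sketch, using the standard V, E, F counts for each solid):

```python
def euler_characteristic(V, E, F):
    """Euler's polyhedral formula: V - E + F equals 2 for any convex polyhedron."""
    return V - E + F

# Cuboid (the figure in the question): 8 vertices, 12 edges, 6 faces
print(euler_characteristic(8, 12, 6))  # 2
# Tetrahedron: 4 vertices, 6 edges, 4 faces
print(euler_characteristic(4, 6, 4))  # 2
```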
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/i-have-6-faces-8-vertices-and-12-edges-which-figure-am-l-8f9afa29db","timestamp":"2024-11-12T20:45:57Z","content_type":"text/html","content_length":"577605","record_id":"<urn:uuid:4983272e-0ad9-44ff-a61d-33dbcbeb736e>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00807.warc.gz"} |
area of a polynomial calculator
Two years have passed since the mysterious disappearance of the family pet, Platypus, and the farmer's daughter's fortuitous winning of a furry accessory through the school lottery that helped fill
the void of the loss of their beloved pet.
Polynomial Root Calculator
Type the number of sides, along with a known property, and the polygon area will appear in no time.
area of a polynomial calculator
The standard unit of area in the International System of Units (SI) is the square meter, or m². What are the types of polynomial terms? Given that each person will receive 60° worth of a pie with a radius of 16 inches, the area of pie that each person receives can be calculated as follows: area = (60/360) × π × 16² ≈ 134.041 in². Step 3: Finally, the resultant polynomial will be displayed in the new window.
Solution: Examining term by term, we find that the maximum degree of any individual term is 4 (which comes from the term \(x^2y^2\)). Unfortunately for the farmer's daughter, blackberry pie also
happens to be a favorite food of their pet raccoon, Platypus, as evidenced by 180 worth of the pie being missing with telltale signs of the culprit in the form of crumbs leading towards the
overindulgent raccoon. As mentioned in the calculator above, please use the Triangle Calculator for further details and equations for calculating the area of a triangle, as well as determining the
sides of a triangle using whatever information is available.
Area of a Rectangle Calculator
Always on Time. The farmer also lives in the United States and, being unfamiliar with SI units, still measures his plot of land in feet. The degree of an eigenvalue of a matrix as a root of the characteristic polynomial is called its algebraic multiplicity. The procedure to use the area of a parallelogram calculator is as follows: Step 1: Enter the base and height values in the input field. Given three sides (SSS), use the triangle area formula called Heron's formula. Math is all about solving problems and finding patterns. Please follow the steps below to find the area and perimeter of a polygon: Step 1: Enter the number of sides of the polygon and the length of a side in the given input box. Find a polynomial for the shaded area of the figure: = 12x² + 52x + 16.
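The page mentions Heron's formula for the three-sides (SSS) case; a minimal sketch, assuming valid side lengths that satisfy the triangle inequality:

```python
import math

def heron_area(a, b, c):
    """Triangle area from three sides (Heron's formula).

    s is the semi-perimeter: area = sqrt(s(s-a)(s-b)(s-c)).
    """
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# 3-4-5 right triangle: area = 3 * 4 / 2 = 6
print(heron_area(3, 4, 5))  # 6.0
```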
Area Calculator
That's the polygon definition. Given two sides and the angle between them (SAS): Triangle Area = 0.5 × a × b × sin(γ).
Polynomial Division Calculator
Unlike the manual method, you do not need to enter the first vertex again at the end, and you can go in either direction around the polygon. Polynomial Calculator. Calculation type: Polynomial: x³ − 6x² + 11x − 6, roots x = (1, 2, 3).
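The sample polynomial x³ − 6x² + 11x − 6 with roots 1, 2, 3 can be checked with a short integer-root search. A hedged sketch (by the rational root theorem, any integer root of a monic integer polynomial divides the constant term; the zero-root case is skipped for brevity):

```python
def integer_roots(coeffs):
    """Integer roots of a polynomial with integer coefficients.

    coeffs are highest-degree first, e.g. x^3 - 6x^2 + 11x - 6 is
    [1, -6, 11, -6]. Candidate roots are the divisors of the constant term.
    """
    constant = coeffs[-1]
    if constant == 0:
        return []  # sketch: skip the zero-root case for brevity
    candidates = {d for d in range(1, abs(constant) + 1) if constant % d == 0}
    candidates |= {-d for d in candidates}

    def eval_poly(x):
        value = 0
        for c in coeffs:
            value = value * x + c  # Horner's method
        return value

    return sorted(x for x in candidates if eval_poly(x) == 0)

print(integer_roots([1, -6, 11, -6]))  # [1, 2, 3]
```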
Integrals and Area Under the Curve
example. By developing a strong foundation in math, we can equip ourselves with the ability to think logically and solve problems effectively. (2x)2 y2 = (2x b) (2x +b) solve using calculator.
Algebra Calculator is a calculator that gives step-by-step help on algebra problems.
It also factors polynomials, plots polynomial solution sets and, Get detailed step-by-step solutions to math, science, and engineering problems with Wolfram. It will also calculate the roots of the
polynomials and factor them. The farmer must now determine whether he has sufficient area in his backyard to house a pool. WebPolynomial Root Calculator The Polynomial Roots Calculator will display
the roots of any polynomial with just one click after providing the input polynomial in the below input box and clicking on the calculate button. The most commonly used polynomials are the quadratic
polynomials, more commonly called quadratic functions.
Polynomial Equation Calculator
A polygon is a 2D closed figure made up of straight line segments. The Weighted Average Calculator helps you find the average when the values are not weighted equally. We can use either of the two angles as we calculate their sine. 4x² − y² = (2x)² − y². Now we can apply the formula a² − b² = (a − b)(a + b) with a = 2x and b = y: (2x)² − y² = (2x − y)(2x + y). In addition to solving math problems, students should also be able to answer word questions.
Our support team is available 24/7 to assist you. WebStep 1: Go to Cuemath's online polynomial calculator. Solved exercises of Polynomials. Need help?
Area Calculator
Polynomials in mathematics and science are used in calculus and numerical analysis.
area of a polynomial calculator
CalculatorsTopicsSolving MethodsStep ReviewerGo Premium. I always find math questions to be very difficult. You can have more time for your hobbies by making small changes to your daily routine.
Area of a Rectangle Calculator
and semi-minor axes, The Farmer and his Daughter Falling out of Orbit. WebPolynomial calculator - Sum and difference . Polynomial Generator. Finding the area of an annulus formula is an easy task if
you remember the circle area formula. Calculus: Fundamental Theorem of Calculus Solve the equation Distribute first. I can help you solve math equations quickly and easily. See, Definition and
properties, altitude, median; definition and properties, altitude, diagonals. 4x² − y² = (2x)² − y². Please follow the steps below to find the area and perimeter of a polygon: Step 1: Enter
the number of sides of a polygon and the length of the side in the given input box. Step 2: Now click the button Solve Equation to get the solution. It will also calculate the roots of the
polynomials and factor them. Calculators. WebPolynomial Calculator Given that each person will receive 60 worth of the pie with a radius of 16 inches, the area of pie that each person receives can be
calculated as follows: area= 60/360 16 2 = 134.041 in 2 747 Consultants 4.8/5 Ratings Given base and height. After reading this short article, you'll know what a polygon is and how many sides a
particular polygon has - keep reading, or simply give this calculator a go!
Polynomial Area
WebStep 1: Go to Cuemath's online polynomial calculator.
area = n × a² × cot(π/n) / 4. Then, the area of a right triangle may be expressed as: The circle area formula is one of the most well-known formulas. In this calculator, we've implemented only that
equation, but in our circle calculator you can calculate the area from two different formulas given: Also, the circle area formula is handy in everyday life like the serious dilemma of which pizza
size to choose. Calculate the area of each of these subshapes. WebInstructions: Use this calculator to find the degree of a polynomial that you provide. To find the hexagon area, all we need to do is
to find the area of one triangle and multiply it by six. Get Solution. Use the Triangle Calculator to determine all three edges of the triangle given other parameters. Here are some useful properties
of the characteristic polynomial of a matrix: A matrix is invertible (and so has full rank) if and only if its characteristic polynomial has a non-zero intercept.To find the inverse, you can use
Omni's inverse matrix calculator.. WebThe calculator will find (with steps shown) the sum, difference, product, and result of the division of two polynomials (quadratic, binomial, trinomial, etc.).
WebRectangle Area & Perimeter Calculator Calculate area & perimeter of a rectangle step by step What I want to Find Perimeter Area Diagonal Please pick an option first Related Symbolab blog posts
Practice Makes Perfect Learning math takes practice, lots of practice. WebThis online calculator writes a polynomial as a product of linear factors. WebThis online calculator writes a polynomial as a
product of linear factors.
Area Calculator
The area of a circle can be found using the radius of the circle and the constant π in the formula A = πr². The calculator will find (with steps shown) the sum, difference, product, and
result of the division of two polynomials (quadratic, binomial, trinomial 820 Math Experts 75% Recurring customers Steps to calories calculator helps you to estimate the total amount to calories
burned while walking. This page helps you explore polynomials with degrees up to 4. For example, a garden shaped as a rectangle with a length of 10 yards and width of 3 yards has an area of 10 x 3 =
30 square yards.
Polynomials Calculator
Calculate from a regular 3-gon up to a regular 1000-gon. The types of polynomial terms are: Constant terms: terms with no variables and a numerical coefficient. A regular pentagon with side 3 has area ≈ 15.484. Mathematics is the
study of numbers, shapes, and patterns. Quick Algebra Find the area of a, Find a formula for the nth partial sum of the series. Below you'll find formulas for all sixteen shapes featured in our area
calculator. Remember that the classification of a "simple" shape means that the shape is not self-intersecting. Polynomials include constants, which are numerical coefficients that are multiplied by
variables. Decomposition of a polygon into a set of triangles is called polygon triangulation.
Rectangle Area & Perimeter Calculator
We'll assume you're ok with this, but you can opt-out if you wish. Find the polynomial: first, we need to notice that the polynomial can be written as the difference of two perfect squares. The answer to the question depends on which polygon you have in mind. Rectangle Calculator. Enter the x, y coordinates of each vertex into the table. Step 1: Enter the polynomial equation in the input field. Step 2: Now click the button "Solve Equation" to get the solution. Step 3: Finally.
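The difference-of-squares factorization mentioned above, 4x² − y² = (2x − y)(2x + y), can be spot-checked numerically. A minimal sketch over a grid of sample values:

```python
def lhs(x, y):
    """Left-hand side: 4x^2 - y^2."""
    return 4 * x**2 - y**2

def rhs(x, y):
    """Right-hand side: a^2 - b^2 = (a - b)(a + b) with a = 2x, b = y."""
    return (2 * x - y) * (2 * x + y)

# Spot-check the identity on a grid of integer sample values
assert all(lhs(x, y) == rhs(x, y) for x in range(-5, 6) for y in range(-5, 6))
print("4x^2 - y^2 == (2x - y)(2x + y) holds on all samples")
```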
Polynomial Calculator
If you're searching for other formulas for the area of a quadrilateral, check out our dedicated quadrilateral calculator, where you'll find Bretschneider's formula (given four sides and two opposite
angles) and a formula that uses bimedians and the angle between them. basic principle that can be used to simplify a polynomial. WebPolynomial factoring calculator This calculator is a free online
math tool that writes a polynomial in factored form. Based on the figure below, the equation for calculating the area of a parallelogram is as follows: The Farmer and his Daughter Diamond in the Sky.
Please type the polynomial in the form box below. It must, of course, also only use the number 9 in its measurements to reflect her age. Polynomial Calculator. Calculation type: Polynomial: x³ − 6x² + 11x − 6, roots x = (1, 2, 3). Knowing that two adjacent angles are supplementary, we can state that sin(angle) = sin(180° − angle). When dealing with
polynomials of two variables, you are using the same idea: split the polynomial into its basic terms (or monomials), and compute the that the degree refers to a specific term of the polynomial,
wheres the order refers to the whole polynomial. WebEx: Find the Area of a Rectangle Using a Polynomial . New Blank Graph Examples Lines: Slope Intercept Form example Lines: Point Slope Form WebThe
calculator will find (with steps shown) the sum, difference, product, and result of the division of two polynomials (quadratic, binomial, trinomial 820 Math Experts 75% Recurring customers Perimeter
perimeter = n × a. The roots (x-intercepts), signs, local maxima and minima, increasing and decreasing intervals, points of inflection, and concave up-and-down intervals can all be determined. Slowly, she has begun to
accept other shapes into her life and pursues her myriad different interests currently freestyle BMX. Practice your math skills and learn step by step with our math solver. In our tool, you'll find
three formulas for the area of a parallelogram: We've implemented three useful formulas for the calculation of the area of a rhombus. I think it's help so much and that can help you improve your
mathematics skills, this app is amazing it is very accurate and even if it doesn't understand it let's you know and it gives other iptions for the answer that you kight be looking for it is a
wonderfull app overall really is a life saver on the deadlocs. Step 2: Click the blue arrow to submit and see the result!
area of a polynomial calculator
Step 3: Finally, the resultant polynomial will be displayed in the new window. Some people like to think WebPolynomials involve only the operations of addition, subtraction, and multiplication. More
on the degree of polynomials . Decide on the rectangle's width for example, b = 6 cm. A long night of studying?
Multiply Polynomials Calculator
As such, with her suboptimal grades, lack of any extracurricular activities due to her myriad different interests consuming all of her free time, zero planning, and her insistence on only applying to
the very best of the best universities, the shock that resulted when she was not accepted to any of the top-tier universities she applied to could be reasonably compared to her metaphorically landing
in deep space, inflating, freezing, and quickly suffocating when she missed the moon and landed among the stars. If you want to calculate the regular polygon parameters directly from equations, all
you need to know is the polygon shape and its side length: Where n- number of sides, a - side length. as the term \(2sin(x)\) does not meet the requirement of being the variable raised to a certain
positive integer power. The major and minor axes refer to the diameters rather than radii of the ellipse. Please type of polynomial in the form box below.
The area of a circle can be found using the radius of the circle and the constant π in the formula A = πr².
Polynomial Factorization Calculator
There are other, often easier ways to calculate the area of triangles and regular polygons. Is Modulo Multiplication and Addition Associative, Distributive, and Commutative? Step 1: Enter the
expression you want to divide into the editor. First, we need to notice that the polynomial can be written as the difference of two perfect squares. Given a radius r and an angle θ, the area of a sector can be calculated by multiplying the area of the entire circle by the ratio of the known angle to 360° (or 2π radians), as shown in the following equation: area = (θ/360) × πr² if θ is in degrees, or area = (θ/2) × r² if θ is in radians. The Farmer and his Daughter: Sectioning Family
Polygon Calculator
First, we need to notice that the polynomial can be written as the difference of two perfect squares.
area of a polynomial calculator
The area of a square is the product of the length of its sides: That's the most basic and most often used formula, although others also exist. Many shapes you learned about are polygons - triangles,
squares, parallelograms, rhombus, kites, pentagons, hexagons, octagons A lot of them. What are the types of polynomials terms? It will also calculate the Simply click on the unit name, and a
drop-down list will appear. This area of a regular polygon calculator can help - as you can guess - in determining the area of a regular polygon. Lesson on Solving Polynomials Lesson Contents Lesson
on Solving Polynomials: How to Solve for a Polynomial Variable. Perimeter: perimeter = n × a. I only wish they had the option to provide video lectures with their prime
subscription. At the same time, it's the height of a triangle made by taking a line from the vertices of the octagon to its center. The following are calculators to evaluate the area of seven common
shapes. To calculate the area of a regular polygon given its side length, apply the formula: area = n × a² × cot(π/n) / 4 Where: n is the number of sides of the polygon; a is the length of the side; and cot is the cotangent function (cot(x) = 1/tan(x)).
Area of Regular Polygon Calculator
WebAre you struggling to understand concepts and how to Find the area of a polynomial calculator?
Polynomials Calculator
Then find the area with the given three sides (SSS) equation (you can learn the origin of this formula with our Heron's formula calculator). In a trapezoid, the parallel sides are referred to as the
bases of the trapezoid, and the other two sides are called the legs. The internal programming of the calculator takes care of it all for you. Then you're in the right place. You can find them in a
dedicated calculator of polygon area. ( 6x 5) ( 2x + 3) Go! For example, a garden shaped as a rectangle with a length of 10 yards and width of 3 yards has an area of 10 x 3 = 30 square yards. If you
need help with tasks around the house, consider hiring a professional to get the job done quickly and efficiently. Two years have passed since the farmer's pool was completed, and his daughter has
grown and matured.
area of a polynomial calculator
area = 6 × a² × cot(π/6) / 4. Calculate the degree of: \(x^2 + 2sin(x) + 2\). Lesson on Solving Polynomials Lesson Contents Lesson on Solving Polynomials How to Solve for a Polynomial Variable Solving
Variables in Special Click Here to get Unlimited Answers. (2x)² − y² = (2x − y)(2x + y); solve using calculator. Enter the polynomial (Ex: 2x^2+x, or x^2+y^2 + xy, etc.) Triangle Area = b × h / 2.
Given base and height. Calculates side length, inradius (apothem), circumradius, area and perimeter.
What is the area of a pentagon with side 3? Lesson on Solving Polynomials Lesson Contents Lesson on Solving Polynomials How to Solve for a Polynomial Variable Solving Variables in Special, and it
finds its degree. Given that each person will receive 60° worth of the pie with a radius of 16 inches, the area of pie that each person receives can be calculated as follows: area = 60/360 × π × 16² =
134.041 in², Quadratic function in vertex form calculator, Find the missing term in the geometric sequence calculator. Get Solution. Step 3: Finally, the area of the regular polygon will be
displayed in the output field (I.e., Area of Regular Polygon, A= 15.485 square units) Despite all its drawbacks, she decides that there is little choice but to persist through the asteroid field of
life in hopes that a Disney fairy tale ending exists. Another two years have passed in the life of the farmer and his family, and though his daughter had been a cause for intense worry, she has
finally bridged the distance between the blazing sun that is her heart, and the Earth upon which society insists she must remain grounded. Multiply these two values: A = 5 cm × 6 cm = 30 cm².
Polynomials Calculator.
area of a polynomial calculator
15.484. WebPolynomial Equation Calculator . Calculates side length, inradius (apothem), circumradius, area and perimeter.
Polynomials Calculator
Polynomials Calculator online with solution and steps. To calculate the area of a regular polygon given the side length, apply the formula: area = n × a² × cot(π/n) / 4. Where: n is the number of sides of the
polygon; a is the length of the side; and cot is the cotangent function (cot(x) = 1/tan(x)). Whether you are looking for the area of a heptagon or the angles in a decagon, you're at the right
place. Use this calculator to find the degree of a polynomial that you provide. Webfinding the lcd when subtracting fraction polynomials ; calculator to simplify fractions ; simplifying radicals
solver ; Elementary Powerpoint on order of operations ; solving radicals cubed ; finding quadratic equation ti-89 "exponential calculator mod " solve algebra 2 equations online ; SUBTRACT INTegers
Polynomial Factorization Calculator
WebPolynomial Equation Calculator - Symbolab Polynomial Equation Calculator Solve polynomials equations step-by-step full pad Examples Related Symbolab blog posts Let's assume that you want to
calculate the area of a specific regular polygon, e.g., a 12-sided polygon, a dodecagon with 5-inch sides. But what does it look like? Step 2: Choose the arithmetic operation from the drop-down list
and enter the polynomials in the input boxes. Step 3: Finally, the area of the regular polygon will be displayed in the output field (I.e., Area of Regular Polygon, A= 15.485 square units) Just have
a look: an annulus area is a difference in the areas of the larger circle of radius R and the smaller one of radius r: The quadrilateral formula this area calculator implements uses two given
diagonals and the angle between them. Graphing. There are many different formulas for triangle area, depending on what is given and which laws or theorems are used. In this area calculator, we've
implemented four of them: 1. 15.484. Unlike the manual method, you do not need to enter the first vertex again at the end, and you can go in either direction around the polygon. Tangent aside, the
farmer's plot of land has a length of 220 feet, and a width of 99 feet. Area calculator. An ellipse is the generalized form of a circle, and is a curve in a plane where the sum of the distances from
any point on the curve to each of its two focal points is constant, as shown in the figure below, where P is any point on the ellipse, and F1 and F2 are the two foci. Solve the equation Distribute
first. This website uses cookies to improve your experience. WebClick on "Calculate". example. Because he owns some cows that he did not want frolicking freely, he fenced the piece of land and knew
the exact length and width of each edge. The sector area formula may be found by taking a proportion of a circle. More on the degree of polynomials
area of a polynomial calculator
If you're particularly interested in angles, you may want to take a look at our polygon angle calculator.
Polynomial Calculator Area
Whether you're looking for an area definition or, for example, the area of a rhombus formula, we've got you covered. WebCalculus: Integral with adjustable bounds. This page helps you explore
polynomials with degrees up to 4. Get detailed solutions to your math problems with our Polynomials step-by-step calculator. Imagine a farmer trying to sell a piece of land that happens to be
perfectly rectangular. You can find them in a dedicated calculator of polygon area. Sum up the areas of subshapes to get the final result.
You can build a bright future by setting goals and working towards them.
Polynomials in mathematics and science are used in calculus and numerical analysis. WebClick on "Calculate". WebPolynomials Calculator. 2. Check out all of our online calculators here! There are many
different formulas for triangle area, depending on what is given and which laws or theorems are used. Enter the polynomial (Ex: 2x^2+x, or x^2+y^2 + xy, etc.) Having had an argument with her
father about her excessive use of social media, she decides to prey on her father's fear of the unknown, and belief in the supernatural in order to prank him. WebThe calculator will find (with steps
shown) the sum, difference, product, and result of the division of two polynomials (quadratic, binomial, trinomial, etc.). Calculates side length, inradius (apothem), circumradius, area and
perimeter. Enter the polynomial (Ex: 2x^2+x, or x^2+y^2 + xy, etc.) Use this calculator to calculate properties of a regular polygon. WebMultiply Polynomials Calculator. The polynomial generator
generates a polynomial from the roots introduced in the Roots field, doing sums, subtractions, multiplications and divisions. Triangle Area = b × h / 2. 4x² − y² = (2x)² − y². Now we can. WebPolynomial
Division Calculator. Amazing app and a total life saver. Polynomial Factoring Calculator (shows all steps) supports polynomials with both single and multiple variables show help examples tutorial
Enter polynomial: Examples: In this area calculator, we've implemented four of them: 1.
area of a polynomial calculator area of a polynomial calculator
Both univariate and Math can be tough, but with a little practice, anyone can master it! First, we need to notice that the polynomial can be written as the difference of two perfect squares.
Polynomial calculator
Look no further our experts are here to help. Simply speaking, area is the size of a surface.
Polynomial Factorization Calculator
Use this calculator to calculate properties of a regular polygon. Step 4: Click on the " Reset " button to clear the fields and enter new values.
Polynomials Calculator
WebPolynomial calculator - Sum and difference . WebWrite the polynomial that represents the perimeter of the figure pictured below. manual method, you do not need to enter the first vertex again at
the end,
area of a polynomial calculator Polynomial Equation Solver Calculator Polynomial Polynomial Root Calculator
Mathematics is the study of numbers, shapes, and patterns. The most popular, and usually the most useful formula is the one that uses the number of sides nnn and the side length aaa: However, given
other parameters, you can also find out the area: The circumcircle is the circle that passes through all the polygon's vertex: you can learn how to calculate its center in the case of a triangle at
our circumcenter of a triangle calculator. | {"url":"http://clubedasoficinas.com.br/sDt/area-of-a-polynomial-calculator","timestamp":"2024-11-02T21:46:24Z","content_type":"text/html","content_length":"36455","record_id":"<urn:uuid:45d2b467-20f6-413c-921c-eca543fb206d>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00005.warc.gz"} |
Lab | Introduction to R
5 Functions
You can either download the lab as an RMarkdown file here, or copy and paste the code as we go into a .R script. Either way, save it into the 05-week folder where you completed the exercises!
As always, we’ll be using the tidyverse package and the NLSY data.
nlsy <- read_csv("nlsy_cc.csv")
5.1 Functions in RStudio
As with everything else, there are some tricks to make your life easier when using functions in RStudio.
Let’s say you have been writing some code, and you realize you want to make it into a function:
y <- x * 2
z <- exp(y)
mean(c(x, y, z))
If you highlight the code and press ctrl + alt + x on Windows or cmd + option + x on a Mac, you can automatically convert it into a function:
weird_func <- function(x) {
  y <- x * 2
  z <- exp(y)
  mean(c(x, y, z))
}
This can be helpful for a couple of reasons: if you don’t remember the syntax for a function, if you don’t want to deal with indenting, etc. and especially if you aren’t sure what you need as
arguments to your function. Careful, though: it’s not great at distinguishing between objects and variable names, so it might try to add arguments that you don’t actually need:
nlsy %>%
  mutate(only = case_when(
    nsibs == 0 ~ "yes",
    TRUE ~ "no"
  )) %>%
  select(id, contains("sleep"), only) %>%
  filter(only == "yes")
Another trick is F2: use it to go directly to the source code of a function. If it’s in your R script, it will go there, or else it will open up another tab where you can view it.
It can be really helpful to see how other people write functions as you’re learning to write your own!
5.2 Writing functions
Raise to any power
Make a function that uses two arguments, x for a number, and power for the power. Call it raise().
raise <- function() {

}
# test with
raise(x = 2, power = 4)
# should give you
Default arguments
Change your raise() function to default to squaring x when the user doesn’t enter a value for power.
# test
raise(x = 5)
# should give you
Functions for data
Write a function to calculate the stratified mean income for grouping variable var. In other words, write a function such that mean_group_inc(var = "sex") and mean_group_inc(var = "glasses") produce
the results above.
Look at the function from the slides for help:
var_q <- function(q, var) {
  quant <- nlsy %>%
    rename(new_var = var) %>% #<<
    summarize(q_var = quantile(new_var, probs = q))
  quant
}
var_q(q = 0.5, var = "income")
Write your function here:
mean_group_inc <- function(var) {

}
# test with
mean_group_inc(var = "glasses")
mean_group_inc(var = "sex")
Rewrite your function to accept two arguments: group_var to determine what the grouping variable is, and mean_var to determine what variable you want to take the mean of (e.g., mean_group(group_var =
"sex", mean_var = "income") should give you the same results as above).
mean_group <- function(group_var, mean_var) {

}
# test with
mean_group(group_var = "sex", mean_var = "income")
5.3 For loops
Write a for loop
We used this function:
var_q_new <- function(q, var) {
  quant <- nlsy %>%
    rename(new_var = var) %>%
    summarize(q_var = quantile(new_var, probs = q)) %>%
    pull(q_var)
  quant
}
var_q_new(q = 0.5, var = "income")
#> 50%
#> 11155
inside of a for loop in order to calculate each decile of income:
qs <- seq(0.1, 0.9, by = 0.1)
deciles <- rep(NA, length(qs))
for (i in seq_along(qs)) {
  deciles[[i]] <- var_q_new(q = qs[[i]],
                            var = "income")
}
deciles
#> [1] 3177.2 5025.6 6907.2 9000.0 11155.0 14000.0 18053.6 23800.0 33024.0
Change the for loop above to loop over different variables instead of different quantiles. That is, calculate the 0.25 quantile for each of c("income", "age_bir", "nsibs") in a for loop.
vars <- c("income", "age_bir", "nsibs")
q_25s <- ...
Nested loops
You can nest for loops inside each other, as long as you use different iteration variables. Write a nested for loop to iterate over variables (with i) and quantiles (with j). You’ll need to start
with an empty matrix instead of a vector, with rows indexed by i and columns by j. Calculate each of the deciles for each of the above variables.
vars <- c("income", "age_bir", "nsibs")
qs <- seq(0.1, 0.9, by = 0.1)
results_mat <- matrix(NA, ncol = length(qs), nrow = length(vars))
# helpful to print to see what's going on
for (i in vars) {
  for (j in qs) {
    print(c(i, j))
  }
}
for (i in seq_along(vars)) {
  for (j in seq_along(qs)) {
    print(var_q_new(q = qs[[j]], var = vars[[i]]))
  }
}
for (i in seq_along(vars)) {
  for (j in seq_along(qs)) {
    results_mat[i, j] <- var_q_new(q = qs[[j]], var = vars[[i]])
  }
}
rownames(results_mat) <- vars
colnames(results_mat) <- qs
5.4 Group work
Related to “for loops” are “while loops”. The latter don’t iterate a set number of times, but rather only as long as a condition is true. This is helpful when you don’t know how many times you’ll
need to do something. For example, if I want to do something as long as x divided by 2 is less than 5, I could write:
x <- 0
while ((x / 2) < 5) {
  x <- x + 1
  print(x)
}
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] 4
#> [1] 5
#> [1] 6
#> [1] 7
#> [1] 8
#> [1] 9
#> [1] 10
Be careful you don't get stuck in an infinite loop! For example, if I had said while ((x / 2) >= 0), and started at 0, adding 1 each time, the condition would never be false, and R would crash if I didn't
stop it!
As a harder example, imagine I wanted to find the Fibonacci sequence through 2-digit numbers:
x <- c(0, 1)
i <- 2
while (x[i] < 100) {
  x <- c(x, x[i - 1] + x[i])
  i <- i + 1
}
x
#> [1] 0 1 1 2 3 5 8 13 21 34 55 89 144
While loops are a bit confusing, but we’ll make them fun by playing with the penguins again!
(See last week's lab for more info on the palmerpenguins dataset and the artwork by Allison Horst.)
It’s available in the palmerpenguins package, or we can download it directly here:
penguins <- read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-07-28/penguins.csv')
Challenge: You want to take some penguins home from Antarctica with you. Your plane can only hold 10,000 g of cargo. What is the greatest number of penguins from this dataset that you can take with
you? Write a loop with while() to figure it out.
(Hint: You might want to sort the penguins by size first. There are a couple of ways to do this, one of which is with the arrange() function.) | {"url":"https://intro-to-r-2020.louisahsmith.com/labs/05-lab/","timestamp":"2024-11-14T15:09:34Z","content_type":"text/html","content_length":"27674","record_id":"<urn:uuid:b7bd56ab-dacf-467a-8568-0fc78a02cc7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00035.warc.gz"} |
Steven Leth, Professor Emeritus, Mathematical Sciences, Natural and Health Sciences
Contact Information
• Ph.D., Mathematics, University of Colorado (1985)
• M.S., Mathematics, Stanford University (1978)
• B.S., Mathematics and Physics, University of Colorado (1976)
Professional/Academic Experience
• 1996-present, UNC, Professor of Mathematics
• 1991-1996, UNC, Assoc. Professor of Mathematics
• 1988-1991, UNC, Assistant Professor of Mathematics
• 1985-1988, University of Wisconsin, Van Vleck Visiting Assistant Professor of Mathematics
Research/Areas of Interest
My research is in the area of nonstandard analysis, which could be described as a method for applying concepts from logic to other branches of mathematics. I have worked on applying these methods to
obtain results in such areas as combinatorial number theory and continuum theory.
I am also interested in the teaching and learning of mathematics, and have given some talks about ways to enrich the mathematical curriculum at the elementary, middle school, high school and college
level. I have worked with high school teachers in the Adams 14 district and with elementary and middle school teachers in Eagle County and other districts, in a professional development setting.
I would be happy to discuss any of these issues, mathematical or educational, with anyone who is interested.
Publications/Creative Works
• A monad measure space for logarithmic density (with Mauro Di Nasso, Isaac Goldbring, Renling Jin, Martino Lupini and Karl Mahlburg), accepted for publication in Monatshefte für Mathematik,
• Approximate polynomial structure in additively large sets (with Mauro Di Nasso, Isaac Goldbring, Renling Jin, Martino Lupini and Karl Mahlburg), accepted for publication in Integers,
• High density piecewise syndeticity of product sets in amenable groups (with Mauro Di Nasso, Isaac Goldbring, Renling Jin, Martino Lupini and Karl Mahlburg), accepted for publication in the
Journal of Symbolic Logic, November, 2015, arXiv:1505.04701
• An asymptotic formula for powers of binomial coefficients (with Jeff Farmer), The Mathematical Gazette, v. 89 no. 516, 2005, pp. 385-391
• The Use of Writing in mathematics classes: the new imperative The Project Calc Newsletter, December 1994
• Meager Sets on the hyperfinite time line, (with H. J. Keisler), Journal of Symbolic Logic,56 Number 1 (March 1991) pp. 71-102.
• Descriptive set theory over hyperfinite sets, (with H.J. Keisler, K. Kunen and A. Miller), Journal of Symbolic Logic. 54, Number 4 (Dec. 1989) pp. 1167-1180.
• Some nonstandard methods in combinatorial number theory, Studia Logica XLVII 3 (Sept. 1988) pp. 85-98.
• Sequences in countable nonstandard models of the natural numbers, Studia Logica XLVII 3 (Sept. 1988) pp. 63-83.
• Applications of nonstandard models and Lebesgue measure to sequences of natural numbers, Trans. Am. Math. Soc. 307, No. 2 (June 1988) pp. 457-468.
• Some nonstandard methods in geometric topology, in Developments in Nonstandard Mathematics, Pitman Research Notes in Mathematics No.336, Cutland, Neves, Oliveira and Pinto (eds.), Longman; Wiley
and Sons, 1995, pp. 50-60
• A uniqueness condition for sequences, Proc. Am. Math. Soc. 93 (1985) pp. 287-290.
Slides for recent research talks:
Sumsets contained in sets of positive density, University of Denver sesquicentennial Ramsey Theory conference, May 2014
A Lebesgue Density Theorem for nonstandard cuts and an application to additive number theory, Colloquium at the University of Pisa, March 2014
A nonstandard approach to fixed point problems in the plane, Chico Topology Conference, Chico, CA May 2012
Nonstandard methods in continuum theory, Spring Topology Conference, Tyler, TX March 2011
Some questions and answers about “fixed point traps” in the plane, AMS Sectional meeting on nonstandard analysis, Honolulu, HI, March 2012
An example of the use of nonstandard methods in continuum theory, Chico Topology Conference, Chico, CA, May 2010
An introduction to nonstandard methods in the plane | {"url":"https://www.unco.edu/nhs/mathematical-sciences/faculty/leth.aspx","timestamp":"2024-11-14T16:47:36Z","content_type":"text/html","content_length":"30690","record_id":"<urn:uuid:5c09f82e-be44-44cf-9ee2-1a410e519904>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00340.warc.gz"} |
Jen Hom (Georgia Tech), Triangle Topology Seminar - Department of Mathematics
September 27, 2016 @ 2:00 pm - 4:00 pm
Pre-talk: SAS 2229 2-2:45
Title: The knot concordance group
Abstract: The set of knots in S^3 under the operation of connected sum forms a monoid. By quotienting by an equivalence relation called concordance, we obtain the knot concordance group. We will
discuss ways of understanding the structure of this group and introduce some concordance invariants coming from Heegaard Floer theory.
Seminar talk: SAS 2102 3:00-4:00
Title: Knot concordance in homology spheres
Abstract: The knot concordance group C consists of knots in S^3 modulo concordance. We consider C_Z, the group of knots in homology spheres that bound homology balls modulo homology bordisms of
pairs. Matsumoto asked if the natural map from C to C_Z is an isomorphism. Adam Levine answered this question in the negative by showing the map is not surjective. We show that the image of C in C_Z
is of infinite index. This is joint work with Adam Levine and Tye Lidman. | {"url":"https://math.unc.edu/event/triangle-topology-seminar-2/","timestamp":"2024-11-04T20:46:29Z","content_type":"text/html","content_length":"111449","record_id":"<urn:uuid:2ec32d81-1164-4293-ab0f-51f014a37584>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00178.warc.gz"} |
[WSG21] Daily Study Group: Differential Equations (begins November 29)
Let me second Michael again: I love the forum and the exchanges it stimulates!
Now, I looked again at the code I posted at 4:30 am my time (EST), and I did mess up copying parts of it (In[15] and Out[15]), NOT applying the initial condtions. I have now corrected these two lines
in that post. Here is my code again, this time carefully run and copied:
In[1] := eqn = a t^2 y''[t] + b y'[t] + c y[t] == 0;
In[2] := leqn = LaplaceTransform[eqn, t, s] /. {y[0] -> 0, y'[0] -> 1}
Out[2] := c LaplaceTransform[y[t], t, s] + b s LaplaceTransform[y[t], t, s]
+ a LaplaceTransform[t^2 y''[t], t, s] == 0
In[3] := sol = SolveValues[leqn, LaplaceTransform[y[t], t, s]]
Out[3] := {-(a LaplaceTransform[t^2 y''[t], t, s])/(c + b s)}
Note that the term LaplaceTransform[t^2 y'' [t], t, s] is unresolved. Mathematica doesn't know what to do with it. You missed this part again by not printing out the output between your lines
starting with lt2 and sol. I'm using Mathematica 12.3 right now. But I doubt that version 13 can resolve Laplace transform of a product t^2 y'' [t]. Here is what I get:
In[4] := LaplaceTransform[t^2 y''[t], t, s]
Out[4] := LaplaceTransform[t^2 y''[t], t, s]
Which means: "I can't do it!" Please let us know what you get!
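An editorial aside (not part of the original forum post): a plausible reason the CAS leaves LaplaceTransform[t^2 y''[t], t, s] unevaluated is the t-multiplication rule L{t^n g(t)} = (-1)^n d^n G/ds^n. Applied here, it turns the transformed term into a second derivative in s, so the transformed equation is itself a second-order ODE in s rather than an algebraic equation. The rule is easy to sanity-check numerically with only the standard library, e.g. for f(t) = e^(-t), where L{t^2 e^(-t)}(s) = 2/(1+s)^3 (the helper name below is made up for illustration):

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    """Crude numeric Laplace transform: trapezoidal rule for
    integral_0^T f(t) * exp(-s*t) dt, with T chosen so the tail is negligible."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
approx = laplace_numeric(lambda t: t * t * math.exp(-t), s)
exact = 2.0 / (1.0 + s) ** 3  # L{t^2 e^(-t)}(s) = 2/(1+s)^3, here 0.25
print(approx, exact)
```

At s = 1 both values agree to within the quadrature error, confirming the derivative rule for this concrete case.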
Again, thanks, Hakan, for finding this fascinating problem. I can't wait for Luke to weigh in ...
Best, Zbigniew | {"url":"https://community.wolfram.com/groups/-/m/t/2411604?p_p_auth=NWTvd6Kf","timestamp":"2024-11-09T16:29:38Z","content_type":"text/html","content_length":"990199","record_id":"<urn:uuid:3b64405e-165a-459c-9455-26e3167d8dcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00395.warc.gz"} |
Google Code Jam 2013 Qualification
mirhagk Posted: Mon Apr 15, 2013 9:25 pm Post subject: RE:Google Code Jam 2013 Qualification
Wow that is quite an awesome solution Demonwasp.
I don't know how you'd formally prove your assumption, but it does make sense.
You write some really awesome solutions to the problems, they are clear and informative. I hope I've learned some extra tricks for next round now.
Panphobia Posted: Tue Apr 16, 2013 7:56 pm Post subject: RE:Google Code Jam 2013 Qualification
So essentially you are checking all 2 * 3^(n/2-1) permutations of digits 0,1,2 to check for palindromic squares?
DemonWasp Posted: Wed Apr 17, 2013 12:18 pm Post subject: RE:Google Code Jam 2013 Qualification
Sort of. However, there are two problems:
First, for n = 10^100, the recursion will only check 2 * 3 ^ (fourth_root(n)-1) permutations. However, that problem space is still 10^25, which is still too big.
Second, the part that makes it finish before we are consumed by an expanding Sun, is that most of the recursion stops very early: if I know that the half-base '22' (--palindrome-->
'2222' --square-> 4937284) isn't valid, then I don't need to continue building on it (half-bases '220', '221', '222', '2200', ... cannot be valid).
That seriously limits the amount of recursion required. As mentioned, there are only 41551 such fair-and-square numbers between 1 and 10^100, and since recursion is curtailed early,
there will be at most 41551 checks that "pass" and result in recursion, rather than the (very very roughly) 10^25 recursions required without that check.
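To complement the discussion, the small-input case of the problem (count palindromes in [lo, hi] that are squares of palindromes) can be brute-forced directly over the square roots; the pruned half-base recursion described above is what makes bounds like 10^100 feasible. A sketch (function names are illustrative, not from the contest code):

```python
def is_pal(n):
    """A number is a palindrome if its decimal digits read the same reversed."""
    s = str(n)
    return s == s[::-1]

def count_fair_and_square(lo, hi):
    """Count fair-and-square numbers in [lo, hi]: palindromes that are
    squares of palindromes. Brute force over the square roots x."""
    count = 0
    x = 1
    while x * x <= hi:
        if lo <= x * x and is_pal(x) and is_pal(x * x):
            count += 1
        x += 1
    return count

print(count_fair_and_square(1, 100))   # 1, 4, 9 -> 3
print(count_fair_and_square(1, 1000))  # adds 121 and 484 -> 5
```

Note that 676 = 26² is a palindromic square but does not count, since its root 26 is not a palindrome.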
| {"url":"http://compsci.ca/v3/viewtopic.php?t=33571&start=15","timestamp":"2024-11-11T22:48:17Z","content_type":"text/html","content_length":"51699","record_id":"<urn:uuid:747ca48e-242e-49ee-84c3-ff2dcf6b2c28>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00288.warc.gz"}
Laureates 2011 - Prof. Kannan Soundararajan - Mathematical Sciences
Mathematical Sciences
Kannan Soundararajan
Professor of Mathematics and Director, Mathematics Research Center, Stanford University, USA
The Infosys Prize for Mathematical Sciences is awarded to Professor Kannan Soundararajan for his path-breaking work in analytic number theory and the development of new techniques to study critical
values of general zeta functions to prove the Quantum Unique Ergodicity Conjecture for classical holomorphic forms.
Finding order in chaos with number theory
Professor Soundararajan is a top analytic number theorist whose contributions to mathematics are in the great tradition of G.H. Hardy, John Littlewood and Srinivasa Ramanujan. His recent work brings
out the beautiful connections between classical number theory and quantum physics.
The relationship between classical mechanics and their quantum analogs is a problem of great interest to both mathematicians and physicists. Classical systems can be chaotic but still have lots of
periodic orbits. In their quantum versions the distribution of mass in high energy states, could in principle concentrate on either part.
These classical chaotic systems have number theoretic analogs. The Quantum Ergodicity Conjecture of Zeev Rudnick and Peter Sarnak asserts that in these contexts, the high energy states do not
concentrate on the periodic orbits, but spread out evenly. The recent work of Soundararajan and Roman Holowinsky proves the fundamental cases of the conjecture. Their ingenious proof sidesteps the
still unproven Generalized Riemann Hypothesis, establishing instead some carefully crafted consequences of the latter, which are shown to suffice for their application.
Before joining Stanford University in 2006, Professor Kannan Soundararajan was a faculty member at the University of Michigan, where he pursued his undergraduate studies. His main research interest is number
theory, especially L-functions and multiplicative number theory.
Professor Soundararajan was awarded the inaugural Morgan Prize in 1995 for his work in analytic number theory. He got his PhD from Princeton University where he studied under the guidance of
Professor Peter Sarnak. At Princeton, he also held the Sloan Foundation Fellowship.
He has held positions at Princeton University, the Institute of Advanced Study and the University of Michigan. He was awarded the Salem Prize in 2003 "for contributions to the area of Dirichlet
L-functions and related character sums". In 2005, he won, along with Manjul Bhargava, the $10,000 SASTRA Ramanujan Prize for his contributions to number theory.
B.Sc., University of Michigan; Ph.D., Princeton University
Wins the Salem Prize
Wins the SASTRA Ramanujan Prize jointly with mathematician Manjul Bhargava
Wins the Infosys Prize in Mathematical Sciences and the Ostrowski Prize
Elected as Fellow of American Mathematical Society
Prof. Kannan Soundararajan has made fundamental contributions to analytic number theory. These include numerous brilliant breakthroughs in well known and difficult problems, as well as the resolution
of some that have been open for a long time. In particular, his recent development of new unexpected techniques to study the critical values of general zeta functions has led to the proof of the
Quantum Unique Ergodicity Conjecture for classical holomorphic modular forms. Many of the analytic and combinatorial tools that Soundararajan and his collaborators have developed, in works ranging
from prime numbers and sieve methods to character sums and zeta functions, have become standard tools for researchers in these fields.
Prof. Kannan Soundararajan reacts to winning the Infosys Prize
"Hello Soundararajan. I want to congratulate you. The Infosys Science Foundation has chosen you as this year's winner in Mathematics for their Infosys Prize. It's for your recent work on Quantum
Chaos, and the related questions of Unique Ergodicity. It gives me personally great pleasure to take this opportunity to congratulate you."
Srinivasa S.R. Varadhan | {"url":"https://www.infosysprize.org/laureates/2011/kannan-soundararajan.html","timestamp":"2024-11-06T07:18:33Z","content_type":"text/html","content_length":"41501","record_id":"<urn:uuid:daca5fb9-6b21-43e0-91a7-5e45ebcc7b7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00001.warc.gz"} |
Erratum: Gauging van der Waals interactions in aqueous solutions of 2D MOFs: when water likes organic linkers more than open-metal sites (Physical Chemistry Chemical Physics (2021) 23 (3135-3143) DOI: 10.1039/D0CP05923D)
In the originally published article, a few parameters were missing from the force fields affecting the reported results for the classical molecular dynamics simulation data in Section 3.1. (i.e.,
Fig. 2 and Table 1). Specifically, a number of dihedral angles were missing from the force field. Also, errors were found in the implementation of the q-TIP4P/F water potential, which have been fixed. The
simulations were repeated to correct these errors, along with optimisation of the simulation details, as outlined here. The corrected figure and table are reported below. The corresponding
tables of force field parameters along with data related to the MD results have been updated in the ESI. The input and output files of all simulations have also been provided as part of the
Supplementary Materials to the original article. For MD simulations of bulk models of 2D MOFs, larger 2 × 2 × 3 periodic cells, composed of 1512 atoms and 72 metal centers, were used along with a larger
cutoff of 10 Å for treating long-range electrostatic interactions. This is opposed to the 2 × 2 × 2 periodic cells in the original paper with a 6 Å cutoff. The simulations were carried out using our
in-house modified software package coined as DL_POLY Quantum v1.0 which is publicly available through our GitHub page.1 The corrected Fig. 2 is provided below. The trends observed and the related
discussions in the main text on the importance of hydrogen bond formation for the adsorption of water are not changed. The corrected Table 1 is given below. To determine the orientational relaxation
time (τreor), we used a bi-exponential function2 as in: (Formula Presented) This is opposed to the single exponential (i.e., eqn (3)) used in the original article. The related graphs are reported in
Fig. S11 of the updated Supplementary Materials. For the new data from the bi-exponential function, the final relaxation times were calculated from the weighted average of the fitting parameters as:
(Formula Presented) The first 100 fs of the simulations, which correspond to the fast librational motion of the water molecules,2 were excluded from these fits. The mean square displacement (MSD)
plots for calculating diffusion coefficients (D) are updated in Fig. S12 and S13 of the new Supplementary Materials. The reported trends and main conclusions on the freer nature of water in
Cu3(HTTP)2 compared to Cu3(HHTP)2 remain unchanged. The noticeable difference of the new results compared to the original data is the increasing trend of both D[z] and D[xy] with respect to water
concentration which is in line with the increasing trend of D[tot]. The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers. (Figure
Presented) (Table Presented).
All Science Journal Classification (ASJC) codes
• General Physics and Astronomy
• Physical and Theoretical Chemistry
Dive into the research topics of 'Erratum: Gauging van der Waals interactions in aqueous solutions of 2D MOFs: when water likes organic linkers more than open-metal sites (Physical Chemistry Chemical
Physics (2021) 23 (3135-3143) DOI: 10.1039/D0CP05923D)'. Together they form a unique fingerprint. | {"url":"https://researchwith.njit.edu/en/publications/erratum-gauging-van-der-waals-interactions-in-aqueous-solutions-o","timestamp":"2024-11-10T15:03:05Z","content_type":"text/html","content_length":"58049","record_id":"<urn:uuid:1a790f19-26d3-4342-87d8-91559bd4a061>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00410.warc.gz"} |
Rotational Motion MCQs [PDF] Questions Answers | Rotational Motion MCQ App Download & e-Book: Test 1
Engineering Physics MCQs - Chapter 28
Rotational Motion Multiple Choice Questions (MCQs) PDF Download - 1
The Rotational Motion Multiple Choice Questions (MCQs) with Answers PDF (Rotational Motion MCQs PDF e-Book) download Ch. 28-1 to study Engineering Physics Course. Practice Rotational Inertia of
Different Objects MCQs, Rotational Motion trivia questions and answers PDF for online engineering programs. The Rotational Motion MCQs App Download: Free learning app for precession of a gyroscope,
angular momentum, yo-yo career test to learn online courses.
Rotational Motion MCQs with Answers PDF Download: MCQ Quiz 1
MCQ 1:
If M is the mass of an object and L is its length, then the rotational inertia of a thin rod about an axis through its center, perpendicular to its length, is
1. 1/2 ML
2. 1/2 ML^2
3. 1/12 ML^2
4. 1/4 ML^2
MCQ 2:
A spinning gyroscope can precess about a vertical axis through its support at the rate of
1. Mgr/I Ω
2. MgrI Ω
3. Mg/rI
4. Mg/ Ω
MCQ 3:
If M is the mass of an object and R is its radius, then the rotational inertia of a hoop about any diameter is
1. 2/5 MR^2
2. 2/3 MR^2
3. 2/3 MR^2
4. 1/2 MR^2
MCQ 4:
The angular momentum of a particle w.r.t. origin O is a vector quantity, defined as
1. m.r.v
2. m(r.v)
3. m x (r.v)
4. m(rxv)
MCQ 5:
The potential energy of a yo-yo is equal to
1. mgcosθ
2. mgh
3. m/gh
4. gh/m
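As a quick numerical check of the thin-rod formula in MCQ 1, the result I = (1/12) ML^2 can be verified by summing dm·x^2 over small slices of the rod (the mass and length values below are arbitrary):

```python
# Numerical check of I = (1/12) M L^2 for a thin uniform rod rotating
# about an axis through its center, perpendicular to its length.
def rod_inertia(M, L, n=100_000):
    dm = M / n                        # mass of each slice
    dx = L / n                        # width of each slice
    total = 0.0
    for i in range(n):
        x = -L / 2 + (i + 0.5) * dx   # slice midpoint, measured from the center
        total += dm * x * x           # contribution dm * x^2
    return total

M, L = 2.0, 3.0                       # arbitrary test values
print(rod_inertia(M, L))              # close to M * L**2 / 12 = 1.5
```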
danker 0.8.1
danker 0.8.1¶
danker is a light-weight package/module to compute PageRank on very large graphs with limited hardware resources. The input format is an edge list of the following form:
The nodes can be denoted as strings or integers. However, depending on the size of the graph and the amount of available memory, you may have to index string nodes as integers and map back afterwards:
# link file
# index file
1 A
2 B
3 C
4 D
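A minimal sketch of that indexing step might look like the following (the in-memory edge list here is a hypothetical stand-in for reading the tab-separated link file):

```python
# Replace string node labels with integer ids for 'int_only' mode.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]  # hypothetical links

index = {}

def node_id(name):
    # Assign ids 1, 2, 3, ... in order of first appearance.
    if name not in index:
        index[name] = len(index) + 1
    return index[name]

int_edges = [(node_id(a), node_id(b)) for a, b in edges]

# The index maps ids back to labels, matching the index file shown above.
for label, i in sorted(index.items(), key=lambda kv: kv[1]):
    print(i, label, sep="\t")
```

After the PageRank run, the same index can be used to map the integer results back to the original string labels.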
The computation can then be run in 'int_only' mode, which consumes less memory. A prerequisite for danker is that the input file is sorted. For this, you can use the Linux sort command. If you want to use the danker_smallmem() option, you need two copies of the same link file: one sorted by the left column and one sorted by the right column:
# Sort by left column
sort --key=1,1 -o output-left link-file
# Sort by right column
sort --key=2,2 -o output-right link-file
The init() function is used to initialize the PageRank computation. The following code shows a minimal example for computing PageRank with the danker_bigmem() option (right-sorted file not needed):
import danker
start_value, iterations, damping = 0.1, 40, 0.85
pr_dict = danker.init("output-left", start_value, False, False)
pr_out = danker.danker_bigmem(pr_dict, iterations, damping)
result_loc = (iterations % 2) + 1
for i in pr_out:
    print(i, pr_out[i][result_loc], sep='\t')
The following code shows a minimal example for computing PageRank with the danker_smallmem() option:
import danker
start_value, iterations, damping = 0.1, 40, 0.85
pr_dict = danker.init("output-left", start_value, True, False)
pr_out = danker.danker_smallmem(pr_dict, "output-right", iterations, damping, start_value)
result_loc = (iterations % 2) + 1
for i in pr_out:
    print(i, pr_out[i][result_loc], sep='\t')
exception danker.InputNotSortedException(file_name, line1, line2)¶
Custom exception thrown in case the input file is not correctly sorted.
☆ file_name – The name of the file that is not correctly sorted.
☆ line1 – The line that should be first (but is not).
☆ line2 – The line that should be second (but is not).
danker.danker_bigmem(dictionary, iterations, damping)¶
Compute PageRank with big memory option.
☆ dictionary – Python dictionary created with init() (smallmem set to False).
☆ iterations – The number of PageRank iterations.
☆ damping – The PageRank damping factor.
The same dictionary that was created by init(). The keys are the nodes of the graph. The output score is located at the (iterations % 2) + 1 position of the respective list (that is the value
of the key).
danker.danker_smallmem(dictionary, right_sorted, iterations, damping, start_value, int_only)¶
Compute PageRank with right sorted file.
☆ dictionary – Python dictionary created with init() (smallmem set to True).
☆ right_sorted – The same tab-separated link file that was used for init() sorted by the right column.
☆ iterations – The number of PageRank iterations.
☆ damping – The PageRank damping factor.
☆ start_value – The PageRank starting value (same as was used for init()).
☆ int_only – Boolean flag in case the graph contains only integer nodes.
The same dictionary that was created by init(). The keys are the nodes of the graph. The output score is located at the (iterations % 2) + 1 position of the respective list (that is the value
of the key).
danker.init(left_sorted, start_value, smallmem, int_only)¶
This function creates the data structure for PageRank computation by indexing every node. Main indexing steps include setting the starting value as well as counting the number of outgoing links
for each node.
☆ left_sorted – A tab-separated link file that is sorted by the left column.
☆ start_value – The PageRank starting value.
☆ smallmem – This value is interpreted as a boolean that indicates whether the indexing should be done for danker_smallmem() (file iteration) or danker_bigmem() (in-memory). Default is
☆ int_only – Boolean flag in case the graph contains only integer nodes.
Dictionary with each key referencing a node. The value is a list with the following contents - depending on the smallmem parameter and the intended use:
☆ danker_bigmem() [link_count:int, start_value:float, start_value:float, linked_pages:list]
☆ danker_smallmem() [link_count:int, start_value:float, start_value:float, touched_in_1st_iteration:boolean]
8th Grade Free Algebra Worksheets
Printable eighth grade grade 8 worksheets tests and activities. 8th grade math worksheets as a quick supplement for your instruction use our printable 8th grade math worksheets to provide practice
problems to your students.
Printable in convenient pdf format.
8th grade free algebra worksheets. First things first: prioritize major topics with our printable compilation of 8th grade math worksheets with answer keys. Writing reinforces maths learnt. You've
come to the right place.
These worksheets are printable pdf exercises of the highest quality. Free 8th grade math worksheets and games including pre algebra algebra 1 and test prep. This will take you to the individual page
of the worksheet.
Students work towards mastery with the basic order of operations. These math worksheets for children contain pre-algebra and algebra exercises suitable for preschool, kindergarten, and first grade to eighth graders; free pdf worksheets. 6th grade math worksheets: the following algebra topics are covered, among others. Some of the worksheets for this concept are: pre-algebra diagnostic pre-test (50 questions, 60 minutes), grade 8 mathematics practice test, 8th grade algebra summer packet, parent and student study guide workbook, grade 7 pre-algebra end of the year test, and algebra diagnostic pre-test (50 questions, 60 minutes).
Our worksheets use a variety of high-quality images and some are aligned to Common Core standards. You will then have two choices. Eighth grade math worksheets (grade 8, ages 13 to 14): math in the 8th grade begins to prove more substantial as far as long-range skills students will use and need.
Free algebra 1 worksheets created with infinite algebra 1. First start with our printables page where you ll find lots of worksheets organized by topic take a look at eighth grade math science
language arts and social studies scroll down to find eighth grade. Family education network has free worksheets games and activities for your eighth grader.
Click on the free 8th grade math worksheet you would like to print or download. What your eighth grader should know. All worksheets created with infinite algebra 1.
Test and worksheet generators for math teachers. Algebra worksheets printable. 8th grade pre algebra practice test displaying top 8 worksheets found for this concept.
Pursue conceptual understanding of topics like number systems expressions and equations work with radicals and exponents solve linear equations and inequalities evaluate and compare functions
understand similarity and congruence know and apply the pythagorean theorem. Print our eighth grade grade 8 worksheets and activities or administer them as online tests. Easily download and print our
8th grade math worksheets.
Worksheets labeled with are accessible to help teaching pro subscribers only. Free 8th grade math worksheets for teachers parents and kids. Expressions function tables probability as begin to work at
the core of this grade level.
Two-Interval Musical Scales and Binary Structures in Computer Science and Biology
From ancient times, understanding the phenomenon of music and building musical structures were associated with mathematics. This report analyses the relation of music to binary structures, examples
of which are the binary number system, widely used in computer calculations and informatics, and noise-immune coding of signals using dyadic groups. Principles of binary opposition (or
yin-yang principles) permeate living matter at different levels of its organization. Specific examples of this are the complementary pairs of nitrogenous bases of DNA molecules of heredity; division
of the alphabet of these nitrogenous bases into pairs of purine-pyrimidine; organization of muscular movement on the base of muscle pairs of flexor-extensor; pairs of male-female, which give life to
new generations, etc. In the field of musical culture, the binary principle is realized, in particular, in the existence of two-interval musical scales.
Main part
This report focuses on the analysis of the development of two-interval musical scales on the basis of the well-known algorithm of Pythagoras. For the author, the starting points were the known
two-interval musical scales: the Pythagorean musical scale and the so-called pentagram scales (or Fibonacci-stage scales) from [1, 2].
Mathematical constructs of such musical scales are based on the Pythagorean algorithm that uses a geometric progression with special coefficients of the progression. For example, in the case of the
Pythagorean musical scale, this algorithm uses a quint coefficient of 3/2 for the progression, which leads to the construction of the sequence of notes do-re-mi-fa-sol-la-si-do on the interval of
frequencies {1, 2} of one octave, based on the following algorithmic steps:
1. Taking the first seven members of such a geometrical progression with the quint factor 3/2, which begins from the inverse value of the quint: (3/2)^-1, (3/2)^0, (3/2)^1, (3/2)^2, (3/2)^3, (3/2)^4, (3/2)^5;
2. Returning into the octave interval {1, 2} for those members of this sequence, values of which overstep the limits of this interval; this returning is made for these values by means of their
multiplication or division by the number 2. As a result of this operation, a new sequence appears (this sequence can be named "the geometrical progression with returning into the
octave"): 2*(3/2)^-1, (3/2)^0, (3/2)^1, (3/2)^2/2, (3/2)^3/2, (3/2)^4/4, (3/2)^5/4;
3. The permutation of these seven members in accordance with their increasing values from 1 up 2 (the number 2 is included in this sequence as the end of the octave): (3/2)^0, (3/2)^2/2, (3/2)^4/4,
2*(3/2)^-1, (3/2)^1, (3/2)^3/2, (3/2)^5/4, 2.
In this last sequence, the ratio of the greater number to the adjacent smaller number is referred to as the interval factor. Only two kinds of interval factor exist in this sequence: 9/8, which is named
the tone-interval T, and 256/243, which is named the semitone-interval S. One can check that the sequence of interval factors in this case is T-T-S-T-T-T-S. These five tone-intervals and two
semitone-intervals cover the octave precisely: (9/8)^5 * (256/243)^2 = 2. If one takes not 7, but 6 or 8 members in the initial quint geometrical progression (see the first step of the algorithm),
then the same Pythagorean algorithm does not give a binary sequence of interval factors T and S because three kinds of interval factor arise.
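The three steps, together with the interval-factor check, can be sketched in a few lines; exact fractions make the two interval kinds T = 9/8 and S = 256/243 directly visible (this is an illustrative reimplementation, not the author's program):

```python
from fractions import Fraction

def pythagorean_scale(ratio=Fraction(3, 2), n=7):
    # Step 1: members ratio^-1, ratio^0, ..., ratio^(n-2) of the progression.
    members = [ratio ** k for k in range(-1, n - 1)]
    # Step 2: return each member to the octave interval [1, 2)
    # by repeated multiplication or division by 2.
    in_octave = []
    for m in members:
        while m >= 2:
            m /= 2
        while m < 1:
            m *= 2
        in_octave.append(m)
    # Step 3: sort by increasing value and close the octave with 2.
    scale = sorted(in_octave) + [Fraction(2)]
    # Interval factors: ratio of each member to its predecessor.
    intervals = [b / a for a, b in zip(scale, scale[1:])]
    return scale, intervals

scale, intervals = pythagorean_scale()
labels = ["T" if i == Fraction(9, 8) else "S" for i in intervals]
print("-".join(labels))  # T-T-S-T-T-T-S
```

Running the same function with a different ratio or a different number of members reproduces the three-interval cases the text mentions.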
If the coefficient of the progression in the Pythagorean algorithm is equal not to 3/2 but to the square of the golden section ((1+5^0.5)/2 = 1.618...), then (with a certain number of members of the
initial geometric progression) special two-interval scales are formed, which are called "genetic" by virtue of their relationship with the parameters of the molecular genetic system [1, 2].
In this report, the author represents his mathematical theory, which allows to determine for what values of the coefficients of a geometric progression the Pythagorean algorithm generates
two-interval scales for certain number of members in the initial progression. The author shows that some well-known in the history of music two-interval musical scales from different historical
periods are algorithmically related because they are based on the same Pythagorean algorithm (the difference between them is determined only by differences in the values of their algorithmic
parameters). In this theory, a theorem has been proved that only three kinds of scales can be created on the base of the algorithm of Pythagoras: one-interval (rare), two-interval (are relatively
regularly) and three-interval (mostly).
The author has also analyzed a logarithmic representation of the algorithm of Pythagoras and he has created a convenient graphical method for the analysis of musical scales, generated by the
algorithm for different values of its parameters. He has solved the problem of automation and visual analysis such algorithmic processes of creation of two-interval scales. The solution to this
problem is based on the writing of a specialized computer program and a visual representation of the family of two-interval scales with different numbers of stages in the form of concentric
circles, such as the following:
(see PDF version for Figure).
For the algorithm of Pythagoras, the inverse problem has also been solved: knowing the sequence of values of the number of stages inside two-interval scales, which are nested one inside the other, one
can determine the appropriate multiplying factor. In connection with the solution of this problem, the author has developed a theory of the "Pascal's fractal": a geometric tree of numerical
structure with a recursive organization, in which each number is formed as the sum of the two numbers above it (similar to Pascal's triangle).
Music is widely used in today's global connections among people and nations. The development of methods and means of musical culture through an in-depth understanding of the fundamentals of musical
scales can contribute to positive effects of musical influences on society and its members, including possibilities of music therapy.
1. Petoukhov S.V. Matrix genetics, algebras of the genetic code, noise-immunity. RCD: Moscow, Russia, 2008, 316 p. (in Russian, http://petoukhov.com/)
2. Darvas G., Koblyakov A., Petoukhov S., Stepanyan I. Symmetries in molecular-genetic systems and musical harmony. Symmetry: Culture and Science, 2012, vol. 23, № 3-4, p. 343-375.
Keywords: musical scale, binary structures
Type A and Type B Uncertainty: Evaluating Uncertainty Components
Type A and Type B uncertainty are two elements that are commonly discussed in estimating measurement uncertainty.
Uncertainty type is covered in most measurement uncertainty guides and uncertainty training courses. Auditors review uncertainty budgets to make sure the components are categorized correctly.
However, have you ever looked at most of the information published on Type A and Type B uncertainty?
It’s very minimal. No one covers the topic of uncertainty type as well as the GUM. There is so much information left out of other guides and training.
It might be the reason why most people only evaluate type B uncertainty with a rectangular distribution when there are so many more realistic options.
Why are other options omitted?
In this guide, I am going to teach you all about Type A and Type B uncertainty as explained in the GUM. However, I am going explain in a manner that doesn’t require you to have a PhD.
So, if you want to learn how to calculate uncertainty, make sure to read this guide to learn everything you need to know about Type A and Type B uncertainty.
Before you learn about uncertainty type classifications, it’s a good idea to know more about why they exist and where they came from.
In 1980, the CIPM Recommendation INC-1 suggested that measurement uncertainty components should be grouped into two categories; Type A and Type B.
Below is an excerpt from the Vocabulary in Metrology;
“In the CIPM Recommendation INC-1 (1980) on the Statement of Uncertainties, it is suggested that the components of measurement uncertainty should be grouped into two categories, Type A and Type B,
according to whether they were evaluated by statistical methods or otherwise, and that they be combined to yield a variance according to the rules of mathematical probability theory by also treating
the Type B components in terms of variances. The resulting standard deviation is an expression of a measurement uncertainty. A view of the Uncertainty Approach was detailed in the Guide to the
expression of uncertainty in measurement (GUM) (1993, corrected and reprinted in 1995) that focused on the mathematical treatment of measurement uncertainty through an explicit measurement model
under the assumption that the measurand can be characterized by an essentially unique value. Moreover, in the GUM as well as in IEC documents, guidance is provided on the Uncertainty Approach in the
case of a single reading of a calibrated instrument, a situation normally met in industrial metrology.” – VIM 2012
As you can see, the VIM gives a great explanation and recommends that you read the GUM for more details.
Here is an excerpt from the Guide to the Expression of Uncertainty in Measurement;
“3.3.4 The purpose of the Type A and Type B classification is to indicate the two different ways of evaluating uncertainty components and is for convenience of discussion only; the classification is
not meant to indicate that there is any difference in the nature of the components resulting from the two types of evaluation. Both types of evaluation are based on probability distributions (C.2.3),
and the uncertainty components resulting from either type are quantified by variances or standard deviations.” – JCGM 100
For more information on the CIPM recommendation INC-1 (1980), go to iso.org. The text is in French but can be easily translated with tools like Google Translate.
Now that you have read the VIM and the GUM, you can understand that the use of uncertainty types (i.e. A & B) are to help you quickly determine how the data was evaluated.
If you continue to read the GUM, it will teach the difference between Type A and Type B uncertainty. See the excerpt below.
“3.3.5 The estimated variance u² characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically
estimated variance s² (see 4.2). The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u², is thus u = s and for convenience is sometimes called a Type A standard
uncertainty. For an uncertainty component obtained from a Type B evaluation, the estimated variance u² is evaluated using available knowledge (see 4.3), and the estimated standard deviation u is
sometimes called a Type B standard uncertainty.” – JCGM 100
From the excerpt above, you can determine two things;
• Type A uncertainty is calculated from a series of observations,
• Type B uncertainty is evaluated using available information.
Furthermore, the GUM provides you with information about the probability distributions for each uncertainty type.
“Thus a Type A standard uncertainty is obtained from a probability density function (C.2.5) derived from an observed frequency distribution (C.2.18), while a Type B standard uncertainty is obtained
from an assumed probability density function based on the degree of belief that an event will occur [often called subjective probability (C.2.1)]. Both approaches employ recognized interpretations of
probability.” – JCGM 100
Type A uncertainty is characterized by the observed frequency distribution which means that you should look at the histogram to find the correct probability distribution.
Following the Central Limit Theorem, the more samples that you collect, the more the data will begin to resemble a normal distribution. Here is a link to an amazing video on the Central Limit Theorem
. I recommend that you watch it.
On the other hand, Type B uncertainty is characterized using an assumed probability distribution based on available information. Without the original data or a histogram, you are left to determine
how the data is characterized based on your information sources.
Most of the time, you are not given much information. Therefore, people typically assume a rectangular distribution.
However, there are plenty of other ways for you to evaluate Type B uncertainty data that no one ever references; not even in the best guides to estimating uncertainty.
Today, I am going to cover everything that you need to know about Type A and Type B uncertainty. Look at the list below to see what is covered in this guide.
1. What is Type A Uncertainty
2. Evaluation of Type A Uncertainty
3. Examples of Evaluating Type A Uncertainty
4. What is Type B Uncertainty
5. Evaluation of Type B Uncertainty
6. Examples of Evaluating Type B Uncertainty
7. Difference Between Type A and Type B Uncertainty
8. How to Choose Type A or Type B
What is Type A Uncertainty
According to the Vocabulary in Metrology (VIM), Type A Uncertainty is the “evaluation of a component of measurement uncertainty by a statistical analysis of measured quantity values obtained under
defined measurement conditions.”
In the Guide to the Expression of Uncertainty in Measurement (GUM), Type A evaluation of uncertainty is defined as the method of evaluation of uncertainty by the statistical analysis of series of
Essentially, Type A Uncertainty is data collected from a series of observations and evaluated using statistical methods associated with the analysis of variance (ANOVA).
So, if you collect repeated samples of similar measurement results and evaluate them by calculating the mean, standard deviation, and degrees of freedom, your uncertainty component would be classified
as Type A uncertainty.
Evaluation of Type A Uncertainty
For most cases, the best way to evaluate Type A uncertainty data is by calculating the;
• Arithmetic Mean,
• Standard Deviation, and
• Degrees of Freedom
Arithmetic Mean
When performing a series of repeated measurements, you will want to know the average value of your sample set.
This is where the arithmetic mean equation can help you evaluate Type A uncertainty. You can use the value later to predict the expected value of future measurement results.
The central number of a set of numbers, calculated by adding the quantities together and then dividing by the total number of quantities.
How to Calculate
1. Add all the values together.
2. Count the number of values.
3. Divide step 1 by step 2.
Standard Deviation
When performing a series of repeated measurements, you will also want to know the average variance of your sample set.
Here, you will want to calculate the standard deviation. It is the most common Type A evaluation used in uncertainty analysis.
So, if there were only one function to learn, this would be the one to focus your attention on.
A measure of the dispersion of a set of data from its mean (i.e. average).
How to Calculate
1. Subtract each value from the mean.
2. Square each value in step 1.
3. Add all of the values from step 2.
4. Count the number of values and subtract 1 from it.
5. Divide step 3 by step 4.
6. Calculate the Square Root of step 5.
Degrees of Freedom
After calculating the mean and standard deviation, you need to determine the degrees of freedom associated with your sample set.
It is an important value that most people neglect to calculate. Even most guides on measurement uncertainty forget to include it in their text. However, the GUM does not forget to mention it.
In fact, in section 4.2.6, the GUM recommends that you should always include the degrees of freedom when documenting Type A uncertainty evaluations.
I agree.
I always include the degrees of freedom when evaluating Type A data and in my uncertainty budgets.
You can also use it to estimate confidence intervals and coverage factors.
The number of values in the final calculation of a statistic that are free to vary.
How to Calculate
1. Count the number of values in the sample set.
2. Subtract 1 from the value in step 1.
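Putting the three evaluations together, a short script might look like this (the readings are hypothetical):

```python
import statistics

# Hypothetical repeatability data: ten repeated readings of one quantity.
readings = [10.02, 10.05, 9.98, 10.01, 10.03, 9.99, 10.04, 10.00, 10.02, 10.01]

mean = statistics.mean(readings)   # arithmetic mean
s = statistics.stdev(readings)     # sample standard deviation (n - 1 divisor)
dof = len(readings) - 1            # degrees of freedom

print(f"mean = {mean:.4f}, s = {s:.4f}, dof = {dof}")
```

The standard deviation s and the degrees of freedom are the two values you would carry into an uncertainty budget.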
Example of Evaluating Type A Uncertainty
To give you an example of evaluating Type A uncertainty data, I am going to show you two common scenarios people encounter when estimating measurement uncertainty.
• Single Repeatability Test, and
• Multiple Repeatability Tests
Single Repeatability Test
Imagine you are estimating uncertainty in measurement and need to obtain some Type A data. So, you perform a repeatability test and collect a series of repeated measurements.
Now that you have collected data, you need to evaluate it. Therefore, you calculate the mean, standard deviation, and the degrees of freedom.
Next, you add the standard deviation and degrees of freedom to your uncertainty budget for repeatability.
Multiple Repeatability Tests
In this scenario, let’s imagine you are estimating measurement uncertainty for a measurement system that is critical to your laboratory. Try to think of a reference standard that you own.
It is so important that you perform a repeatability test for this system every month and document the results.
Your records have the mean, standard deviation, and degrees of freedom listed for each month.
With so much Type A data, you are probably wondering, “Which results do I include in my uncertainty budget?”
The answer is all of them; or, at least, the last twelve months.
To evaluate your Type A uncertainty data, you will want to use the method of pooled variance. It is the best way to combine or pool your standard deviations.
After performing this analysis, you will want to add the pooled standard deviation to your uncertainty budget for repeatability.
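As a sketch of the pooled-variance method (the function and the monthly numbers are illustrative, not from the article), each result contributes its variance weighted by its degrees of freedom:

```python
import math

def pooled_std_dev(results):
    """Pool repeatability results: results is a list of
    (standard_deviation, degrees_of_freedom) pairs, one per test."""
    num = sum(dof * s ** 2 for s, dof in results)  # weighted sum of variances
    den = sum(dof for _, dof in results)           # total degrees of freedom
    return math.sqrt(num / den)

# e.g. three hypothetical months, each from n = 10 readings (dof = 9)
sp = pooled_std_dev([(0.12, 9), (0.15, 9), (0.10, 9)])
```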
What is Type B Uncertainty
According to the Vocabulary in Metrology (VIM), Type B Uncertainty is the “evaluation of a component of measurement uncertainty determined by means other than a Type A evaluation of measurement uncertainty.”
In the Guide to the Expression of Uncertainty in Measurement (GUM), Type B evaluation of uncertainty is defined as the method of evaluation of uncertainty by means other than the statistical analysis
of series of observations.
Essentially, Type B Uncertainty is data collected from anything other than an experiment performed by you.
Even if you can analyze the data statistically, it is not Type A data if you did not collect it from a series of observations.
Most of the Type B data that you will use to estimate uncertainty will come from;
• Calibration reports,
• Proficiency testing reports,
• Manufacturer’s manuals,
• Datasheets,
• Standard methods,
• Calibration procedures,
• Journal articles,
• Conference papers,
• White papers,
• Industry guides,
• Textbooks, and
• Other available information.
Evaluation of Type B Uncertainty
Since Type B Uncertainty can come from so many different sources, there are a lot of ways that it can be evaluated.
This means that there is a lot of information to cover in this section.
Most of the time, people default to assigning a rectangular distribution to an uncertainty component and using a square root of three divisor to convert quantities to standard uncertainty.
If this describes how you evaluate uncertainty in measurement, go ahead and raise your hand.
The good news is that this will work for 90% of the uncertainty calculations that you will perform in your lifetime. However, there are many more realistic options available for you to use to
evaluate Type B uncertainty.
It depends on whether or not you want to use them.
If you are interested, keep reading. I am going to cover the evaluation methods in the GUM that most measurement uncertainty guides tend to leave out.
“It should be recognized that a Type B evaluation of standard uncertainty can be as reliable as a Type A evaluation”
Manufacturer Specifications & Calibration Reports
In section 4.3.3 of the GUM, the guide gives recommendations for evaluating information published in manufacturer’s specifications and calibration reports.
“4.3.3 If the estimate x[i] is taken from a manufacturer’s specification, calibration certificate, handbook, or other source and its quoted uncertainty is stated to be a particular multiple of a
standard deviation, the standard uncertainty u[(xi)] is simply the quoted value divided by the multiplier, and the estimated variance u^2[(xi)] is the square of that quotient.”
Additionally, in section 4.3.4 of the GUM, the guide gives you more information for evaluating manufacturer specifications.
“4.3.4 The quoted uncertainty of x[i] is not necessarily given as a multiple of a standard deviation as in 4.3.3. Instead, one may find it stated that the quoted uncertainty defines an interval
having a 90, 95, or 99 percent level of confidence (see 6.2.2). Unless otherwise indicated, one may assume that a normal distribution (C.2.14) was used to calculate the quoted uncertainty, and
recover the standard uncertainty of x[i] by dividing the quoted uncertainty by the appropriate factor for the normal distribution. The factors corresponding to the above three levels of confidence
are 1,64; 1,96; and 2,58 (see also Table G.1 in Annex G).”
If the uncertainty is reported to a particular confidence interval (e.g. 95%), use the associated coverage factor to convert to standard uncertainty.
In the image below is an excerpt from the Fluke 5700A datasheet. You should notice that the specifications are stated for both 95% and 99% confidence intervals.
To find the standard uncertainty, simply divide the published uncertainty by the coverage factor (k) that is associated with the confidence interval stated in the specifications.
If the confidence level is not provided in the specifications (most of the time it is not provided), it is best to assume that it is given to a 95% confidence interval. Only assume a 99% confidence
interval if it is stated.
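Using the normal-distribution coverage factors the GUM lists (1.64, 1.96, and 2.58 for 90%, 95%, and 99%), the conversion is a single division; the function name and defaults below are my own:

```python
# Coverage factors for a normal distribution, per GUM 4.3.4
K_NORMAL = {0.90: 1.64, 0.95: 1.96, 0.99: 2.58}

def standard_uncertainty(expanded_u, confidence=0.95):
    """Divide a quoted uncertainty by the coverage factor for its stated
    confidence level; assume 95% when none is stated."""
    return expanded_u / K_NORMAL[confidence]

u = standard_uncertainty(2.0)           # U = 2.0 units, assumed 95%
u99 = standard_uncertainty(2.58, 0.99)  # U = 2.58 units at 99%
```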
PRO TIP: Next time your auditor suggests that you should evaluate the manufacturer’s accuracy or uncertainty specifications with a rectangular distribution, please refer them to read sections 4.3.3
and 4.3.4 of the GUM.
50/50 Chance of Occurrence
In section 4.3.5 of the GUM, the guide tells you how to evaluate Type B uncertainty when you believe that there is a 50% chance of occurrence. Per the quoted text below, you multiply the half-width of the interval, a, by 1.48.
Therefore, you would use the following equation to convert to standard uncertainty.
“4.3.5 Consider the case where, based on the available information, one can state that “there is a fifty-fifty chance that the value of the input quantity X[i] lies in the interval a[−] to a[+]” (in
other words, the probability that X[i] lies within this interval is 0,5 or 50 percent). If it can be assumed that the distribution of possible values of X[i] is approximately normal, then the best
estimate x[i] of X[i] can be taken to be the midpoint of the interval. Further, if the half-width of the interval is denoted by a = (a[+] − a[−])/2, one can take u[(xi)] = 1,48a, because for a normal
distribution with expectation μ and standard deviation σ the interval μ ± σ /1,48 encompasses approximately 50 percent of the distribution.”
If you are confused, do not worry. This is not a common occurrence.
I have never encountered a situation where I have had to use this technique to evaluate Type B uncertainty. Most likely, you will never use it either unless you are performing measurements that can only
have two possible outcomes.
2/3 Chance of Occurrence
In section 4.3.6 of the GUM, the guide tells you how to evaluate Type B uncertainty when you believe that there is approximately a 67% chance of occurrence. The guide recommends that you divide the
half-width of the interval by 1, because 67% is close to the confidence interval covered by one standard deviation, 68.3%.
Therefore, you would use the following equation to convert to standard uncertainty.
“4.3.6 Consider a case similar to that of 4.3.5 but where, based on the available information, one can state that “there is about a two out of three chance that the value of X[i] lies in the interval
a[−] to a[+]” (in other words, the probability that X[i] lies within this interval is about 0,67). One can then reasonably take u[(xi)] = a, because for a normal distribution with expectation μ and
standard deviation σ the interval μ ± σ encompasses about 68,3 percent of the distribution.”
Similar to the 50/50 chance of occurrence, this is not a common evaluation.
I have never encountered a situation where I have had to use this technique to evaluate Type B uncertainty. Most likely, you will never use it either.
Only Upper and Lower Limits
In section 4.3.7 of the GUM, the guide tells you how to evaluate type B uncertainty when you believe that there is a 100% chance that the value will be between the upper and lower limit.
“4.3.7 In other cases, it may be possible to estimate only bounds (upper and lower limits) for X[i], in particular, to state that “the probability that the value of Xi lies within the interval a− to
a+ for all practical purposes is equal to one and the probability that X[i] lies outside this interval is essentially zero”. If there is no specific knowledge about the possible values of X[i] within
the interval, one can only assume that it is equally probable for X[i] to lie anywhere within it (a uniform or rectangular distribution of possible values — see 4.4.5 and Figure 2 a). Then x[i], the
expectation or expected value of X[i], is the midpoint of the interval, x[i] = (a[−] + a[+])/2, with associated variance…”
In this scenario, the guide recommends that you assign a rectangular distribution and divide the interval by the square root of 12 or the square root of 3.
If you are working with the full width of the interval, a[+] − a[−], divide it by the square root of 12.
If you are working with the half-width a (so the full interval is 2a), divide a by the square root of 3.
If you are not sure how to evaluate the interval, divide the half-width by the square root of 3. It is more likely to be the correct evaluation method.
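The two rectangular-distribution divisors are equivalent once you keep track of whether you hold the full interval width or the half-width; a quick sketch with illustrative limits:

```python
import math

def u_rect_full_width(width):
    """Full interval width (upper limit minus lower limit) over sqrt(12)."""
    return width / math.sqrt(12)

def u_rect_half_width(a):
    """Half-width a over sqrt(3)."""
    return a / math.sqrt(3)

u1 = u_rect_full_width(0.2)  # e.g. limits of +/-0.1 give a width of 0.2
u2 = u_rect_half_width(0.1)  # same interval expressed as half-width
```

Both calls return the same standard uncertainty, which is why either divisor works.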
Asymmetrical Limits
Every once in a while, you may encounter specifications or data that is not symmetrically distributed. This means that the limits are not equal for both the upper and lower limits.
“4.3.8 In 4.3.7, the upper and lower bounds a[+] and a[−] for the input quantity X[i] may not be symmetric with respect to its best estimate x[i]; more specifically, if the lower bound is written as
a[−] = x[i] − b[−] and the upper bound as a[+] = x[i] + b[+], then b[−] ≠ b[+]. Since in this case x[i] (assumed to be the expectation of X[i]) is not at the centre of the interval a[−] to a[+], the
probability distribution of X[i] cannot be uniform throughout the interval. However, there may not be enough information available to choose an appropriate distribution; different models will lead to
different expressions for the variance. In the absence of such information, the simplest approximation is…”
For example, the upper limit could be a greater distance from nominal than the lower limit. Look at the image below to see Grade 2 specifications for gage block in accordance with the GGG
If you notice, the upper and lower limits are not equal in magnitude. Therefore, they are asymmetrical.
When you encounter this type of scenario, the GUM recommends the following instructions to evaluate Type B uncertainty;
If your limits are asymmetrical, subtract the lower limit from the upper limit and divide the result by the square root of 12.
Equal Probability
Now, if you know a thing or two about statistics, then you know that a rectangular distribution is used when all chances of occurrence are equally probable.
However, you probably did not know that you could also use a trapezoidal distribution.
If you did, great. If not, read section 4.3.9 of the GUM.
“4.3.9 In 4.3.7, because there was no specific knowledge about the possible values of Xi within its estimated bounds a− to a+, one could only assume that it was equally probable for X[i] to take any
value within those bounds, with zero probability of being outside them. Such step function discontinuities in a probability distribution are often unphysical. In many cases, it is more realistic to
expect that values near the bounds are less likely than those near the midpoint. It is then reasonable to replace the symmetric rectangular distribution with a symmetric trapezoidal distribution
having equal sloping sides (an isosceles trapezoid), a base of width a[+] − a[−] = 2a, and a top of width 2aβ, where 0 < β < 1. As β → 1, this trapezoidal distribution approaches the rectangular
distribution of 4.3.7, while for β = 0, it is a triangular distribution [see 4.4.6 and Figure 2 b)]. Assuming such a trapezoidal distribution for X[i], one finds that the expectation of X[i] is x[i]
= (a[−] + a[+])/2 and its associated variance is…”
The GUM explains that a rectangular distribution is not always realistic. If you expect values to occur closer to the midpoint and less likely near the limits, then you should use a trapezoidal distribution.
Furthermore, it even provides some additional insight to recommend the use of a triangular distribution.
I think this evaluation of Type B uncertainty is very interesting. It is realistic and practical for most applications where people typically use a rectangular distribution.
However, I do not see it used very often and don’t expect to see many people switching over from rectangular distributions anytime soon.
For those who do, you may enjoy the benefits of a smaller estimate of uncertainty and the additional questioning by your auditors. So, make sure to refer to this section of the GUM to defend using it
in your uncertainty budgets.
Another good resource is this paper by Howard Castrup. At the bottom of page 15, Howard gives you a good alternative equation for the trapezoidal distribution.
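The GUM 4.3.9 result (variance u² = a²(1 + β²)/6 for half-width a and top-to-base ratio β) reduces to the familiar cases at the extremes; a small sketch with illustrative values:

```python
import math

def u_trapezoidal(a, beta):
    """Standard uncertainty for a symmetric trapezoidal distribution
    with half-width a and top-to-base ratio beta (0 <= beta <= 1)."""
    return a * math.sqrt((1 + beta ** 2) / 6)

a = 1.0
u_rect = u_trapezoidal(a, 1.0)  # beta -> 1 recovers the rectangular a/sqrt(3)
u_tri = u_trapezoidal(a, 0.0)   # beta = 0 is the triangular case, a/sqrt(6)
```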
Double-Counting Uncertainty
In uncertainty analysis, there are two common problems; not considering enough sources of uncertainty in your uncertainty budget and double-counting uncertainty components.
Section 4.3.10 of the GUM warns you of double-counting uncertainty to prevent overstated estimates of measurement uncertainty.
“4.3.10 It is important not to “double-count” uncertainty components. If a component of uncertainty arising from a particular effect is obtained from a Type B evaluation, it should be included as an
independent component of uncertainty in the calculation of the combined standard uncertainty of the measurement result only to the extent that the effect does not contribute to the observed
variability of the observations. This is because the uncertainty due to that portion of the effect that contributes to the observed variability is already included in the component of uncertainty
obtained from the statistical analysis of the observations.”
I see double-counting uncertainty components a lot in calibration uncertainty estimates.
For example, a laboratory considers an “ideal” unit-under-test (i.e. UUT) for UUT resolution in their CMC Uncertainty analysis, then includes the actual UUT resolution when calculating calibration uncertainty.
That’s double-counting; and it happens all of the time.
Even auditors are bad about enticing laboratories to double-count uncertainty components in the very scenario given in the example above.
In fact, I spoke with an assessor this week who wanted to know why the UUT resolution wasn’t included in the CMC Uncertainty calculation. I had to happily refer him to read section 5.4 of the ILAC
Another common example of double-counting is when a laboratory includes uncertainty components that would typically be included in the Type A uncertainty components: repeatability and reproducibility.
The bad news is that it can be difficult to determine whether an uncertainty component is already accounted for in another uncertainty component. This makes double-counting hard to prevent entirely.
Examples of Evaluating Type B Uncertainty
Evaluating Data From Calibration Reports
Evaluating data from your calibration reports is pretty easy as long as you are getting ISO/IEC 17025 accredited calibrations.
Most accredited calibrations report the measurement result and the associated measurement uncertainty. Additionally, the report will tell you the confidence level of the estimated uncertainty;
typically, 95% where k=2.
Therefore, all you need to do is divide the reported uncertainty by the coverage factor (k).
Using the information shown in the calibration report below and the equation given above, you should be able to convert the expanded uncertainty to standard uncertainty.
Simply divide the expanded uncertainty (U) by the coverage factor (k). Your result will be the standard uncertainty.
Evaluating Data From Manufacturer’s Specifications
Evaluating data from manufacturer’s specifications is just as easy as evaluating the data from your calibration reports.
Typically, manufacturer’s specifications can be found in manufacturer manuals, datasheets, catalogs, or other marketing materials.
However, not all manufacturers do their due diligence when publishing specifications. So, you may have to make some assumptions.
Most credible manufacturers publish specifications with an associated confidence interval. In the image below, you will see that Fluke has published specifications for both 95% and 99% confidence intervals.
For this example, let’s focus on the 95% specification to evaluate a 10V signal using the 11V range.
Looking at the 1 Year absolute uncertainty specification for the 11 volt range, the uncertainty for 10 volts is approximately 38 micro-volts.
Using the information shown in the manufacturer’s specification, use the equation given below to convert the expanded uncertainty to standard uncertainty.
Afterward, your evaluation of Type B uncertainty should be approximately 19.4 micro-volts.
Now, you are probably thinking, “What if the manufacturer’s specifications don’t give a confidence interval?”
The answer is, assume it is stated to a 95% confidence interval and evaluate it similar to the example given above. Feel free to use the values 2 or 1.96 for the coverage factor, k.
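The 10 V example above works out like this (38 micro-volts is read from the datasheet excerpt; treating the 95% spec as normal with k = 1.96):

```python
# Manufacturer's 95% specification converted to standard uncertainty
U_spec_uV = 38.0       # expanded uncertainty from the spec, in micro-volts
k = 1.96               # coverage factor for 95% (normal distribution)
u_uV = U_spec_uV / k   # approximately 19.4 micro-volts
```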
Evaluating Data From Guides, Handbooks, Papers, & Articles
When evaluating Type B uncertainty, you are not always going to have the convenience of using your own data.
Most laboratories do not have the time or resources required to test every factor that contributes to uncertainty in measurement. Therefore, you are going to use data from other laboratories that
have already done the work for you.
The biggest challenge is finding the data! You must put some time and effort into conducting research. To make life easier, I have already created a list of 15 places you can find sources of measurement uncertainty.
Once you find the data and deem it applicable for your measurement process, you can evaluate it for your uncertainty analysis.
Now, you can evaluate Type B uncertainty data in many ways. However, I will focus on the situation that you are going to encounter 90% of the time.
Typically, you are going to find information in a guide, conference paper, or journal article that gives you data with no background information about it.
Therefore, you are most likely to characterize the data with a rectangular distribution and use the following equation to evaluate the uncertainty component.
For example, imagine that you are estimating uncertainty for measuring voltage with a digital Multimeter. You are performing research and stumble upon a paper published by Keysight Technologies that
has really good information that is relatable to the measurement process you are estimating uncertainty for.
So, you decide to include some of the information in your uncertainty budget.
The image below is an excerpt from a paper on System Cabling Errors and DC Voltage Measurement Errors in Digital Multimeters published by Keysight Technologies. It contains information on Thermal EMF
errors that you want to include in your uncertainty budget.
The table in the image has some great information to help you quantify thermal EMF errors, but provides very little information on the origin of the data. Therefore, it would be best to assume that
the data has a rectangular distribution.
For a copper-to-copper junction with a temperature change of 1°C, your thermal EMF error should be approximately 0.3 micro-volts. To convert your uncertainty component to standard uncertainty, you
would divide the uncertainty component by the square-root of three.
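For this thermal EMF example, the conversion is a single division by the square root of three; a sketch:

```python
import math

a_uV = 0.3                  # thermal EMF error, copper-to-copper, 1 degC change
u_uV = a_uV / math.sqrt(3)  # rectangular distribution: about 0.17 micro-volts
```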
On the other hand, you may find data in a guide, conference paper, or journal article that is normally distributed or has been already converted to standard uncertainty.
Don’t assume all Type B data is rectangular; you may overstate your uncertainty estimates. Look for clues to help you find the right method to evaluate it.
For example, imagine that you are performing research and stumble upon a paper published in the NIST Journal of Research. The study you found has information that is relatable to the measurement
process you are estimating uncertainty for.
So, you decide to include some of the information in your uncertainty budget.
The image below is an excerpt from an article on Uncertainty and Dimensional Calibrations by Ted Doiron published in the NIST Journal of Research. It contains data for the elastic deformation of gage
blocks calibrated by mechanical comparison that you want to include in your uncertainty budget.
Notice that the paper states that the data is reported as standard uncertainty where k=1.
Assuming that the data has a normal distribution and a coverage factor of one, use the equation below to evaluate Type B uncertainty.
Therefore, your evaluation of Type B uncertainty should be approximately 2 micro-meters since your coverage factor (k) is one.
Difference Between Type A and Type B Uncertainty
There is a lot of misinformation on type A and type B uncertainty.
The VIM definitions are the most accurate: Type A uncertainty is evaluated using statistical means; Type B uncertainty is evaluated using other than statistical means.
In practice, though, nearly all of it is evaluated by statistical methods. Therefore, the real difference is how the data is collected, not how it is evaluated.
Type A uncertainty is collected from a series of observations. Type B data is collected from other sources.
Although Type B uncertainty found in publications may have been collected from a series of observations, it wasn’t collected by you or your laboratory personnel.
Therefore, you are not sure that the data was collected from a series of observations. Furthermore, you do not know how the experiment was conducted.
Experimental results can be manipulated, especially when performed by a group who stands to benefit from the results (e.g. manufacturer, sponsored agency, etc.).
Over the years, many researchers and laboratories have been caught manipulating experiments to achieve results that benefit themselves or their mission. So, you need to be careful.
The image below is from phdcomics.com. It was shown to me in grad school when covering the topic of ethics in research. It depicts the realistic manipulation of the scientific method.
How to Choose Type A or Type B
Many people have a hard time trying to decide whether their data is a Type A or Type B uncertainty.
However, it doesn’t have to be a difficult process. In fact, I am going to show you a simple two-step process that will help you choose the correct uncertainty type every time.
All you have to do is ask yourself these two questions;
Question 1: Did you collect the data yourself via testing and experimentation?
• If yes, go to question 2.
• If no, choose Type B.
Question 2: Is your data older than 1 year?
• If yes, choose Type B
• If no, choose Type A
I even made you a handy flowchart to help you decide whether your data is Type A or Type B uncertainty.
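The two-question flowchart translates directly into a small function (the name and boolean arguments are mine):

```python
def uncertainty_type(collected_yourself, older_than_one_year):
    """Type A only if you collected the data yourself AND it is
    no more than a year old; otherwise Type B."""
    if not collected_yourself:
        return "Type B"        # question 1
    if older_than_one_year:
        return "Type B"        # question 2
    return "Type A"

t = uncertainty_type(collected_yourself=True, older_than_one_year=False)
```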
Think about it. If you collected the data yourself, then you are going to evaluate it statistically. Therefore, it is Type A Data.
However, if you performed a repeatability experiment 5 years ago and still want to include it in your uncertainty budget, then it is Type B data.
The age of the data is important. Hence, the reason for question two. You need to routinely update your Type A uncertainty data.
If it is older than a year, then it is most likely Type B data and you should collect more data soon.
Now, there are some exceptions. I have read some repeatability procedures over the years that have recommended that two years’ worth of data should be kept on record at all times.
However, the procedure required that new data should be collected each month which means that the test records included 24 independent sampling events. So, new data was constantly being collected and
added to the repeatability records.
In this case, I would consider it Type A uncertainty data.
Don’t stress about picking an uncertainty type; use the two questions listed above and your best judgement. They will help you make the right decision.
Type A uncertainty and Type B uncertainty are two classifications commonly used in uncertainty analysis. Typically used for informational purposes only, they let others know how the data is collected
and evaluated.
This guide has covered everything that you need to know about Type A and B uncertainty. It should help you distinguish the difference between the two uncertainty types, so you can select the
appropriate method of evaluation for your uncertainty analysis.
So, use the information and give some of these evaluation methods a try. They should help you improve your ability to calculate uncertainty.
Now, leave a comment below and tell me how you choose Type A and Type B uncertainty.
| {"url":"https://www.isobudgets.com/type-a-and-type-b-uncertainty/","timestamp":"2024-11-13T12:23:24Z","content_type":"text/html","content_length":"170317","record_id":"<urn:uuid:df5da055-c47e-489c-b4f3-59dac23ead24>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00346.warc.gz"} |
Find rate per period
For instance, let the interest rate r be 3%, compounded monthly. Then, for an investment period of t years, solving the compound-interest equation comes down to figuring out
which values go with which variables; you should also memorize the meaning of each of the variables in the formula.
The Excel RATE function is a financial function that returns the interest rate per period of an annuity. You can use RATE to calculate the periodic interest rate, then multiply as required to derive
the annual interest rate. per - the period we want to work with. Supplied as 1 since we are interested in the principal amount of the first payment. pv - The present value, or total value of all
payments now. In the case of a loan, this is input as a negative value by adding a negative sign in front of C5 to supply -5000. The Rate of Return (ROR) is the gain or loss of an investment over a
period of time compared to the initial cost of the investment expressed as a percentage. This guide teaches the most common formulas for calculating different types of rates of returns including
total return, annualized return, ROI, ROA, ROE, IRR EXAMPLES: Calculating Incidence Rates. Example A: Investigators enrolled 2,100 women in a study and followed them annually for four years to
determine the incidence rate of heart disease. After one year, none had a new diagnosis of heart disease, but 100 had been lost to follow-up. Number of Periods of Annuity Calculator - This can be
used to calculate how much you would need to save periodically (at the end of the period) in order to end up at a goal result. Calculating the Rate Per Period. When the number of compounding periods
matches the number of payment periods, the rate per period (r) is easy to calculate. Like the above example, it is just the nominal annual rate divided by the periods per year. However, what do you do
if you have a Canadian mortage and the compounding period is semi-annual, but you are making monthly payments? How to calculate interest payments per period or total with Excel formulas? This article
is talking about calculating the interest payments per period based on periodic, constant payments and constant interest rate with Excel formulas, and the total interest payments as well. Calculate
monthly interest payments on a credit card in Excel
Rate is the speed at which something happens or changes compared to the original state over a period of time. From the definition, it's obvious that rate it time
In other words, this formula is used to calculate the length of time a present value would need to reach the future value, given a certain interest rate. The formula Examples to find Rate when
Principal, Interest and Time are given: 1. Find Rate, when Principal = $ 3000; Interest = $ 400; Time = 3 years. Solution: 29 Jul 2015 How to Find the Total Amount Paid in an Interest Rate Equation.
you to find the total amount of money paid over a certain period of time, don't worry. plus the accumulated interest in four years at a rate of 10% per year. This formula is used to calculate the
number of periods needed to get to the rate, or the interest rate at which the amount will be compounded each period Formula for the calculation of a discount factor based on the periodic interest
rate and the number of interest periods.
By the end of a 10-year period, the $1,000 investment under option one grows to $2,219.64, but under option two, it grows to $2,184.04. The more frequent compounding of option one yields a greater
return even though the interest rate is higher in option two.
If we know the present value (PV), the future value (FV), and the interest rate per period of compounding (i), the future value factors allow us to calculate the
The periodic interest rate r is calculated using the following formula: r = (1 + i/m)^(m/n) - 1, where i = nominal annual rate, n = number of payments per year (i.e., 12 for monthly
payments, 1 for yearly payments, and so on), and m = number of compounding periods per year.
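The formula above can be checked numerically; the 6% semi-annual example is illustrative (the Canadian-mortgage case mentioned earlier, where compounding is semi-annual but payments are monthly):

```python
def rate_per_period(i, m, n):
    """Effective rate per payment period for nominal annual rate i,
    compounded m times per year, with n payments per year."""
    return (1 + i / m) ** (m / n) - 1

# 6% compounded semi-annually (m = 2), monthly payments (n = 12)
r = rate_per_period(0.06, 2, 12)  # roughly 0.494% per month
```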
Calculate the effective periodic interest rate from the nominal annual interest rate and the number of compounding periods per year. Example, calculate daily Calculates principal, accrued principal
plus interest, rate or time periods using the Calculate periodic compound interest on an investment or savings. Compounding occurs once per period in this basic compounding equation but other A
periodic rate is the APR expressed over a shorter period and can be found by If your credit card issuer uses the average daily balance method to calculate your for each day in the billing cycle by
the daily rate for a daily finance charge. 18 Sep 2019 The periodic interest rate is the rate charged or paid on a loan or realized on rate is multiplied by the amount the borrower owes at the end of
each day. of compounding periods to calculate its effective annual interest rate. Period interest rate per payment is used to determine the interest rate to charge to each payment. This is important
when the compounding frequency does not
12 Nov 2018 You can calculate your business's absence rate to determine the percentage of days employees miss per period. Absences are generally
Question: A. Find I (the Rate Per Period) And N (the
Number Of Periods) For The Following Annuity. Quartarly Deposits Of $800 Are Made For 6 Years Into An Annuity That Pays 8.5% Compounded Quarterly. I=__ N=__ B. Use The Future Value Formula To Find
The Indicated Value. How are you supposed to calculate the rate per compounding period, i, for each of the following. a) 9% per annum, compounded quarterly b) 6% per annum, compounded monthly c) 4.3%
per annum compounded semi-annually I wasn't sure how to do this question without more information, such as initial value, etc? To find simple interest, multiply the amount borrowed by the percentage
rate, expressed as a decimal. To calculate compound interest, use the formula A = P(1 + r)^n, where P is the principal, r is the interest rate expressed as a decimal and n is the number of
periods during which the interest will be compounded.
This is the rate per compounding period, such as per month when your period is year and compounding is 12 times per year. Interest rate can be for any period not just a year as long as compounding is
per this same time unit. | {"url":"https://platformmlpjsc.netlify.app/guiberteau21936si/find-rate-per-period-hyg.html","timestamp":"2024-11-03T06:33:29Z","content_type":"text/html","content_length":"36875","record_id":"<urn:uuid:47198908-8d2a-4aa8-bc2e-c2f5ead3a4c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00468.warc.gz"} |
TECHNICAL INFORMATION TYPE DGW - Lindpro AB
Nominal government bonds - Riksgälden.se
Input power, 4,076 W. Current, 7. Data according to ErP: ready for ErP requirements, not ErP relevant. Product data: SKU, Description, Nominal/Rated Beam Angle [°], Rated Lumens [lm], Weighted energy consumption [kWh/1000h], Rated efficacy [lm/W]. Data for 400 V AC, AM8071-wPyz: Nominal voltage, 100…480 V AC. Standstill point; Nominal point @ 230 V AC; Nominal point @ 400 V AC; Nominal point …
The scale type affects how the data set is presented and analyzed. Nominal scale: • tells us which class an observation belongs to • the classes cannot …
The underlying goal driving the methodology … RadiPac nominal data: Article number, Motor, VAC, Hz, rpm.
Joachim Parrow - Google Scholar
It could be the case that answers which refer to the same meaning can be grouped, but the researcher … Nominal data is categorized data with no order, hierarchy, or rank among the categories; for example, cricket balls can be sorted into any number of categories based on color, without defining any hierarchy or rank among them. To work with nominal data, encode the nominal column as a factor, so that the software recognizes it as categorical. Nominal data cannot be used to perform many statistical computations, such as the mean and standard deviation, because such statistics have no meaning for nominal variables. However, nominal variables can be used in cross tabulations.
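The two operations just described can be sketched in Python with pandas, as a stand-in for R's factors (the sample data below is invented for illustration): encode the nominal column as a categorical type, then build a cross tabulation.

```python
import pandas as pd

# Hypothetical sample of nominal (purely categorical) observations.
df = pd.DataFrame({
    "color": ["red", "white", "red", "pink", "white", "red"],
    "brand": ["A", "B", "A", "B", "A", "B"],
})

# Encode the nominal column as a categorical ("factor") type: the values
# are labels with no order, so a mean is undefined, but counts and the
# mode are still meaningful.
df["color"] = df["color"].astype("category")

mode = df["color"].mode()[0]          # most frequent label
counts = df["color"].value_counts()   # frequency table

# Cross tabulation: one of the few analyses valid for nominal variables.
xtab = pd.crosstab(df["color"], df["brand"])
print(mode)
print(xtab)
```

The mode and the frequency table are the only summary statistics that survive the lack of ordering; anything that needs arithmetic on the values does not.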
NM-00036a CT09sheet - HDD
Typical data as well as nominal and measured values are not warranted by … The calibration kit is fully functional down to 0 Hz, with effective system data as … This paper focuses on the problem of making decisions in the context of nominal data under specific constraints.
ECI-80.20-K1 B00 / ECI-80.20-K1 D00: Nominal voltage (UN) 24 / 48 VDC; nominal speed (nN) 4000** min–1.
Line count. DO. A very good method of troubleshooting is to enter the received data into a pressure/enthalpy diagram … Nominal data and tables are normally based on other conditions. The data from the investigation is analysed through “a truncated component analysis”, in which only nominal data is used; Tschuprow's T² is used as correlation … Nominal data: nominal data is the most primitive type of data; it can only be classified and counted.
… interval or ratio data) – and some work with a mix. While statistical software like SPSS or R might “let” you run a test with the wrong type of data, the results will be flawed at best and meaningless at worst. See the full list at matthewrenze.com. Background: Reliability of measurements is a prerequisite of medical research. For nominal data, Fleiss’ kappa (in the following labelled Fleiss’ K) and Krippendorff’s alpha provide the highest flexibility of the available reliability measures with respect to the number of raters and categories. Our aim was to investigate which measures and which confidence intervals provide the best …
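For the Fleiss' K mentioned above, here is a from-scratch Python sketch (the rating matrix is a toy example invented for illustration; libraries such as statsmodels also ship an implementation).

```python
def fleiss_kappa(matrix):
    """Fleiss' kappa for nominal ratings.

    matrix[i][j] = number of raters assigning subject i to category j.
    Assumes every subject is rated by the same number of raters.
    """
    N = len(matrix)          # number of subjects
    n = sum(matrix[0])       # raters per subject
    k = len(matrix[0])       # number of categories

    # Per-subject observed agreement P_i.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in matrix]
    P_bar = sum(P_i) / N

    # Category proportions p_j and chance agreement P_e.
    p_j = [sum(row[j] for row in matrix) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# 4 subjects, 3 raters, 2 nominal categories; the raters agree perfectly
# on every subject, so kappa comes out as exactly 1.
ratings = [[3, 0], [3, 0], [0, 3], [0, 3]]
print(fleiss_kappa(ratings))  # → 1.0
```

A value of 1 means perfect agreement beyond chance, 0 means agreement no better than chance, and negative values mean worse than chance.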
Camping olivet
Nominal data, a subset of “data” (/deɪtə/ or /dətə/, however you choose to pronounce it), is a foundation of statistical analysis and all other mathematical sciences: individual pieces of information recorded and used for the purpose of analysis. Nominal data can be collected with an open-ended or multiple-choice question, but the open-ended approach is frowned upon; the latter option is more common and arguably more accurate. Though it appears simple, nominal data is the foundation of quantitative research and is among the most used measurement scales. Since nominal data refer to named values and can take a large variety of answers, it is recommended to organize the data before analysis, where needed and possible. Nominal array objects provide efficient storage and convenient manipulation of such data, while also maintaining meaningful labels for the values. Nominal data: nominal values represent discrete units and are used to label variables that have no quantitative value. Just think of them as “labels”.
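Because nominal values are labels with no quantitative value, mapping them to integers 0, 1, 2, … would impose an order that does not exist; a common workaround before numeric analysis is one-hot encoding. A minimal stdlib-only sketch (the labels are made up for illustration):

```python
# One-hot encode nominal labels by hand: each label becomes a vector with
# a single 1 in its category's position, so no fake ordering is introduced.
labels = ["red", "white", "pink", "red"]
categories = sorted(set(labels))  # ['pink', 'red', 'white']
one_hot = [[1 if lab == c else 0 for c in categories] for lab in labels]
print(one_hot)  # [[0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
```

Each row sums to 1, and distances between any two distinct labels are equal, which is exactly the property a nominal scale requires.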
Arocell ab stock
R&S ZV-Z2xx Calibration Kits - Data Sheet - TestEquity
Measured quantities. Volume flow in … Nominal pipe size. | {"url":"https://hurmanblirrikuuhjj.netlify.app/7198/38044.html","timestamp":"2024-11-03T23:30:16Z","content_type":"text/html","content_length":"15374","record_id":"<urn:uuid:72b39381-ceed-4e7d-a997-23ea884f3855>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00291.warc.gz"} |