Our harmonic series calculator will help you find the harmony in your music: keep learning to discover the underlying elegance of music and much more! The harmonic series may well be the most important concept in music: the timbre of a musical instrument (which is the sound we associate with the music itself) is intimately connected to the harmonic series.
With this tool, you will learn:
What is the harmonic series in music?;
What about overtones?;
How do we create a complex sound using the harmonic series frequencies?;
A few words on tunings: just intonation vs. equal temperament;
What are the differences between the just intonation and the equal temperament tunings?;
How to calculate the harmonic series?;
Where do we calculate the harmonic series frequencies?; and
How to use our harmonic series calculator.
And much more: get in tune with music at Omni Calculator! And if your curiosity is not satisfied, check out the other tools the music nerds created on the music page of our website, like the music scale calculator or the music interval calculator.
In music, a harmonic series is a set of frequencies corresponding to integer multiples of a fundamental frequency or note.
A tone is a note played on a musical instrument. If you listen closely, you can identify multiple pitches (the pitch is the psychoacoustic concept corresponding to the frequency). In pitched instruments (like pianos and guitars), one of the frequencies is more prominent: that's our fundamental frequency.
🙋 In this text, we will use the word "frequency" to identify the corresponding sine wave with the equation
\sin{(2\pi\cdot t \cdot f)}
where t is the time and f the frequency itself.
Each frequency that appears in a tone is called a partial. We can identify two types of partials:
Harmonic partials: frequencies that match the mathematical harmonicity;
Inharmonic partials: frequencies that deviate from the ideal harmonic, creating a certain dissonance.
The distance from an inharmonic partial to the nearest ideal harmonic is measured in cents.
🔎 A cent is a division of the interval between two notes that are a semitone apart (like \text{E}4 and \text{F}4). Each semitone interval contains exactly 100 cents, for a total of 1200 cents per octave. Remember that the value of a cent in hertz varies.
Now that you know what the harmonic series is, let's keep on learning more about it and its implications!
You may hear the word "overtone" in the same context as "partial" from time to time. When talking of harmonic series, overtones are the partial frequencies above the fundamental frequency.
If we consider the fundamental frequency to be the first partial frequency, then we have a discrepancy between the numbering of the partials and the overtones (the first overtone corresponding to the second partial, and so on).
The usage of the term "overtone" may be helpful in certain situations: a "pure" frequency (a single sine wave) is entirely devoid of overtones, while it still has one partial (the fundamental itself).
🙋 Don't mistake the overtones of music theory for overtone singing! The singing style of barbershop choruses or Tuvan throat singers uses a psychoacoustic phenomenon called combination tones, which is only loosely related to the harmonic series.
The definition of the frequency and pitch of each note in a musical system requires the study of the intervals and relationships between the elements of a scale.
Western music usually uses the equal temperament tuning system. In this system, the octaves are divided into equal intervals. This implies that the ratio of frequencies of two adjacent notes (with a semitone between them) is always equal to the twelfth root of two:
\sqrt[12]{2}
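As a quick sanity check, stacking twelve of these equal semitones should land exactly one octave higher; here is a minimal Python sketch (the A4 = 440 Hz starting note is just an example):

```python
# Twelve equal-temperament semitones should stack to exactly one octave.
semitone = 2 ** (1 / 12)

frequency = 440.0  # A4, an example starting note
for _ in range(12):
    frequency *= semitone

print(round(frequency, 6))  # 880.0
```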
This tuning is the one most familiar to our ears. However, it is not the only one possible: let's talk about just intervals.
If we decide to define the interval between each note and some reference note (usually middle C) as a ratio of integer numbers, what we obtain is the just intonation tuning (in contrast to the equal temperament system, where we fix the ratio between any two adjacent notes). The harmonic series is based on integer multiples of a fundamental frequency: it naturally produces just interval scales.
A just intonation scale would sound slightly off to us. That's only a matter of habit, though: it is as good as (if not better than) Western music's "traditional" tuning. You can listen to some comparisons between just intonation, equal temperament, and other tuning systems online. We found a version of Pachelbel's Canon in D, and the 7th symphony by Beethoven (in a spacesuit!). You be the judge!
Calculating the harmonic series is straightforward: take your fundamental frequency f, and multiply it by subsequent integer numbers. By the way, if you don't know the frequency of your fundamental, don't worry: we made a calculator exactly for that reason: the note frequency calculator!
Back to harmonics, now. Here is the series for a fundamental f:
f,\ 2f,\ 3f,\ 4f,\ 5f,\ ...
It goes on this way indefinitely; however, due to our perception of sound, the higher the multiple, the more similar two harmonics will sound.
🔎 The harmonic series in music is intimately connected to the harmonic series in math. In the latter, the elements are the inverses of subsequent integer numbers, possibly multiplied by a constant factor.
Let's consider an example. Take as a fundamental the note \text{C}4:
\text{C}4 = 261.6256\ \text{Hz}
And calculate the first 16 partials:
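Such a table of partials can be generated with a short Python sketch; the A4 = 440 Hz reference pitch and the sharp-only note naming are our assumptions:

```python
import math

# Nearest 12-TET note and cent deviation for each partial of a fundamental.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
A4 = 440.0  # assumed reference pitch

def nearest_note(freq):
    """Return (name, octave, cents) of the 12-TET note closest to freq."""
    semitones = 12 * math.log2(freq / A4)   # distance from A4 in semitones
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)     # deviation from that note, in cents
    midi = 69 + nearest                     # MIDI number of the nearest note
    return NOTE_NAMES[midi % 12], midi // 12 - 1, cents

fundamental = 261.6256  # C4
for n in range(1, 17):
    freq = n * fundamental
    name, octave, cents = nearest_note(freq)
    print(f"{n:2d}  {freq:9.2f} Hz  {name}{octave}  {cents:+6.1f} cents")
```

For instance, the 7th partial comes out as A#6, about 31 cents flat of its equal-temperament neighbor.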
As you can see, the distance between any two adjacent partials is equal to the frequency of the fundamental.
The table above contains a lot of information: let's dig through it!
The harmonic series contains all of the higher octaves of the fundamental note: you can see that \text{C}5, \text{C}6, \text{C}7, and \text{C}8 all appear. The first interval of this harmonic series (between \text{C}5 and \text{C}4) has a frequency ratio of 2:1. The interval is an octave. Any two notes separated by the same 2:1 ratio are separated by an octave; in our series, we can identify some of them:
\text{C}6 and \text{C}5, with ratio 4:2;
\text{C}7 and \text{C}6, with ratio 8:4; and
\text{E}7 and \text{E}6, with ratio 10:5.
An octave is a "boring" interval 😛. That's because it corresponds to the simplest ratio, 2:1; this is also the reason it sounds so good to our ears (the simpler the ratio, the better the harmony). Math and music really share some connections! Another interesting observation is the interval between the third and second notes of our harmonic scale, \text{G}5 and \text{C}5. Their frequencies have a ratio of 3:2, and define what musicians call a perfect fifth.
Being defined by a relatively simple ratio, the perfect fifth is a nice-sounding harmony: so good that Pythagoras decided to create an entire tuning system based solely on the 3:2 ratio. Spoiler: it's not that perfect.
The next step is to consider the fourth and third notes of the harmonic series: they have a ratio of 4:3, and define a perfect fourth. And here we fall into the first conflict between the tuning systems we will explore.
In the just intonation system, a fourth corresponds to an interval (measured in cents) of:
1200\cdot \text{log}_2\left(\frac{4}{3}\right) = 498.05\ \text{cents}
Meh. Admittedly, we can hardly say it sounds off. However, this interval doesn't fit in the equal temperament scale. Since it spans five semitones (hey, that's why it's called a fourth... no wait), the interval should be 500 cents, which corresponds to a ratio of:
\begin{align*} 500\ \text{cents} &: 498.05\ \text{cents} \\ 1.0039 &: 1 \end{align*}
The difference is barely noticeable, but it's there.
Progressing with the intervals, we meet a major third. Since it lies between the fourth and fifth harmonics, its ratio is 5:4. Although it's counterintuitive, the major third covers four semitones, an interval that should correspond to 400 cents (we are sure you've guessed it by now!). However, when you consider the harmonic scale, the interval becomes:
1200\cdot \text{log}_2\left(5/4\right)=386.31\ \text{cents}
This finally makes a difference you can appreciate.
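The cent values quoted above can be reproduced in a few lines of Python; the dictionary of intervals simply restates the ratios and 12-TET targets from the text:

```python
import math
from fractions import Fraction

def cents(ratio):
    """Size of an interval with the given frequency ratio, in cents."""
    return 1200 * math.log2(ratio)

# Just-intonation ratios from the harmonic series and their 12-TET targets
intervals = {
    "perfect fifth (3:2)":  (Fraction(3, 2), 700),
    "perfect fourth (4:3)": (Fraction(4, 3), 500),
    "major third (5:4)":    (Fraction(5, 4), 400),
}

for name, (ratio, tet) in intervals.items():
    just = cents(ratio)
    print(f"{name}: just {just:.2f} cents vs {tet} cents in 12-TET "
          f"({just - tet:+.2f})")
```

The fifth and fourth land within 2 cents of their equal-temperament targets; the major third misses by almost 14 cents, the difference the text highlights.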
The harmonic series proceeds with other intervals. The table above shows the differences between the pitches identified by the harmonic series (corresponding to the just intonation scale) and the equal temperament scale. You can clearly see that some intervals are particularly problematic: the 7th, the 11th, and the 14th are where the dissonance is highest.
Simply select the pitch and the octave of the fundamental note, then choose the number of notes you want to generate. Our harmonic series calculator will print a table of the values of the frequencies, alongside the name of the relative note, its octave, and the cents of difference from the same note in the equal temperament tuning system.
What can you do with it? Experiment with this different tuning system, or tune your instrument (if you can: unless you have a fretless guitar, it is impossible to use a 12TET fretboard with any different tuning). You can also use the values of the frequencies to synthesize a complex sound. There are programs you can use for this — we can give you the math, you do the music!
🙋 Harmonies are at the basis of chords: find out more at our dedicated tools, the chord calculator, and the chord progression calculator!
What are the first harmonics of A440?
The first harmonics of the note A4, with a frequency of 440 Hz, are 880 Hz, 1320 Hz, and 1760 Hz. Together with the fundamental, these correspond to the first four integer multiples of 440 Hz. Among them, you can identify the notes:
A4, A5, and A6 (respectively, the first, second, and fourth multiples); and
E5, which together with A5 creates a perfect fifth.
How to calculate the harmonic series from a fundamental note?
To calculate the partials of a harmonic series, multiply the frequency f of a fundamental note by consecutive integers:
The first partial is 1 × f;
The second partial is 2 × f;
The third partial is 3 × f;
And so on. Doing so, you will obtain a scale of notes related by simple ratios of integer numbers: the first two harmonics are in a 1:2 ratio, the second and third in a 2:3 ratio, etc.
What are the differences between just intonation and equal temperament?
Just intonation and equal temperament are two tuning systems that differ in the way they define the 12 notes composing each octave:
Just intonation defines each note through a simple integer ratio with respect to a fundamental, thus relating each pair of notes by a ratio of small integers; and
Equal temperament defines each note's frequency as the frequency of the previous note multiplied by a constant ratio (the twelfth root of two).
What are the harmonics in a complex sound?
Each complex sound is made of a different composition of harmonics. Taking the frequency of a fundamental note and its multiples, the different proportions of each multiple (the partials) confer a unique timbre to the sound:
A sound poor in higher partials will be "purer", like that of a recorder or flute; while
A sound rich in higher partials (but skipping the more dissonant ones) will feel more layered and refined, like that of a violin.
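As a rough illustration, here is a minimal additive-synthesis sketch in Python; the two amplitude lists are invented examples, not measured instrument spectra:

```python
import math

def additive_sample(t, fundamental, amplitudes):
    """Sample at time t of a tone built from sine partials:
    amplitudes[n-1] scales the n-th harmonic of the fundamental."""
    return sum(a * math.sin(2 * math.pi * n * fundamental * t)
               for n, a in enumerate(amplitudes, start=1))

# Made-up example spectra (assumptions, not real instrument data):
flute_like = [1.0, 0.2, 0.05]     # weak upper partials: "purer"
rich = [1.0, 0.6, 0.4, 0.3, 0.2]  # strong upper partials: more "layered"

sample_rate = 44100  # samples per second
samples = [additive_sample(i / sample_rate, 261.63, rich) for i in range(1024)]
```

Feeding such samples to any audio library would let you hear how the same fundamental changes character with different partial weights.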
The sinusoidal wave equation
What are frequency and wavelength?
How to calculate the wavelength from the frequency
Calculate the wavelength given the frequency
Beyond our frequency to wavelength calculator!
Our frequency to wavelength calculator will help you understand the intimate relationship between these two quantities in a wave. Keep reading to learn:
What is the wavelength?;
What is the relationship between wavelength and frequency?;
How to calculate the wavelength given a frequency?;
Some examples of how to use our frequency to wavelength calculator.
A wave is a periodic motion where the equilibrium state of a system is perturbed, creating a characteristic oscillating behavior.
You can find waves just about everywhere: sound, earthquakes, electromagnetic radiation. If there is a propagation of energy, it's likely that, with a closer look, you will observe a wave-like motion.
There are many types of waves: let's focus only on the most common and easy-to-understand one: the sinusoidal wave.
A sinusoidal wave is described by... you guessed it, a sine function. It follows the equation:
\footnotesize u(x,t)=A\times\sin{(\frac{2\pi}{\lambda}x - 2\pi f t + \phi)}
where:
u - the amplitude of the wave at a certain position x and time t;
A - the maximum amplitude of the wave (1/2 of the distance from the highest to the lowest point of the wave);
\lambda - the wavelength;
f - the frequency; and
\phi - the phase constant.
An animated sine wave shows the connection between the function and a harmonic motion.
Frequency and wavelength define the shape of a wave, minus a vertical stretch (that's the purpose of the amplitude A) and the definition of a "starting point" (the phase constant tells us the initial value of the wave at position x=0 and time t=0). The frequency f tells us how many times the wave completes a period of its oscillation in a given amount of time (if that interval is a second, the frequency is measured in hertz: 1\ \text{Hz} = 1/\text{s}). Note that the frequency is the inverse of the period T: f=1/T.
The wavelength \lambda measures the horizontal distance between two peaks on the same side of the baseline.
What is the relationship between frequency and wavelength? You will be pleased to discover that it all boils down to a simple formula that relates the two quantities to the speed of propagation of the wave, v:
f = \frac{v}{\lambda}
You can intuitively understand this formula by noting that the speed of the wave corresponds to the distance covered in a given unit of time. At the same time, the wavelength is the distance to complete a period of the oscillation. Dividing the wavelength by the speed returns the time necessary to complete an oscillation: the period!
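In code, the relationship is a one-liner (the helper name below is ours, and the red-light frequency is an example value):

```python
# Wavelength from frequency: lambda = v / f
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz, speed=C):
    """Return the wavelength in meters for a wave of the given frequency."""
    return speed / frequency_hz

# Red light at 4.62e14 Hz:
red_nm = wavelength(4.62e14) * 1e9  # convert meters to nanometers
print(f"{red_nm:.0f} nm")  # 649 nm
```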
Let's look at some examples of waves to better understand the relationship between frequency and wavelength.
Light is nothing but electromagnetic radiation. It propagates at the ludicrous speed of almost 300,000\ \text{km}/\text{s}. As humans, we are most familiar with visible light, the portion of the electromagnetic spectrum we perceive as colors. Each color has a specific frequency.
From the lowest to the highest frequency, we selected some colors we liked:
Red, with a frequency of 4.62\times10^{14}\ \text{Hz};
Green, with a frequency of 5.45\times10^{14}\ \text{Hz}; and
Blue, with a frequency of 6.66\times10^{14}\ \text{Hz}.
Try our frequency to wavelength calculator to find out the value of the wavelengths: choose the desired wave velocity, in this case, light in a vacuum.
The calculator will compute the wavelengths given the frequencies. To input the frequencies, select terahertz from the units menu of the frequency variable, then input the values above as 462, 545, and 666. For the color red, we find that:
\lambda = \frac{299,792,458 \text{m}/\text{s}}{4.62\times10^{14}\ \text{Hz}} \simeq 6.49 \times10^{-7}\ \text{m} = 649\ \text{nm}
Here, \text{nm} stands for nanometer, a billionth of a meter. It's a measurement unit good for molecules and nanotechnologies! Think of something a thousand times smaller than a hair.
For the color green, we find that:
\lambda= \frac{299,792,458 \text{m}/\text{s}}{5.45\times10^{14}\ \text{Hz}} \simeq 5.50\times10^{-7}\ \text{m} = 550\ \text{nm}
And finally, for the blue, we calculate the wavelength from the frequency with:
\lambda= \frac{299,792,458 \text{m}/\text{s}}{6.66\times10^{14}\ \text{Hz}} \simeq 4.50\times10^{-7}\ \text{m} = 450\ \text{nm}
As you can see, the higher the frequency, the shorter the wavelength (if the speed remains constant). This happens because the time required to complete a period gets smaller, and at a constant speed, this corresponds to a reduced distance between peaks.
🔎 Did you know that the distance traveled by light in a nanosecond roughly corresponds to a foot? Physicist David Mermin proposed a new measurement unit, the "light nanosecond", jokingly called phoot: a portmanteau of photos (Greek for light) and foot.
Apart from the frequency to wavelength calculator, we made other tools that can help you with any kind of oscillating problem! Try our:
Wavelength calculator; or
Wavelength to frequency calculator.
What is the wavelength?
The wavelength is a quantity that measures the distance between two peaks on the same side of a wave. You can think of the wavelength as the distance covered by a wave in one period of the oscillation.
How to calculate the wavelength from the frequency?
To calculate the wavelength of a periodic oscillating motion given the frequency, you first need to know the speed of the wave. Then, apply the formula:
λ = v / f
where:
λ - the wavelength;
v - the propagation speed of the wave; and
f - the frequency.
What is the wavelength of microwaves?
Your microwave oven works thanks to a particular kind of electromagnetic radiation that excites the water molecules in your food. Microwave ovens typically operate with waves with frequencies of about 2.45 GHz. This is also a frequency used by Wi-Fi!
The corresponding wavelength is, considering the speed of light in air:
λ = 299,702,547 m/s / 2.45 × 10⁹ Hz = 12.23 cm
Where can I see wavelength?
Drop a stone in a pond! You will see the surface of the water getting covered in concentric ripples. The distance between two peaks (above the surface) is the wavelength of the transverse wave.
The wavelength increases with time because the energy transported by the water is damped, progressively reducing to zero.
Homotopy Analysis Method for Solving Foam Drainage Equation with Space- and Time-Fractional Derivatives
Hadi Hosseini Fadravi, Hassan Saberi Nik, Reza Buzhabadi
The analytical solution of the foam drainage equation with time- and space-fractional derivatives was derived by means of the homotopy analysis method (HAM). The fractional derivatives are described in the Caputo sense. Some examples are given and comparisons are made; the comparisons show that the homotopy analysis method is very effective and convenient. By choosing different values of the parameters \alpha and \beta in the general formal numerical solutions, a very rapidly convergent series solution is obtained.
Hadi Hosseini Fadravi, Hassan Saberi Nik, Reza Buzhabadi. "Homotopy Analysis Method for Solving Foam Drainage Equation with Space- and Time-Fractional Derivatives." International Journal of Differential Equations 2011 (SI1): 1-12, 2011. https://doi.org/10.1155/2011/237045
Cosmic neutrino background
The cosmic neutrino background (CNB or CνB[a]) is the universe's background particle radiation composed of neutrinos. They are sometimes known as relic neutrinos.
As neutrinos rarely interact with matter, these neutrinos still exist today. They have a very low energy, around 10−4 to 10−6 eV.[1][2] Even high energy neutrinos are notoriously difficult to detect, and the CνB has energies around 1010 times smaller, so the CνB may not be directly observed in detail for many years, if at all.[1][2] However, Big Bang cosmology makes many predictions about the CνB, and there is very strong indirect evidence that the CνB exists.[1] [2]
Derivation of the CνB temperature
Given the temperature of the cosmic microwave background (CMB) the temperature of the cosmic neutrino background (CνB) can be estimated. It involves a change between two regimes:
The original state of the universe is a thermal equilibrium, the final stage of which has photons and leptons freely creating each other through annihilation (leptons create photons) and pair production (photons create leptons). This was the very brief state, right after the Big Bang. Its last stage involves only the lowest-mass possible fermions that interact with photons: electrons and positrons.
Once the universe has expanded enough that the photon+lepton plasma has cooled to the point that Big Bang photons no longer have enough energy for pair production of the lowest mass / energy leptons, the remaining electron-positron pairs annihilate. The photons they create cool, and are then unable to create new particle pairs. This is the current state of most of the universe.[b]
At very high temperatures, before neutrinos decoupled from the rest of matter, the universe primarily consisted of neutrinos, electrons, positrons, and photons, all in thermal equilibrium with each other. Once the temperature dropped to approximately 2.5 MeV (17.4\times 10^{9}\ \text{K}), the neutrinos decoupled from the rest of matter, and for practical purposes, all lepton and photon interactions with these neutrinos stopped.[c]
Despite this decoupling, neutrinos and photons remained at the same temperature as the universe expanded as a "fossil" of the prior Regime 1, since both are cooled in the same way by the same process of cosmic expansion, from the same starting temperature. However, when the temperature dropped below double the mass of the electron, most electrons and positrons annihilated, transferring their heat and entropy to photons, and thus increasing the temperature of the photons. So the ratio of the temperature of the photons before and after the electron–positron annihilation is the same as the ratio of the temperature of the neutrinos and the photons in the current Regime 2. To find this ratio, we assume that the entropy s of the universe was approximately conserved by the electron–positron annihilation. Then using
s \propto g\, T^{3}~,
where g is the effective number of degrees of freedom and T is the plasma or photon temperature. Once reactions cease, the entropy s should remain approximately "stuck" for all temperatures below the cut-off temperature, and we find that
\frac{T_{1}}{T_{2}} = \left(\frac{g_{2}}{g_{1}}\right)^{1/3}~,
where T_{1} \propto T_{\nu} denotes the lowest temperature where pair production and annihilation were in equilibrium, and T_{2} \propto T_{\gamma} denotes the temperature after the temperature fell below the regime-shift temperature T_{1}, after the remaining, but no longer refreshed, electron-positron pairs had annihilated and contributed to the total photon energy. The related temperatures T_{\gamma} and T_{\nu} are the simultaneous temperatures of the photons (γ) and neutrinos (ν) respectively, whose ratio stays "stuck" at the same value indefinitely, after T_{\gamma} < T_{1}.
The factor g_{1} is determined by a sum, based on the particle species engaged in the original equilibrium reaction:
+2 for each photon (or other massless boson, if any);[3]
+7/4 for each electron, positron, or other fermion.[3]
Whereas the factor g_{2} is simply 2, since the present regime only concerns photons, in thermal equilibrium with at most themselves.[3] Therefore:
\frac{T_{\nu}}{T_{\gamma}} = \frac{T_{1}}{T_{2}} = \left(\frac{g_{2}}{g_{1}}\right)^{1/3} = \left(\frac{2}{2 + \frac{7}{4} + \frac{7}{4}}\right)^{1/3} = \left(\frac{4}{11}\right)^{1/3} \approx 0.714~.
Since the cosmic photon background temperature at present has cooled to T_{\gamma} = 2.725\ \text{K},[4] it follows that the neutrino background temperature is currently T_{\nu} \approx 1.95\ \text{K}.
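A quick numeric check of this result in Python:

```python
# T_nu = (4/11)**(1/3) * T_gamma, using the present CMB temperature
T_gamma = 2.725  # K

T_nu = (4 / 11) ** (1 / 3) * T_gamma
print(round(T_nu, 3))  # 1.945
```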
The above discussion is technically valid for massless neutrinos, which are always relativistic. For neutrinos with a non-zero rest mass, at low temperature where the neutrinos become non-relativistic, a description in terms of a temperature is not appropriate. In other words, when the neutrinos' thermal energy \frac{3}{2} k T_{\nu} (k is Boltzmann's constant) falls below the rest mass energy m_{\nu} c^{2}, one should instead speak of the neutrinos' collective energy density, which remains both relevant and well-defined.
Indirect evidence for the CνB
Relativistic neutrinos contribute to the radiation energy density of the universe ρR, typically parameterized in terms of the effective number of neutrino species Nν :
\rho_{\mathrm{R}} = \frac{\pi^{2}}{15} T_{\gamma}^{4} (1+z)^{4} \left[1 + \frac{7}{8} N_{\nu} \left(\frac{4}{11}\right)^{4/3}\right],
where z denotes the redshift. The first term in the square brackets is due to the CMB, the second comes from the CνB. The Standard Model with its three neutrino species predicts a value of Nν ≃ 3.046,[5] including a small correction caused by a non-thermal distortion of the spectra during e+e− annihilation. The radiation density had a major impact on various physical processes in the early universe, leaving potentially detectable imprints on measurable quantities, thus allowing us to infer the value of Nν from observations.
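Plugging the Standard Model value Nν ≈ 3.046 into the bracketed factor shows how much the CνB adds on top of the photon energy density:

```python
# Bracketed factor from the radiation-density formula,
# evaluated at the Standard Model value N_nu = 3.046
N_nu = 3.046

factor = 1 + (7 / 8) * N_nu * (4 / 11) ** (4 / 3)
print(round(factor, 3))  # 1.692: neutrinos add ~69% to the photon density
```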
Due to its effect on the expansion rate of the universe during Big Bang nucleosynthesis (BBN), the theoretical expectations for the primordial abundances of light elements depend on Nν. Astrophysical measurements of the primordial 4He abundance lead to a value of Nν = 3.14 +0.70/−0.65 at 68% c.l.,[6] in very good agreement with the Standard Model expectation.
CMB anisotropies and structure formation
The presence of the CνB affects the evolution of CMB anisotropies as well as the growth of matter perturbations in two ways: due to its contribution to the radiation density of the universe (which determines, for instance, the time of matter-radiation equality), and due to the neutrinos' anisotropic stress, which dampens the acoustic oscillations of the spectra. Additionally, free-streaming massive neutrinos suppress the growth of structure on small scales. The WMAP spacecraft's five-year data combined with type Ia supernova data and information about the baryon acoustic oscillation scale yielded Nν = 4.34 +0.88/−0.86 at 68% c.l.,[7] providing an independent confirmation of the BBN constraints. The Planck spacecraft collaboration has published the tightest bound to date on the effective number of neutrino species, at Nν = 3.15 ± 0.23.[8]
Indirect evidence from phase changes to the Cosmic Microwave Background (CMB)
Big Bang cosmology makes many predictions about the CνB, and there is very strong indirect evidence that the cosmic neutrino background exists, both from Big Bang nucleosynthesis predictions of the helium abundance, and from anisotropies in the cosmic microwave background. One of these predictions is that neutrinos will have left a subtle imprint on the cosmic microwave background (CMB). It is well known that the CMB has irregularities. Some of the CMB fluctuations were roughly regularly spaced, because of the effect of baryon acoustic oscillation. In theory, the decoupled neutrinos should have had a very slight effect on the phase of the various CMB fluctuations.[1][2]
In 2015, it was reported that such shifts had been detected in the CMB. Moreover, the fluctuations corresponded to neutrinos of almost exactly the temperature predicted by Big Bang theory (1.96 ± 0.02 K compared to a prediction of 1.95 K), and exactly three types of neutrino, the same number of neutrino flavours currently predicted by the Standard Model.[1][2]
Prospects for the direct detection of the CνB
Confirmation of the existence of these relic neutrinos may only be possible by directly detecting them using experiments on Earth. This will be difficult as the neutrinos which make up the CνB are non-relativistic, in addition to interacting only weakly with normal matter, and so any effect they have in a detector will be hard to identify. One proposed method of direct detection of the CνB is to use capture of cosmic relic neutrinos on tritium i.e. 3H, leading to an induced form of beta decay.[9]
The neutrinos of the CνB would lead to the production of electrons via the reaction
\nu + {}^{3}\mathrm{H} \rightarrow {}^{3}\mathrm{He} + e^{-}~,
while the main background comes from electrons produced via natural beta decay
{}^{3}\mathrm{H} \rightarrow {}^{3}\mathrm{He} + e^{-} + \bar{\nu}~.
These electrons would be detected by the experimental apparatus in order to measure the size of the CνB. The latter source of electrons is far more numerous, however their maximum energy is smaller than the average energy of the CνB-electrons by twice the average neutrino mass. Since this mass is tiny, of the order of a few eVs or less, such a detector must have an excellent energy resolution in order to separate the signal from the background. One such proposed experiment is called PTOLEMY, which will be made up of 100 g of tritium target.[10] The detector should be ready by 2022.[11]
^ The symbol ν (italic ν) is the Greek letter nu, standard particle physics symbol for a neutrino. In this article, it is set in a mathematical font in order to help distinguish its shape from the extremely similar lower-case Latin letter "v", which in a sans-serif font is identical: Greek "ν" vs. Latin "v".
^ The exceptions are nuclear processes inside stars and white dwarfs. These produce "hot" neutrinos, unlike the "cold" CνB. See Neutrino § Solar.
^ The neutrino interactions that are measured in current particle detectors are all with neutrinos newly created in the Sun, nuclear reactors and weapons, and particle accelerators and cosmic ray collisions. Even among those, only the neutrinos with the highest kinetic energies are feasibly detectable. It's something of a "lose-lose" situation: The lower a neutrino's kinetic energy, the lower its probability of interacting with matter, and the slighter and less noticeable the matter's response will be, even if some rare event were to occur.
^ a b c d e f Follin, Brent; Knox, Lloyd; Millea, Marius; Pan, Zhen (2015). "First detection of the acoustic oscillation phase shift expected from the cosmic neutrino background". Physical Review Letters. 115 (9): 091301. arXiv:1503.07863. Bibcode:2015PhRvL.115i1301F. doi:10.1103/PhysRevLett.115.091301. PMID 26371637. S2CID 24763212.
^ a b c d e "Cosmic neutrinos detected, confirming the Big Bang's last great prediction". Forbes. Starts with a Bang. 9 September 2016.
Above is news coverage of the original academic paper:[1]
^ a b c Weinberg, S. (2008). Cosmology. Oxford University Press. p. 151. ISBN 978-0-19-852682-7.
^ Fixsen, Dale; Mather, John (2002). "The spectral results of the Far-Infrared Absolute Spectrophotometer instrument on COBE". Astrophysical Journal. 581 (2): 817–822. Bibcode:2002ApJ...581..817F. doi:10.1086/344402.
^ Mangano, Gianpiero; et al. (2005). "Relic neutrino decoupling including flavor oscillations". Nuclear Physics B. 729 (1–2): 221–234. arXiv:hep-ph/0506164. Bibcode:2005NuPhB.729..221M. doi:10.1016/j.nuclphysb.2005.09.041. S2CID 18826928.
^ Cyburt, Richard; et al. (2005). "New BBN limits on physics beyond the standard model from He-4". Astroparticle Physics. 23 (3): 313–323. arXiv:astro-ph/0408033. Bibcode:2005APh....23..313C. doi:10.1016/j.astropartphys.2005.01.005. S2CID 8210409.
^ Komatsu, Eiichiro; et al. (2011). "Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Cosmological interpretation". The Astrophysical Journal Supplement Series. 192 (2): 18. arXiv:1001.4538. Bibcode:2011ApJS..192...18K. doi:10.1088/0067-0049/192/2/18. S2CID 17581520.
^ Ade, P.A.R.; et al. (2016). "Planck 2015 results. XIII. Cosmological parameters". Astronomy & Astrophysics. 594: A13. arXiv:1502.01589. Bibcode:2016A&A...594A..13P. doi:10.1051/0004-6361/201525830. S2CID 119262962.
^ Cocco, A.G.; Mangano, G.; Messina, M. (2007). "Probing low energy neutrino backgrounds with neutrino capture on beta decaying nuclei". Journal of Cosmology and Astroparticle Physics. 0706 (15): 082014. doi:10.1088/1742-6596/110/8/082014. S2CID 16866395.
^ Betts, S.; et al. (PTOLEMY collaboration) (2013). "Development of a relic neutrino detection experiment at PTOLEMY: Princeton Tritium Observatory for Light, Early-Universe, Massive-Neutrino Yield". arXiv:1307.4738 [astro-ph.IM].
^ Mangano, Gianpiero; et al. (PTOLEMY collaboration) (2019). "Neutrino physics with the PTOLEMY project". Journal of Cosmology and Astroparticle Physics. 07: 047. arXiv:1902.05508. doi:10.1088/1475-7516/2019/07/047. S2CID 119397039.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Cosmic_neutrino_background&oldid=1083978168"
|
The wavelength to energy formula - Planck's equation
How do I calculate energy from wavelength?
More wavelength and energy calculators!
This is Omni's wavelength to energy calculator, a tool that instantly calculates a photon's energy from its wavelength. By using Planck's equation, this tool will help you determine a photon's energy in joules (J), electronvolts (eV), or its multiples.
In this article, you'll also find the Planck equation, a step-by-step guide on how to calculate the energy from the wavelength of a photon and how to get this result in joules or electronvolts.
Planck's equation, also known as Planck's relation, is an expression that allows you to define a photon's energy E in terms of its wave properties. Planck's relation states that the energy is directly proportional to the photon's frequency f:
\footnotesize E = h \cdot f
or, recalling the relationship between frequency and wavelength, f = c / \lambda, inversely proportional to its wavelength \lambda:
\footnotesize E = \dfrac {h \cdot c}{\lambda}
where:
E - Photon energy;
h - Planck's constant, 6.6261 × 10⁻³⁴ J⋅s or 4.1357 × 10⁻¹⁵ eV⋅s;
c - Speed of light, 299792458 m/s;
\lambda - Wavelength; and
f - Photon frequency.
Notice that if a photon's frequency f or wavelength \lambda is known, you can directly determine its energy E, since the other elements in the equation are constants.
🙋 The energy of a photon is commonly expressed using the unit electronvolt (eV), but it can also be expressed in other energy units, such as joules (J).
To calculate a photon's energy from its wavelength:
Multiply Planck's constant, 6.6261 × 10⁻³⁴ J⋅s, by the speed of light, 299792458 m/s.
Divide this resulting number by your wavelength in meters.
The result is the photon's energy in joules.
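The three steps above can be sketched in a few lines of Python; the function names are ours, and the constants are the ones quoted in this article:

```python
# Wavelength-to-energy sketch using E = h * c / lambda (constants from this article).
H = 6.6261e-34        # Planck's constant, J*s
C = 299792458         # speed of light, m/s
EV = 1.602176565e-19  # joules per electronvolt

def photon_energy_joules(wavelength_m):
    # Planck's equation with the wavelength in meters gives joules directly.
    return H * C / wavelength_m

def photon_energy_ev(wavelength_m):
    # Divide by the J-per-eV conversion factor to express the result in eV.
    return photon_energy_joules(wavelength_m) / EV

# A 100 nm photon carries roughly 1.99e-18 J, i.e. about 12.4 eV.
print(photon_energy_joules(100e-9), photon_energy_ev(100e-9))
```

The same two functions reproduce the worked examples further down the page (3.5 μm → about 354 meV, 100 nm → about 12.4 eV).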
If you enjoyed using this tool and you'd like to get more information about a photon's energy and its wavelength, we invite you to visit more of our related tools:
Photon energy; and
Energy to wavelength.
How do I find the energy in joules given the wavelength?
To find the energy in joules given the wavelength of a photon:
Use Planck's equation E = h × c / λ and substitute the values of the wavelength (λ), Planck's constant in joules (h = 6.6261 × 10⁻³⁴ J⋅s), and the speed of light (c = 299792458 m/s).
With these units, you'll get an energy result in joules (J).
How do I convert wavelength to energy in eV?
In order to convert a wavelength to energy in electronvolts (eV):
Utilize Planck's energy equation E = h × c / λ.
Substitute the values of the wavelength (λ), Planck's constant (h = 6.6261 × 10⁻³⁴ J⋅s), and speed of light (c = 299792458 m/s).
You'll get a result in joules (J).
To go from joules (J) to electronvolts (eV), use the conversion factor 1 eV = 1.602176565 × 10⁻¹⁹ J.
Finally, to express your result in electronvolts, divide the energy in joules by this conversion factor: E [eV] = E [J] / (1.602176565 × 10⁻¹⁹ J/eV).
How do I calculate the energy of a photon of wavelength 3.5 μm?
To calculate the energy of a photon of wavelength 3.5 μm:
Employ Planck's energy equation, E = h × c / λ.
Use the values of the wavelength λ = 3.5 μm, Planck's constant h = 6.6261 × 10⁻³⁴ J⋅s and speed of light c = 299792458 m/s.
Substitute into Planck's equation: E = (6.6261 × 10⁻³⁴ J⋅s) × (299792458 m/s) / (3.5 × 10⁻⁶ m).
After performing the required operations and converting to electronvolts, you'll get that the energy value is E = 354.242 meV.
What is the energy of a 100 nm photon?
The energy of a 100 nm photon is 12.39847 eV or 1.99 × 10⁻¹⁸ J. To get this result:
Employ Planck's equation, E = h × c / λ:
Where λ = 100 nm = 10⁻⁷ m is the wavelength, h = 6.6261 × 10⁻³⁴ J⋅s Planck's constant, and c = 299792458 m/s the speed of light.
Replace: E = (6.6261 × 10⁻³⁴ J⋅s) × (299792458 m/s) / (10⁻⁷ m) = 1.99 × 10⁻¹⁸ J.
To express the result in electronvolts, apply the conversion factor 1 eV = 1.602176565 × 10⁻¹⁹ J: E = 1.99 × 10⁻¹⁸ J / (1.602176565 × 10⁻¹⁹ J/eV) = 12.39847 eV.
Use the inclined plane calculator to solve exercises about objects sliding down an inclined plane with a friction coefficient.
The Physical Pendulum Calculator helps you compute the period and frequency of a physical pendulum.
|
TrueSkill
Rating system supporting games with more than 2 players
TrueSkill is a skill-based ranking system developed by Microsoft for use with video game matchmaking on Xbox Live. Unlike the popular Elo rating system, which was initially designed for chess, TrueSkill is designed to support games with more than two players. [1] [2] In 2018, Microsoft published details about an extended version of TrueSkill, named TrueSkill2. [3]
A player's skill is represented as a normal distribution 𝒩, characterized by a mean value of μ (mu, representing perceived skill) and a variance of σ (sigma, representing how "unconfident" the system is in the player's μ value). [1] [2] As such, 𝒩(x) can be interpreted as the probability that the player's "true" skill is x. [1] [2]
On Xbox Live, players start with μ = 25 and σ = 25/3. μ always increases after a win and always decreases after a loss. The extent of actual updates depends on each player's σ and on how "surprising" the outcome is to the system. Unbalanced games, for example, result in either negligible updates when the favorite wins, or huge updates when the favorite loses surprisingly.
Factor graphs and expectation propagation via moment matching are used to compute the message passing equations which in turn compute the skills for the players. [1] [2]
Player ranks are displayed as the conservative estimate of their skill, R = μ − 3σ. This is conservative, because the system is 99% sure that the player's skill is actually higher than what is displayed as their rank.

The system can be used with arbitrary scales, but Microsoft uses a scale from 0 to 50 for Xbox Live. Hence, players start with a rank of R = 25 − 3 × (25/3) = 0. This means that a new player's defeat results in a large sigma loss, which partially or completely compensates their mu loss. This explains why people may gain ranks from losses.
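As a quick illustration (plain Python; the helper name is ours, not part of Microsoft's API), the conservative rank is a one-liner:

```python
# Conservative skill estimate used for displayed ranks: R = mu - 3 * sigma.
def conservative_rank(mu, sigma):
    return mu - 3 * sigma

# A new Xbox Live player (mu = 25, sigma = 25/3) starts at a rank of ~0.
print(conservative_rank(25, 25 / 3))
```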
TrueSkill is patented, [4] and the name is trademarked, [5] so it is limited to Microsoft projects and commercial projects that obtain a license to use the algorithm.
^ a b c d Murphy, Kevin (2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN 978-0262018029.
^ a b c d Herbrich, Ralf; Minka, Tom; Graepel, Thore (2007), Schölkopf, B.; Platt, J. C.; Hoffman, T. (eds.), "TrueSkill: A Bayesian Skill Rating System" (PDF), Advances in Neural Information Processing Systems 19, MIT Press, pp. 569–576, retrieved 2018-10-11.
^ Minka, Tom; Cleven, Ryan; Zaykov, Yordan (2018-03-22). "TrueSkill 2: An improved Bayesian skill rating system".
^ "United States Patent Application 20090227313: Determining Relative Skills of Players". USPTO. Retrieved 2014-02-16.
Microsoft Research's TrueSkill homepage
Microsoft Research's TrueSkill paper
In-depth explanation of the mathematical background
|
Reviewed by Wojciech Sas, PhD candidate and Adena Benn
Young's modulus equation
How do I calculate Young's modulus?
Example using the modulus of elasticity formula
How to calculate Young's modulus from a stress-strain curve
With this Young's modulus calculator, you can obtain the modulus of elasticity of a material, given the strain produced by a known tensile/compressive stress.
We will also explain how to automatically calculate Young's modulus from a stress-strain curve with this tool or with a dedicated plotting software.
What the modulus of elasticity is;
How to calculate Young's modulus with the modulus of elasticity formula;
What Young's modulus unit is;
What material has the highest Young's modulus; and more.
Young's modulus, or modulus of elasticity, is a property of a material that tells us how difficult it is to stretch or compress the material in a given axis.
This tells us that the relation between the longitudinal strain and the stress that causes it is linear. Therefore, we can write it as the quotient of both terms.
However, this linear relation stops when we apply enough stress to the material. The region where the stress-strain proportionality remains constant is called the elastic region.
If we remove the stress after stretch/compression within this region, the material will return to its original length.
Because of that, we can only calculate Young's modulus within this elastic region, where we know the relationship between the tensile stress and longitudinal strain.
🙋 If you want to learn how the stretch and compression of the material in a given axis affect its other dimensions, check our Poisson's ratio calculator!
Before jumping to the modulus of elasticity formula, let's define the longitudinal strain \epsilon:
\epsilon =\frac{L - L_{0}}{L_{0}},
where:
L_{0} is the material's initial length; and
L is the length while being under tensile stress.
And the tensile stress \sigma:
\sigma = \frac{F}{A},
where:
F is the force producing the stretching/compression; and
A is the area on which the force is being applied.
Thus, Young's modulus equation results in:
E = \frac{\sigma}{\epsilon}
Since the strain is unitless, the modulus of elasticity will have the same units as the tensile stress (pascals or Pa in SI units).
To calculate the modulus of elasticity E of a material, follow these steps:
Measure its initial length, L₀, without any stress applied to the material.
Measure the cross-section area A.
Apply a known force F on the cross-section area and measure the material's length while this force is being applied. This will be L.
Calculate the strain ϵ felt by the material using the longitudinal strain formula: ϵ = (L - L₀) / L₀.
Calculate the tensile stress you applied using the stress formula: σ = F / A.
Divide the tensile stress by the longitudinal strain to obtain Young's modulus: E = σ / ϵ.
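The six steps above can be sketched in Python; the function and argument names are ours, and the sample numbers anticipate the thin-wire example worked through in the next section:

```python
# Young's modulus from one force/length measurement: E = sigma / epsilon.
def youngs_modulus(force_n, area_m2, length_m, initial_length_m):
    stress = force_n / area_m2                                 # sigma = F / A
    strain = (length_m - initial_length_m) / initial_length_m  # eps = (L - L0) / L0
    return stress / strain                                     # E = sigma / eps

# F = 100 N on a 0.5 mm x 0.4 mm wire stretched from 0.500 m to 0.502 m.
E = youngs_modulus(100, 0.0005 * 0.0004, 0.502, 0.500)
print(E)  # ~1.25e11 Pa, i.e. about 125 GPa
```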
Let's say we have a thin wire of an unknown material, and we want to obtain its modulus of elasticity.
Say we measure the cross-section sides, obtaining an area of A = 0.5 mm × 0.4 mm. Then we measure its length and get L₀ = 0.500 m.
Now, we apply a known force, F = 100 N for example, and measure, again, its length, resulting in L = 0.502 m.
Before computing the stress, we need to convert the area to square meters:
A = 0.5 mm × 0.4 mm = 0.0005 m × 0.0004 m = 2×10⁻⁷ m²
With those values, we are now ready to calculate the stress σ = 100/(0.0005×0.0004) = 5·10⁸ Pa and strain ϵ = (0.502 - 0.500) / 0.500 = 0.004.
Finally, if we divide the stress by the strain according to the Young's modulus equation, we get: E = 5·10⁸ Pa / 0.004 = 1.25·10¹¹ Pa or E = 125 GPa, which is really close to the modulus of elasticity of copper (130 GPa). Hence, our wire is most likely made out of copper!
Our Young's modulus calculator also allows you to calculate Young's modulus from a stress-strain graph!
To plot a stress-strain curve, we first need to know the material's original length, L_{0}. Then, we apply a set of known tensile stresses and write down the new length, L, for each stress value.

Lastly, we calculate the strain (independently for each stress value) using the strain formula and plot every stress-strain value pair, with the stress on the Y axis and the strain on the X axis.
Analysing the stress-strain chart
Stress-strain chart. Black lines represent the end of the elastic region.
As you can see from the chart above, the stress is proportional (linear) to the strain up to a specific value. This is the elastic region, and after we cross this section, the material will not return to its original state in the absence of stress.
Since the modulus of elasticity is the proportion between the tensile stress and the strain, the gradient of this linear region will be numerically equal to the material's Young's modulus.
We can then use a linear regression on the points inside this linear region to quickly obtain Young's modulus from the stress-strain graph.
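The regression step can be sketched in plain Python; the data below are synthetic (generated from an assumed E = 125 GPa), standing in for measured stress-strain pairs inside the elastic region:

```python
# Recover Young's modulus as the least-squares slope of stress vs. strain.
E_TRUE = 125e9                            # assumed modulus for the synthetic data
strain = [i * 0.0002 for i in range(21)]  # strain samples from 0 to 0.004
stress = [E_TRUE * e for e in strain]     # idealized elastic response

n = len(strain)
mean_x = sum(strain) / n
mean_y = sum(stress) / n
# slope = cov(strain, stress) / var(strain)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(strain, stress)) \
    / sum((x - mean_x) ** 2 for x in strain)
print(slope / 1e9)  # ~125 (GPa)
```

With real measurements, only the points inside the identified linear region should be fed to the regression.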
Our Young's modulus calculator automatically identifies this linear region and outputs the modulus of elasticity for you. Give it a try!
Is stiffness the same as Young's modulus?
No, but they are similar. Stiffness is defined as the capacity of a given object to oppose deformation by an external force and is dependent on the physical components and structure of the object. Young's modulus is an intensive property related to the material that the object is made of instead.
Is tensile modulus the same as Young's modulus?
Yes. Tensile modulus is another name for Young's modulus, modulus of elasticity, or elastic modulus of a material. It relates the deformation produced in a material with the stress required to produce it.
What material has the highest Young's modulus?
Diamonds have the highest Young's modulus, or modulus of elasticity, at about 1,200 GPa. Diamonds are the hardest known natural substances, and they are formed under extreme pressures and temperatures inside Earth's mantle.
Is the modulus of elasticity constant?
Yes. Since the modulus of elasticity is an intensive property of a material that relates the tensile stress applied to a material, and the longitudinal deformation it produces, its numerical value is constant. The resulting ratio between these two parameters is the material's modulus of elasticity.
Our wavelength to frequency calculator is here to help you estimate frequency given the wavelength and vice versa.
|
In the previous tutorial, we initialized our model parameters. In this part, we'll implement forward propagation and the cost function. Here are the mathematical formulas of the forward propagation algorithm for one example x(i) from our previous tutorial part:
z^{[1](i)} = W^{[1]} x^{(i)} + b^{[1]}
a^{[1](i)} = \tanh(z^{[1](i)})
z^{[2](i)} = W^{[2]} a^{[1](i)} + b^{[2]}
\hat{y}^{(i)} = a^{[2](i)} = \sigma(z^{[2](i)})
So at first, we'll retrieve each parameter from the dictionary "parameters," and then we'll compute Z[1], A[1], Z[2], and A[2] (the vector of all your predictions on all the examples in the training set). Then we'll store values into the cache, which will be used as an input to the backpropagation function.
Code for our forward propagation function:
Arguments:
X - input data of size (input_layer, number of examples)
parameters - python dictionary containing your parameters (output of initialization function)

Returns:
A2 - the sigmoid output of the second activation
cache - a dictionary containing "Z1", "A1", "Z2" and "A2"

Inside the function we retrieve each parameter from the dictionary "parameters", implement forward propagation to calculate the A2 probabilities, and store the values needed in the backpropagation in "cache".
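The function body itself did not survive formatting here, so below is a minimal NumPy sketch consistent with the docstring and comments above; it assumes the parameter keys "W1", "b1", "W2" and "b2" from the initialization part:

```python
import numpy as np

def forward_propagation(X, parameters):
    # Retrieve each parameter from the dictionary "parameters"
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]

    # Implement forward propagation to calculate A2 (the probabilities)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)                # hidden layer uses tanh
    Z2 = np.dot(W2, A1) + b2
    A2 = 1.0 / (1.0 + np.exp(-Z2))  # output layer uses the sigmoid

    # Values needed in the backpropagation are stored in "cache"
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
```

With zero-initialized weights every entry of Z2 is 0, so each prediction in A2 is exactly 0.5; that makes a handy smoke test.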
Computing Neural network cost:
Now that we have computed A[2] (in the Python variable A2), which contains a[2](i) for every example, we can compute the cost function, which looks like this:
J = -\frac{1}{m}\sum_{i=1}^{m}\left({y}^{(i)}\log\left({a}^{[2](i)}\right) + \left(1-{y}^{(i)}\right)\log\left(1-{a}^{[2](i)}\right)\right)
Code for our cost function:
A2 - The sigmoid output of the second activation, of shape (1, number of examples);
Y - "true" labels vector of shape (1, number of examples);
parameters - python dictionary containing parameters W1, b1, W2, and b2.
def compute_cost(A2, Y, parameters):
    # number of examples
    m = Y.shape[1]
    # Compute the cross-entropy cost
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), (1 - Y))
    cost = -1 / m * np.sum(logprobs)
    # makes sure cost is the dimension we expect, e.g., turns [[51]] into 51
    cost = float(np.squeeze(cost))
    return cost
Up to this point, we have initialized our model's parameters, implemented forward propagation, and computed the loss. A few more functions are left to write, which we'll continue to do in the next tutorial.
|
Programming/Kdb/Labs/Option pricing - Thalesians Wiki
1 Background: the Black-Scholes formulae
2 Task 1: Implementing the standard normal cumulative distribution function
3 Task 2: Implement the Black-Scholes formula for the European call and put
4 Background: a simple Monte Carlo model
5 Task 3: Generating standard Gaussian random variates
6 Task 4: Implement a Monte Carlo pricer
7 Task 5: Pricing a double digital option
Background: the Black-Scholes formulae
Recall the celebrated Black-Scholes equation
{\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+(r-q)S{\frac {\partial V}{\partial S}}-rV=0,}
where:
{\displaystyle t} is a time in years; we generally use {\displaystyle t=0} as now;
{\displaystyle V(S,t)} is the value of the option;
{\displaystyle S(t)} is the price of the underlying asset at time {\displaystyle t};
{\displaystyle \sigma } is the volatility — the standard deviation of the asset's returns;
{\displaystyle r} is the annualized risk-free interest rate, continuously compounded;
{\displaystyle q} is the annualized (continuous) dividend yield.
The solution of this equation depends on the payoff of the option — the terminal condition. In particular, if at the time of expiration,
{\displaystyle T}
, the payoff is given by
{\displaystyle V(S,T)=C(S,T)=:\max\{S-K,0\}}
, in other words, the option is a European call option, then the value of the option at time
{\displaystyle t}
is given by the Black-Scholes formula for the European call:
{\displaystyle C(S_{t},t)=e^{-r\tau }[F_{t}N(d_{1})-KN(d_{2})]}
{\displaystyle \tau =T-t}
is the time to maturity,
{\displaystyle F=S_{t}e^{(r-q)\tau }}
is the forward price, and
{\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+\left(r-q+{\frac {1}{2}}\sigma ^{2}\right)\tau \right]}
{\displaystyle d_{2}=d_{1}-\sigma {\sqrt {\tau }}.}
Here {\displaystyle N(x)} denotes the standard normal cumulative distribution function,
{\displaystyle N(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-z^{2}/2}\,dz.}
Similarly, if the payoff is given by
{\displaystyle V(S,T)=P(S,T)=:\max\{K-S,0\}}
, in other words, the option is a European put option, then the value of the option at time
{\displaystyle t}
is given by the Black-Scholes formula for the European put:
{\displaystyle P(S_{t},t)=e^{-r\tau }[KN(-d_{2})-F_{t}N(-d_{1})].}
We will implement the formulae for the European call and European put in q. However, our first task is to implement {\displaystyle N(x)}.
Task 1: Implementing the standard normal cumulative distribution function
As mentioned in the Handbook of Mathematical Functions, {\displaystyle N(x)} can be approximated by
{\displaystyle \left\{{\begin{array}{ll}1-\phi (x)\left[c_{1}k+c_{2}k^{2}+c_{3}k^{3}+c_{4}k^{4}+c_{5}k^{5}\right],&x\geq 0,\\1-N(-x),&x<0,\end{array}}\right.}
where
{\displaystyle \phi (x)=\exp(-x^{2}/2)/{\sqrt {2\pi }},}
{\displaystyle k=1/(1+0.2316419x),\;c_{1}=0.319381530,\;c_{2}=-0.356563782,\;c_{3}=1.781477937,\;c_{4}=-1.821255978,\;c_{5}=1.330274429.}
Can you implement this function in q?
One (terse) implementation, with pi defined first, would be
pi:acos -1;
normal_cdf:{abs(x>0)-(exp[-.5*x*x]%sqrt 2*pi)*t*.31938153+t*-.356563782+t*1.781477937+t*-1.821255978+1.330274429*t:1%1+.2316419*abs x};
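As a sanity check (not part of the q lab itself), the same Abramowitz-Stegun approximation transcribes directly into Python, where it can be compared against the exact CDF obtained from the error function:

```python
import math

# Abramowitz-Stegun-style polynomial approximation of the standard normal CDF.
C1, C2, C3, C4, C5 = 0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429

def normal_cdf(x):
    if x < 0:
        return 1.0 - normal_cdf(-x)  # N(x) = 1 - N(-x) for x < 0
    k = 1.0 / (1.0 + 0.2316419 * x)
    poly = k * (C1 + k * (C2 + k * (C3 + k * (C4 + k * C5))))  # Horner form
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 1.0 - phi * poly

# Exact CDF via the error function: N(x) = (1 + erf(x / sqrt(2))) / 2.
exact = 0.5 * (1.0 + math.erf(1.96 / math.sqrt(2.0)))
print(normal_cdf(1.96), exact)  # both ~0.9750021
```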
Task 2: Implement the Black-Scholes formula for the European call and put
Equipped with our implementation of normal_cdf, can you implement the Black-Scholes formula for the European call?
compute_d1:{[S;K;r;q;sigma;tau](log[S%K]+((r-q)+.5*sigma*sigma)*tau)%sigma*sqrt tau};
call_price:{[S;K;r;q;sigma;tau]
d1:compute_d1[S;K;r;q;sigma;tau];
d2:d1-sigma*sqrt tau;
F:S*exp tau*r-q;
(exp neg r*tau)*(F*normal_cdf d1)-K*normal_cdf d2};
put_price:{[S;K;r;q;sigma;tau]
d1:compute_d1[S;K;r;q;sigma;tau];
d2:d1-sigma*sqrt tau;
F:S*exp tau*r-q;
(exp neg r*tau)*(K*normal_cdf neg d2)-F*normal_cdf neg d1};
We can test these implementations on a few sets of parameters, for example
call_price[100f;105f;.05;.07;.1;.5]
gives 0.7991363, whereas
put_price[100f;105f;.05;.07;.1;.5]
gives 6.646135.
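For a cross-check outside q, here are the same Black-Scholes formulae in Python, using the exact normal CDF via math.erf instead of the polynomial approximation (our own sketch, not part of the lab):

```python
import math

def black_scholes(S, K, r, q, sigma, tau, call=True):
    # Exact standard normal CDF via the error function.
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    F = S * math.exp((r - q) * tau)  # forward price
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma * sigma) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    df = math.exp(-r * tau)          # discount factor
    if call:
        return df * (F * N(d1) - K * N(d2))
    return df * (K * N(-d2) - F * N(-d1))

# Same parameters as the q tests above.
print(black_scholes(100.0, 105.0, 0.05, 0.07, 0.1, 0.5, call=True))   # ~0.7991
print(black_scholes(100.0, 105.0, 0.05, 0.07, 0.1, 0.5, call=False))  # ~6.6461
```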
Background: a simple Monte Carlo model
The Black-Scholes equation may not have analytic solutions for all derivatives that we are interested in. However, it may still be possible to solve it numerically using Monte Carlo methods.
The model for stock price evolution is
{\displaystyle dS_{t}=\mu S_{t}\,dt+\sigma S_{t}\,dW_{t},}
where {\displaystyle \mu } is the drift. The Black-Scholes pricing theory then tells us that the price of a vanilla option with payoff {\displaystyle f} is
{\displaystyle e^{-r\tau }\mathbb {E} [f(S_{T})],}
where the expectation is taken under the associated risk-neutral process,
{\displaystyle dS_{t}=(r-q)S_{t}\,dt+\sigma S_{t}\,dW_{t}.}
We solve this equation by passing to the log and using Ito's lemma; we compute
{\displaystyle d\ln S_{t}=\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)\,dt+\sigma \,dW_{t}.}
As this process is constant-coefficient, it has the solution
{\displaystyle \ln S_{t}=\ln S_{0}+\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)t+\sigma W_{t}.}
Since {\displaystyle W_{t}} is a Brownian motion, {\displaystyle W_{T}} is distributed as a Gaussian with mean zero and variance {\displaystyle T}, so we can write
{\displaystyle W_{T}={\sqrt {T}}N(0,1),}
whence
{\displaystyle \ln S_{T}=\ln S_{0}+\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)T+\sigma {\sqrt {T}}N(0,1),}
and therefore
{\displaystyle S_{T}=S_{0}\exp \left[\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)T+\sigma {\sqrt {T}}N(0,1)\right].}
The price of a vanilla option is therefore equal to
{\displaystyle e^{-rT}\mathbb {E} \left[f\left(S_{0}\exp \left[\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)T+\sigma {\sqrt {T}}N(0,1)\right]\right)\right].}
The objective of our Monte Carlo simulation is to approximate this expectation by using the law of large numbers, which tells us that if {\displaystyle Y_{j}} are a sequence of identically distributed independent random variables, then with probability 1 the sequence
{\displaystyle {\frac {1}{N}}\sum _{j=1}^{N}Y_{j}}
converges to {\displaystyle \mathbb {E} [Y_{1}]}.
So the algorithm to price a call option by Monte Carlo is clear. We draw a random variable, {\displaystyle x}, from an {\displaystyle N(0,1)} distribution and compute
{\displaystyle f\left(S_{0}\exp \left[\left(r-q-{\frac {1}{2}}\sigma ^{2}\right)T+\sigma {\sqrt {T}}x\right]\right).}
We do this many times and take the average. We then multiply this average by {\displaystyle e^{-rT}} to obtain the price.
Task 3: Generating standard Gaussian random variates
To generate 10 standard uniform random variates in q, we can simply use
10?1f
How can we convert these standard uniform random variates into standard Gaussian random variates?
One approach is to use the Box-Muller transform. Suppose {\displaystyle U_{1}} and {\displaystyle U_{2}} are independent samples chosen from the uniform distribution on the unit interval {\displaystyle [0,1]}. Then
{\displaystyle Z_{0}={\sqrt {-2\ln U_{1}}}\cos(2\pi U_{2})}
and
{\displaystyle Z_{1}={\sqrt {-2\ln U_{1}}}\sin(2\pi U_{2})}
are independent random variables with a standard normal distribution.
Equipped with the Box-Muller transform, implement a function in q that, given a number {\displaystyle x}, will return {\displaystyle x} standard Gaussian random variates.
normal_variates:{$[x=2*n:x div 2;raze sqrt[-2*log n?1f]*/:(sin;cos)@\:(2*pi)*n?1f;-1_.z.s 1+x]}
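For comparison, here is the same Box-Muller recipe in Python (our own sketch; the q one-liner above is what the lab actually uses):

```python
import math
import random

def normal_variates(n):
    # Box-Muller: each pair of uniforms yields two independent N(0, 1) draws.
    out = []
    while len(out) < n:
        u1 = 1.0 - random.random()  # in (0, 1], so log(u1) is safe
        u2 = random.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]  # drop the extra draw when n is odd

random.seed(2024)
print(normal_variates(4))
```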
Task 4: Implement a Monte Carlo pricer
We are now ready to implement a Monte Carlo pricer. We can use it to price more complicated options than the European call and put. However, the European call and put are convenient — the analytic solutions enable us to test our Monte Carlo pricer.
mc:{[S;r;q;sigma;T;path_count;payoff]
root_variance:sqrt variance:sigma*sigma*T;
moved_spot:S*exp (ito_correction:-.5*variance)+(r-q)*T;
exp[neg T*r]*avg payoff moved_spot*exp[root_variance*normal_variates path_count]};
Let's define the European call and put payoffs:
call_payoff:{[K;S]0|S-K}
put_payoff:{[K;S]0|K-S}
We can now test our Monte Carlo pricer on these payoffs and confirm that we get similar answers to those produced by the analytic formulae:
mc[100f;.05;.07;.1;.5;100000;call_payoff[105f]]
mc[100f;.05;.07;.1;.5;100000;put_payoff[105f]]
Task 5: Pricing a double digital option
We shall now use our q Monte Carlo pricer to price a double digital option.
First, we implement the double digital payoff:
double_digital_payoff:{[L;U;S]?[(S<L)|S>U;z;1f+z:0f*S]}
We are now ready to price a double digital option:
mc[100f;.05;.07;.1;.5;100000;double_digital_payoff[105f;110f]]
The result will vary (due to the Monte Carlo randomness), but it will be around 0.1260532.
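The same simulation is easy to replicate in Python as a cross-check of the q pricer (our own sketch, with a hypothetical function name):

```python
import math
import random

def mc_double_digital(S, r, q, sigma, T, path_count, lo, hi, seed=2024):
    # Simulate the risk-neutral terminal spot and average the discounted payoff.
    random.seed(seed)
    drift = (r - q - 0.5 * sigma * sigma) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(path_count):
        s_t = S * math.exp(drift + vol * random.gauss(0.0, 1.0))
        total += 1.0 if lo <= s_t <= hi else 0.0  # double digital payoff
    return math.exp(-r * T) * total / path_count

# With the lab's parameters the estimate lands near 0.126, as above.
print(mc_double_digital(100.0, 0.05, 0.07, 0.1, 0.5, 100000, 105.0, 110.0))
```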
|
The Near Subnormal Weighted Shift and Recursiveness
R. Ben Taher, M. Rachidi, "The Near Subnormal Weighted Shift and Recursiveness", International Journal of Analysis, vol. 2013, Article ID 397262, 4 pages, 2013. https://doi.org/10.1155/2013/397262
R. Ben Taher1 and M. Rachidi1
1Group of DEFA, Department of Mathematics and Informatics, Faculté des Sciences, Université Moulay Ismail, BP 11201, Zitoune, Méknés, Morocco
We aim at studying the near subnormality of unilateral weighted shifts whose moment sequences are defined by linear recursive relations of finite order. Using the basic properties of recursive sequences, we provide a natural necessary condition that ensures the near subnormality of this important class of weighted shifts. Some related new results are established; moreover, applications and consequences are presented; notably, the notion of a near subnormal completion weighted shift is introduced and explored.
Let be a bounded sequence of (or ) and a separable Hilbert space of basis . The unilateral weighted shift with weight sequence is defined by . The moments of the operator are given by
It is well known that the bounded operator can never be normal, and it is hyponormal if and only if (see [1–3]). For a given positive , the space of bounded operators on , an operator , is called a -near subnormal operator if there exists a constant satisfying . A hyponormal operator is called near subnormal if it is -near subnormal, where . Some necessary and sufficient conditions guaranteeing the near subnormality of unilateral weighted shifts have been established in [4, 5].
In this paper, we are interested in studying near subnormal unilateral weighted shifts whose sequence of moments satisfies the following linear recursive relation of order : where are the initial data and are fixed real or complex numbers with . Such sequences are widely studied in the literature, generally called -generalized Fibonacci sequences (see [6] and references therein). A weighted shift such that is a sequence (2) is called a recursive weighted shift. Our motivation for considering sequence (2) comes from the fact that every weighted shift is a norm-limit of recursively generated weighted shifts (for further information see, e.g., [1, 2, 7]). On the other hand, these sequences play a central role in the characterization of subnormality via the truncated moment problem (see [1, 2, 7, 8]). It turns out that, following Curto-Fialkow's approach, the roots of the polynomial , called the characteristic polynomial of (2), play an important role in establishing subnormality properties via Berger's theorem. Moreover, in the construction of the generating measure related to the truncated moment problem and the subnormality of [8, 9], significant obstructions appear when the roots are not all simple; in particular, additional conditions on the initial data are needed to guarantee the existence of the generating measure. On the other hand, it was established in [10] that every can be expressed as a moment of a distribution of discrete support.
In this paper, we give a deductive argument proving that appending a mild hypothesis to the natural necessary condition for the existence of hyponormal operators is sufficient for the unilateral weighted shifts, whose associated moment sequences satisfy (1), to be near subnormal (Section 2). The main tool employed is the Binet formula of sequence (2) (see [6]). We then turn to the near subnormal completion problem (NSCP) of order , developing it in parallel with the subnormal completion problem (see [2]); the case is examined and solved. In the last section, we extend our study to characterize the near subnormality of a recursive weighted shift whose moment sequence satisfies (2); we employ some results on moments of distributions with discrete support (see [8, 10]). The construction of the representing distribution is derived from the Binet formula of sequence (2), without imposing any condition on the initial data . The close relation between the near subnormality and the subnormality of weighted shifts is discussed.
2. Recursive Sequences and Near Subnormality of Unilateral Weighted Shifts
Let be a hyponormal operator and a sequence such that , for all . It was established in [4] that if , then is near subnormal if and only if , where . This characterization of near subnormality for unilateral weighted shifts is the most practical for this section. Suppose that the moment sequence of the weighted shift satisfies (2). Expression (1) shows that , for ; accordingly, we can easily formulate our first result as follows.
Proposition 1. Let be a hyponormal unilateral weighted shift with for all and the moment sequence (1) associated with , satisfying , for all . Then, is near subnormal if and only if the sequence is bounded.
The advantage of this result consists of its application to the former sequence (2), for establishing sufficient conditions on the near subnormality.
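The inline formulas were lost in this copy of the paper, but the mechanics can still be illustrated numerically. A hypothetical sketch, assuming the moments γₙ satisfy an order-2 linear recursion of type (2) and the shift's weights are αₙ = √(γₙ₊₁/γₙ); the concrete recursion below is an illustration chosen so that both characteristic roots are positive:

```python
import math

def recursive_moments(a, init, n):
    # order-r linear recursion: gamma_{k+r} = a[0]*gamma_{k+r-1} + ... + a[r-1]*gamma_k
    g = list(init)
    while len(g) < n:
        g.append(sum(ai * g[-1 - i] for i, ai in enumerate(a)))
    return g

# gamma_{k+2} = 3*gamma_{k+1} - 2*gamma_k with gamma_0 = 2, gamma_1 = 3
# gives gamma_n = 2^n + 1 (characteristic roots 2 and 1, both positive)
g = recursive_moments([3.0, -2.0], [2.0, 3.0], 15)

# weights of the associated unilateral weighted shift
alpha = [math.sqrt(g[k + 1] / g[k]) for k in range(len(g) - 1)]

# the weights are nondecreasing (hyponormality), and the moment ratios
# gamma_{k+1}/gamma_k stay bounded by the largest characteristic root (2),
# which is the kind of boundedness the criterion above asks for
```

Fibonacci-type recursions with a negative root behave differently: there the ratios oscillate around the dominant root, which is why the sign pattern of the roots matters in the theorems that follow.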
Theorem 2. Let be a positive sequence (2) and its characteristic polynomial with . Let be the sequence defined by , satisfying , for all . Then, the hyponormal operator associated with unilateral weighted shift is near subnormal.
Proof. Let be a positive sequence (2) and its characteristic polynomial with . It follows from the Binet formula that , for all (see [6]). A straightforward computation leads to have . By the above Proposition 1, we obtain the desired result.
More generally, suppose that ; it may occur in the Binet formula , where , that there exist such that , for and for . If , for every , then a straightforward computation allows us to establish that . Therefore, under the preceding data, the conclusion of Theorem 2 is still valid. Now, let us study the general situation when there exist with such that ; we can also demonstrate that . To establish this result, a long and direct computation is necessary; and the following lemma will be useful.
Lemma 3. Let be a positive sequence (2) and suppose that its characteristic polynomial is given by , where and . Let be the sequence defined by , satisfying , for all . Then, we have .
The proof of this lemma is technical, and for clarity we omit it. For simplicity, we study the general situation, when the Binet formula is given by , where for , , and with and . Since , for every , Lemma 3 implies the following.
Lemma 4. Let be a positive sequence (2) and suppose that its characteristic polynomial is , where , and , for all . Let be the sequence defined by , satisfying for all . Then, we have .
Lemmas 3 and 4 permit us to formulate the following extension of Theorem 2.
Theorem 5. Let be a positive sequence (2) and its characteristic polynomial with . Let be the sequence defined by , such that for all . Then, the hyponormal operator associated with unilateral weighted shift is near subnormal.
We can now assemble the above results as follows.
Theorem 6. Let be a hyponormal operator such that . Suppose that the sequence of moment is a positive sequence (2). Let be the sequence defined by , satisfying , for all . Then, is near subnormal.
In [8], an important class of subnormal weighted shifts is explored by considering measures with two atoms and . A sequence such that , and is a moment sequence if and only if . Accordingly, we examine the consequences of our approach by providing a connection between hyponormality, near subnormality, and subnormality for this class of weighted shift operators.
Proposition 7. Let be a positive sequence (2) and its characteristic polynomial with . Let be the sequence defined by and the operator associated with unilateral weighted shift . Suppose that , for all . Then, the following five statements are equivalent: (i) ; (ii) is hyponormal; (iii) is -hyponormal for all ; (iv) is near subnormal; (v) is subnormal.
Similar to the subnormal completion problem (see [1, 2]), the NSCP can be formulated as follows: “Let be a finite collection of positive numbers, find necessary and sufficient conditions on to guarantee the existence of a near subnormal weighted shift whose initial weights are given by ". The first obstruction encountered in solving this problem is the natural necessary condition for the existence of the hyponormal completion. That is, once we know that admits a -hyponormal completion which is recursively generated, the condition is not easily satisfied in order to apply Theorem 6. When , the problem becomes highly nontrivial. For , the strategy for solving the NSCP is as follows. Given such that , set , and . First, we use these terms to construct a recursive sequence of order , by setting for all . A straightforward calculation gives , and , , where . Thus, we obtain , for every . Hence the completion of is hyponormal and recursively generated, and the condition is satisfied. It follows from Theorem 6 that is near subnormal. As a matter of fact, we obtain the following result.
Proposition 8. Let be an initial segment of positive weights. Then, has a near subnormal completion.
3. The Moment of Distributions and the Near Subnormality of Unilateral Weighted Shifts
In this section, we are interested in formulating some results on the moments of distributions of discrete support, with a view to characterizing the near subnormality and its close relation with the subnormality of weighted shifts. Let be a separable Hilbert space and its orthonormal basis. Let be a bounded sequence of nonnegative real numbers and the bounded operator defined by . Let be the sequence of moments associated with . By Berger's theorem, is a subnormal operator if and only if there exists a nonnegative Borel measure , which is called a representing measure of , with such that , where . Hence, the moment problem and subnormal weighted shifts are closely related. It was shown in [8] that a sequence (2) admits a generating measure (not necessarily positive) if and only if its characteristic (minimal) polynomial has distinct roots, with , the set of zeros of (for more details, see Proposition 2.4 of [8]). It was pointed out in [9] that if , with for some , then is a moment sequence for some distribution , where is the derivative in the sense of distributions of the Dirac measure (see [10]). To find this distribution (supported by a compact ) interpolating , the Binet formula plays a central role. Indeed, let be a compact subset of (or ), a neighborhood of , and consider the function of class satisfying the following three conditions: (i) for every ; (ii) for every or ; and (iii) for every (or ). It was established in [10] that for a distribution of compact support , the real (or complex) number is independent of the function , for every (see Lemma 1 of [10]). The number is called the moment (or power moment) of order of the distribution .
Let be the Dirac measure at the point and its derivative. It is well known that and define two distributions on the space of polynomial functions on (or ). Moreover, every distribution of discrete support can be written in the form (see, e.g., [11]). We denote by the -vector space of distributions of discrete support, contained in . Hence, the preceding discussion and Theorem 6 allow us to derive the following result.
Theorem 9. Let be a hyponormal operator such that and , for all . Suppose that the sequence of moments is a positive sequence and there exists a distribution such that , for every . Then, is near subnormal.
The natural extension of the -moment problem can be formulated as follows. Let be a closed subset of and a sequence of . The associated distributional -moment problem consists of finding a distribution of support contained in such that
A distribution solution of problem (3) is called a representing distribution of the sequence . Therefore, the equivalent form of Theorem 9 can be expressed as follows.
Proposition 10. Let be a hyponormal operator such that with , for all . Suppose that the sequence of moments is a positive sequence. If the distributional moment problem (3) owns a solution , then is near subnormal.
In light of the Riesz theorem, it is well known that if is a positive distribution then , where is a measure, and we write (see, e.g., [11]). As a consequence, we have the following result.
Theorem 11. Let be a hyponormal operator given by such that its sequence of moments is a positive sequence. Let be the sequence defined by , satisfying , for all . If the moment problem (3) owns a positive representing distribution , then is subnormal.
The authors would like to thank the anonymous referee for his (or her) useful remarks and suggestions that improved this paper. M. Rachidi is an Associate with “Group of DEFA.”
[1] R. E. Curto and L. A. Fialkow, “Recursively generated weighted shifts and the subnormal completion problem,” Integral Equations and Operator Theory, vol. 17, no. 2, pp. 202–246, 1993.
[2] R. E. Curto and L. A. Fialkow, “Recursively generated weighted shifts and the subnormal completion problem. II,” Integral Equations and Operator Theory, vol. 18, no. 4, pp. 369–426, 1994.
[3] P. R. Halmos, A Hilbert Space Problem Book, Van Nostrand, Princeton, NJ, USA, 1967.
[4] W. Gong-Bao and M. Ji-Pu, “Near subnormality of weighted shifts and the answer to the Hilbert space problem 160,” Northeastern Mathematical Journal, vol. 17, no. 1, pp. 45–48, 2001.
[5] W. Gong-Bao and M. Ji-Pu, “Near subnormal operators and subnormal operators,” Science in China A, vol. 33, no. 1, pp. 1–9, 1990.
[6] F. Dubeau, W. Motta, M. Rachidi, and O. Saeki, “On weighted r-generalized Fibonacci sequences,” The Fibonacci Quarterly, vol. 35, no. 2, pp. 102–110, 1997.
[7] L. A. Fialkow, “Positivity, extensions and the truncated moment problem,” in Multivariable Operator Theory, vol. 185 of Contemporary Mathematics, American Mathematical Society, Providence, RI, USA, 1995.
[8] R. Ben Taher, M. Rachidi, and E. H. Zerouali, “Recursive subnormal completion and the truncated moment problem,” The Bulletin of the London Mathematical Society, vol. 33, no. 4, pp. 425–432, 2001.
[9] C. E. Chidume, M. Rachidi, and E. H. Zerouali, “Solving the general truncated moment problem by the r-generalized Fibonacci sequences method,” Journal of Mathematical Analysis and Applications, vol. 256, no. 2, pp. 625–635, 2001.
[10] B. Bernoussi, M. Rachidi, and O. Saeki, “Factorial Binet formula and distributional moment formulation of generalized Fibonacci sequences,” The Fibonacci Quarterly, vol. 42, no. 4, pp. 320–329, 2004.
[11] L. Schwartz, Théorie des Distributions, Hermann, Paris, 1966.
Copyright © 2013 R. Ben Taher and M. Rachidi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
Plot threshold transitions - MATLAB ttplot - MathWorks Deutschland
ttplot
Plot Discrete Threshold Transitions
Visually Compare Transition Functions
Plot Exponential Threshold Transitions
Plot threshold transitions
ttplot(tt,Name,Value)
ttplot(ax,___)
h = ttplot(___)
ttplot plots transition functions of threshold transitions. To evaluate the transition function for observations of the threshold variable, use ttdata.
ttplot(tt) plots transition bands between states of the threshold transitions tt on the y-axis. The plot shows gradient shading of the mixing level in the transition bands.
ttplot(tt,Name,Value) uses additional options specified by one or more name-value arguments. For example, ttplot(tt,Type="graph") specifies plotting a line plot of the transition function at each threshold level on the same axes.
ttplot(ax,___) plots on the axes specified by ax instead of the current axes (gca) using any of the input argument combinations in the previous syntaxes.
h = ttplot(___) returns a handle h to the threshold transitions plot. Use h to modify properties of the plot after you create it.
Create discrete threshold transitions at 0 and 2.
t = [0 2];
tt = threshold(t)
tt is a threshold object. The specified thresholds split the space into three states.
Plot the threshold transitions.
ttplot(tt);
ttplot graphs a gradient plot by default. The y-axis represents the value of the threshold variable zt (currently undefined) and the state space:

The system is in state 1 when zt < 0.
The system is in state 2 when 0 ≤ zt < 2.
The system is in state 3 when zt ≥ 2.

Because the transitions are discrete, ttplot graphs the levels as lines: the regime switches abruptly when zt crosses a threshold. Because zt is undefined, the x-axis is arbitrary. When you specify threshold variable data by using the Data name-value argument, the x-axis is the sample index.
Create normal cdf threshold transitions at levels 0 and 5, with rates 0.5 and 1.5.
t = [0 5];
r = [0.5 1.5];
tt = threshold(t,Type="normal",Rates=r)
To compare the behavior of the transition functions, plot their graphs at the same level.
ttplot(tt,Type="graph",Width=20)
Plot the transition functions at their levels. Evaluate the transition function over a 1-D grid of values by using ttdata, and then plot the results.
lower = tt.Levels(1) - 3/min(tt.Rates);
upper = tt.Levels(end) + 3/min(tt.Rates);
z = lower:0.1:upper;
F = ttdata(tt,z,UseZeroLevels=false);
plot(z,F,LineWidth=2)
xlabel("Level")
legend(["Level 0, Rate 0.5" "Level 5, Rate 1.5"],Location="NorthWest")
Create smooth threshold transitions for the Australian to US dollar exchange rate to model price parity.
Load the Australia/US purchasing power and interest rates data set. Extract the log of the exchange rate EXCH from the table.
EXCH = DataTable.EXCH;
Consider a two-state system where:

State 1 occurs when the Australian dollar buys more than the US dollar (EXCH ≥ 0).
State 2 occurs when the US dollar buys more than the Australian dollar (EXCH < 0).
States are weighted more highly as the system deviates from parity (EXCH = 0).
Create threshold transitions representing the system. To attribute a greater amount of mixing away from the threshold, specify an exponential transition function. Set the transition rate to 2.5.
tt = threshold(0,Type="exponential",Rates=2.5)
Plot the threshold transitions with the threshold data.
ttplot(tt,Data=EXCH);
Try improving the display by experimenting with the transition band width (Width name-value argument).
ttplot(tt,Data=EXCH,Width=2);
Plot the transition function.
ttplot(tt,Type="graph");
tt — Threshold transitions
Threshold transitions, with NumStates states, specified as a threshold object. tt must be fully specified (no NaN entries).
By default, ttplot plots to the current axes (gca).
Example: Type="graph" specifies plotting a line plot of the transition function at each threshold level on the same axes.
Type — Plot type
"gradient" (default) | "graph"
Plot type, specified as a value in this table.
"gradient" — Gradient shading of the mixing level in each transition band.
"graph" — Graphs of transition functions at each level. ttplot plots graphs with levels set to zero.
Example: Type="graph"
Data — Data on threshold variable zt to include in plot
[] (empty array) (default) | numeric vector
Data on a threshold variable zt to include in the plot, specified as a numeric vector.
ttplot plots Data with gradient shading of transition bands (Type="gradient"). If Type="graph", ttplot ignores Data.
Width — Width of transition bands
Width of transition bands, specified as a positive numeric scalar.
For gradient plots (Type="gradient"), ttplot truncates transition function data outside of the bands.
For transition function graphs (Type="graph"), ttplot sets the x-axis limits to [-Width/2 Width/2].
By default, ttplot selects the band width automatically.
Example: Width=10
Plot handle, returned as a graphics object. h contains a unique plot identifier, which you can use to query or modify properties of the plot.
The mixing level is the degree to which adjacent states contribute to a response.
Transition functions F vary between 0 and 1; adjacent states are assigned weights F and 1 – F. The mixing level between adjacent states is the minimum weight min(F, 1 – F).
The following characteristics define the mixing behavior of each transition type:
Discrete transitions have no mixing.
Normal and logistic transitions achieve maximum mixing at threshold levels.
Exponential transitions achieve maximum mixing on either side of threshold levels.
For more details, see threshold.
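The mixing behavior above can be sketched outside MATLAB as well. A minimal Python sketch, assuming the usual smooth-transition parameterizations (normal cdf, logistic, and 1 − exp(−(rate·(z − level))²) for the exponential type; the toolbox's exact scaling may differ):

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def transition(z, level, rate, kind):
    # F in [0, 1]; adjacent states get weights F and 1 - F
    s = rate * (z - level)
    if kind == "normal":
        return normal_cdf(s)
    if kind == "logistic":
        return 1 / (1 + math.exp(-s))
    if kind == "exponential":
        return 1 - math.exp(-s * s)   # symmetric about the level, 0 at the level
    raise ValueError(kind)

def mixing(z, level, rate, kind):
    # mixing level = min(F, 1 - F)
    F = transition(z, level, rate, kind)
    return min(F, 1 - F)

# normal/logistic: F = 1/2 at the level, so mixing peaks there;
# exponential: F = 0 at the level, so mixing peaks on either side of it
```

Evaluating mixing over a grid of z values reproduces the qualitative shapes that the gradient bands shade.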
Use the Width name-value argument to adjust the display of transition function graph (Type="graph") plots with varying rates. In multilevel gradient plots (Type="gradient"), a large enough width results in overlapping transition bands that can misrepresent data. By default, ttplot chooses an appropriate width for displaying all transitions.
|
Science:Math Exam Resources/Courses/MATH103/April 2012/Question 01 (a) - UBC Wiki
MATH103 April 2012
Answer the following multiple choice question. Check your answer very carefully. Your answer will be marked right or wrong (work will not be considered for this problem).
Does the sum
{\displaystyle \sum _{n=1}^{\infty }{\frac {n^{0.1}}{n^{0.99}+n^{1.1}+1}}}
converge or diverge?
A. Converge.
B. Diverge.
Use the p-series test.
If you do not see how to apply the p-series test directly, try to estimate the denominator from above.
The difference between the highest exponents is 1.1 − 0.1 = 1. By the p-series test (with p = 1), this series diverges.
More precisely we can prove
{\displaystyle \sum _{n=1}^{\infty }{\frac {n^{0.1}}{n^{0.99}+n^{1.1}+1}}>\sum _{n=1}^{\infty }{\frac {n^{0.1}}{n^{1.1}+n^{1.1}+n^{1.1}}}=\sum _{n=1}^{\infty }{\frac {n^{0.1}}{3n^{1.1}}}={\frac {1}{3}}\sum _{n=1}^{\infty }{\frac {1}{n}}=\infty }
The last sum diverges by the p-series test with p = 1.
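The squeeze used in the comparison can also be checked numerically; a small Python sketch (partial sums only suggest divergence, of course; the comparison test is the proof):

```python
def term(n):
    # general term of the series
    return n**0.1 / (n**0.99 + n**1.1 + 1)

# For n >= 1, n^0.99 <= n^1.1 and 1 <= n^1.1, so the denominator is at most
# 3*n^1.1 and at least n^1.1; hence 1/(3n) <= term(n) <= 1/n.
sums = [sum(term(n) for n in range(1, N + 1)) for N in (10**2, 10**4, 10**5)]
# The partial sums keep growing like a multiple of ln(N), as for the harmonic series.
```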
Retrieved from "https://wiki.ubc.ca/index.php?title=Science:Math_Exam_Resources/Courses/MATH103/April_2012/Question_01_(a)&oldid=356960"
|
Intensity reflection and transmission coefficients
Specific acoustic impedance of some materials
How to use the acoustic impedance calculator
Our acoustic impedance calculator will help you find the specific acoustic impedance of a material (z) and determine the intensity coefficients of reflection and transmission of a sound wave at the boundary of two materials. The wide range of applications of acoustic impedance (ultrasound, tympanometry, architectural acoustics, soundproofing, aeronautical noise control, and more) makes it an important property.
Keep reading to learn what acoustic impedance is, the terms in the acoustic impedance equation, reflection and transmission of the sound wave, and some materials' acoustic impedance.
If we recreated a sound from the same source in a room filled with air and underwater in a pool, will it behave the same? 🤔 Sound is a wave of pressure that requires a medium to propagate, and the properties of each material affect the speed and intensity of the wave.
The speed of sound in a given medium (gas, liquid or solid) depends primarily upon how compressible it is. In solids and liquids, which are less compressible than gases and with a higher modulus of elasticity, the speed of sound is faster.
The acoustic impedance (Z) is a material's property that affects how sound travels through it. It represents the medium's resistance to the propagation of the sound, affecting its intensity. The higher the value of Z, the greater is the opposition to the transmission of the sound.
The acoustic impedance (Z) is particular for a geometry and a material, given by the wave's acoustic pressure to flow ratio. Similarly, the specific acoustic impedance (z) is an intensive material's property that relates the wave's pressure and the medium's velocity.
For plane waves, the specific acoustic impedance formula is expressed in terms of the density of the medium (\rho) and the speed of the sound wave in that particular material (c):
\quad z=\rho\times c
From our initial question, and using the specific acoustic impedance equation, let's compare the z of water and air at the same temperature:
Water, with a density of 1000 kg/m³ and a speed of sound of 1480 m/s, has a z of 1.48 MRayl.
Air, with a density of 1.225 kg/m³ and a speed of sound of 343 m/s, has a z of 0.0004 MRayl.
Notice that even though sound moves only about 4.3 times faster in water than in air, the impedances differ by a factor of 3700, so for the same acoustic pressure the intensity of the sound wave is 3700 times higher in air than in water!
💡 The specific acoustic impedance unit is often denoted as Pa⋅s/m × 106 or MRayl (106 Rayleigh).
The acoustic impedance helps us understand what happens to the sound when it travels from one medium to another. At the boundary of two materials, a fraction of the sound intensity is reflected, and the rest is transmitted. This is why we can hear music playing in the next room 🎵
To quantify how much is reflected and how much is transmitted, we compare the specific acoustic impedances of the two materials. When a sound wave impacts normally (perpendicular) on a boundary, the intensity reflection (R) and transmission (T) coefficients are expressed in terms of the impedances as:
\quad\begin{aligned} R &= (z_1- z_2)^2/(z_1+z_2)^2 \\\\ T &= 4z_1z_2/(z_1+z_2)^2 \end{aligned}
From these expressions we can see that:

If z_1 = z_2, there's no reflection (R = 0) and all the sound is transmitted (T = 1).
If z_1 and z_2 are similar, there's little reflection and most of the sound is transmitted (T > R).
Otherwise, if z_1 and z_2 are very different, most of the sound is reflected (R > T).
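The two coefficient formulas are easy to check numerically with the z values quoted earlier for water and air. A minimal Python sketch:

```python
def specific_acoustic_impedance(rho, c):
    # z = rho * c, in Pa*s/m (Rayl); divide by 1e6 for MRayl
    return rho * c

def reflection_transmission(z1, z2):
    # intensity coefficients at normal incidence
    R = (z1 - z2) ** 2 / (z1 + z2) ** 2
    T = 4 * z1 * z2 / (z1 + z2) ** 2
    return R, T

z_water = specific_acoustic_impedance(1000.0, 1480.0) / 1e6   # 1.48 MRayl
z_air = specific_acoustic_impedance(1.225, 343.0) / 1e6       # ~0.00042 MRayl

R, T = reflection_transmission(z_air, z_water)
# R is close to 1: almost all sound is reflected at an air/water boundary,
# and R + T = 1 (all intensity is either reflected or transmitted)
```

The near-total reflection at an air/water interface is exactly why ultrasound scanning needs an impedance-matching gel.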
Notice how combining different materials results in different fractions of sound being reflected or transmitted. This effect has a practical application:
For example, in architecture, for soundproofing of buildings, it's common to combine layers of different materials to reduce the intensity of the sound that comes from the streets or rooms within the facility 🏠
In contrast, in ultrasound scanning, the goal is that most of the wave transmits into the body. This is why a gel with an acoustic impedance similar to the skin is used, allowing a small reflection of the wave. This is known as acoustic impedance matching.
In the tables of this section, you can find the specific acoustic impedance of everyday materials, ranging from gases, liquids, solids to body tissues and organs.
Specific acoustic impedance of common gases and liquids:
Spec. acoustic impedance z (MRayl)
Air (20 °C/68 °F)
Seawater (20 °C/68 °F)
Water (0 °C/32 °F)
Water (20 °C/68 °F)
Specific acoustic impedance of solids. Here, you'll find the specific acoustic impedance of steel and other common construction materials:
Specific acoustic impedance of body tissues and organs. In medicine and ultrasounds, these specific acoustic impedances are the most commonly used:
Blood (37 °C/98.6 °F)
Eye aqueous humor
Gel (ultrasound)
Source: Signal Processing / Nanomedicine
The acoustic impedance calculator will help you find the specific acoustic impedance of a given material from a list or for a custom material. This tool also determines the intensity reflection and transmission coefficients:
To find the specific acoustic impedance from a listed material:
From the Find menu, choose: Acoustic impedance of chosen material.
In the Choose material list, select the material that you'd like to know the specific acoustic impedance.
The calculator will display the Specific acoustic impedance (z).
In order to get the specific acoustic impedance of a custom material with the acoustic impedance formula:
From the Find menu, choose: Acoustic impedance of custom material.
Enter values of density and speed of sound of the material.
The calculator will give you the Specific acoustic impedance (z) value.
To calculate the intensity reflection (R) and transmission (T) coefficients:
From the Find menu, choose: Intensity reflection and transmission coef..
Indicate the materials that you'd like to study.
The calculator will show the values for the intensity coefficients R and T.
|
DerivedDistribution - Maple Help
find derived distribution of a distribution
DerivedDistribution(dist)
The DerivedDistribution method returns a Distribution object spanned by all the commutators of vector fields in dist.
This method is of little interest if the input Distribution dist is involutive, since in that case DerivedDistribution(dist) will simply return dist itself.
> with(LieAlgebrasOfVectorFields):
> V1 := VectorField(D[x], space = [x, y, z, w]);
V1 := ∂/∂x
> V2 := VectorField(D[y] + x*D[z] + z*D[w], space = [x, y, z, w]);
V2 := ∂/∂y + x ∂/∂z + z ∂/∂w
> Sigma := Distribution(V1, V2);
Σ := {∂/∂x, (1/z) ∂/∂y + (x/z) ∂/∂z + ∂/∂w}

Construct derived distribution...

> DerivedDistribution(Sigma);
{∂/∂x, ∂/∂z, (1/z) ∂/∂y + ∂/∂w}
> IsInvolutive(Sigma);
false
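The commutator that DerivedDistribution adds in this example ([V1, V2] = ∂/∂z) can be double-checked outside Maple. A hypothetical Python sketch (not part of the Maple package) that computes the Lie bracket by central finite differences:

```python
# vector fields on R^4 with coordinates (x, y, z, w), given by coefficient tuples
def V1(p):
    # d/dx
    return (1.0, 0.0, 0.0, 0.0)

def V2(p):
    # d/dy + x*d/dz + z*d/dw
    x, y, z, w = p
    return (0.0, 1.0, x, z)

def bracket(V, W, p, h=1e-5):
    # [V, W]^i = sum_j (V^j * dW^i/dx_j - W^j * dV^i/dx_j), via central differences
    def jac(F):
        cols = []
        for j in range(4):
            ep, em = list(p), list(p)
            ep[j] += h
            em[j] -= h
            cols.append([(a - b) / (2 * h) for a, b in zip(F(tuple(ep)), F(tuple(em)))])
        return cols  # cols[j][i] approximates dF^i/dx_j
    JV, JW = jac(V), jac(W)
    v, w = V(p), W(p)
    return tuple(
        sum(v[j] * JW[j][i] - w[j] * JV[j][i] for j in range(4))
        for i in range(4)
    )

b = bracket(V1, V2, (0.3, -1.0, 2.0, 0.5))  # approximately (0, 0, 1, 0), i.e. d/dz
```

Since ∂/∂z is not in the span of V1 and V2, the distribution is not involutive, matching the IsInvolutive result above.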
The DerivedDistribution command was introduced in Maple 2020.
|
Exact Solutions of the Kudryashov-Sinelshchikov Equation Using the Multiple (G'/G)-Expansion Method
Yinghui He, Shaolin Li, Yao Long, "Exact Solutions of the Kudryashov-Sinelshchikov Equation Using the Multiple (G'/G)-Expansion Method", Mathematical Problems in Engineering, vol. 2013, Article ID 708049, 7 pages, 2013. https://doi.org/10.1155/2013/708049
Exact traveling wave solutions of the Kudryashov-Sinelshchikov equation are studied by the (G'/G)-expansion method and its variants. The solutions obtained include Jacobi elliptic functions, hyperbolic functions, and trigonometric and rational functions. Many new exact traveling wave solutions can easily be derived from the general results under certain conditions. These methods are effective and simple, and many types of solutions can be obtained at the same time.
The investigation of the traveling wave solutions to nonlinear evolution equations (NLEEs) plays an important role in mathematical physics. A lot of physical models have supported a wide variety of solitary wave solutions. Here, we study the Kudryashov-Sinelshchikov equation. In 2010, Kudryashov and Sinelshchikov [1] obtained a more common nonlinear partial differential equation for describing the pressure waves in a mixture of liquid and gas bubbles, taking into consideration the viscosity of the liquid and the heat transfer, that is, where , are real parameters. In [2], they derived partial cases of nonlinear evolution equations of the fourth order for describing nonlinear pressure waves in a mixture of liquid and gas bubbles. Some exact solutions are found and properties of nonlinear waves in a liquid with gas bubbles are discussed. Equation (1) is called the Kudryashov-Sinelshchikov equation; it is a generalization of the KdV and the BKdV equations and is similar but not identical to the Camassa-Holm (CH) equation; it has been studied by some authors [1, 3–5]. Undistorted waves are governed by a corresponding ordinary differential equation which, for special values of some integration constant, is solved analytically in [1]. Solutions are derived in a more straightforward manner and cast into a simpler form, and some new types of solutions which contain solitary wave and periodic wave solutions are presented in [4]. Ryabov [5] obtained some exact solutions for and using a modification of the truncated expansion method [6, 7]. Li and He discussed the equation by the bifurcation method of dynamical systems and the method of phase portraits analysis [8–10]. In [11], the equation is studied by the Lie symmetry method.
The (G'/G)-expansion method proposed by Wang et al. [12] is one of the most effective direct methods to obtain travelling wave solutions of a large number of nonlinear evolution equations, such as the KdV equation, the mKdV equation, the variant Boussinesq equations, and the Hirota-Satsuma equations. Later, further developed methods, namely the generalized (G'/G)-expansion method, the modified (G'/G)-expansion method, the extended (G'/G)-expansion method, and the improved (G'/G)-expansion method, were proposed in [13–15], respectively. The aim of this paper is to derive more new traveling wave solutions of the Kudryashov-Sinelshchikov equation by the (G'/G)-expansion method and its variants.
The organization of the paper is as follows: in Section 2, a brief account of the (G'/G)-expansion method and its variants, that is, the generalized, improved, and extended versions, for finding the traveling wave solutions of nonlinear equations, is given. In Section 3, we study the Kudryashov-Sinelshchikov equation by these methods. Finally, conclusions are given in Section 4.
2.1. The (G'/G)-Expansion Method
Step 1. Consider a general nonlinear PDE in the form Using , , we can rewrite (2) as the following nonlinear ODE: where the prime denotes differentiation with respect to .
Step 2. Suppose that the solution of ODE (3) can be written as follows: where , are constants to be determined later, is a positive integer, and satisfies the following second-order linear ordinary differential equation: where , are real constants. The general solutions of (5) can be listed as follows.
When , we obtain the hyperbolic function solution of (5)
When , we obtain the trigonometric function solution of (5)
When , we obtain the solution of (5) where and are arbitrary constants.
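In each of the three cases above, with $G''+\lambda G'+\mu G=0$ as the auxiliary equation (5) (the standard choice in the (G'/G)-expansion method, assumed here), the ratio $G'/G$ takes the form:

```latex
\begin{aligned}
\lambda^2-4\mu>0:\quad \frac{G'}{G} &= -\frac{\lambda}{2}
 + \frac{\sqrt{\lambda^2-4\mu}}{2}\,
   \frac{C_1\sinh\bigl(\tfrac{\sqrt{\lambda^2-4\mu}}{2}\xi\bigr)
        +C_2\cosh\bigl(\tfrac{\sqrt{\lambda^2-4\mu}}{2}\xi\bigr)}
        {C_1\cosh\bigl(\tfrac{\sqrt{\lambda^2-4\mu}}{2}\xi\bigr)
        +C_2\sinh\bigl(\tfrac{\sqrt{\lambda^2-4\mu}}{2}\xi\bigr)},\\[4pt]
\lambda^2-4\mu<0:\quad \frac{G'}{G} &= -\frac{\lambda}{2}
 + \frac{\sqrt{4\mu-\lambda^2}}{2}\,
   \frac{-C_1\sin\bigl(\tfrac{\sqrt{4\mu-\lambda^2}}{2}\xi\bigr)
        +C_2\cos\bigl(\tfrac{\sqrt{4\mu-\lambda^2}}{2}\xi\bigr)}
        {C_1\cos\bigl(\tfrac{\sqrt{4\mu-\lambda^2}}{2}\xi\bigr)
        +C_2\sin\bigl(\tfrac{\sqrt{4\mu-\lambda^2}}{2}\xi\bigr)},\\[4pt]
\lambda^2-4\mu=0:\quad \frac{G'}{G} &= -\frac{\lambda}{2}
 + \frac{C_2}{C_1+C_2\,\xi}.
\end{aligned}
```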
Step 4. Substituting (4) along with (5) into (3) and then setting all the coefficients of of the resulting system's numerator to zero yields a set of over-determined nonlinear algebraic equations for and .
Step 5. Assuming that the constants and can be obtained by solving the algebraic equations in Step 4 and then substituting these constants and the known general solutions of (5) into (4), we can obtain the explicit solutions of (2) immediately.
2.2. The Generalized (G'/G)-Expansion Method
In the generalized version [13], one makes an ansatz for the solution as where satisfies the following Jacobi elliptic equation: where , , and are the arbitrary constants to be determined later and . Substituting (9) into (3) and using (10), we obtain a polynomial in , . Equating each coefficient of the resulting polynomial to zero yields a set of algebraic equations for , , , and . Now, substituting and the general solutions of (10) (see Table 1) into (9), we obtain many new traveling wave solutions in terms of Jacobi elliptic functions of the nonlinear PDE (2).
Relations between values of , , and corresponding in (10).
2.3. The Extended (G'/G)-Expansion Method
In the extended form of this method [15], the solution of (3) can be expressed as where , , are constants to be determined later, , is a positive integer, and satisfies the following second order linear ODE: where is a constant. Substituting (11) into (3), using (12), collecting all terms with the same order of and together, and then equating each coefficient of the resulting polynomial to zero yield a set of algebraic equations for , , , . On solving these algebraic equations, we obtain the values of the constants , , , , and then substituting these constants and the known general solutions of (12) into (11), we obtain the explicit solutions of nonlinear differential equation (2).
After the brief description of the methods, we now apply these for solving the Kudryashov-Sinelshchikov equation.
3. The Exact Solutions of the Kudryashov-Sinelshchikov Equation
3.1. Using the (G'/G)-Expansion Method
Let , with , , that is, , where is the wave speed. Under this transformation, (1) can be reduced to the following ordinary differential equation (ODE): Integrating (13) once with respect to and setting the constant of integration to zero, we have
Balancing with in (10) we find that , so is an arbitrary positive integer. For simplicity, we take . Suppose that (14) owns the solutions in the form Substituting (15) along with (5) into (14) and then setting all the coefficients of of the resulting system's numerator to zero yields a set of overdetermined nonlinear algebraic equations about , , , , , . Solving the overdetermined algebraic equations, we can obtain the following results.
Case 1. We have
where , are arbitrary constants and .
where , are arbitrary constants, , .
Using Case 3, (15) and the general solutions of (5), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Subcase 3.1. When , , we obtain the hyperbolic function solutions of (1) as follows: where , , , are arbitrary constants.
It is easy to see that the hyperbolic function solution can be rewritten at and as follows: where, .
Subcase 3.2. When , , the trigonometric function solution of (1) can be rewritten at and as follows: where, , where, .
Subcase 3.3. When , , we obtain the rational function solutions of (1) as follows:
Using other two cases, (15), and the general solutions of (5), we could obtain other exact solutions of (1), and here we do not list all of them.
3.2. Using the Generalized (G'/G)-Expansion Method
Suppose that (13) owns the solutions in the form in this case, satisfies the Jacobi elliptic equation (10).
Substituting (24) along with (10) into (14) and then setting all the coefficients of , of the resulting system's numerator to zero yield a set of overdetermined nonlinear algebraic equations about , , , , , . Solving the overdetermined algebraic equations, we can obtain the following results.
Thus using (24) and (26), the following solutions of (1) are obtained: where, . Now, with the aid of Table 1, we get the following set of exact solutions of (1).
Using Case 1, (24), and the general solutions of (10), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Set 1.1, if , , , , or , then we obtain where, .
When , , solution (28) becomes where, .
Set 1.2, if , , , , then we obtain where, .
When , , solution (33) becomes where, . It is the same as solution (34).
When , , , solution (35) becomes where, .
When , , solution (34) becomes It is the same as solution (30). where, .
Set 1.6 if , , , , then we obtain where, .
Set 1.7 if , , , , solution (39) becomes where, .
Similarly, we can write down the other sets of exact solutions of (1) with the help of Table 1 and Case 2, which are omitted for convenience. Thus, using the generalized form of the (G'/G)-expansion method, we can obtain families of exact traveling wave solutions of (1) in terms of Jacobi elliptic functions. Under some conditions, these solutions change into hyperbolic and trigonometric functional forms.
3.3. Using the Extended (G'/G)-Expansion Method
Suppose that (14) owns the solutions in the form where , , , , are constants to be determined later, , is a positive integer, and satisfies the second-order linear ODE (12).
Substituting (42) along with (12) into (14) and then setting all the coefficients of and of the resulting system to zero yield a set of overdetermined nonlinear algebraic equations about , , , , , , , . Solving the overdetermined algebraic equations, we can obtain the following results.
Subcase 1.1. When , we have the hyperbolic function solution as where, .
In particular, setting , , then (47) can be written as
Setting , , then (47) can be written as
Subcase 1.2. When , we have the trigonometric function solution as
In particular, setting , , then (50) can be written as setting , , then (50) can be written as where, .
Subcase 2.1. When , we have the hyperbolic function solution as
In particular, setting , then (53) can be written as Setting , , then (53) can be written as where, .
Subcase 2.2. When , we have the trigonometric function solution as
Similarly, we can get the other exact solutions of (1) in Cases 3 and 4, which are omitted for convenience.
Remark 1. The validity of all the solutions we obtained has been verified.
Remark 2. The solutions expressed by Jacobi elliptic functions are not given in the related literature. So, the solutions we obtained are new.
Remark 3. The solutions we got are general involving various arbitrary parameters. If we set the parameters to special values, some results in the literature can be obtained.
In the present work, we successfully obtained exact traveling wave solutions of the Kudryashov-Sinelshchikov equation using the (G'/G)-expansion method and its variants. Some of the new exact and explicit analytic solutions obtained are in general forms involving various arbitrary parameters. These solutions are expressed by hyperbolic functions, trigonometric functions, rational functions, and Jacobi elliptic functions. The results of [1–11] have been enriched.
N. A. Kudryashov and D. I. Sinelshchikov, “Nonlinear waves in bubbly liquids with consideration for viscosity and heat transfer,” Physics Letters A, vol. 374, no. 19-20, pp. 2011–2016, 2010. View at: Publisher Site | Google Scholar
N. A. Kudryashov and D. I. Sinelshchikov, “Nonlinear evolution equations for describing waves in bubbly liquids with viscosity and heat transfer consideration,” Applied Mathematics and Computation, vol. 217, no. 1, pp. 414–421, 2010. View at: Publisher Site | Google Scholar | MathSciNet
N. A. Kudryashov and D. I. Sinel'shchikov, “Nonlinear waves in liquids with gas bubbles with account of viscosity and heat transfer,” Fluid Dynamics, vol. 45, no. 1, pp. 96–112, 2010. View at: Publisher Site | Google Scholar
M. Randruut, “On the Kudryashov-Sinelshchikov equation for waves in bubbly liquids,” Physics Letters A, vol. 375, no. 42, pp. 3687–3692, 2011. View at: Google Scholar
P. N. Ryabov, “Exact solutions of the Kudryashov-Sinelshchikov equation,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3585–3590, 2010. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
J. Li, “Exact traveling wave solutions and their bifurcations for the Kudryashov-Sinelshchikov equation,” International Journal of Bifurcation and Chaos, vol. 22, no. 5, Article ID 1250118, 19 pages, 2012. View at: Google Scholar
B. He, “The bifurcation and exact peakons solitary and periodic wave solutions for the Kudryashov-Sinelshchikov equation,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4137–4148, 2012. View at: Publisher Site | Google Scholar
B. He, Q. Meng, J. Zhang, and Y. Long, “Periodic loop solutions and their limit forms for the Kudryashov-Sinelshchikov equation,” Mathematical Problems in Engineering, vol. 2012, Article ID 320163, 10 pages, 2012. View at: Publisher Site | Google Scholar | MathSciNet
M. Nadjafikhah and V. Shirvani-Sh, “Lie symmetry analysis of Kudryashov-Sinelshchikov equation,” Mathematical Problems in Engineering, vol. 2011, Article ID 457697, 9 pages, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
S. Zhang, J. L. Tong, and W. Wang, “A generalized (G'/G)-expansion method for the mKdV equation with variable coefficients,” Physics Letters A, vol. 372, no. 13, pp. 2254–2257, 2008. View at: Publisher Site | Google Scholar
|
{\displaystyle {\frac {\partial x}{\partial t}}=K\,{\frac {\partial ^{2}x}{\partial y^{2}}}}
Daedalus or Science and the Future (1923)
The Causes of Evolution (1932)
Quoted in book prefaces
Quotes about Haldane
|
numtheory(deprecated)/kronecker - Maple Help
kronecker(ineqs, xvars, yvars)
kronecker(form, alpha, err)
list of lists of real numbers or list of lists of p-adic numbers and primes
real number or list of real numbers or list of p-adic numbers
Important: The numtheory package has been deprecated. Use the superseding command NumberTheory[InhomogeneousDiophantine] instead.
{x}_{1},{x}_{2},\dots ,{x}_{n},{y}_{1},\dots ,{y}_{m}
|{a}_{11}{x}_{1}+\dots +{a}_{1n}{x}_{n}-{\mathrm{\alpha }}_{1}-{y}_{1}|\le {\mathrm{err}}_{1}
\mathrm{..............}
|{a}_{\mathrm{m1}}{x}_{1}+\dots +{a}_{\mathrm{mn}}{x}_{n}-{\mathrm{\alpha }}_{m}-{y}_{m}|\le {\mathrm{err}}_{m}
\mathrm{valuep}\left({a}_{11}{x}_{1}+\dots +{a}_{1n}{x}_{n}-{\mathrm{\alpha }}_{1}-{y}_{1},{p}_{1}\right)\le {\mathrm{err}}_{1}
\mathrm{..............}
\mathrm{valuep}\left({a}_{m1}{x}_{1}+\dots +{a}_{mn}{x}_{n}-{\mathrm{\alpha }}_{m}-{y}_{m},{p}_{m}\right)\le {\mathrm{err}}_{m}
[{x}_{1}=\mathrm{...}],\mathrm{...},[{x}_{n}=\mathrm{...}],[{y}_{1}=\mathrm{...}],\mathrm{...},[{y}_{m}=\mathrm{...}]
In the second calling sequence, if the \mathrm{\alpha }'s are all the same, the list [{\mathrm{\alpha }}_{1},\mathrm{...},{\mathrm{\alpha }}_{m}] may be replaced by the single value \mathrm{\alpha }. The err's may be similarly replaced in the real case.
The command with(numtheory,kronecker) allows the use of the abbreviated form of this command.
\mathrm{with}\left(\mathrm{numtheory}\right):
\mathrm{with}\left(\mathrm{padic}\right):
\mathrm{kronecker}\left({\mathrm{abs}\left(-3.7\mathrm{exp}\left(2\right)x+y+{3}^{\frac{1}{3}}z-{5}^{\frac{1}{3}}-v\right)\le {10}^{-3},\mathrm{abs}\left(0.01\mathrm{log}\left(2\right)x+24\mathrm{log}\left(5\right)y-8{3}^{\frac{1}{2}}z-\mathrm{exp}\left(2.5\right)-u\right)\le {10}^{-7}},{x,y,z},{u,v}\right)
[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{8026}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-3174}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{6916}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{v}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-212628}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{u}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-218388}]
x≔'x':
y≔'y':
u≔'u':
v≔'v':
\mathrm{kronecker}\left({\mathrm{valuep}\left(\frac{1}{\mathrm{log}\left(7\right)}x+\mathrm{log}\left(11\right)y-\mathrm{log}\left(7\right)-v,5\right)\le {5}^{-15},\mathrm{valuep}\left(\mathrm{log}\left(3\right)x+\mathrm{exp}\left(7\right)y-\mathrm{log}\left(3\right)-w,7\right)\le {7}^{-12},\mathrm{valuep}\left(\mathrm{log}\left(5\right)x+\mathrm{log}\left(7\right)y-\mathrm{log}\left(5\right)-u,3\right)\le {3}^{-20}},{x,y},{u,v,w}\right)
[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-15516275}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{6404775}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-9747866955}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{u}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-1192024656}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{v}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-27148890349}]
\mathrm{kronecker}\left([[\mathrm{log}\left(2\right),\mathrm{log}\left(5\right),{3}^{\frac{1}{2}}],[\mathrm{exp}\left(2\right),\mathrm{\pi },{3}^{\frac{1}{3}}]],[\mathrm{exp}\left(1\right),{2}^{\frac{1}{2}}],[{10}^{-2},{10}^{-5}]\right)
[\textcolor[rgb]{0,0,1}{-2863}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-10057}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-1494}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{-20761}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-54906}]
\mathrm{kronecker}\left([[[\mathrm{log}\left(3\right),\mathrm{log}\left(7\right),\mathrm{log}\left(13\right)],[\mathrm{sin}\left(5\right),\frac{1}{\mathrm{log}\left(7\right)},\mathrm{exp}\left(5\right)]],[2,5]],[\mathrm{log}\left(5\right),\mathrm{log}\left(11\right)],[10,15]\right)
[\textcolor[rgb]{0,0,1}{-2000}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3125}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2825}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{-800}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-26295606385}]
|
Welcome to another tutorial. In the last tutorial series, we wrote a logistic regression function. Now it's time to build our first neural network, which will have one hidden layer. You will see no big difference between this model and the one we implemented using logistic regression.
Define the model structure (data shape);
Initialize model parameters;
Create a loop to:
Implement forward propagation;
Compute loss;
Implement backward propagation to get the gradients;
We often build helper functions for the first three steps and then merge them into one function we will call nn_model(). Once we have built nn_model() and learned the right parameters, we'll make predictions.
You would agree that we can't get good results with logistic regression: we can train that model as long as we want, but it won't improve. So we're going to train a neural network with a single hidden layer:
The mathematical expression of the forward propagation algorithm for one example x(i):
{z}^{\left[1\right]\left(i\right)}={W}^{\left[1\right]}{x}^{\left(i\right)}+{b}^{\left[1\right]}\phantom{\rule{0ex}{0ex}}{a}^{\left[1\right]\left(i\right)}=\mathrm{tan}h\left({z}^{\left[1\right]\left(i\right)}\right)\phantom{\rule{0ex}{0ex}}{z}^{\left[2\right]\left(i\right)}={W}^{\left[2\right]}{a}^{\left[1\right]\left(i\right)}+{b}^{\left[2\right]}\phantom{\rule{0ex}{0ex}}{\stackrel{^}{y}}^{\left(i\right)}={a}^{\left[2\right]\left(i\right)}=\sigma \left({z}^{\left[2\right]\left(i\right)}\right)\phantom{\rule{0ex}{0ex}}{y}_{\mathrm{prediction}}^{\left(i\right)}=\left\{\begin{array}{ll}1& \mathrm{if}\phantom{\rule{0.3em}{0ex}}{a}^{\left[2\right]\left(i\right)}>0.5\\ 0& \mathrm{otherwise}\end{array}\right.
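These forward-pass equations can be sketched in NumPy (a minimal illustration of the math above, not the final tutorial code; forward_propagation and the params dictionary layout are assumed names):

```python
import numpy as np

def forward_propagation(X, params):
    """One forward pass: tanh hidden layer, sigmoid output (equations above)."""
    W1, b1 = params["W1"], params["b1"]
    W2, b2 = params["W2"], params["b2"]
    Z1 = W1 @ X + b1                  # z[1] = W[1] x + b[1]
    A1 = np.tanh(Z1)                  # a[1] = tanh(z[1])
    Z2 = W2 @ A1 + b2                 # z[2] = W[2] a[1] + b[2]
    A2 = 1.0 / (1.0 + np.exp(-Z2))    # a[2] = sigma(z[2])
    predictions = (A2 > 0.5).astype(int)
    return A2, predictions
```

With all-zero parameters the output is σ(0) = 0.5 for every example, so every prediction is 0.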
J=-\frac{1}{m}\sum _{i=1}^{m}\left({y}^{\left(i\right)}\mathrm{log}\left({a}^{\left[2\right]\left(i\right)}\right)+\left(1-{y}^{\left(i\right)}\right)\mathrm{log}\left(1-{a}^{\left[2\right]\left(i\right)}\right)\right)
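The cost J can be computed directly from the output activations (a small sketch; compute_cost is an assumed name, and A2 and Y are arrays of shape (1, m)):

```python
import numpy as np

def compute_cost(A2, Y):
    """Cross-entropy cost J = -(1/m) * sum(y*log(a2) + (1-y)*log(1-a2))."""
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m
    return float(cost)
```

With confident correct outputs the cost tends to 0, while outputs stuck at 0.5 give log 2 ≈ 0.693.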
Initialize the model's parameters:
Since our model has one hidden layer, we need to initialize parameters for both the hidden and output layers. The weights can't be zeros from the start: if they were, every hidden unit would compute the same value and receive the same update in every training iteration, so the units would never learn different features. Initializing the weights as small random numbers breaks this symmetry. The biases, however, can be zeros:
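A sketch of this initialization (initialize_parameters and the 0.01 scaling factor are assumptions for illustration; n_x, n_h, and n_y are the input, hidden, and output layer sizes):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    """Small random weights break symmetry between hidden units; biases start at zero."""
    W1 = np.random.randn(n_h, n_x) * 0.01   # hidden-layer weights, shape (n_h, n_x)
    b1 = np.zeros((n_h, 1))                 # hidden-layer biases
    W2 = np.random.randn(n_y, n_h) * 0.01   # output-layer weights, shape (n_y, n_h)
    b2 = np.zeros((n_y, 1))                 # output-layer biases
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
```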
Before computing forward propagation, we'll cover tanh function.
Just as in our logistic regression tutorial, where we visualized the sigmoid and sigmoid_derivative functions on data generated from -10 to 10, we'll use the same approach for our tanh visualization. Below is the full code used to plot the tanh and tanh_derivative functions:
import numpy as np
import matplotlib.pyplot as plt

def tanh_derivative(x):
    ds = 1 - np.power(np.tanh(x), 2)
    return ds

# linspace generates an array of 100 evenly spaced values from -10 to 10
values = np.linspace(-10, 10, 100)
# prepare the plot, associate the color r(ed) or b(lue) with each curve
plt.plot(values, np.tanh(values), 'r')
plt.plot(values, tanh_derivative(values), 'b')
# Draw the grid lines in the background
plt.grid()
# Title
plt.title('Tanh and Tanh derivative functions')
plt.show()
As a result, we'll receive the following graph:
The above curve in red is a plot of our tanh function, and the curve in blue is our tanh_derivative function.
In this tutorial part, we initialized our model's parameters and visualized the tanh function. In the next tutorial, we'll start building our forward propagation function. You'll see that these functions are not much different from the ones we used in the logistic regression tutorial series.
|
How do I calculate the surface area of a square pyramid?
What is the formula for surface area of a square pyramid?
How do I calculate the base area of a square pyramid?
How do I find the face area of a square pyramid?
How to find surface area of square pyramid using slant height
How to use the surface area of a square pyramid calculator
Other related square pyramid calculators
Are you in search of a surface area of a square pyramid calculator to estimate all types of areas of the Great Pyramid? If yes, you're in the right place. You can calculate the total surface area, base area, lateral surface area, and face area of any square pyramid using our tool.
We also discuss how to find the surface area of a square pyramid using slant height and base length and how to calculate surface area using slant height and base perimeter. You can also find answers to some interesting numerical problems like the surface area of a pyramid of Giza and the amount of groundsheet required for any tent.
To find the surface area of a square pyramid:
Determine how many faces there are on a square pyramid: there are 4 triangular faces. Sum up their areas.
Find the area of the square base.
Add the total area of the 4 triangular faces to the area of the 1 square base to find the surface area of the square pyramid.
The formula to calculate the surface area of a square pyramid is:
SA = a^2 + 2a\sqrt{\frac{a^2}{4}+h^2}
or, on simplifying:
SA = a^2 + a\sqrt{{a^2}+4h^2}
where: SA is the surface area of the square pyramid; a is the base edge length; and h is the pyramid's height.
The base of a square pyramid is a square. Hence, the base area of the square pyramid of base edge a is a².
First of all, let's explain what the lateral area of a square pyramid is. The lateral surface area or lateral area of a square pyramid is the total area of its 4 triangular faces. We calculate the lateral area as:
LSA = {a}\sqrt{{a^2}+4h^2}
💡 The total surface area of a three-dimensional object is the sum of the base area and the lateral surface area!
SA = BA + LSA
The face area of a square pyramid is the area of one of its four triangular faces. We calculate the area of a triangular face, the face area (FA), using the formula:
FA = \frac{a}{2} \sqrt{\frac{a^2}{4}+h^2}
💡 The lateral surface area is the sum of the areas of four triangular faces:
LSA = 4 × FA
To find the surface area using the slant height, we use the formula:
SA = a² + 2 × a × l
🔎 Proof: The surface area of a square pyramid is the sum of the areas of its square base and its four triangular faces: SA = a² + 4 × FA.
The area of a triangle is half of the product of its base length (a) and height (l):
FA = a×l/2
Therefore, the area of four triangular faces or the lateral surface area of the square pyramid is:
4 × FA = 2 × a × l
Thus, the lateral surface area (LSA) of the square pyramid of slant height l is
LSA = 2 × a × l
and the total surface area is
SA = a² + 2 × a × l
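The formulas above can be bundled into a small helper (a sketch; square_pyramid_areas is an assumed name, with a the base edge and h the height):

```python
import math

def square_pyramid_areas(a, h):
    """All areas of a square pyramid with base edge a and height h."""
    l = math.sqrt((a / 2) ** 2 + h ** 2)  # slant height
    BA = a ** 2                           # base area
    LSA = 2 * a * l                       # lateral surface area: 2*a*l
    FA = LSA / 4                          # one triangular face
    SA = BA + LSA                         # total surface area: a^2 + 2*a*l
    return {"slant": l, "BA": BA, "LSA": LSA, "FA": FA, "SA": SA}
```

Plugging in the Pyramid of Giza figures (a = 756 ft, h = 480 ft) reproduces the worked example below.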
❓ The Pyramid of Giza has a height of 480 feet. If the length of each side of the base is approximately 756 feet, what is its total surface area?
Okay, let's see how to use this total surface area of a square pyramid calculator to solve the problem given above:
Identify the measurements:
Height (h) = 480 ft
Base edge (a) = 756 ft
Switch the units from cm to ft and cm² to ft² (or the desired units for the area) using the drop-down list near each variable in the square-based pyramid surface area calculator.
Enter the values for height (480 ft) and base edge (756 ft).
Ta-da! You got the results in no time! Our tool uses the surface area square pyramid formula to find:
Slant height (l) = 611 ft
Total surface area (SA) = 1,495,322 ft²
Base area (BA) = 571,536 ft²
Lateral surface area (LSA) = 923,786 ft²
Face area (FA) = 230,947 ft²
How many faces are there on a square pyramid?
A square pyramid has 5 faces: 4 equal triangles (side faces) and 1 square (base face).
How do I calculate the surface area of a square pyramid using its base perimeter?
The lateral surface area of a square pyramid is half of the product of its base perimeter and its slant height:
LSA = P × l / 2
where P is the base perimeter and l is the slant height. To get the total surface area, add the base area: SA = a² + P × l / 2.
How many square meters of groundsheet is required for a tent of base 5m and height 1.8m?
You need (5 m)² = 25 m² of groundsheet for the tent of base length 5 m, irrespective of its height. The amount of groundsheet in square meters is the base area of the tent. The base area of a tent is BA = a², where a is the base length.
After learning how to calculate surface area of a square pyramid, you might want to check our other square pyramid calculators:
Face area (FA)
|
Leverage Token Design - IndexZoo
Using the Bear as the example, here is a breakdown of the Bear's inner workings, with an example Bear token with x-2 targeted leverage.
Mandate: These are fixed values given at the token's inception.
Target Leverage: The mandated leverage for this token.
Number of Tokens: The starting number of tokens minted.
Token Start Price: The starting token price.
Maintenance Margin: The maintenance margin the DEX requires for margin trading.
Account Leverage: The leverage used in margin trading, as provided by the DEX.
Current State: These values fluctuate as the market moves.
Indexed Price: The price of the underlying asset being tracked.
Equity: This is the net buy-in from users, hence our total value locked. It changes as users mint/withdraw.
Equity = Σ (BuyInAmount * BuyInPrice)
Token Unit Price: This is the price at which we mint/withdraw, hence the book value of the token.
Token Unit Price = NetAssetValue / NumberOfToken
Net Exposure: This is the Bear's margin trading account's total short exposure.
NetExposure = MarginUsed * AccountLeverageUsed
Effective Leverage: The effective leverage the Bear's account currently has over Equity; this is how leveraged users are.
EffectiveLeverage = Exposure / Equity
Margin Account State: These values fluctuate as the market moves and change as the Bear rebalances.
Average Entry Price: The average price of the short position that the Bear maintains. In other words, it is the weighted sum of each buy-in, hence the average price of our current position. This only changes when there is a new buy-in, either due to redemption or re-leveraging.
AverageEntryPrice = Σ(BuyInAmount * BuyInPrice) / TotalBuyIn
Margin Used: This is the margin used to maintain current position.
MarginUsed = Exposure / AccountLeverageUsed
Cash Left: The cash we still have in the account after the margin used and float P/L. The cash left represents the buying power we have left.
(Cash Left + Float P/L) * Account Leverage = Buying Power
Float P/L: The unrealized floating P/L resulting from the Bear's current exposure. The current exposure is also referred to as the notional position amount that we are maintaining with the margin used.
Float P/L = Exposure * (Average Entry Price - Indexed Price)
Realized P/L: This is the P/L realized whenever we exit our position. It is only registered when an exit trade (a buy trade, for the Bear) is executed; the amount is then added to Cash Left.
Realized P/L = Exited Amount * (Average Entry Price - Indexed Price)
NAV (Net Asset Value): This is the net asset value of the Bear's margin trading account. It's the actual value we have after accounting for float P/L. This is used to calculate the book value of each token.
NAV = Margin Used + Cash Left + Float P/L
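The definitions above can be put together as a sketch (the function name and all numbers are hypothetical; the formulas are transcribed literally from the definitions above, and the sign convention that a profitable Float P/L adds to NAV is an assumption consistent with the Buying Power formula):

```python
def bear_state(equity, exposure, avg_entry, indexed_price,
               account_leverage, num_tokens, cash_left):
    """Sketch of the Bear token accounting, with hypothetical inputs."""
    margin_used = exposure / account_leverage
    # short position: a positive value means profit as the indexed price falls
    float_pl = exposure * (avg_entry - indexed_price)
    # assumed convention: profitable float P/L adds to the account value
    nav = margin_used + cash_left + float_pl
    return {
        "margin_used": margin_used,
        "float_pl": float_pl,
        "nav": nav,
        "token_unit_price": nav / num_tokens,
        "effective_leverage": exposure / equity,
        "buying_power": (cash_left + float_pl) * account_leverage,
    }
```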
|
Reconstruct approximate 3-D radiation pattern from two orthogonal slices - MATLAB patternFromSlices - MathWorks Italia
\theta =90-el
G\left(\varphi ,\theta \right)={G}_{H}\left(\varphi \right)+{G}_{V}\left(\theta \right)
G\left(\varphi ,\theta \right)=\frac{{G}_{H}\left(\varphi \right)•{w}_{1}+{G}_{V}\left(\theta \right)•{w}_{2}}{\sqrt[k]{{w}_{1}^{k}+{w}_{2}^{k}}}
\left\{\begin{array}{l}{w}_{1}\left(\varphi ,\theta \right)=\text{vert}\left(\theta \right)•\left[1-\text{hor}\left(\varphi \right)\right]\\ {w}_{2}\left(\varphi ,\theta \right)=\text{hor}\left(\varphi \right)•\left[1-\text{vert}\left(\theta \right)\right]\end{array}
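The cross-weighted formula above can be illustrated in NumPy (a simplified sketch, not the toolbox implementation; the [0, 1] normalization used to build hor and vert and the eps guard against a vanishing denominator are assumptions):

```python
import numpy as np

def cross_weighted(GH_phi, GV_theta, k=2, eps=1e-12):
    """Approximate a (theta, phi) gain grid from two orthogonal pattern slices."""
    GH = np.asarray(GH_phi, dtype=float)    # horizontal slice, length n_phi
    GV = np.asarray(GV_theta, dtype=float)  # vertical slice, length n_theta
    def norm(g):
        # normalize a slice to [0, 1] to form the weighting functions
        return (g - g.min()) / (g.max() - g.min() + eps)
    hor, vert = norm(GH), norm(GV)
    w1 = vert[:, None] * (1 - hor[None, :])   # w1(phi, theta)
    w2 = hor[None, :] * (1 - vert[:, None])   # w2(phi, theta)
    num = GH[None, :] * w1 + GV[:, None] * w2
    den = (w1 ** k + w2 ** k) ** (1.0 / k) + eps
    return num / den
```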
|
Section 59.36 (03PZ): Inverse image—The Stacks project
Section 59.36: Inverse image (cite)
59.36 Inverse image
In this section we briefly discuss pullback of sheaves on the small étale sites. The precise construction of this is in Topologies, Section 34.4.
Definition 59.36.1. Let $f: X\to Y$ be a morphism of schemes. The inverse image, or pullback1 functors are the functors
\[ f^{-1} = f_{small}^{-1} : \mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale}) \longrightarrow \mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale}) \]
\[ f^{-1} = f_{small}^{-1} : \textit{Ab}(Y_{\acute{e}tale}) \longrightarrow \textit{Ab}(X_{\acute{e}tale}) \]
which are left adjoint to $f_* = f_{small, *}$. Thus $f^{-1}$ is characterized by the fact that
\[ \mathop{\mathrm{Hom}}\nolimits _{{\mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale})}} (f^{-1}\mathcal{G}, \mathcal{F}) = \mathop{\mathrm{Hom}}\nolimits _{\mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale})} (\mathcal{G}, f_*\mathcal{F}) \]
functorially, for any $\mathcal{F} \in \mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale})$ and $\mathcal{G} \in \mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale})$. We similarly have
\[ \mathop{\mathrm{Hom}}\nolimits _{{\textit{Ab}(X_{\acute{e}tale})}} (f^{-1}\mathcal{G}, \mathcal{F}) = \mathop{\mathrm{Hom}}\nolimits _{\textit{Ab}(Y_{\acute{e}tale})} (\mathcal{G}, f_*\mathcal{F}) \]
for $\mathcal{F} \in \textit{Ab}(X_{\acute{e}tale})$ and $\mathcal{G} \in \textit{Ab}(Y_{\acute{e}tale})$.
It is not trivial that such an adjoint exists. On the other hand, it exists in a fairly general setting, see Remark 59.36.3 below. The general machinery shows that $f^{-1}\mathcal{G}$ is the sheaf associated to the presheaf
\begin{equation} \label{etale-cohomology-equation-pullback} U/X \longmapsto \mathop{\mathrm{colim}}\nolimits _{U \to X \times _ Y V} \mathcal{G}(V/Y) \end{equation}
where the colimit is over the category of pairs $(V/Y, \varphi : U/X \to X \times _ Y V/X)$. To see this apply Sites, Proposition 7.14.7 to the functor $u$ of Equation (59.34.0.1) and use the description of $u_ s = (u_ p\ )^\# $ in Sites, Sections 7.13 and 7.5. We will occasionally use this formula for the pullback in order to prove some of its basic properties.
Lemma 59.36.2. Let $f : X \to Y$ be a morphism of schemes.
The functor $f^{-1} : \textit{Ab}(Y_{\acute{e}tale}) \to \textit{Ab}(X_{\acute{e}tale})$ is exact.
The functor $f^{-1} : \mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale}) \to \mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale})$ is exact, i.e., it commutes with finite limits and colimits, see Categories, Definition 4.23.1.
Let $\overline{x} \to X$ be a geometric point. Let $\mathcal{G}$ be a sheaf on $Y_{\acute{e}tale}$. Then there is a canonical identification
\[ (f^{-1}\mathcal{G})_{\overline{x}} = \mathcal{G}_{\overline{y}}. \]
where $\overline{y} = f \circ \overline{x}$.
For any $V \to Y$ étale we have $f^{-1}h_ V = h_{X \times _ Y V}$.
Proof. The exactness of $f^{-1}$ on sheaves of sets is a consequence of Sites, Proposition 7.14.7 applied to our functor $u$ of Equation (59.34.0.1). In fact the exactness of pullback is part of the definition of a morphism of topoi (or sites if you like). Thus we see (2) holds. It implies part (1) since given an abelian sheaf $\mathcal{G}$ on $Y_{\acute{e}tale}$ the underlying sheaf of sets of $f^{-1}\mathcal{G}$ is the same as $f^{-1}$ of the underlying sheaf of sets of $\mathcal{G}$, see Sites, Section 7.44. See also Modules on Sites, Lemma 18.31.2. In the literature (1) and (2) are sometimes deduced from (3) via Theorem 59.29.10.
Part (3) is a general fact about stalks of pullbacks, see Sites, Lemma 7.34.2. We will also prove (3) directly as follows. Note that by Lemma 59.29.9 taking stalks commutes with sheafification. Now recall that $f^{-1}\mathcal{G}$ is the sheaf associated to the presheaf
\[ U \longrightarrow \mathop{\mathrm{colim}}\nolimits _{U \to X \times _ Y V} \mathcal{G}(V), \]
see Equation (59.36.1.1). Thus we have
\begin{align*} (f^{-1}\mathcal{G})_{\overline{x}} & = \mathop{\mathrm{colim}}\nolimits _{(U, \overline{u})} f^{-1}\mathcal{G}(U) \\ & = \mathop{\mathrm{colim}}\nolimits _{(U, \overline{u})} \mathop{\mathrm{colim}}\nolimits _{a : U \to X \times _ Y V} \mathcal{G}(V) \\ & = \mathop{\mathrm{colim}}\nolimits _{(V, \overline{v})} \mathcal{G}(V) \\ & = \mathcal{G}_{\overline{y}} \end{align*}
in the third equality the pair $(U, \overline{u})$ and the map $a : U \to X \times _ Y V$ corresponds to the pair $(V, a \circ \overline{u})$.
Part (4) can be proved in a similar manner by identifying the colimits which define $f^{-1}h_ V$. Or you can use Yoneda's lemma (Categories, Lemma 4.3.5) and the functorial equalities
\[ \mathop{\mathrm{Mor}}\nolimits _{\mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale})}(f^{-1}h_ V, \mathcal{F}) = \mathop{\mathrm{Mor}}\nolimits _{\mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale})}(h_ V, f_*\mathcal{F}) = f_*\mathcal{F}(V) = \mathcal{F}(X \times _ Y V) \]
combined with the fact that representable presheaves are sheaves. See also Sites, Lemma 7.13.5 for a completely general result. $\square$
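As a quick illustration of part (4) (an immediate special case, not spelled out in the original): taking $V = Y$ gives

\[ f^{-1}h_ Y = h_{X \times _ Y Y} = h_ X, \]

i.e. the pullback of the final sheaf on $Y_{\acute{e}tale}$ is the final sheaf on $X_{\acute{e}tale}$, consistent with the fact that $f^{-1}$ is exact and in particular preserves final objects.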
The pair of functors $(f_*, f^{-1})$ define a morphism of small étale topoi
\[ f_{small} : \mathop{\mathit{Sh}}\nolimits (X_{\acute{e}tale}) \longrightarrow \mathop{\mathit{Sh}}\nolimits (Y_{\acute{e}tale}) \]
Many generalities on cohomology of sheaves hold for topoi and morphisms of topoi. We will try to point out when results are general and when they are specific to the étale topos.
Remark 59.36.3. More generally, let $\mathcal{C}_1, \mathcal{C}_2$ be sites, and assume they have final objects and fibre products. Let $u: \mathcal{C}_2 \to \mathcal{C}_1$ be a functor satisfying:
if $\{ V_ i \to V\} $ is a covering of $\mathcal{C}_2$, then $\{ u(V_ i) \to u(V)\} $ is a covering of $\mathcal{C}_1$ (we say that $u$ is continuous), and
$u$ commutes with finite limits (i.e., $u$ is left exact, i.e., $u$ preserves fibre products and final objects).
Then one can define $f_*: \mathop{\mathit{Sh}}\nolimits (\mathcal{C}_1) \to \mathop{\mathit{Sh}}\nolimits (\mathcal{C}_2)$ by $ f_* \mathcal{F}(V) = \mathcal{F}(u(V))$. Moreover, there exists an exact functor $f^{-1}$ which is left adjoint to $f_*$, see Sites, Definition 7.14.1 and Proposition 7.14.7. Warning: It is not enough to require simply that $u$ is continuous and commutes with fibre products in order to get a morphism of topoi.
[1] We use the notation $f^{-1}$ for pullbacks of sheaves of sets or sheaves of abelian groups, and we reserve $f^*$ for pullbacks of sheaves of modules via a morphism of ringed sites/topoi.
|
In our previous tutorial, we wrote forward propagation and cost functions. So next, we need to write a backpropagation function. For this, we'll use cache computed during the forward propagation.
Backpropagation is usually the hardest (most mathematical) part of deep learning. Here, again, is the picture with the six mathematical equations we'll use. We'll use the six equations on the right of this image, since we are building a vectorized implementation.
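In case the image doesn't render, the six vectorized equations (my transcription of the standard two-layer formulas, assuming a tanh hidden layer and a sigmoid output, matching the code below) are:

dZ2 = A2 − Y
dW2 = (1/m) · dZ2 · A1ᵀ
db2 = (1/m) · Σ dZ2   (sum over the m examples)
dZ1 = W2ᵀ · dZ2 ∗ (1 − A1²)   (∗ is elementwise)
dW1 = (1/m) · dZ1 · Xᵀ
db1 = (1/m) · Σ dZ1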
Code for our backward propagation function:
parameters - python dictionary containing our parameters;
cache - a dictionary containing "Z1", "A1", "Z2" and "A2";
X - input data of shape (input, number of examples);
Y - "true" labels vector of shape (1, number of examples).
grads - python dictionary containing our gradients with respect to different parameters.
def backward_propagation(parameters, cache, X, Y):
    m = X.shape[1]
    # Retrieve W2 from the "parameters" dictionary
    W2 = parameters["W2"]
    # Retrieve A1 and A2 from the "cache" dictionary
    A1, A2 = cache["A1"], cache["A2"]
    # Backward propagation: calculate dW1, db1, dW2, db2
    dZ2 = A2 - Y
    dW2 = 1./m*np.dot(dZ2, A1.T)
    db2 = 1./m*np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = 1./m*np.dot(dZ1, X.T)
    db1 = 1./m*np.sum(dZ1, axis=1, keepdims=True)
    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
Code for our update parameters function:
We'll implement the update rule using gradient descent. So we'll have to use (dW1, db1, dW2, db2) to update (W1, b1, W2, b2).
The general gradient descent rule is: θ = θ − α · ∂J/∂θ, where α is the learning rate and θ represents a parameter.
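The rule is easy to sanity-check on a toy objective before wiring it into the network code (this standalone sketch is mine, not part of the tutorial's model):

```python
def gradient_step(theta, grad, learning_rate=0.1):
    # One gradient descent update: theta := theta - alpha * dJ/dtheta
    return theta - learning_rate * grad

# Minimize the toy objective J(theta) = theta**2, whose gradient is 2*theta.
theta = 5.0
for _ in range(100):
    theta = gradient_step(theta, 2 * theta)

print(abs(theta) < 1e-6)  # True: theta has decayed toward the minimizer 0
```

Each step multiplies theta by (1 − 2α), so after 100 steps it has shrunk by a factor of 0.8¹⁰⁰ and is effectively zero.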
grads - python dictionary containing your gradients;
def update_parameters(parameters, grads, learning_rate = 0.1):
    # Retrieve each gradient from the "grads" dictionary
    dW1, db1 = grads["dW1"], grads["db1"]
    dW2, db2 = grads["dW2"], grads["db2"]
    # Retrieve each parameter from the "parameters" dictionary
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    # Gradient descent update for each parameter
    W1 = W1 - dW1 * learning_rate
    b1 = b1 - db1 * learning_rate
    W2 = W2 - dW2 * learning_rate
    b2 = b2 - db2 * learning_rate
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
So up to this point, we already wrote parameter initialization, forward propagation, backward propagation, cost, and parameters update functions. So in the next tutorial, we'll connect all of them into a model, and we'll start training neural networks with one hidden layer.
|
True Strain Calculator
True stress – Engineering stress
True stress curve vs nominal stress curve
Example: Using the true strain calculator
You can use the true strain calculator to convert the engineering stress-strain values into true strain values. Stresses play an essential role in the design of every component of machinery and equipment, be it a pen, airplane, or automobile engine. The birth of each machine or any component of it begins from the drawing board and analyses. The concept of stresses and strains forms the basis of any design and analysis study.
So what is true stress? Read on to find out what engineering stress is and how to convert it to true stress.
Consider a simple tensile test for a material specimen having a cross-sectional area, $A$. When you exert pulling or tensile force on a material, the cross-sectional area decreases, which leads to necking and ultimately failure. Now, the usual definition of stress, $\sigma$, is:

\footnotesize \sigma = \frac{\text{Force}}{\text{Area}}

where $\text{Area}$ is the cross-sectional area of the specimen. However, it could either be the area of the undeformed specimen or of the deformed one. While the equation to obtain stress remains the same, the type of stress depends on the area used in the equation: if the stress is calculated based on the undeformed area, it is known as the nominal or engineering stress, whereas when the reduced or instantaneous area is considered, the stress is known as the true stress.
A material specimen under tension exhibiting necking.
The stress obtained using the above equation with the undeformed area is the nominal or engineering stress. The equation below gives the corresponding strain value:

\footnotesize \epsilon_{nom} = \frac{l - l_0}{l_0} = \frac{l}{l_0} - 1

where:
$l$ – Deformed length of specimen; and
$l_0$ – Undeformed length of specimen.

Adding unity on both sides and taking the natural logarithm gives us the relationship between the true strain, $\epsilon$, and the nominal strain, $\epsilon_{nom}$:

\footnotesize \epsilon = \ln(1 + \epsilon_{nom})
Their stress counterparts — the nominal stress, $\sigma_{nom}$, and the true stress, $\sigma$ — are related to each other using the equation below:

\footnotesize \sigma = \sigma_{nom}(1 + \epsilon_{nom})
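Both conversions are one-liners to script; here is a minimal sketch (the function names are my own, not from the calculator):

```python
import math

def true_strain(eng_strain):
    # epsilon = ln(1 + epsilon_nom)
    return math.log(1 + eng_strain)

def true_stress(eng_stress, eng_strain):
    # sigma = sigma_nom * (1 + epsilon_nom)
    return eng_stress * (1 + eng_strain)

print(round(true_strain(0.1), 5))  # 0.09531
print(true_stress(8, 0.1))         # 8.8 (MPa, for 8 MPa nominal stress)
```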
Now that you know both types of stress, the question arises why do we need them, and where do we use them?
In simple terms, the engineering stress-strain data is based upon an unchanged reference, i.e., the undeformed cross-sectional area. In contrast, the true stress-strain data utilizes the dynamic change in the area to estimate the data. The true stress-strain data is important when studying the material behavior while considering strain hardening behavior, i.e., the stress-strain relationship beyond the yield point. True strain depicts the actual strain on the specimens.
Most CAE tools like ABAQUS and ANSYS accept true stress-strain data to compute the plastic behavior of the material. Therefore, you'll have to convert the engineering or nominal stress-strain curve into a true one before inputting the data into the software. The figure below depicts the difference between the true and engineering stress-strain. You can notice that, unlike the engineering stress curve, the true stress curve goes on increasing.
Stress-strain curve for true and engineering data.
Find the true stress for nominal stress of 8 MPa. Take nominal strain as 0.1.
To find the true stress value:
Enter the engineering strain as 0.1.
The true strain calculator will return 0.09531 as the value of the true strain.
Fill in the engineering or nominal stress as 8 MPa.
Using the true stress calculator:
\qquad \footnotesize \sigma = 8 \times (1 + 0.1) = 8.8 \text{ MPa}
What is the difference between true stress and engineering stress?
The difference between true stress and engineering stress is that the engineering stress is based on an unchanged reference, i.e., the undeformed cross-sectional area, whereas for the calculation of true stress, the instantaneous cross-sectional area is considered. True stress is beneficial for modeling strain hardening behavior.
How do I calculate true strain from engineering strain?
To calculate true strain:
Find the nominal or engineering strain value.
Add 1 to the engineering strain value.
Find the natural logarithm of the sum to obtain the corresponding true strain value. Mathematically, Ɛ = ln(1 + Ɛ_nom).
How do I calculate true stress from engineering stress?
To calculate true stress:
Add 1 to the engineering strain value.
Multiply the sum by the engineering stress value to obtain the corresponding true stress value. Mathematically, σ = σ_nom(1 + Ɛ_nom).
How do I convert the engineering strain 0.05 to true strain?
The true strain counterpart for an engineering strain of 0.05 is 0.04879. Mathematically, the true strain, Ɛ = ln(1 + Ɛ_nom) = ln(1 + 0.05) = 0.04879. Here, Ɛ_nom is the nominal or engineering strain. You can also use this nominal strain value to find the true stress if you know the corresponding nominal stress value.
|
CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS (103258)
Elementary Particle Physics (160242)
Nuclear Physics. B (13633)
PARTICLE MODELS (170121)
[en] The effects of noncommutativity and of the existence of a minimal length on the phase space of a dilatonic cosmological model are investigated. The existence of a minimum length results in the generalized uncertainty principle (GUP), which is a deformed Heisenberg algebra between the minisuperspace variables and their momentum operators. I extend these deformed commutation relations to the corresponding deformed Poisson algebra. For an exponential dilaton potential, the exact classical and quantum solutions in the commutative and noncommutative cases, and some approximate analytical solutions in the case of GUP, are presented and compared.
Physical Review. D, Particles Fields; ISSN 0556-2821; ; CODEN PRVDAQ; v. 77(4); p. 044023-044023.10
ALGEBRA, ANALYTICAL SOLUTION, COMMUTATION RELATIONS, COSMOLOGICAL MODELS, COSMOLOGY, PHASE SPACE, POTENTIALS, UNCERTAINTY PRINCIPLE
MATHEMATICAL MODELS, MATHEMATICAL SOLUTIONS, MATHEMATICAL SPACE, MATHEMATICS, SPACE
The geometry of large causal diamonds and the no-hair property of asymptotically de Sitter space-times
Gibbons, G.W.; Solodukhin, S.N., E-mail: sergey@theorie.physik.uni-muenchen.de
[en] In a previous paper we obtained formulae for the volume of a causal diamond or Alexandrov open set I⁺(p) ∩ I⁻(q) whose duration τ(p,q) is short compared with the curvature scale. In the present Letter we obtain asymptotic formulae valid when the point q recedes to the future boundary I⁺ of an asymptotically de Sitter space-time. The volume (at fixed τ) remains finite in this limit and is given by the universal formula V(τ) = (4/3)π(2 ln cosh(τ/2) − tanh²(τ/2)) plus corrections (given by a series in e^(−t_q)) which begin at order e^(−4t_q). The coefficients of the corrections depend on the geometry of I⁺. This behaviour is shown to be consistent with the no-hair property of cosmological event horizons and with calculations of de Sitter quasi-normal modes in the literature.
S0370-2693(07)00771-X; Available from http://dx.doi.org/10.1016/j.physletb.2007.06.073; Copyright (c) 2007 Elsevier Science B.V., Amsterdam, The Netherlands, All rights reserved.; Country of input: International Atomic Energy Agency (IAEA)
Physics Letters. Section B; ISSN 0370-2693; ; CODEN PYLBAJ; v. 652(2-3); p. 103-110
ASYMPTOTIC SOLUTIONS, COSMOLOGICAL MODELS, DE SITTER SPACE, GEOMETRY, SPACE-TIME
Strict and stable generalization of convex functions and monotone maps
Phan Thanh An, E-mail: thanhan@ictp.trieste.it
[en] A function f is said to be stable with respect to some property (P) if there exists ε > 0 such that f + ξ fulfills (P) for all linear functionals ξ satisfying ‖ξ‖ < ε. S-quasiconvex functions introduced by Phu (Optimization, Vol. 38, 1996) are stable with respect to the properties: 'every lower level set is convex', 'each local minimizer is a global minimizer', and 'each stationary point is a global minimizer'. Correspondingly, we introduced the concept of s-quasimonotone maps and showed that, in the case of a differentiable map, s-quasimonotonicity of the gradient is equivalent to s-quasiconvexity of the underlying function. In this paper, strictly s-quasiconvex functions and strictly s-quasimonotone maps are introduced. In the case of a differentiable map, strict s-quasimonotonicity of the gradient is equivalent to strict s-quasiconvexity of the underlying function, too. An algorithm for finding the supremum of the set of all such ε for a twice continuously differentiable strictly s-quasiconvex function on R¹ is presented. (author)
Jul 2003; 19 p; Also available at: http://www.ictp.trieste.it; 14 refs
IC--2003/59
ALGORITHMS, CONVEX MANIFOLDS, FUNCTIONS, MAPS, MATHEMATICAL LOGIC, RIEMANN SPACE, STABILITY, TOPOLOGY
MATHEMATICAL LOGIC, MATHEMATICAL MANIFOLDS, MATHEMATICAL SPACE, MATHEMATICS, SPACE
Circle problem and the spectrum of the Laplace operator on closed 2-manifolds
Popov, D. A., E-mail: Popov-Kupavna@yandex.ru
[en] In this survey the circle problem is treated in the broad sense, as the problem of the asymptotic properties of the quantity P(x), the remainder term in the circle problem. A survey of recent results in this direction is presented. The main focus is on the behaviour of P(x) on short intervals. Several conjectures on the local behaviour of P(x) which lead to a solution of the circle problem are presented. A strong universality conjecture is stated which links the behaviour of P(x) with the behaviour of the second term in Weyl's formula for the Laplace operator on a closed Riemannian 2-manifold with integrable geodesic flow.
Bibliography: 43 titles. (paper)
Available from http://dx.doi.org/10.1070/RM9911; Country of input: International Atomic Energy Agency (IAEA)
Russian Mathematical Surveys; ISSN 0036-0279; ; CODEN RMSUAF; v. 74(5); p. 909-925
ASYMPTOTIC SOLUTIONS, CHAOS THEORY, GEODESICS, LAPLACIAN, QUANTUM MECHANICS, RIEMANN SPACE
MATHEMATICAL OPERATORS, MATHEMATICAL SOLUTIONS, MATHEMATICAL SPACE, MATHEMATICS, MECHANICS, SPACE
The geometrical properties of Riemannian superspaces, exact solutions and the mechanism of localization
Cirilo-Lombardo, Diego Julio, E-mail: diego@thsun1.jinr.ru
[en] The geometrical meaning of a particularly simple metric in the superspace is elucidated and the possible connection with mechanisms of topological origin in high energy physics is analyzed and discussed. New possible mechanism of the localization of the fields in a particular sector of the supermanifold is proposed and the similarity and differences with a 5-dimensional warped model is shown. The description and the analysis of some interesting aspects of the simplest Riemannian superspaces are presented from the point of view of the possible vacuum solutions
S0370-2693(08)00175-5; Available from http://dx.doi.org/10.1016/j.physletb.2008.02.003; Copyright (c) 2008 Elsevier Science B.V., Amsterdam, The Netherlands, All rights reserved.; Country of input: International Atomic Energy Agency (IAEA)
COSMOLOGY, EXACT SOLUTIONS, GEOMETRY, MANY-DIMENSIONAL CALCULATIONS, METRICS, RIEMANN SPACE, SMOOTH MANIFOLDS, TOPOLOGY, VACUUM STATES
MATHEMATICAL MANIFOLDS, MATHEMATICAL SOLUTIONS, MATHEMATICAL SPACE, MATHEMATICS, SPACE
[en] We give the exact solution of orbit dependent nuclear pairing problem between two nondegenerate energy levels using the Bethe ansatz technique. Our solution reduces to previously solved cases in the appropriate limits including Richardson's treatment of reduced pairing in terms of rational Gaudin algebra operators
ALGEBRA, CALCULATION METHODS, ENERGY LEVELS, EXACT SOLUTIONS, NUCLEAR MODELS, NUCLEAR STRUCTURE, ORBITS, PAIRING INTERACTIONS, QUANTUM OPERATORS
INTERACTIONS, MATHEMATICAL MODELS, MATHEMATICAL OPERATORS, MATHEMATICAL SOLUTIONS, MATHEMATICS
Feature reconstruction in inverse problems
Louis, Alfred K, E-mail: louis@num.uni-sb.de
[en] A strategy for the derivation of fast, accurate and stable algorithms for combining the reconstruction and the feature extraction step for solving linear ill-posed problems in just one method is presented. The precomputation of special reconstruction kernels with optimized parameters for the combination of the two tasks allows for fast implementations and better results than separate realizations. The concept of order optimality is generalized to the solution of feature reconstruction and to Banach spaces in order to find criteria for the selection of suitable mollifiers. Results from real data in different tomographic modalities and scanning geometries are presented with the direct calculation of derivatives, as in Canny edge detectors, and the Laplacian of the solution used in many segmentation algorithms. The method works also when the searched-for solution is not smooth or when the data are very noisy. This shows the versatility of the approach
Inverse Problems; ISSN 0266-5611; ; CODEN INVPET; v. 27(6); [21 p.]
ALGORITHMS, BANACH SPACE, CALCULATION METHODS, GEOMETRY, IMPLEMENTATION, KERNELS, LAPLACIAN, MATHEMATICAL SOLUTIONS
MATHEMATICAL LOGIC, MATHEMATICAL OPERATORS, MATHEMATICAL SPACE, MATHEMATICS, SPACE
Acharya, B.S.; Benini, F.; Valandro, R., E-mail: bacharya@ictp.it, E-mail: benini@sissa.it, E-mail: valandro@sissa.it
[en] Type IIA flux compactifications with O6-planes have been argued from a four dimensional effective theory point of view to admit stable, moduli free solutions. We discuss in detail the ten dimensional description of such vacua and present exact solutions in the case when the O6-charge is smoothly distributed. In the localised case, the solution is a half-flat, non-Calabi-Yau metric. Finally, using the ten dimensional description we show how all moduli are stabilised and reproduce precisely the results of de Wolfe et al. (author)
Dec 2006; 21 p; SISSA--45/2006/EP; Also available at: http://www.ictp.it; 23 refs
IC--2006/067
ALGORITHMS, COMPACTIFICATION, EXACT SOLUTIONS, FOUR-DIMENSIONAL CALCULATIONS, TOPOLOGY
MATHEMATICAL LOGIC, MATHEMATICAL SOLUTIONS, MATHEMATICS
Degenerate Hermitian Manifolds
Erkekoglu, Fazilet, E-mail: fazilet@hacetepe.edu.tr
[en] The geometry of almost complex manifolds with degenerate indefinite Hermitian metrics is studied
Copyright (c) 2006 Springer Science+Business Media, Inc.; Country of input: International Atomic Energy Agency (IAEA)
Mathematical Physics, Analysis and Geometry; ISSN 1385-0172; ; v. 8(4); p. 361-388
COMPLEX MANIFOLDS, GEOMETRY, HERMITIAN OPERATORS, METRICS, RIEMANN SPACE, TENSORS
MATHEMATICAL MANIFOLDS, MATHEMATICAL OPERATORS, MATHEMATICAL SPACE, MATHEMATICS, SPACE
Asymptotic properties of the Dedekind zeta-function in families of number fields
Zykin, Aleksei I
Available from http://dx.doi.org/10.1070/RM2009v064n06ABEH004657; Abstract only; Country of input: International Atomic Energy Agency (IAEA)
Russian Mathematical Surveys; ISSN 0036-0279; ; CODEN RMSUAF; v. 64(6); p. 1145-1147
ASYMPTOTIC SOLUTIONS, FUNCTIONS, FUZZY LOGIC, INFORMATION THEORY, SET THEORY
|
Spirograph set (early 1980s UK version)
In 1827, Greek-born English architect and engineer Peter Hubert Desvignes developed and advertised a "Speiragraph", a device to create elaborate spiral drawings. A man named J. Jopling soon claimed to have previously invented similar methods.[1] When working in Vienna between 1845 and 1848, Desvignes constructed a version of the machine that would help prevent banknote forgeries,[2] as any of the nearly endless variations of roulette patterns that it could produce were extremely difficult to reverse engineer. The mathematician Bruno Abakanowicz invented a new Spirograph device between 1881 and 1900. It was used for calculating an area delimited by curves.[3]
Drawing toys based on gears have been around since at least 1908, when The Marvelous Wondergraph was advertised in the Sears catalog.[4][5] An article describing how to make a Wondergraph drawing machine appeared in the Boys Mechanic publication in 1913.[6]
The definitive Spirograph toy was developed by the British engineer Denys Fisher between 1962 and 1964 by creating drawing machines with Meccano pieces. Fisher exhibited his spirograph at the 1965 Nuremberg International Toy Fair. It was subsequently produced by his company. US distribution rights were acquired by Kenner, Inc., which introduced it to the United States market in 1966 and promoted it as a creative children's toy. Kenner later introduced Spirotot, Magnetic Spirograph, Spiroman, and various refill sets.[7]
In 2013 the Spirograph brand was re-launched worldwide, with the original gears and wheels, by Kahootz Toys. The modern products use removable putty in place of pins to hold the stationary pieces in place. The Spirograph was Toy of the Year in 1967, and Toy of the Year finalist, in two categories, in 2014.
Animation of a Spirograph
Several Spirograph designs drawn with a Spirograph set using multiple different colored pens
The original US-released Spirograph consisted of two differently sized plastic rings (or stators), with gear teeth on both the inside and outside of their circumferences. Once either of these rings were held in place (either by pins, with an adhesive, or by hand) any of several provided gearwheels (or rotors)—each having holes for a ballpoint pen—could be spun around the ring to draw geometric shapes. Later, the Super-Spirograph introduced additional shapes such as rings, triangles, and straight bars. All edges of each piece have teeth to engage any other piece; smaller gears fit inside the larger rings, but they also can rotate along the rings' outside edge or even around each other. Gears can be combined in many different arrangements. Sets often included variously colored pens, which could enhance a design by switching colors, as seen in the examples shown here.
Beginners often slip the gears, especially when using the holes near the edge of the larger wheels, resulting in broken or irregular lines. Experienced users may learn to move several pieces in relation to each other (say, the triangle around the ring, with a circle "climbing" from the ring onto the triangle).
Mathematical basis
Consider a fixed outer circle C_o of radius R centered at the origin. A smaller inner circle C_i of radius r < R is rolling inside C_o and is continuously tangent to it. C_i will be assumed never to slip on C_o (in a real Spirograph, teeth on both circles prevent such slippage). Now assume that a point A lying somewhere inside C_i is located a distance ρ < r from C_i's center. This point A corresponds to the pen-hole in the inner disk of a real Spirograph. Without loss of generality it can be assumed that at the initial moment the point A was on the X axis. In order to find the trajectory created by a Spirograph, follow point A as the inner circle is set in motion.
Now mark two points: T on C_o and B on C_i. The point T always indicates the location where the two circles are tangent. The point B, however, will travel on C_i, and its initial location coincides with T. After setting C_i in motion counterclockwise around C_o, C_i has a clockwise rotation with respect to its center. The distance that point B traverses on C_i is the same as that traversed by the tangent point T on C_o, due to the absence of slipping.
Now define the new (relative) system of coordinates (X′, Y′) with its origin at the center of C_i and its axes parallel to X and Y. Let the parameter t be the angle by which the tangent point T rotates on C_o, and let t′ be the angle by which C_i rotates (i.e. by which B travels) in the relative system of coordinates. Because there is no slipping, the distances traveled by B and T along their respective circles must be the same, therefore

tR = (t − t′)r,

or equivalently

t′ = −((R − r)/r)t.

It is common to assume that a counterclockwise motion corresponds to a positive change of angle and a clockwise one to a negative change of angle. The minus sign in the above formula (t′ < 0) accommodates this convention.
Let (x_c, y_c) be the coordinates of the center of C_i in the absolute system of coordinates. Then R − r represents the radius of the trajectory of the center of C_i, which (again in the absolute system) undergoes circular motion thus:

x_c = (R − r) cos t,
y_c = (R − r) sin t.

As defined above, t′ is the angle of rotation in the new relative system. Because point A obeys the usual law of circular motion, its coordinates in the new relative coordinate system (x′, y′) are

x′ = ρ cos t′,
y′ = ρ sin t′.

In order to obtain the trajectory of A in the absolute (old) system of coordinates, add these two motions:

x = x_c + x′ = (R − r) cos t + ρ cos t′,
y = y_c + y′ = (R − r) sin t + ρ sin t′,

where ρ is defined above.
Now, use the relation between t and t′ as derived above to obtain equations describing the trajectory of point A in terms of a single parameter t:

x = x_c + x′ = (R − r) cos t + ρ cos(((R − r)/r)t),
y = y_c + y′ = (R − r) sin t − ρ sin(((R − r)/r)t)

(using the fact that the function sin is odd).
It is convenient to represent the equation above in terms of the radius R of C_o and two dimensionless parameters describing the structure of the Spirograph. Namely, let

l = ρ/r and k = r/R.

The parameter 0 ≤ l ≤ 1 represents how far the point A is located from the center of C_i, while 0 ≤ k ≤ 1 represents how big the inner circle C_i is with respect to the outer one C_o. It is now observed that

ρ/R = lk,

and therefore the trajectory equations take the form

x(t) = R[(1 − k) cos t + lk cos(((1 − k)/k)t)],
y(t) = R[(1 − k) sin t − lk sin(((1 − k)/k)t)].
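The parametric equations can be sampled directly for plotting; a minimal sketch (the function name and parameter values are mine, chosen for illustration):

```python
import math

def spirograph_point(t, R=100.0, k=0.6, l=0.8):
    # x(t) = R[(1 - k) cos t + l k cos(((1 - k)/k) t)]
    # y(t) = R[(1 - k) sin t - l k sin(((1 - k)/k) t)]
    x = R * ((1 - k) * math.cos(t) + l * k * math.cos((1 - k) / k * t))
    y = R * ((1 - k) * math.sin(t) - l * k * math.sin((1 - k) / k * t))
    return x, y

# At t = 0 the pen hole starts on the X axis at distance R(1 - k + lk)
# from the origin: approximately (88.0, 0.0) for these parameters.
x0, y0 = spirograph_point(0.0)

# Sample one full revolution of the tangent point; feed the points to any plotter.
points = [spirograph_point(2 * math.pi * n / 1000) for n in range(1001)]
```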
The parameter R is a scaling parameter and does not affect the structure of the Spirograph; different values of R would yield similar Spirograph drawings.
The two extreme cases k = 0 and k = 1 result in degenerate trajectories of the Spirograph. In the first extreme case, when k = 0, we have a simple circle of radius R, corresponding to the case where C_i has been shrunk into a point. (Division by k = 0 in the formula is not a problem, since both sin and cos are bounded functions.)
The other extreme case, k = 1, corresponds to the inner circle C_i's radius r matching the radius R of the outer circle C_o, i.e. r = R. In this case the trajectory is a single point: intuitively, C_i is too large to roll inside the same-sized C_o without slipping.

If l = 1, then the point A is on the circumference of C_i. In this case the trajectories are called hypocycloids, and the equations above reduce to those for a hypocycloid.
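A quick numerical check of a classic special case (my own sketch, not from the article): with l = 1 and k = 1/2 the hypocycloid degenerates to a straight-line diameter (the Tusi couple), so the y-coordinate vanishes for every t:

```python
import math

def y(t, R=1.0, k=0.5, l=1.0):
    # y(t) = R[(1 - k) sin t - l k sin(((1 - k)/k) t)]
    return R * ((1 - k) * math.sin(t) - l * k * math.sin((1 - k) / k * t))

# With l = 1 and k = 1/2, (1 - k)/k = 1 and l k = 1 - k = 1/2, so the two
# sine terms cancel exactly and the pen traces the segment y = 0.
print(all(abs(y(0.1 * n)) < 1e-12 for n in range(100)))  # True
```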
Spirograph Nebula, a planetary nebula that displays delicate, spirograph-like filigree.
|
Effects in everyday life
Surfactants
Basic physics
Two definitions
Surface tension, represented by the symbol γ, is defined as the force along a line of unit length, where the force is parallel to the surface but perpendicular to the line. One way to picture this is to imagine a flat soap film bounded on one side by a taut thread of length L. The thread will be pulled toward the interior of the film by a force equal to 2γL (the factor of 2 is because the soap film has two sides, hence two surfaces).[4] Surface tension is therefore measured in force per unit length. Its SI unit is the newton per meter, but the cgs unit of dyne per cm is also used.[5] One dyn/cm corresponds to 0.001 N/m.
An equivalent definition, one that is useful in thermodynamics, is work done per unit area. As such, in order to increase the surface area of a mass of liquid by an amount δA, a quantity of work γδA is needed.[4] This work is stored as potential energy. Consequently, surface tension can also be measured in the SI system as joules per square meter and in the cgs system as ergs per cm². Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume.
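Since γ is work per unit area, the energy cost of enlarging a surface is simply γ·δA. A quick numerical check, assuming a typical handbook value for water near 20 °C (an illustrative assumption, not from the text):

```python
gamma_water = 0.0728  # N/m (equivalently J/m^2), surface tension of water near 20 °C
dA = 1.0e-4           # m^2, increase in surface area

work = gamma_water * dA           # joules stored as surface potential energy
work_film = 2 * gamma_water * dA  # a soap-like film has two surfaces, hence the factor of 2
```

The factor of 2 for a film mirrors the 2γL force on the bounding thread described above.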
Surface curvature and pressure
\Delta p\ =\ \gamma \left({\frac {1}{R_{x}}}+{\frac {1}{R_{y}}}\right)

where Δp is the pressure difference across the curved surface, γ is the surface tension, and R_x and R_y are the radii of curvature in each of the axes that are parallel to the surface.
Liquid surface
Contact angles
Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. The diagram to the right shows two examples. Tension forces are shown for the liquid-air interface, the liquid-solid interface, and the solid-air interface. The example on the left is where the difference between the liquid-solid and solid-air surface tension, γ_ls − γ_sa, is less than the liquid-air surface tension, γ_la, but is still positive, that is

\gamma _{\mathrm {la} }\ >\ \gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }\ >\ 0
In the diagram, both the vertical and horizontal forces must cancel exactly at the contact point (this is equilibrium). The horizontal component of f_la is canceled by the adhesive force, f_A:

f_{\mathrm {A} }\ =\ f_{\mathrm {la} }\sin \theta
The more important balance of forces, though, is in the vertical direction. The vertical component of f_la must exactly cancel the difference of the forces along the solid surface, f_ls − f_sa:

f_{\mathrm {ls} }-f_{\mathrm {sa} }\ =\ -f_{\mathrm {la} }\cos \theta

Since the forces are in direct proportion to their respective surface tensions, we also have

\gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }\ =\ -\gamma _{\mathrm {la} }\cos \theta

where γ_ls is the liquid-solid surface tension, γ_la is the liquid-air surface tension, γ_sa is the solid-air surface tension, and θ is the contact angle.
This means that although the difference between the liquid-solid and solid-air surface tension, γ_ls − γ_sa, is difficult to measure directly, it can be inferred from the liquid-air surface tension, γ_la, and the equilibrium contact angle, θ, which is a function of the easily measurable advancing and receding contact angles (see main article contact angle). The same relationship holds in the example on the right of the diagram, where the difference is negative, that is

\gamma _{\mathrm {la} }\ >\ 0\ >\ \gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }
Special contact angles
{\displaystyle \gamma _{\mathrm {la} }\ =\ \gamma _{\mathrm {ls} }-\gamma _{\mathrm {sa} }\ >\ 0\qquad \theta \ =\ 180^{\circ }}
Methods of measurement
Liquid in a vertical tube
h\ =\ {\frac {2\gamma _{\mathrm {la} }\cos \theta }{\rho gr}}

where h is the height the liquid is lifted, γ_la is the liquid-air surface tension, ρ is the density of the liquid, r is the radius of the capillary, g is the acceleration due to gravity, and θ is the angle of contact described above. If θ is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted.
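The capillary-rise formula is easy to evaluate numerically. A sketch, assuming water in a clean glass tube (γ_la ≈ 0.0728 N/m, θ ≈ 0°; these values are illustrative assumptions, not from the text):

```python
import math

def capillary_rise(gamma_la, rho, r, theta_deg=0.0, g=9.81):
    """Height h = 2*gamma_la*cos(theta) / (rho*g*r) of liquid in a vertical tube."""
    return 2 * gamma_la * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water in a glass tube of 1 mm radius rises about 1.5 cm:
h = capillary_rise(gamma_la=0.0728, rho=1000.0, r=1.0e-3)
```

For a contact angle above 90° (as with mercury on glass), cos θ is negative and the formula correctly predicts a depression rather than a rise.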
Puddles on a surface
Profile curve of the edge of a puddle where the contact angle is 180°. The curve is given by the formula:[6]

x-x_{0}\ =\ {\frac {1}{2}}H\cosh ^{-1}\left({\frac {H}{h}}\right)-H{\sqrt {1-{\frac {h^{2}}{H^{2}}}}}

where

H\ =\ 2{\sqrt {\frac {\gamma }{g\rho }}}

The depth of a puddle of a completely non-wetting liquid on a horizontal surface is

h\ =\ 2{\sqrt {\frac {\gamma }{g\rho }}}

where h is the depth of the puddle, γ is the surface tension of the liquid, g is the acceleration due to gravity, and ρ is the density of the liquid. For a liquid that partially wets the surface with contact angle θ, the puddle depth is

h\ =\ {\sqrt {\frac {2\gamma _{\mathrm {la} }\left(1-\cos \theta \right)}{g\rho }}}.
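The two puddle-depth formulas can be checked against each other, since at θ = 180° the general expression must reduce to the non-wetting one. A sketch with an illustrative handbook value for water:

```python
import math

def puddle_depth_nonwetting(gamma, rho, g=9.81):
    """Depth of a puddle of a completely non-wetting liquid (contact angle 180 degrees)."""
    return 2 * math.sqrt(gamma / (g * rho))

def puddle_depth(gamma_la, rho, theta_deg, g=9.81):
    """General puddle depth for a liquid with contact angle theta."""
    theta = math.radians(theta_deg)
    return math.sqrt(2 * gamma_la * (1 - math.cos(theta)) / (g * rho))

h_water = puddle_depth_nonwetting(0.0728, 1000.0)  # about 5.4 mm
```

At θ = 180°, cos θ = −1, so the general formula gives sqrt(4γ/(gρ)) = 2·sqrt(γ/(gρ)), as required.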
The breakup of streams into drops
Intermediate stage of a jet breaking into drops. Radii of curvature in the axial direction are shown. The equation for the radius of the stream is

R\left(z\right)=R_{0}+A_{k}\cos \left(kz\right)

where R_0 is the radius of the unperturbed stream, A_k is the amplitude of the perturbation, z is the distance along the axis of the stream, and k is the wave number.
Data table
Gallery of effects
|
Mapping the Elastic and Osmotic Properties of Cartilage Extracellular Matrix | SBC | ASME Digital Collection
Lin, DC, & Horkay, F. "Mapping the Elastic and Osmotic Properties of Cartilage Extracellular Matrix." Proceedings of the ASME 2009 Summer Bioengineering Conference. ASME 2009 Summer Bioengineering Conference, Parts A and B. Lake Tahoe, California, USA. June 17–21, 2009. pp. 677-678. ASME. https://doi.org/10.1115/SBC2009-206312
The inhomogeneous distribution of crosslinks in polymer networks results in nonuniform swelling. Concomitant with this behavior is local variability in the elastic properties of synthetic and biopolymer gels. Articular cartilage exemplifies the compositional and structural complexities found in soft tissues. At the most basic level, cartilage extracellular matrix (ECM) is a relatively stiff network of collagen type II fibers with entangled hyaluronic acid chains and enmeshed aggrecan molecules. Despite significant differences in composition, synthetic and biological gels exhibit qualitatively similar responses (e.g., viscoelasticity and nonlinear stress-strain behavior at large deformation). Scaling theory [1] and experiments [2–3] have verified that the shear modulus (Ge) of chemically identical, fully swollen gels differing only in the degree of crosslinking is proportional to the polymer concentration (ce):
G_e = A c_e^n
where A and n are constants. In a good solvent, n = 2.25 [1]. Recent studies have shown that the power law applies to collagen gels, with n ≈ 2.68 [4]. In the general case,
G = A c_e^{n-m} c^m
where G and c are the general shear modulus and polymer concentration, respectively, and m = 1/3 [5].
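Because the prefactor A cancels in ratios, the power law lets you compare two fully swollen gels of the same chemistry directly. A sketch (exponents taken from the text; the concentration ratio is a made-up illustrative number):

```python
def modulus_ratio(c2, c1, n=2.25):
    """Ratio G2/G1 for fully swollen gels obeying Ge = A * ce**n (A cancels)."""
    return (c2 / c1) ** n

ratio_gel = modulus_ratio(2.0, 1.0)               # good-solvent exponent n = 2.25
ratio_collagen = modulus_ratio(2.0, 1.0, n=2.68)  # collagen gels, n ~ 2.68
```

Doubling the concentration stiffens a good-solvent gel by a factor of about 4.8, and a collagen gel (with its larger exponent) by even more.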
Cartilage, Polymers, Shear modulus, Biopolymers, Chain, Deformation, Elasticity, Fibers, Soft tissues, Stress, Viscoelasticity
|
Pyramid Block Calculator
Rules for building a pyramid in Minecraft
Minecraft pyramid dimensions, height, and total blocks required
How to use this pyramid block calculator
How to build a pyramid in Minecraft
Welcome to our pyramid block calculator! If you're here, you probably want to build a pyramid in Minecraft, and you are wondering how many blocks you will need and what dimensions the pyramid should be. To make your life easier, this Minecraft pyramid calculator will calculate the number of blocks in the pyramid if you know the base dimensions or height of the block pyramid.
In this article, we will go over the basic mathematical rules on how to build a pyramid in Minecraft so that you can figure out what base dimensions your pyramid needs to be to reach a certain height and how many stacks of blocks you will need to build it.
Minecraft may be a sandbox game, but it does come with some restrictions:
Each Minecraft block is a 1 m³ cube with sides of 1 m (3.281 ft).
You can only place blocks next to each other or on top of one another, but not in an overlapping manner.
The only feasible arrangement of blocks to build pyramids in Minecraft.
Due to these factors, you should follow these rules to build a pyramid in Minecraft:
The pyramid must have a square base – Aside from the fact that a square pyramid is easy to build, other pyramids are infeasible in Minecraft: they will leave gaps open to the sides or will not culminate in a proper pyramid apex.
The pyramid base must have an odd number of blocks per side – This is related to the pyramid's apex and height. Pyramids with an even number of blocks per side in their base cannot culminate in a proper pyramid apex – with one block in the center as the capstone. Instead, they will end with four blocks, giving you a flat-topped pyramid.
Also, note that the height of a block pyramid is given by:
\qquad h =\begin{cases} \frac{n+1}{2}, & n\text{ is odd} \\ \frac{n}{2}, & n\text{ is even} \end{cases}

where:

h – Height of a block pyramid (in tiers); and

n – Number of blocks per side in the pyramid base.
For example, both a 29 \times 29 pyramid and a 30 \times 30 pyramid will have the same height of 15 tiers. However, a 29 \times 29 pyramid will require fewer blocks than a 30 \times 30 pyramid while also giving you that excellent pyramid apex.
The pyramid is hollow – You may want to build a pyramid to serve as your base of operations in survival mode, or you want to show off your super cool pyramid to your friends. Whatever the case, the basic pyramid you should build is hollow. Solid pyramids don't make sense in Minecraft – they require far more blocks than a hollow pyramid of the same dimension (the block count grows with the cube of the base size rather than its square), and they don't serve any purpose. Of course, you may want to build rooms and booby traps in your pyramid, but you should still start with a basic hollow pyramid.
Top view of a Minecraft pyramid.
First, let us look at a Minecraft pyramid from the top. We can see it is a square. Given that it is hollow, each tier is a square frame, and the nested frames of successive tiers exactly tile the base square. To calculate the number of blocks in the pyramid, we can therefore simply calculate the area of the square base. Hence, the total number of blocks in a pyramid is given by:

\qquad N = n^2

where:

N – Total number of blocks in the pyramid.
The dimensions of a Minecraft pyramid are generally given in terms of blocks per side in the base, as n \times n. For example, a pyramid with 29 blocks per side in its base is a 29 \times 29 pyramid.
The height of a block pyramid depends on its dimensions. In terms of blocks, it is given by (when n is odd):

\qquad h = \frac{n+1}{2}

And when n is even by:

\qquad h = \frac{n}{2}

For example, the height of a 29 \times 29 pyramid would be \frac{29+1}{2} = 15, i.e., 15 blocks high.
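The formulas above are easy to bundle into a few helper functions. A minimal Python sketch (the function names are our own):

```python
def pyramid_height(n):
    """Height in tiers of an n x n block pyramid."""
    return (n + 1) // 2 if n % 2 == 1 else n // 2

def total_blocks(n):
    """Hollow pyramid: the stacked square frames tile the base, so N = n^2."""
    return n * n

def blocks_in_tier(n, tier):
    """Blocks in a given tier (1 = base); each tier is a square frame whose
    side shrinks by 2, and a frame of side s has 4*(s - 1) blocks (1 if s == 1)."""
    s = n - 2 * (tier - 1)
    return 1 if s == 1 else 4 * (s - 1)
```

For a 29 × 29 pyramid this gives 15 tiers and 841 blocks in total, and the per-tier counts sum back to n², matching the worked example above.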
This Minecraft pyramid block calculator can calculate everything you need for building your pyramid:
Enter your pyramid's desired height to calculate the number of blocks in the pyramid and base dimensions.
Input the desired base dimensions to calculate the pyramid's height and the total number of blocks.
Enter the total number of blocks in the pyramid to get the height and base dimensions. Note that this will not work if the value you entered is not the square of an odd number, since such a count cannot form a Minecraft pyramid, for the reasons discussed in the previous sections. Instead, try the step below.
Enter the total blocks in your inventory in the Pyramid stats from a quantity of blocks section and find out the largest pyramid you can build with these resources.
Once you've calculated all the necessary parameters using our Minecraft pyramid calculator, you can jump into Minecraft and start building your pyramid:
Place the blocks for the pyramid base in an
n \times n
square frame arrangement.
For the next tier, place one temporary block anywhere along the inside of the base tier, then place another block on it as the starter block of the second tier.
Place blocks next to this starter block and continue as in step one to place the remaining blocks of this tier in an (n-2) \times (n-2) square frame arrangement. You can remove the temporary block after you've placed the starter block.
Repeat these steps until you arrive at the pyramid's apex, where you will place only one block. Note that every upper tier will require a temporary block to initiate a starter block for that tier.
Alternatively, you can build a temporary solid tower from the ground to the apex of your would-be pyramid and then place blocks from top to bottom. Or you can create a staircase from the center of any base side to the peak of the pyramid and then place blocks in any order you want. So, no matter how you build a pyramid in Minecraft, the only "rule" here pertains to the structure's geometry: every side of the upper tier has two blocks fewer than the one below it.
And lo! You have built your first pyramid! Now you can customize it to your heart's content! Or you can find out the surface area of the pyramid for an assignment. Want to add a Nether portal to your pyramid? Check out our nether portal calculator to figure out everything you need!
Height (number of tiers)
Blocks in base (per side)
Blocks in nᵗʰ layer
nᵗʰ tier
Blocks per side in nᵗʰ tier
Total blocks in nᵗʰ tier
Pyramid stats from a quantity of blocks
Total blocks in inventory
|
Apparent molar property
An apparent molar property of a solution component in a mixture or solution is a quantity defined with the purpose of isolating the contribution of each component to the non-ideality of the mixture. It shows the change in the corresponding solution property (for example, volume) per mole of that component added, when all of that component is added to the solution. It is described as apparent because it appears to represent the molar property of that component in solution, provided that the properties of the other solution components are assumed to remain constant during the addition. However this assumption is often not justified, since the values of apparent molar properties of a component may be quite different from its molar properties in the pure state.
For instance, the volume of a solution containing two components identified[1] as solvent and solute is given by
{\displaystyle V=V_{0}+{}^{\phi }{V}_{1}\ ={\tilde {V}}_{0}n_{0}+{}^{\phi }{\tilde {V}}_{1}n_{1}\,}
where V0 is the volume of the pure solvent before adding the solute and {\tilde {V}}_{0} its molar volume (at the same temperature and pressure as the solution), n0 is the number of moles of solvent, {}^{\phi }{\tilde {V}}_{1} is the apparent molar volume of the solute, and n1 is the number of moles of the solute in the solution. By dividing this relation by the molar amount of one component, a relation between the apparent molar property of a component and the mixing ratio of the components can be obtained.
This equation serves as the definition of {}^{\phi }{\tilde {V}}_{1}. The first term is equal to the volume of the same quantity of solvent with no solute, and the second term is the change of volume on addition of the solute. {}^{\phi }{\tilde {V}}_{1} may then be considered as the molar volume of the solute if it is assumed that the molar volume of the solvent is unchanged by the addition of solute. However, this assumption must often be considered unrealistic, as shown in the Examples below, so {}^{\phi }{\tilde {V}}_{1} is described only as an apparent value.
An apparent molar quantity can be similarly defined for the component identified as solvent, {}^{\phi }{\tilde {V}}_{0}. Some authors have reported apparent molar volumes of both (liquid) components of the same solution.[2][3] This procedure can be extended to ternary and multicomponent mixtures.
Apparent quantities can also be expressed using mass instead of number of moles. This expression produces apparent specific quantities, like the apparent specific volume.
{\displaystyle V=V_{0}+{}^{\phi }{V}_{1}\ =v_{0}m_{0}+{}^{\phi }{v}_{1}m_{1}\,}
where the specific quantities are denoted with small letters.
Apparent (molar) properties are not constants (even at a given temperature), but are functions of the composition. At infinite dilution, an apparent molar property and the corresponding partial molar property become equal.
Some apparent molar properties that are commonly used are apparent molar enthalpy, apparent molar heat capacity, and apparent molar volume.
Relation to molality
The apparent (molal) volume of a solute can be expressed as a function of the molality b of that solute (and of the densities of the solution and solvent). The volume of solution per mole of solute is
{\displaystyle {\frac {1}{\rho }}\left({\frac {1}{b}}+M_{1}\right).}
Subtracting the volume of pure solvent per mole of solute gives the apparent molal volume:
{\displaystyle {}^{\phi }{\tilde {V}}_{1}={\frac {V-V_{0}}{n_{1}}}=\left({\frac {m}{\rho }}-{\frac {m_{0}}{\rho _{0}^{0}}}\right){\frac {1}{n_{1}}}=\left({\frac {m_{1}+m_{0}}{\rho }}-{\frac {m_{0}}{\rho _{0}^{0}}}\right){\frac {1}{n_{1}}}=\left({\frac {m_{0}}{\rho }}-{\frac {m_{0}}{\rho _{0}^{0}}}\right){\frac {1}{n_{1}}}+{\frac {m_{1}}{\rho n_{1}}}}
{\displaystyle {}^{\phi }{\tilde {V}}_{1}={\frac {1}{b}}\left({\frac {1}{\rho }}-{\frac {1}{\rho _{0}^{0}}}\right)+{\frac {M_{1}}{\rho }}}
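The last relation is straightforward to evaluate. A sketch with illustrative numbers loosely resembling an aqueous NaCl solution (the densities here are assumptions for demonstration, not measured values):

```python
def apparent_molal_volume(b, rho, rho0, M1):
    """Apparent molal volume of a solute, in L/mol, from molality b (mol/kg),
    solution density rho and pure-solvent density rho0 (both kg/L), and
    solute molar mass M1 (kg/mol)."""
    return (1.0 / b) * (1.0 / rho - 1.0 / rho0) + M1 / rho

# Illustrative: a 1 molal salt solution of density 1.0362 kg/L in water (0.9982 kg/L)
phi_v = apparent_molal_volume(b=1.0, rho=1.0362, rho0=0.9982, M1=0.05844)
```

This comes out near 0.020 L/mol (about 20 cm³/mol), the right order of magnitude for a dissolved salt.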
For more solutes, the above equality is modified with the mean molar mass M of the solutes, as if they were a single solute with total molality b_T:

{}^{\phi }{\tilde {V}}_{12..}={\frac {1}{b_{T}}}\left({\frac {1}{\rho }}-{\frac {1}{\rho _{0}^{0}}}\right)+{\frac {M}{\rho }}

where M=\sum y_{i}M_{i}, with y_i the mole fraction of solute i among the solutes.

The sum of the products molality × apparent molar volume of each solute in its binary solution equals the product of the sum of the molalities of the solutes and the apparent molar volume in the ternary or multicomponent solution mentioned above:

{}^{\phi }{\tilde {V}}_{123..}(b_{1}+b_{2}+b_{3}+...)=b_{1}{}^{\phi }{\tilde {V}}_{1}+b_{2}{}^{\phi }{\tilde {V}}_{2}+b_{3}{}^{\phi }{\tilde {V}}_{3}+...
Relation to mixing ratio
A relation between the apparent molar property of a component of a mixture and the molar mixing ratio can be obtained by dividing the definition relation

V=V_{0}+{}^{\phi }{V}_{1}\ ={\tilde {V}}_{0}n_{0}+{}^{\phi }{\tilde {V}}_{1}n_{1}

by the number of moles of one component. This gives the following relation:

{}^{\phi }{\tilde {V}}_{1}={\frac {V}{n_{1}}}-{\tilde {V}}_{0}{\frac {n_{0}}{n_{1}}}={\frac {V}{n_{1}}}-{\tilde {V}}_{0}r_{01}
Relation to partial (molar) quantities
Note the contrasting definitions between partial molar quantity and apparent molar quantity: in the case of partial molar volumes
{\displaystyle {\bar {V_{0}}},{\bar {V_{1}}}}
, defined by partial derivatives
{\displaystyle {\bar {V_{0}}}={\Big (}{\frac {\partial V}{\partial n_{0}}}{\Big )}_{T,p,n_{1}},{\bar {V_{1}}}={\Big (}{\frac {\partial V}{\partial n_{1}}}{\Big )}_{T,p,n_{0}}}
one can write
{\displaystyle dV={\bar {V_{0}}}dn_{0}+{\bar {V_{1}}}dn_{1}}
{\displaystyle V={\bar {V_{0}}}n_{0}+{\bar {V_{1}}}n_{1}}
always holds. In contrast, in the definition of apparent molar volume, the molar volume of the pure solvent,
{\displaystyle {\tilde {V}}_{0}}
, is used instead, which can be written as
{\displaystyle {\tilde {V_{0}}}={\Big (}{\frac {\partial V}{\partial n_{0}}}{\Big )}_{T,p,n_{1}=0}}
for comparison. In other words, we assume that the volume of the solvent does not change, and we use the partial molar volume where the number of moles of the solute is exactly zero ("the molar volume"). Thus, in the defining expression for the apparent molar volume {}^{\phi }{\tilde {V}}_{1},

V=V_{0}+{}^{\phi }{V}_{1}\ ={\tilde {V}}_{0}n_{0}+{}^{\phi }{\tilde {V}}_{1}n_{1},

the volume V_{0} is attributed to the pure solvent, while the "leftover" excess volume, {}^{\phi }V_{1}, is considered to originate from the solute. At high dilution, with n_{0}\gg n_{1}\approx 0, we have {\tilde {V}}_{0}\approx {\bar {V}}_{0}, and so the apparent molar volume and the partial molar volume of the solute also converge:

{}^{\phi }{\tilde {V}}_{1}\approx {\bar {V}}_{1}
Quantitatively, the relation between partial molar properties and the apparent ones can be derived from the definition of the apparent quantities and of the molality. For volume,
{\displaystyle {\bar {V_{1}}}={}^{\phi }{\tilde {V}}_{1}+b{\frac {\partial {}^{\phi }{\tilde {V}}_{1}}{\partial b}}.}
Relation to the activity coefficient of an electrolyte and its solvation shell number
The ratio r_a between the apparent molar volume of a dissolved electrolyte in a concentrated solution and the molar volume of the solvent (water) can be linked to the statistical component of the activity coefficient \gamma _{s} of the electrolyte and its solvation shell number h:[4]
{\displaystyle \ln \gamma _{s}={\frac {h-\nu }{\nu }}\ln(1+{\frac {br_{a}}{55.5}})-{\frac {h}{\nu }}\ln(1-{\frac {br_{a}}{55.5}})+{\frac {br_{a}(r_{a}+h-\nu )}{55.5(1+{\frac {br_{a}}{55.5}})}}}
where ν is the number of ions due to dissociation of the electrolyte, and b is the molality as above.
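The relation can be coded directly from the formula above. A sketch with illustrative inputs (ν = 2 for a 1:1 electrolyte; the values of h, r_a, and b are made-up plausible numbers, not data from the text):

```python
import math

def ln_gamma_statistical(b, r_a, h, nu):
    """Statistical part of ln(gamma_s) from the relation above; b is the molality,
    r_a the ratio of apparent molar volume to solvent molar volume, h the
    solvation shell number, and nu the number of ions per formula unit."""
    x = b * r_a / 55.5
    return ((h - nu) / nu) * math.log(1 + x) \
        - (h / nu) * math.log(1 - x) \
        + b * r_a * (r_a + h - nu) / (55.5 * (1 + x))

val = ln_gamma_statistical(b=1.0, r_a=0.92, h=3.5, nu=2)
```

At zero molality all three terms vanish, so the statistical contribution correctly disappears at infinite dilution.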
The apparent molar volume of salt is usually less than the molar volume of the solid salt. For instance, solid NaCl has a volume of 27 cm3 per mole, but the apparent molar volume at low concentrations is only 16.6 cc/mole. In fact, some aqueous electrolytes have negative apparent molar volumes: NaOH −6.7, LiOH −6.0, and Na2CO3 −6.7 cm3/mole.[5] This means that their solutions in a given amount of water have a smaller volume than the same amount of pure water. (The effect is small, however.) The physical reason is that nearby water molecules are strongly attracted to the ions so that they occupy less space.
Excess volume of a mixture of ethanol and water
Another example in which the apparent molar volume of the second component is less than its molar volume as a pure substance is the case of ethanol in water. For example, at 20 mass percent ethanol, the solution has a volume of 1.0326 liters per kg at 20 °C, while pure water is 1.0018 L/kg (1.0018 cc/g).[6] The apparent volume of the added ethanol is 1.0326 L − 0.8 kg × 1.0018 L/kg = 0.23116 L. The number of moles of ethanol is 0.2 kg / (0.04607 kg/mol) = 4.341 mol, so that the apparent molar volume is 0.23116 L / 4.341 mol = 0.0532 L/mol = 53.2 cc/mole (1.16 cc/g). However pure ethanol has a molar volume at this temperature of 58.4 cc/mole (1.27 cc/g).
If the solution were ideal, its volume would be the sum of the volumes of the unmixed components. The volume of 0.2 kg pure ethanol is 0.2 kg × 1.27 L/kg = 0.254 L, and the volume of 0.8 kg pure water is 0.8 kg × 1.0018 L/kg = 0.80144 L, so the ideal solution volume would be 0.254 L + 0.80144 L = 1.055 L. The nonideality of the solution is reflected by a slight decrease (roughly 2.1%; 1.0326 rather than 1.055 L/kg) in the volume of the combined system upon mixing. As the percent ethanol goes up toward 100%, the apparent molar volume rises to the molar volume of pure ethanol.
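The arithmetic of this worked example can be reproduced in a few lines:

```python
# Numbers from the example above: 20 mass % ethanol in water at 20 degrees C.
v_solution = 1.0326        # L per kg of solution
v_water_specific = 1.0018  # L/kg, pure water
M_ethanol = 0.04607        # kg/mol
m_water, m_ethanol = 0.8, 0.2  # kg of each component per kg of solution

v_apparent = v_solution - m_water * v_water_specific  # L attributed to the ethanol
n_ethanol = m_ethanol / M_ethanol                     # moles of ethanol
phi_v_ethanol = v_apparent / n_ethanol                # apparent molar volume, L/mol
```

This gives about 53.2 cm³/mol, below the 58.4 cm³/mol of pure ethanol, as the text states.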
Electrolyte – non-electrolyte systems
Apparent quantities can highlight interactions in electrolyte – non-electrolyte systems, which show effects like salting in and salting out, but they also give insight into ion–ion interactions, especially through their dependence on temperature.
Multicomponent mixtures or solutions
For multicomponent solutions, apparent molar properties can be defined in several ways. For the volume of a ternary (3-component) solution with one solvent and two solutes as an example, there would still be only one equation

V={\tilde {V}}_{0}n_{0}+{}^{\phi }{\tilde {V}}_{1}n_{1}+{}^{\phi }{\tilde {V}}_{2}n_{2},

which is insufficient to determine the two apparent volumes. (This is in contrast to partial molar properties, which are well-defined intensive properties of the materials and therefore unambiguously defined in multicomponent systems. For example, the partial molar volume is defined for each component i as {\bar {V_{i}}}=(\partial V/\partial n_{i})_{T,p,n_{j\neq i}}.)
One description of ternary aqueous solutions considers only the weighted mean apparent molar volume of the solutes,[7] defined as

{}^{\phi }{\tilde {V}}(n_{1},n_{2})={}^{\phi }{\tilde {V}}_{12}={\frac {V-V_{0}}{n_{1}+n_{2}}}

where V is the solution volume and V_{0} the volume of pure water. This method can be extended to mixtures with more than 3 components:[8]

{}^{\phi }{\tilde {V}}(n_{1},n_{2},n_{3},..)={}^{\phi }{\tilde {V}}_{123..}={\frac {V-V_{0}}{n_{1}+n_{2}+n_{3}+...}}
{\displaystyle {}^{\phi }{\tilde {V}}_{123..}(b_{1}+b_{2}+b_{3}+...)=b_{1}{}^{\phi }{\tilde {V}}_{1}+b_{2}{}^{\phi }{\tilde {V}}_{2}+b_{3}{}^{\phi }{\tilde {V}}_{3}+...}
Another method is to treat the ternary system as pseudobinary and define the apparent molar volume of each solute with reference to a binary system containing both other components: water and the other solute.[9] The apparent molar volumes of each of the two solutes are then
{\displaystyle {}^{\phi }{\tilde {V}}_{1}={\frac {V-V(solvent+solute\ 2)}{n_{1}}}}
{\displaystyle {}^{\phi }{\tilde {V}}_{2}={\frac {V-V(solvent+solute\ 1)}{n_{2}}}}
The apparent molar volume of the solvent is:
{\displaystyle {}^{\phi }{\tilde {V}}_{0}={\frac {V-V(solute\ 1+solute\ 2)}{n_{0}}}}
However, this is an unsatisfactory description of volumetric properties.[10]
The apparent molar volume of two components or solutes considered as one pseudocomponent, {}^{\phi }{\tilde {V}}_{12} or {}^{\phi }{\tilde {V}}_{ij}, is not to be confused with the volumes of partial binary mixtures with one common component, V_{ij} and V_{jk}, which, mixed in a certain mixing ratio, form a certain ternary mixture V or V_{ijk}.
Of course, the complement volume of a component with respect to the other components of the mixture can be defined as the difference between the volume of the mixture and the volume of a binary submixture of a given composition, like:
{\displaystyle {}^{c}{\tilde {V}}_{2}={\frac {V-V_{01}}{n_{2}}}}
There are situations when there is no rigorous way to define which is the solvent and which is the solute, as in the case of liquid mixtures (say water and ethanol) that may or may not dissolve a solid like sugar or salt. In these cases apparent molar properties can and must be ascribed to all components of the mixture.
Excess molar quantity
^ This labelling is arbitrary. For mixtures of two liquids either may be described as solvent. For mixtures of a liquid and a solid, the liquid is usually identified as the solvent and the solid as the solute, but the theory is still valid if the labels are reversed.
^ Rock, Peter A., Chemical Thermodynamics, MacMillan 1969, p.227-230 for water-ethanol mixtures.
^ Ghazoyan, H. H.; Markarian, Sh. A. (2014). "Densities, excess molar and partial molar volumes for diethylsulfoxide with methanol or ethanol binary systems at temperature range 298.15–323.15 K". Proceedings of the Yerevan State University, no. 2, pp. 17–25. See Table 4.
^ Glueckauf, E. (1955). "The Influence of Ionic Hydration on Activity Coefficients in Concentrated Electrolyte Solutions". Transactions of the Faraday Society. 51: 1235–1244. doi:10.1039/TF9555101235.
^ Herbert Harned and Benton Owen, The Physical Chemistry of Electrolytic Solutions, 1950, p. 253.
^ Calculated from data in the CRC Handbook of Chemistry and Physics, 49th edition.
^ Apelblat, Alexander (2014). Citric Acid. Springer. p. 50. ISBN 978-3-319-11233-6.
^ Harned, Owen, op. cit. third edition 1958, p. 398-399
^ Apelblat, Citric Acid, p. 320.
^ Apelblat, Citric Acid, p. 320.
The (p,ρ,T) Properties and Apparent Molar Volumes of ethanol solutions of LiI or ZnCl2
Apparent molar volumes and apparent molar heat capacities of Pr(NO3)3(aq), Gd(NO3)3(aq), Ho(NO3)3(aq), and Y(NO3)3(aq) at T = (288.15, 298.15, 313.15, and 328.15) K and p = 0.1 MPa
Isotopic effects for electrolytes apparent properties
|
What is stress concentration — Stress concentration factor
Stress concentration factor equation
Example: Using the stress concentration factor calculator
The stress concentration factor calculator helps you understand the increase in stress in certain regions of a machine part or structure. The most basic example of this would be an infinitely long plate with a circular cutout. When the plate undergoes tension or compression or both, the stresses near the boundary of the cutout will be significantly higher than elsewhere and play an essential role in determining when the part fails.
Insights gained from this primary case could be valuable in designing and studying large machine parts that have holes for bolts and assemblies. One of the foremost uses of stress concentration is in composite structures, where it plays a huge role due to the anisotropic nature of the material. The topic is therefore of central interest in mechanics: solid mechanics, fatigue, and fracture mechanics. Read on to understand what stress concentration is and more.
A structure or machine comprises several pieces of material (metals, composites, alloys, or polymers) joined together, either permanently, as in welding, or fastened with bolts or rivets. Some structures also have fillets or chamfers in the design to ensure certain functionality. These features in a structure are known as geometric discontinuities.
A geometric discontinuity could be a fillet, chamfer, sharp edge, crack, change in cross-sectional area, hole, or cutout made in the structure. A higher value of stress is observed around these discontinuities when the structure is under load. Such localized higher stress values contribute to microcracks and catastrophic material failure through cracks around the interface of discontinuities. Therefore, this parameter is of great interest to researchers in the fields of fatigue, cyclic loading, and fracture mechanics.
The general equation for the stress concentration factor is:

\small K_t = \frac{\sigma_{max}}{\sigma_{nom}}

That is, the stress concentration factor K_t is the ratio of the maximum (highest) stress \sigma_{max} to the nominal stress \sigma_{nom} of the gross cross-section.
Stress distribution in plates with cutouts; note the red regions having higher stress values.
There are solutions for specific cases to estimate the stress concentration around a hole using analytical formulation. Such as the solution from E. Kirsch and Inglis for elastic stress around an elliptical hole in a plate. The maximum stress for an elliptical hole, having length and width as a and b in an infinitely long plate is:
\small \sigma_{max} = \sigma_{nom} \left ( 1 + 2 \frac{a}{b} \right )
For a circle,
a = b
, therefore, the stress concentration factor is 3.
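As a quick sketch, the Inglis formula above translates directly into code (the function name and sample values here are illustrative, not from the article):

```python
# K_t for an elliptical hole in an infinite plate, per the formula above:
# sigma_max / sigma_nom = 1 + 2a/b, with semi-axes a and b.
def kt_elliptical(a, b):
    """Stress concentration factor for an elliptical hole."""
    return 1.0 + 2.0 * a / b

kt_circle = kt_elliptical(1.0, 1.0)  # a = b: circular hole, K_t = 3
```

A slender ellipse (large a/b) drives Kₜ up quickly, which is why crack-like flaws are so damaging.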
Similarly, for an anisotropic composite material, the stress concentration factor depends upon its elastic parameters, such as the moduli (E_x, E_y, G_xy) and Poisson's ratio (ν_xy), and is given by the equation:
\footnotesize \begin{align*} K_t &= \frac{\sigma_{max}}{\sigma_{nom}} \\\\ &= 1 + \sqrt{2 \left [ \sqrt{\frac{E_x}{E_y}} - \nu_{xy}\right ] + \frac{E_x}{G_{xy}}} \end{align*}
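The orthotropic formula can be sketched the same way; the moduli below are made-up, carbon/epoxy-like values, not data from the article:

```python
import math

def kt_orthotropic(Ex, Ey, Gxy, nu_xy):
    """K_t at a circular hole in an infinite orthotropic plate (formula above)."""
    return 1.0 + math.sqrt(2.0 * (math.sqrt(Ex / Ey) - nu_xy) + Ex / Gxy)

# Illustrative unidirectional laminate: Ex = 140 GPa, Ey = 10 GPa,
# Gxy = 5 GPa, nu_xy = 0.3 (assumed values).
kt = kt_orthotropic(Ex=140.0, Ey=10.0, Gxy=5.0, nu_xy=0.3)
```

As a sanity check, plugging in isotropic values (Ex = Ey = E, Gxy = E / (2(1 + ν))) recovers Kₜ = 3, matching the circular-hole result above.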
Similarly, there are other cases where solutions are available in various forms to predict the stress concentration factor analytically. In addition, there are methods to find the stress concentration factor experimentally, which are especially useful for determining the stress concentration around a hole. You can do it using optical techniques like photoelasticity and digital image correlation (DIC) systems. Such systems work on the principle that some plastics, polymers, and transparent materials exhibit fringe patterns related to the principal stress differences when under a source of illumination.
Due to advances in technology, this non-destructive optical technique is beneficial in studying strain and stress distribution for photoelastic specimens under elastic loads for both two and three-dimensional cases. In addition to experiments, you can also use any finite element (FE)-based simulation tool to study the stress concentration around geometric discontinuities. Once you know that the stress concentration factor is high, you can tweak your design using optimization techniques to reduce these values.
Find the stress concentration for a plate with a square hole having nominal stress as 100 MPa. Take maximum stress as 150 MPa.
Select the mode of the calculator as ratio.
Enter the maximum stress as 150 MPa.
Fill in the nominal stress as 100 MPa.
Using the stress concentration factor formula:
\small \qquad K_t = \frac{\sigma_{max}}{\sigma_{nom}} = \frac{150}{100} = 1.5
Modes of calculator
You can also use the elliptical hole mode and the anisotropic composite mode to find the stress concentration factor for those specific cases.
What do you mean by stress concentration factor?
The stress concentration factor is the ratio of maximum stress to nominal stress at a location in an object under load. This parameter is useful to study the rise in stresses or discontinuity in the stress field due to the presence of geometric changes such as a hole, fillet, a chamfer, or even a change in cross-sectional area.
How do I calculate stress concentration factor?
To calculate stress concentration factor:
Find the nominal stress at the point of interest.
Find the maximum stress for the material specimen.
Divide the maximum stress by nominal stress to obtain the stress concentration factor, Kₜ = σ_maximum / σ_nominal.
Please note that the nominal stress calculation has specific formulae for different cases based on geometry and loading conditions.
What is the stress concentration factor for an elliptical hole in an infinite plate?
The stress concentration factor for an elliptical hole in a plate is given by the equation: Kt = σ_maximum / σ_nominal = 1 + 2 × (a / b), where a and b are the major and minor semi-axes, respectively. In the case of a circular hole (a = b), this equation returns the answer 3.
What do you mean by stress raisers?
Stress raisers, or stress risers, are the geometric locations where the stress is higher than in the surrounding material; in other words, regions where the localized stress is higher than the stress in the rest of the specimen. Examples of stress raisers include fillets, chamfers, holes, weld joints, riveted or bolted joints, corners, and sharp edges.
Stress concentration factor (Kₜ)
|
Comparison of the gene expression patterns (the
{\overline{R}}_{r}
values) in three mutant viruses. The
{\overline{R}}_{r}
values of the us1-, ep0- and vhs-null mutants were visualized in a heatmap presentation. The expression profile of the us1-KO virus is similar to that of the vhs-KO, i.e. complementary to that of the ep0-KO virus.
|
Richard G. Budynas and J. Keith Nisbett, "Roark's Formulas for Stress and Strain (Table A.1)", McGraw-Hill Education (2020)
How to calculate section modulus from the moment of inertia
Plastic section modulus: beyond elastic section modulus
Section modulus formulas for a rectangular section and other shapes
This tool calculates the section modulus, one of the most critical geometric properties in the design of beams subjected to bending. Additionally, it calculates the neutral axis and second moment of area (area moment of inertia) of the most common structural profiles.
The formulas for the section modulus of a rectangle or circle are relatively easy to calculate. Still, when dealing with complex geometries like a tee, channel, or I beam, a calculator can save time and help us avoid mistakes.
In the following sections, we discuss the two types of section modulus, how to calculate section modulus from the moment of inertia, and present the section modulus formulas of a rectangle and many other common shapes.
The section modulus is used by engineers to quickly predict the maximum stress a bending moment will cause on a beam. The equation for the maximum absolute value of the stress in a beam subjected to bending is:
\sigma_m = \frac{Mc}{I}
\sigma_m
— Maximum absolute value of the stress in a specific beam section;
M
— Bending moment to which the beam is subjected in that section;
c
— Largest distance from the neutral axis to the surface of the member; and
I
— Second moment of area (also known as area moment of inertia) about the section neutral axis (calculated by this tool as well).
For example, in a circle, the largest distance equals the radius, while it equals half the height in a rectangle.
Since the ratio I/c depends only on the geometric characteristics of the section, we can define a new geometric property from it, called the section modulus, denoted by the letter S:
S = \frac{I}{c}
As well as the second moment of area, this new geometric property is available in many tables and calculators, but if you want to know how to calculate the section modulus from the moment of inertia, simply divide I by c, and you'll get it.
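For instance, the S = I/c definition applied to the two shapes just mentioned looks like this (a minimal sketch; the dimensions in the example are arbitrary):

```python
import math

def section_modulus_circle(r):
    I = math.pi * r**4 / 4.0   # second moment of area of a solid circle
    return I / r               # c = r, so S = pi * r**3 / 4

def section_modulus_rectangle(b, h):
    I = b * h**3 / 12.0        # about the horizontal centroidal axis
    return I / (h / 2.0)       # c = h/2, so S = b * h**2 / 6

S = section_modulus_rectangle(0.1, 0.3)  # e.g. a 100 mm x 300 mm section, in metres
```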
Finally, we can relate the section modulus to the stress and moment:
\sigma_m = \frac{M}{S}
🙋 Does this relationship seem familiar to you? This relation is equivalent to the axial stress equation:
\sigma = \frac{F}{A}
. The bending moment is analogous to the axial force, while the section modulus is analogous to the cross-sectional area.
Take these considerations into account when calculating section modulus and maximum stresses:
We obtain the bending moment through a static or structural analysis of the beam.
To get the section modulus, we can use tables for predefined structural members, but this calculator is the best option if you're dealing with custom geometries.
If we're considering a uniform section beam (as usual), the location of the maximum stress will be at the point of maximum bending moment. If that's not the case,
\sigma_m
could be at a different place.
The previous formulas apply to materials that exhibit an elastic behavior and obey Hooke's law. When there's plastic deformation instead of elastic deformation, we need to use the plastic section modulus.
The previous equations don't apply when we subject a beam material to stresses beyond the yield strength, as they assume stress and strain are linearly related. In this case, we must use the plastic section modulus. Similar to the elastic section modulus
S
, its plastic counterpart provides a relationship between stress and moment:
M_p = Z\sigma_Y
M_p
— Plastic moment of the section;
Z
— Plastic section modulus; and
\sigma_Y
— Yield strength of the member material.
The plastic moment refers to the moment required to cause plastic deformation across the whole transverse area of a section of the member.
The usefulness of the last equation is that we can predict the bending moment that will cause plastic deformation by just knowing the yield strength and plastic section modulus.
The following graphic describes better what we refer to when talking about plastic moments:
Graphic representation of the bending stress distribution during plastic deformations. We're assuming a perfect plasticity model; therefore, actual stress distributions are not that uniform.
For plastic deformation to occur, we must cause some stress equal to the yield strength of the material. As you can note, the transition from elastic to plastic is not uniform across the member, as some regions will reach the yield strength before others.
Once all the section reaches the yield strength, plastic deformation occurs across all that section. The bending moment needed to achieve this is called the plastic moment.
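As an illustration of Mₚ = Zσ_Y (with assumed, not sourced, numbers): a rectangular section uses Z = bd²/4, so for a 0.1 m × 0.2 m bar of steel with a 250 MPa yield strength:

```python
def plastic_moment(Z, sigma_Y):
    """Bending moment that yields the entire cross-section: M_p = Z * sigma_Y."""
    return Z * sigma_Y

Z_rect = 0.25 * 0.1 * 0.2**2         # Z = b*d^2/4 for a rectangle, in m^3
Mp = plastic_moment(Z_rect, 250e6)   # sigma_Y = 250 MPa -> Mp in N*m
```

Any bending moment above this Mp would plastically deform the whole section.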
In the following table, we list the section modulus formula for a rectangular section and many other profiles (scroll the table sideways to see all the equations):
Square (side a):
y_c = x_c = a/2
Z_x = Z_y = 0.25a^3
I_x = I_y = \frac{a^4}{12}
S_x = S_y =\frac{I_x}{y_c}
Rectangle (width b, depth d):
y_c=d/2
Z_x = 0.25bd^2
x_c=b/2
Z_y = 0.25db^2
I_x = \frac{bd^3}{12}
I_y = \frac{db^3}{12}
S_x = \frac{I_x}{y_c}
S_y = \frac{I_y}{x_c}
Hollow rectangle (outer b × d, inner b_i × d_i):
y_c=d/2
Z_x = 0.25(bd^2-b_id_i^2)
x_c=b/2
Z_y = 0.25(db^2-d_ib_i^2)
I_x = \frac{bd^3-b_id_i^3}{12}
I_y = \frac{db^3-d_ib_i^3}{12}
S_x = \frac{I_x}{y_c}
S_y = \frac{I_y}{x_c}
y_c=\frac{bt^2+t_wd(2t+d)}{2(tb+t_wd)}
t_wd \ge bt
x_c=b/2
Z_x=\frac{d^2t_w}{4}-\frac{b^2t^2}{4t_w}-\frac{bt(d+t)}{2}
I_x = \frac{b(d+t)^3-d^3(b-t_w)}{3}-\footnotesize A(d+t-y_c)^2
t_wd \le bt
I_y = \frac{tb^3+dt_w^3}{12}
Z_x=\frac{t^2b}{4}-\frac{t_wd(t+d-t_wd/2b)}{2}
S_x = \frac{I_x}{d+t-y_c}
Z_y= \frac{b^2t+t_w^2d}{4}
S_y = \frac{I_y}{x_c}
y_c=\frac{bt^2+2t_wd(2t+d)}{2(tb+2t_wd)}
2t_wd \ge bt
x_c=b/2
Z_x=\frac{d^2t_w}{2}-\frac{b^2t^2}{8t_w}-\frac{bt(d+t)}{2}
I_x = \frac{b(d+t)^3-d^3(b-2t_w)}{3}-\footnotesize A(d+t-y_c)^2
t_wd \le bt
I_y = \frac{(d+t)b^3-d(b-2t_w)^3}{12}
Z_x=\frac{t^2b}{4}+t_wd(t+d-\frac{t_wd}{b})
S_x = \frac{I_x}{d+t-y_c}
|
Jeffreys’ Substitution Posterior for the Median: A Nice Trick to Non-parametrically Estimate the Median - Publishable Stuff
While reading up on quantile regression I found a really nice hack described in Bayesian Quantile Regression Methods (Lancaster & Jae Jun, 2010). It is called Jeffreys’ substitution posterior for the median, first described by Harold Jeffreys in his Theory of Probability, and is a non-parametric method for approximating the posterior of the median. What makes it cool is that it is really easy to understand and pretty simple to compute, while making no assumptions about the underlying distribution of the data. The method does not strictly produce a posterior distribution, but has been shown to produce a conservative approximation to a valid posterior (Lavine, 1995). In this post I will try to explain Jeffreys’ substitution posterior, give R-code that implements it and finally compare it with a classical non-parametric test, the Wilcoxon signed-rank test. But first a picture of Sir Harold Jeffreys:
Jeffreys proposed to use the probability
{{n}\choose{n_l}} \big / 2^n
where n_l is the number of the n data points that fall below a candidate median m, as a substitution for the likelihood
p(x \mid m)
, and then calculate the posterior as
p(m \mid x) \propto p(x \mid m) \cdot p(m)
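A minimal Python sketch of the idea (the blog post itself uses R; the data and grid here are made up): evaluate the substitution likelihood on a grid of candidate medians and normalize under a flat prior.

```python
from math import comb

def substitution_posterior(data, grid):
    """Jeffreys' substitution posterior for the median on a grid of candidates."""
    n = len(data)
    weights = []
    for m in grid:
        n_l = sum(1 for x in data if x < m)   # data points below candidate m
        weights.append(comb(n, n_l) / 2**n)   # substitution "likelihood"
    total = sum(weights)                      # flat prior: just normalize
    return [w / total for w in weights]

data = [1.2, 3.4, 0.7, 2.1, 5.0]
grid = [i / 10 for i in range(61)]            # candidate medians 0.0 .. 6.0
post = substitution_posterior(data, grid)
```

The resulting weights are piecewise constant, jumping at each data point, which is why the method is so cheap to compute.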
## -0.4378 0.6978
## data: y - x
## V = 5, p-value = 0.01953
## alternative hypothesis: true location is less than 0
## Jeffreys’ Substitution Posterior for the median
## -0.49
## 95% CI
## -0.9133 0.04514
Posted by Rasmus Bååth May 3rd, 2014 Bayesian, R, Statistics
|
Tox Box - Ukikipedia
The Tox Box is an enemy in Shifting Sand Land. It moves in a set pattern and will crush Mario if he is caught below it.
1.1 Movement Function
The Tox Box movement function takes 4 parameters; 2 floats (a0 and a1) and 2 shorts (a2 and a3). The Y-position of Tox Box is given as:
{\displaystyle Y=99.41124\sin(2^{15}({\text{timer}}+1)/8)+Y_{\text{home}}+3}
Its forward velocity is set to a0 and its z-velocity is set to a1. Its pitch facing angle is increased by a2 whenever the function runs. If this angle is less than 0, a3 is set to negative a3 and its roll facing angle is increased by a3 whenever the function runs. After 8 frames, the Tox Box moves into another action and plays a sound.
Tox Box has 1 initialization action and 7 actions used to control its movement. When Tox Box is initialized the game checks which parameter it has to assign its movement pattern. Action 1 sets the forward velocity to 0, shakes the screen for one frame if Mario is close enough and increases its Y-value by 3. After 21 frames, the next action occurs. Actions 2 and 3 simply keep the Tox Box in its current position for 21 frames. Action 4 runs the movement function with the parameters (64,0,0x800,0). Action 5 runs the movement function with the parameters (-64,0,-0x800,0). Action 6 runs the movement function with the parameters (0,-64,0,0x800). Action 7 runs the movement function with the parameters (0,64,0,-0x800).
There are three movement patterns used for Tox Box. Each consists of a list of numbers corresponding to one of the 7 movement actions. Every other action in the list is action 1, which halts the Tox Box for 21 frames, and each ends with -1, looping to the start of the list. Action 2 is used to freeze the Tox Box for an additional 20 frames before it reverses direction and Action 3 is never used. The Tox Box takes 8 frames to make a full movement, during which it moves 11.25 degrees per frame and 64 units/frame in the direction it is moving in.
Pattern 1: { 4, 1, 4, 1, 6, 1, 6, 1, 5, 1, 5, 1, 6, 1, 6, 1, 5, 1, 2, 4, 1, 4, 1, 4, 1, 2, 5, 1, 5, 1, 7, 1, 7, 1, 4, 1, 4, 1, 7, 1, 7, 1, 5, 1, 5, 1, 5, 1, 2, 4, 1, -1 }
Pattern 2: { 4, 1, 4, 1, 7, 1, 7, 1, 7, 1, 2, 6, 1, 6, 1, 6, 1, 5, 1, 5, 1, 6, 1, 5, 1, 5, 1, 2, 4, 1, 4, 1, 7, 1, -1 }
Pattern 3: { 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 2, 5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 7, 1, 2, 6, 1, 6, 1, 5, 1, 2, 4, 1, 7, 1, -1 }
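The pattern playback described above can be sketched as follows (a simplification of the actual behavior script; the per-action frame counts are taken from the text):

```python
# Each entry in a pattern picks one of the movement actions; -1 loops playback.
PATTERN_1 = [4, 1, 4, 1, 6, 1, 6, 1, 5, 1, 5, 1, 6, 1, 6, 1, 5, 1, 2,
             4, 1, 4, 1, 4, 1, 2, 5, 1, 5, 1, 7, 1, 7, 1, 4, 1, 4, 1,
             7, 1, 7, 1, 5, 1, 5, 1, 5, 1, 2, 4, 1, -1]

# Actions 1-3 hold position for 21 frames; actions 4-7 are 8-frame moves.
FRAMES = {1: 21, 2: 21, 3: 21, 4: 8, 5: 8, 6: 8, 7: 8}

def playback(pattern, n_actions):
    """Yield (action, frame_count) for the first n_actions steps, looping at -1."""
    i = 0
    for _ in range(n_actions):
        if pattern[i] == -1:   # sentinel: restart the pattern
            i = 0
        yield pattern[i], FRAMES[pattern[i]]
        i += 1

steps = list(playback(PATTERN_1, 4))   # first four actions of pattern 1
```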
Tox Box behavior file
|
Statics - Wikipedia @ WordDisk
Statics is the branch of mechanics that is concerned with the analysis of loads (force and torque, or "moment") acting on physical systems that do not experience an acceleration (a = 0), but rather are in static equilibrium with their environment. The application of Newton's second law to a system gives:
{\displaystyle {\textbf {F}}=m{\textbf {a}}\,.}
{\displaystyle {\textbf {F}}={\frac {d}{dt}}(m{\textbf {v}})}
Where bold font indicates a vector that has magnitude and direction.
{\displaystyle {\textbf {F}}}
is the total of the forces acting on the system,
{\displaystyle m}
is the mass of the system and
{\displaystyle {\textbf {a}}}
is the acceleration of the system. The summation of forces will give the direction and the magnitude of the acceleration and will be inversely proportional to the mass. The assumption of static equilibrium of
{\displaystyle {\textbf {a}}}
= 0 leads to:
{\displaystyle {\textbf {F}}=0\,.}
The summation of forces, one of which might be unknown, allows that unknown to be found. So when in static equilibrium, the acceleration of the system is zero and the system is either at rest, or its center of mass moves at constant velocity. Likewise the application of the assumption of zero acceleration to the summation of moments acting on the system leads to:
{\displaystyle {\textbf {M}}=I\alpha =0\,.}
{\displaystyle {\textbf {M}}}
is the summation of all moments acting on the system,
{\displaystyle I}
is the moment of inertia of the mass and
{\displaystyle \alpha }
is the angular acceleration of the system, which, when assumed to be zero, leads to:
{\displaystyle {\textbf {M}}=0\,.}
From Newton's first law, this implies that the net force and net torque on every part of the system is zero. The net forces equaling zero is known as the first condition for equilibrium, and the net torque equaling zero is known as the second condition for equilibrium. See statically indeterminate.
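As a concrete (illustrative) use of the two equilibrium conditions, consider finding the support reactions of a simply supported beam carrying a single point load:

```python
def beam_reactions(L, P, d):
    """Reactions of a simply supported beam of span L with a point load P at
    distance d from the left support, from sum(F) = 0 and sum(M) = 0."""
    # Second condition, moments about the left support: R_right*L - P*d = 0
    R_right = P * d / L
    # First condition, vertical forces: R_left + R_right - P = 0
    R_left = P - R_right
    return R_left, R_right

R1, R2 = beam_reactions(L=4.0, P=100.0, d=1.0)   # load 1 m from the left end
```

With the load closer to the left support, that support carries the larger share of the load, as expected.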
This article uses material from the Wikipedia article Statics, and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
|
How do I size a breaker?
How do I calculate circuit breaker and wire size?
Microwave breaker size and other examples
Our breaker size calculator allows you to estimate the breaker size you need to protect your electrical appliances while withstanding their load without using a breaker size chart.
If you're wondering, 'What breaker size does a dishwasher need?'; 'What's the correct breaker size for a microwave?'; or 'What does a circuit breaker do?', we will answer all those questions in a few paragraphs below.
You will also learn how to size a breaker and use our breaker size calculator to obtain an appropriate value.
⚠️ Remember to use this tool as a starting point only, and allow a licensed electrician to get and install the correct breaker size for your needs.
A circuit breaker protects electrical appliances when an overload or fault is produced in the circuit. Devices connected to the same circuit lose power when the breaker trips, preventing the excess current from reaching them.
Common causes that make a circuit breaker trip are:
Overload. When too many devices are connected, the load may be greater than the breaker can withstand. In return, the breaker trips and cuts the power to avoid damaging the equipment.
Short circuit. When the hot wire touches the neutral wire somewhere in the circuit, the current increases significantly and overloads the circuit. If the breaker didn't trip, this short circuit could produce a fire.
Ground fault surges. These occur when the hot wire comes into contact with a conductive surface instead of the neutral wire. This again produces an overload of the circuit, tripping the breaker.
🙋 Our breaker size calculator allows you to plan ahead and input up to 5 different appliances to estimate the total load the breaker will need to protect.
Let's learn how to correctly size a breaker next.
Generally, you should size a breaker for 125% of the load (or 25% extra capacity) and no less. Oversized breakers can allow wires to heat above safety levels without stopping the current. On the other hand, undersized breakers may continuously trip under normal operation.
A circuit breaker is rated in amperes (A), and the rating tells us how much current can safely flow through the breaker without causing it to trip. For example, a 15 A breaker allows up to 15 A of total load to be connected simultaneously.
If we know the wattage of each appliance connected in the circuit, we can obtain their respective loads with different equations based on the type of current.
Load in DC
\quad I = \frac{W}{V}
I
– Resulting current expressed in A;
W
– Appliance's wattage in watts (W); and
V
– System's voltage in volts (V).
Load in AC single-phase
\quad I = \frac{W}{V \cdot pf}
pf
is the appliance's power factor (ratio between real power and apparent power).
Load in AC three-phase
\quad I = \frac{W}{V \cdot pf \cdot \sqrt{3}}
This is similar to the single-phase equation, but we also divide by √3.
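The three load equations, plus the 125% sizing rule from the previous section, can be sketched as follows (the helper names and the subset of standard ratings are illustrative):

```python
import math

def load_current(watts, volts, pf=1.0, phases=1):
    """Load in amperes: DC (pf=1, phases=1), AC single-phase, or AC three-phase."""
    i = watts / (volts * pf)
    if phases == 3:
        i /= math.sqrt(3)   # extra sqrt(3) factor for three-phase circuits
    return i

def breaker_size(current, sizes=(15, 20, 25, 30, 40, 50)):
    """Smallest standard rating covering the load plus 25% headroom."""
    required = current * 1.25
    return next(s for s in sizes if s >= required)

i = load_current(1000, 230, pf=0.72)   # a 1000 W appliance on 230 V single-phase
```

This reproduces the microwave example worked later in the article: about 6.04 A of load, which lands on a 15 A breaker.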
We covered how to size a breaker based on load. What if we want to go the other way around and figure out how many appliances you can connect to a single circuit breaker?
For example, 'How many watts can a 15 amp breaker handle?'. To answer that, we use the previous equations.
Suppose we're dealing with a
400\ \text{V}
DC circuit, then the calculation is straightforward:
We multiply both sides of the DC load formula by the voltage and obtain:
W = I \cdot V
Then we input our
15\ \text{A}
and voltage to get:
W = 6000\ \text{W}
To calculate circuit breaker and wire size:
Write down an approximation of the total load you will connect to the circuit breaker.
Get a circuit breaker rated for 125% of this load.
Make sure the wire it will be paired with has a higher ampacity than the circuit breaker's rating. Otherwise, the current may heat the wire above safety levels under normal operation.
Always ask a certified electrician to do the work for you.
After using our breaker size calculator, input the result in the wire size calculator to obtain matching breaker-wire sizes. Alternatively, a wire-breaker size chart may provide the same information.
So far, we've covered:
A breaker's function within a circuit;
How to size a breaker based on load and the reverse example (watts from breaker size); and
How to calculate circuit breaker and wire size.
You already have all the knowledge you need to use the breaker size calculator! Feel free to use our tool or keep reading to view some practical examples.
We've seen the example 'How many watts can a 15 amp breaker handle?'. Let's solve some similar problems with the breaker size calculator.
Microwave breaker size
Suppose we need to build a dedicated circuit for a microwave, so we need to pair it with an appropriate circuit breaker.
First, we need to check the current type at home (usually AC single-phase) and its voltage. Assume we do that and realize we have 230 V AC single-phase.
Then, we must find the microwave's power requirement or wattage and its power factor. You can find this information on a label on the back of the appliance. Let's say this example microwave has a
1000 \ \text{W}
wattage paired with a
0.72
power factor.
Last, we input the data in the breaker size calculator (or in the respective equation if we're solving manually) using the advanced mode of the calculator.
The calculator will output the microwave's load on the circuit
6.04 \ \text{A}
and the closest standard breaker size suitable for that load
15 \ \text{A}
. This means we can probably add a few lights to that circuit as well.
🙋 If you're solving the equation manually, remember to add an extra 25% capacity to the breaker.
What breaker size does a dishwasher need?
Again, we need to check the current type at home and the appliance's wattage and power factor.
Suppose it's a
1800 \ \text{W}
dishwasher with a
0.88
power factor operating with
230 \ \text{V}
AC single-phase supply.
The formula for the load on this type of circuit says we need to divide the wattage by the power factor times the voltage. As a result, we will get a load of
8.89 \ \text{A}
, or
11.11 \ \text{A}
when we add the extra 25%.
The closest standard breaker size is
15 \ \text{A}
, so we would pick that one for our dishwasher circuit. We've already explained how to calculate circuit breaker and wire size, so remember to choose a wire with greater ampacity than the breaker's size to connect the outlets.
How much load can I connect to a 15A circuit breaker?
12A is the maximum load you should connect to a 15A circuit breaker. Your load should use at most 80% of the breaker's capacity to avoid power loss.
What are the standard breaker sizes?
Some of the standard breaker sizes are: 15A, 20A, 25A, 30A, 35A, 40A, 45A, 50A, 60A, 70A, 80A, 90A, 100A, 110A, 125A, 150A, 175A, 200A, 225A, 250A, 300A, 350A, 400A.
Can I replace a breaker with a bigger breaker?
You can replace a breaker with a bigger breaker, but it's not recommended. You should make sure the wiring is rated for a greater load to avoid overheating and potential fires. You should always ask a certified electrician to replace it for you.
|
Advanced: Loops | Evidence Docs
Needful Things have asked us where they should spend their marketing dollars.
To best allocate marketing dollars, you often look for the cheapest channel to acquire customers, the lowest Cost Per Acquisition (CPA) channel:
\mathrm{CPA}=\frac{\text{Spend on marketing channel}}{\#\text{ new orders from channel}}
Luckily, each Needful Things customer order is attributed to a marketing channel, and customers only seem to buy once, so this should be quite easy.
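Before wiring this up in SQL, the CPA arithmetic itself is simple; here is the same calculation in plain Python with made-up spend and order figures:

```python
# Hypothetical monthly spend and attributed new orders per channel.
spend = {"google": 1200.0, "facebook": 900.0, "email": 150.0}
orders = {"google": 60, "facebook": 30, "email": 15}

cpa = {ch: spend[ch] / orders[ch] for ch in spend}   # CPA = spend / new orders
cheapest = min(cpa, key=cpa.get)                     # lowest-CPA channel
```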
Let's create a new file in the pages/ directory called marketing-performance.md to explore this.
Create queries to loop through#
First we'll create a chart to see which channels customers come from.
Secondly, to compare the CPA, we'll need to join the data from the marketing_spend table. We can join on the channel_month field in each table, (which concatenates the channel, and order_month fields).
Paste into marketing-performance.md:
# Marketing Performance
## Orders by Channel
```orders_by_channel
channel_month,
group by channel, order_month
order by orders
data={data.orders_by_channel}
series=channel
## Channel CPA
```channel_cpa
sum(spend) as total_spend,
sum(orders) as total_orders,
sum(spend) / sum(orders) as cpa
from marketing_spend
left join ${orders_by_channel} using(channel_month)
group by marketing_channel
order by cpa
You may notice that we use a ${...} in our second query. This syntax allows us to reference the result of another query on the same page.
Your page should now look like this
Now we'll use the channel_cpa query to demonstrate how to loop through a dataset.
Set up the Loop#
Loops are achieved through an each block.
Let's use an each block to list the names of all the channels.
Add to bottom of marketing-performance.md:
{#each data.channel_cpa as channel}
{channel.marketing_channel}
In the each block, we're passing in the query name data.channel_cpa and giving it an "alias" of channel to reference inside the each block. You must alias the query in the each block.
The each block loops through every row of the table and displays whatever is included in the middle of the block. In this case, we're displaying the marketing_channel column of the channel dataset.
Add data for each channel#
Now we're going to add the CPA, spend and orders for each channel.
We'll use a <Value/> component for this. You could do this with a bare reference as we did with the item name, but that would not allow us to format the value as a currency.
When used inside an each block, the <Value/> component only requires a reference to the column it needs to display.
Change the highlighted line below:
**{channel.marketing_channel} CPA was <Value value={channel.cpa} fmt=usd/>**, with a spend of <Value value={channel.total_spend} fmt=usd/>, bringing in <Value value={channel.total_orders}/> orders.
Great - now we have our data, and Needful Things can see which channel to spend more money on - the one with the lowest CPA!
|
Deploy Generated MATLAB Functions from Symbolic Expressions with MATLAB Compiler - MATLAB & Simulink
Generate Deployable Function from Symbolic Expression
Write Script in MATLAB
This example shows how to generate a MATLAB® function from a symbolic expression and use the function to create a standalone application with MATLAB Compiler™.
This example follows the steps described in Create Standalone Application from MATLAB (MATLAB Compiler) and updates the steps to generate a MATLAB function from a symbolic expression.
First, create the second-order differential equation
\frac{{d}^{2}y}{d{t}^{2}}+\frac{1}{2}\frac{dy}{dt}+2y=0
as a symbolic equation using syms.
ode = diff(y,2) + diff(y)/2 + 2*y == 0;
To solve the differential equation, convert it to first-order differential equations by using the odeToVectorField function.
V = odeToVectorField(ode);
Next, convert the symbolic expression V to a MATLAB function file by using matlabFunction. The converted function in the file myODE.m can be used without Symbolic Math Toolbox™. The converted function is deployable with MATLAB Compiler.
matlabFunction(V,'vars',{'t','Y'},'File','myODE');
Write a MATLAB script named plotODESols.m that solves the differential equation using ode45 and plots the solution. Save it in the same directory as the myODE.m function.
type plotODESols.m
sol = ode45(@myODE,[0 20],[0 4]);
ylabel('Displacement y')
You can use this script to create and deploy a standalone application using the Application Compiler app.
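For readers outside MATLAB, the same first-order system (Y₁' = Y₂, Y₂' = −2Y₁ − Y₂/2, the reduction of y'' + y'/2 + 2y = 0) can be integrated with a plain fixed-step RK4 sketch in Python; this is an illustrative analogue, not part of the Compiler workflow:

```python
def f(t, Y):
    y, yp = Y
    return (yp, -2.0 * y - 0.5 * yp)   # [y', y''] from y'' = -2y - y'/2

def rk4(f, t0, t1, Y0, n=2000):
    """Classic fixed-step Runge-Kutta 4 integration of a 2-state system."""
    h = (t1 - t0) / n
    t, Y = t0, list(Y0)
    for _ in range(n):
        k1 = f(t, Y)
        k2 = f(t + h / 2, [Y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [Y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [Y[i] + h * k3[i] for i in range(2)])
        Y = [Y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return Y

y_end, yp_end = rk4(f, 0.0, 20.0, (0.0, 4.0))   # same span and ICs as the ode45 call
```

The solution is a damped oscillation, so both states have decayed to nearly zero by t = 20.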
On the MATLAB Apps tab, in the Apps section, click the arrow to open the apps gallery. Under Application Deployment, click Application Compiler. The MATLAB Compiler project window opens.
In the Add Files dialog box, browse to the file location that contains your generated script. Select plotODESols.m and click Open. The Application Compiler app adds the plotODESols function to the list of main files.
Files required for your application to run — Additional files required by the generated application to run. The software includes these files in the generated application installer. When you add plotODESols.m to the main file section of the toolstrip, the compiler automatically adds myODE.m as the file required for your application to run.
Files installed for your end user — Files that are installed with your application. These files include the automatically generated readme.txt file and the generated executable for the target platform.
When the deployment process is complete, the output should contain the list of things below.
for_testing — Folder containing all the artifacts created by mcc (such as binary, header, and source files for a specific target). Use these files to test the installation.
Ensure that you have administrator privileges on the other machines to run and deploy the standalone application.
|
{u}_{t}=u{u}_{x}+v{u}_{y}
{v}_{t}=u{v}_{x}+v{v}_{y}
u\left(x,y,0\right)=f\left(x,y\right)
v\left(x,y,0\right)=g\left(x,y\right)
{L}_{t}\left[{u}_{t}\right]={L}_{t}\left[u{u}_{x}+v{u}_{y}\right]
{L}_{t}\left[{v}_{t}\right]={L}_{t}\left[u{v}_{x}+v{v}_{y}\right]
su\left(x,y,s\right)-u\left(x,y,0\right)={L}_{t}\left[u{u}_{x}+v{u}_{y}\right]
sv\left(x,y,s\right)-v\left(x,y,0\right)={L}_{t}\left[u{v}_{x}+v{v}_{y}\right]
u\left(x,y,s\right)=\frac{1}{s}f\left(x,y\right)+\frac{1}{s}{L}_{t}\left[u{u}_{x}+v{u}_{y}\right]
v\left(x,y,s\right)=\frac{1}{s}g\left(x,y\right)+\frac{1}{s}{L}_{t}\left[u{v}_{x}+v{v}_{y}\right]
u\left(x,y,t\right)=f\left(x,y\right)+{L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[u{u}_{x}+v{u}_{y}\right]\right]
v\left(x,y,t\right)=g\left(x,y\right)+{L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[u{v}_{x}+v{v}_{y}\right]\right]
u\left(x,y,t\right)=\underset{n=0}{\overset{\infty }{\sum }}{u}_{n}\left(x,y,t\right),\text{ }v\left(x,y,t\right)=\underset{n=0}{\overset{\infty }{\sum }}{v}_{n}\left(x,y,t\right)
u{u}_{x}=\underset{n=0}{\overset{\infty }{\sum }}{A}_{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}v{u}_{y}=\underset{n=0}{\overset{\infty }{\sum }}{B}_{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}u{v}_{x}=\underset{n=0}{\overset{\infty }{\sum }}{C}_{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}v{v}_{y}=\underset{n=0}{\overset{\infty }{\sum }}{D}_{n}
{A}_{n}
{B}_{n}
{C}_{n}
{D}_{n}
are Adomian polynomials [16]. From the Equations (2.5), (2.6), (2.7) and (2.8), we get
\underset{n=0}{\overset{\infty }{\sum }}{u}_{n}\left(x,y,t\right)=f\left(x,y\right)+{L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[\underset{n=0}{\overset{\infty }{\sum }}{A}_{n}+\underset{n=0}{\overset{\infty }{\sum }}{B}_{n}\right]\right]
\underset{n=0}{\overset{\infty }{\sum }}{v}_{n}\left(x,y,t\right)=g\left(x,y\right)+{L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[\underset{n=0}{\overset{\infty }{\sum }}{C}_{n}+\underset{n=0}{\overset{\infty }{\sum }}{D}_{n}\right]\right]
{u}_{0}\left(x,y,t\right)=f\left(x,y\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{n+1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{A}_{n}+{B}_{n}\right]\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}n\ge 0.
{v}_{0}\left(x,y,t\right)=g\left(x,y\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{v}_{n+1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{C}_{n}+{D}_{n}\right]\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}n\ge 0.
f\left(x,y\right)=g\left(x,y\right)=x+y
{u}_{0}\left(x,y,t\right)={v}_{0}\left(x,y,t\right)=x+y
{u}_{1}\left(x,y,t\right)
{v}_{1}\left(x,y,t\right)
can be calculated as
\begin{array}{c}{u}_{1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{A}_{0}+{B}_{0}\right]\right]\\ ={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{u}_{0}{u}_{0x}+{v}_{0}{u}_{0y}\right]\right]\\ ={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[\left(x+y\right)+\left(x+y\right)\right]\right]\\ =2t\left(x+y\right)\end{array}
{v}_{1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{C}_{0}+{D}_{0}\right]\right]={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{u}_{0}{v}_{0x}+{v}_{0}{v}_{0y}\right]\right]=2\left(x+y\right)t
{u}_{2}\left(x,y,t\right)
{v}_{2}\left(x,y,t\right)
\begin{array}{c}{u}_{2}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{A}_{1}+{B}_{1}\right]\right]\\ ={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[\left({u}_{0}{u}_{1x}+{u}_{1}{u}_{0x}\right)+\left({v}_{0}{u}_{1y}+{v}_{1}{u}_{0y}\right)\right]\right]\\ ={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[\left(2t\left(x+y\right)+2t\left(x+y\right)\right)+\left(2t\left(x+y\right)+2t\left(x+y\right)\right)\right]\right]\\ ={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[8t\left(x+y\right)\right]\right]\\ =4{t}^{2}\left(x+y\right)\end{array}
{v}_{2}\left(x,y,t\right)=4{t}^{2}\left(x+y\right)
Substituting all the values of
{u}_{0},{u}_{1},{u}_{2},\cdots
{v}_{0},{v}_{1},{v}_{2},\cdots
in Equation (2.7), we get
u\left(x,y,t\right)=\left(x+y\right)+2t\left(x+y\right)+4{t}^{2}\left(x+y\right)+\cdots
v\left(x,y,t\right)=\left(x+y\right)+2t\left(x+y\right)+4{t}^{2}\left(x+y\right)+\cdots
u\left(x,y,t\right)=\left(x+y\right)\left[1+2t+4{t}^{2}+\cdots \right]
v\left(x,y,t\right)=\left(x+y\right)\left[1+2t+4{t}^{2}+\cdots \right]
u\left(x,y,t\right)=\frac{x+y}{1-2t}
v\left(x,y,t\right)=\frac{x+y}{1-2t}
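As a sanity check, the closed-form sums above can be verified symbolically. A minimal SymPy sketch (an illustration, assuming the system solved here is $u_t = u u_x + v u_y$, $v_t = u v_x + v v_y$, as the recursion above implies):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Closed-form sums of the decomposition series obtained above
u = (x + y) / (1 - 2*t)
v = (x + y) / (1 - 2*t)

# Residuals of u_t = u*u_x + v*u_y and v_t = u*v_x + v*v_y
res_u = sp.simplify(sp.diff(u, t) - (u*sp.diff(u, x) + v*sp.diff(u, y)))
res_v = sp.simplify(sp.diff(v, t) - (u*sp.diff(v, x) + v*sp.diff(v, y)))
print(res_u, res_v)       # both residuals vanish
print(u.subs(t, 0))       # recovers the initial condition f(x, y) = x + y
```

Both residuals simplify to zero, and setting $t=0$ recovers $f(x,y)=x+y$, consistent with the series derivation.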
f\left(x,y\right)={x}^{2},\text{ }g\left(x,y\right)=y
{u}_{0}\left(x,y,t\right)={x}^{2},\text{ }{v}_{0}\left(x,y,t\right)=y
{u}_{1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{A}_{0}+{B}_{0}\right]\right]=2{x}^{3}t
{v}_{1}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{C}_{0}+{D}_{0}\right]\right]=yt
{u}_{2}\left(x,y,t\right)
{v}_{2}\left(x,y,t\right)
{u}_{2}\left(x,y,t\right)={L}_{t}^{-1}\left[\frac{1}{s}{L}_{t}\left[{A}_{1}+{B}_{1}\right]\right]=5{x}^{4}{t}^{2}
{v}_{2}\left(x,y,t\right)=y{t}^{2},\text{ }{u}_{3}\left(x,y,t\right)=14{x}^{5}{t}^{3}
and so on. Substituting all the values of
{u}_{0},{u}_{1},{u}_{2},\cdots
{v}_{0},{v}_{1},{v}_{2},\cdots
in Equation (2.7), we get
u\left(x,y,t\right)={x}^{2}\left(1+2tx+5{t}^{2}{x}^{2}+14{t}^{3}{x}^{3}+\cdots \right)
v\left(x,y,t\right)=y\left(1+t+{t}^{2}+\cdots \right)=\frac{y}{1-t}
(The shock occurs at
t=\frac{1}{4x}
). This is an approximate solution of given system of equations.
From the examples above, we can clearly say that we can calculate
u\left(x,y,t\right)
v\left(x,y,t\right)
when explicit solutions exist for the given initial functions. More importantly, the methodology [1] [2] [3] has potential application to systems of nonlinear partial differential equations, and clearly in the case of stochastic parameters as well. The given system of equations has a unique solution for the given boundary conditions.
|
Darcy's law – how to calculate permeability
What is porosity? How to calculate porosity
How to use the porosity and permeability calculator
This porosity and permeability calculator uses Darcy's law to calculate these two properties of a porous material with a fluid flowing through it. The main application for this calculation is in the earth sciences to understand how water, oil, gas, etc., travel through the different layers of the earth.
The diagram above represents how we model a fluid moving through a porous substance due to a pressure gradient on either side of it.
In this accompanying article to our porosity and permeability calculator, we will:
Present Darcy's law equation;
Explain how to calculate permeability; and
Show how to calculate porosity.
Darcy's law is an approximation used extensively in the earth sciences to determine the characteristics of a material. It models the flow of a fluid through a porous medium. The equation for Darcy's law is:
\quad Q = \frac{kA}{\mu L}\Delta p
Q
– Discharge rate, in units of volume per time;
k
– Permeability of the material;
A
– Cross-sectional area of the material;
\mu
– Dynamic viscosity of the fluid;
L
– Distance the fluid travels through the material; and
\Delta p
– Pressure difference either side of the material.
We can rearrange this equation to find the permeability in terms of the other quantities that we can measure from experiments:
\quad k = \frac{Q \mu L}{A \Delta p}
The widely used unit for permeability is appropriately the darcy (d) or millidarcy (md). The dimension of the darcy unit is L² (length squared). Therefore, you can also represent permeability using the SI unit m² (meters squared) – 1 darcy is 9.87 × 10⁻¹³ m².
When we ask "What is the porosity of a substance?", it means to ask how much of the volume of the substance is empty compared to the space taken up by the solid material. The greater the porosity, the more open space per unit volume of material.
When using the Darcy equation, the porosity of a substance means how well it allows a fluid to flow through it. The higher the porosity, the better fluids flow through it, but only if the voids are well connected. Let's first look at the equation for fluid flow porosity, and then we'll unpack it and interpret it.
The equation for porosity is:
\quad \phi = \frac{Qt}{AL}
\phi
– Porosity of the material; and
t
– Time taken for the fluid to travel the distance
L
through the substance with cross-sectional area
A
This equation tells us that the more porous a material is, the quicker the discharge rate is for a given volume of material. Porosity is a dimensionless quantity.
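Putting the two formulas together, here is a small Python sketch in SI units; the input numbers are made-up illustrative values, not measurements:

```python
# Illustrative inputs (SI units) -- not real measurements
Q = 2.0e-6     # discharge rate, m^3/s
mu = 1.0e-3    # dynamic viscosity (roughly water), Pa*s
L = 0.1        # flow path length, m
A = 5.0e-3     # cross-sectional area, m^2
dp = 2.0e5     # pressure difference, Pa
t = 100.0      # residence time, s

k = Q * mu * L / (A * dp)    # permeability, m^2 (Darcy's law rearranged)
k_darcy = k / 9.87e-13       # the same value expressed in darcy
phi = Q * t / (A * L)        # porosity, dimensionless
print(k, k_darcy, phi)       # 2e-13 m^2, ~0.2 darcy, phi = 0.4
```

Note that a physically meaningful porosity is between 0 and 1; a result above 1 signals inconsistent inputs.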
Using our porosity and permeability calculator is relatively straightforward. First, we define the fluid pressure gradient across the substance. You can either enter the pressure on both sides or input the pressure difference directly.
Continue to enter the following terms:
The distance the fluid flows through the material.
The discharge rate of fluid leaving the material.
The viscosity of the fluid.
You will then see that we have enough information to calculate the permeability, and you will be rewarded with a result.
To calculate the porosity, input the residence time, which is the time taken for the fluid to move through the material. And there you have it – the porosity of your substance.
💡 If you have values in different units, first change the unit of the variable by clicking on the unit to open a drop-down list of alternative units.
We hope you have found this porosity and permeability calculator useful. Carry on learning about fluid mechanics by checking out these other related calculators:
Reynolds number calculator;
Hydraulic conductivity calculator;
Darcy Weisbach calculator; and
Flow rate calculator.
What is k in Darcy's law?
The term k in Darcy's law represents the permeability of a material and is a measure of how easily a fluid (liquid or gas) can flow through a porous substance, such as sand, rock, etc.
How has Darcy's law been verified in nature?
Henry Darcy based the law named after him on experiments he performed that involved water flow through beds of sand. His work was the foundation of hydrogeology, one of the earth sciences.
How do I calculate coefficient of permeability?
To calculate the permeability of a porous material, use Darcy's law equation:
Multiply together the fluid discharge rate, dynamic viscosity, and distance traveled.
Divide the result from Step one by the cross-sectional area of the material multiplied by the pressure difference on either side of the material.
The result is the permeability of the material through which the fluid travels.
What is Darcy velocity?
Darcy velocity is defined as the flow rate of a fluid per unit cross-sectional area of a porous material. It depends on the porosity of the material and the pressure difference that drives the fluid flow.
What is the permeability of soil?
1 to 10 darcy, depending on soil type. A very sandy soil will be more permeable (~10 darcy), whereas a very peaty soil with lots of organic material will be less permeable (~1 darcy).
Porosity is a measure of the amount of void space in a material. Permeability is a measure of how well fluids flow through a material. You can have a material with very high porosity, but if the voids in the material are not connected, fluids will not flow through it, and its permeability will be lower.
What is the porosity of soil?
The porosity of soil depends on the soil type. For sandy soils, it is between 0.36 and 0.43, whereas for soil with a lot of clay content, it is between 0.51 and 0.58.
|
What is a camera field of view?
How to calculate the angle of view of a camera
How to calculate the camera field of view
A quick summary: angle of view vs. field of view
...and some examples!
Does the sensor size of a DSLR matter?
Whether you are planning a photoshoot or just a photography enthusiast, our camera field of view calculator will help you learn the whole picture.
With our tool, you will be able to calculate the camera's field of view (any camera!), and not only that, there is more to it. Keep reading to discover:
What is the field of view of a camera?;
What is the difference between the angle of view and the field of view of a camera?;
How to calculate the angle of view;
How to determine the field of view of a camera; and
Some focused examples.
Refine your photographic knowledge with our dedicated tools like the depth of field calculator and our magnification of a lens calculator. Cheese! 📸
📷 Cameras are our way to create memories — instants preserved forever — from what we can see. However, their electronic eyes have some limitations when it comes to how much of those memories they record. We may need to think beforehand about what we want to include in our pictures, and many photographers find it helpful to know their cameras' field of view.
The concept of field of view is not unique to photography: check how it differs for astronomers at our telescope field of view calculator. 🔭
⚠️ There is quite a bit of confusion online (and not only) on the matter, with various definitions and interpretations. Here we gave the one that makes the most sense for us, but feel free to disagree and let us know!
The basic concept underlying the entire matter is that cameras can capture a single, defined portion of the real world at once. How large this portion is depends on the camera set-up, particularly on the type of lens and camera body used by the photographer.
Imagine drawing a rectangle in front of you. How would you tell someone how big it is? You have two ways of saying so:
Using angles to specify in absolute terms the dimensions; or
Using a pair of length measurements: you can't say only "1 meter", but you have to specify "1 meter at 4 meters of distance" (this is a relative measurement).
The two quantities are used almost interchangeably, as they describe the same concept. However, their definitions differ. Let's discover them in detail.
When we use angles to define the dimensions of a picture, we talk of the angle of view. The angle of view is pretty easy to visualize: your camera lies at the center of a sphere, and connecting the angles of the scene you are capturing to its center gives you a set of three angles:
A horizontal angle;
A vertical angle; and
A diagonal angle.
The first two define the width and height of the rectangle corresponding to our sensor. Notice that these quantities are absolute: you can place the "sphere" as far as you like (even at an infinite distance), and the angle of view will still be the same.
To calculate the angle of view, you need to know two parameters of your set-up; one easy, the other one not so much:
The focal length of the lens you are using; and
The dimensions of the sensor of your camera.
🙋 To find the size of your sensor, search on Google "[your camera model] sensor size": you will easily find the correct values!
The formula for the angle of view (in degrees) is:
\footnotesize \text{aov}_i = 2 \arctan{\left(\frac{\text{sensor size}_i}{2\times \text{focal length}} \right)}
Why the tiny
i
, you ask? This formula holds for all three possible directions on the sensor: horizontal, vertical, and diagonal.
As a rule of thumb, the higher the magnification of the lens is, the smaller the angle of view. If you manage to fit the Moon in your picture, the vertical angle of view would be around half a degree, but this feat requires a pretty high magnification. On the other hand, a
55\ \text{mm}
lens on an SLR (single-lens reflex) camera will give you a horizontal angle of view of about
20\degree
Think about the Moon for a second more. You managed to fit it into your picture — your sensor! We are talking of a
3,500\ \text{km}
body. You can take a picture of a plane passing between you and our satellite with some luck and good timing. A B747 flying at
6.5\ \text{km}
would almost eclipse the Moon, fitting perfectly in the picture together. Does it mean the jumbo jet is really that jumbo, or that the field of view can be a relative concept, too?
Even if angles are easy to visualize, they can be hard to estimate (how wide is
5\degree
?). The field of view comes in handy to complement the idea of the angle of view.
Take the diverging lines from the sphere's center to the corner of the scene, and stop them at a certain distance
d
. Now, draw the corresponding rectangle: you will obtain a set of measurements we call the field of view at a distance
d
Here is the lack of absoluteness of the field of view: its value varies with the distance between the camera and the subject. That's why in the same angle of view, you can fit both the Moon and a passenger airplane.
As for the angle of view, you can identify three quantities associated with the field of view: a horizontal length, a vertical length, and a diagonal length. To calculate the field of view
\text{fov}_i
of a camera in each of the three possible directions, use the following formula:
\footnotesize \text{fov}_i = 2\tan{\left(\frac{\text{aov}_i}{2}\right)}\times d
You already know what the
i
means, and
d
is the distance at which the field of view is measured. Remember to use the correct dimension of your sensor: you want to use its length to calculate the horizontal field of view of your camera, not the vertical one... unless you are taking a portrait picture.
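The two formulas can be combined into a few lines of code. A minimal Python sketch (angles in degrees; the field of view comes out in the same unit as the distance you pass in):

```python
import math

def angle_of_view(sensor_mm: float, focal_mm: float) -> float:
    """Angle of view in degrees along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def field_of_view(aov_deg: float, distance: float) -> float:
    """Field of view at a given distance (same unit as the distance)."""
    return 2 * math.tan(math.radians(aov_deg) / 2) * distance

# Canon EOS 550D sensor (22.3 mm x 14.9 mm) with a 24 mm lens,
# as in the worked example below
aov_h = angle_of_view(22.3, 24)       # ~50 degrees
aov_v = angle_of_view(14.9, 24)       # ~34 degrees
print(aov_h, aov_v)
print(field_of_view(aov_h, 200))      # horizontal fov at 200 m, ~186 m
```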
A camera captures a rectangular portion of the real world, projecting it onto its sensor. We identified two possible ways to measure the size of that portion:
The angle of view; and
The field of view.
We define both quantities for three spatial directions, which allows us to calculate a vertical, diagonal, and horizontal field of view for a camera set-up, alongside the respective angles of view.
The field of view is a function of both the angle of view and the distance between the sensor and the object.
It's impossible to define a typical field of view of a camera: it depends on the lens you are mounting at the moment. However, we can give you some practical examples.
Canon produces a rectilinear
\text{11-24 mm}
lens — which is insane — that allows capturing sharp wide-angle images. How wide? Let's mount the lens on a Canon EOS 550D and calculate this camera field of view!
A rectilinear lens is a lens that preserves orthogonality: two straight, perpendicular lines in the real world are depicted as straight, perpendicular lines by a rectilinear lens. A fisheye lens, on the other hand, distorts them — a small price to pay for an extremely wide field of view!
The Canon EOS 550D has a sensor size
22.3\times14.9\ \text{mm}
, which allows us to calculate both the horizontal and the vertical angle of view. Let's not be extreme and use the
24\ \text{mm}
focal length.
For the horizontal angle of view we have:
\footnotesize \begin{align*} \text{aov}_{\text{h}} &= 2\times \arctan{\left(\frac{22.3\ \text{mm}}{2\times 24 \text{mm}} \right)}\\\\ &=0.87\ \text{rad} = 50\degree \end{align*}
And for the vertical direction we calculate:
\footnotesize \begin{align*} \text{aov}_{\text{v}} &= 2\times \arctan{\left(\frac{14.9\ \text{mm}}{2\times 24 \text{mm}} \right)}\\\\ &=0.60\ \text{rad} = 34\degree \end{align*}
That is an extremely wide angle of view: it covers
1700
square degrees! However, you would need more than 20 of these fields to capture a full
360\degree
picture.
What can you capture with this lens? For a distance
d=200\ \text{m}
, the angles of view convert into the respective linear fields of view:
\footnotesize \begin{align*} &\text{fov}_{\text{h}} = 2\tan{\left(\frac{50\degree}{2}\right)}\times 200= 186\ \text{m}\\\\ &\text{fov}_{\text{v}} = 2\tan{\left(\frac{34\degree}{2}\right)}\times 200= 122\ \text{m}\\ \end{align*}
That is more than enough to fit the whole Colosseum in Rome in a single picture. And all of this standing barely more than the diameter of the arena itself away!
On the other hand, let's calculate the camera field of view for a typical telephoto lens set-up. We keep the same camera but mount a
200\ \text{mm}
lens. Insert the values in the appropriate fields of our camera field of view calculator and calculate the angles of view in this case:
\footnotesize \begin{align*} \text{aov}_{\text{h}} &= 2\times \arctan{\left(\frac{22.3\ \text{mm}}{2\times 200\ \text{mm}} \right)}\\\\ &=0.11\ \text{rad} = 6.4\degree \end{align*}
For the horizontal one, and:
\footnotesize \begin{align*} \text{aov}_{\text{v}} &= 2\times \arctan{\left(\frac{14.9\ \text{mm}}{2\times 200 \text{mm}} \right)}\\\\ &=0.074\ \text{rad} = 4.3\degree \end{align*}
The solid angle covered by this set-up is about
27
square degrees! If you want to take a
360\degree
picture, you'll need more than
1500
shots! But at a distance of
200\ \text{m}
you would be able to picture an area of
22\ \text{m}\times 15\ \text{m}
: good enough to take some exciting wildlife pictures without disturbing anyone!
🙋 You can use our camera field of view calculator to find the values of the angles and fields of view and to calculate the needed focal length of the lens you need to mount to obtain a particular field of view. We locked the variables associated with the sensor's size: it's unlikely you will change it instead of the lens!
Do you want to learn more about the fundamentals of photography with a slight technical twist? We made the right calculators for you: the aspect ratio calculator and the crop factor calculator!
The sensor size of your camera significantly affects the quality of your pictures. A camera with a larger sensor will give you a wider field of view for the same lens, maintaining the same magnification: your subject will be surrounded by more background. The advantages are relative, though: when printing the photograph in the same format, a larger field of view will necessarily translate to a lower magnification.
What is the angle field of view of a camera with a 23.5 x 15.6 mm sensor and 50 mm lens?
Using angles, it is 26.5° × 17.7°, the horizontal and vertical angle of view, respectively. To calculate these values, input them in the angle of view formula:
aovᵢ = 2 × arctan(sᵢ/(2 × f))
sᵢ is either the width w or the height h of the sensor; and
f the focal length of the lens.
To find the result, substitute these values:
aovₕ = 2 × arctan(23.5/(2 × 50)) = 26.5°
aovᵥ = 2 × arctan(15.6/(2 × 50)) = 17.7°
What is the difference between camera angle of view and camera field of view?
The angle of view of a camera is an absolute measure of the horizontal and vertical angles captured by a combination of camera and lens. The field of view measures the same concept but uses lengths. Since the angles don't change, the field of view depends on the distance: specifically, it increases alongside it.
How do I determine the camera field of view?
To calculate the camera field of view, you need to know three parameters:
The focal length f of the lens.
The camera sensor's size (w × h).
The distance d from the camera to the subject.
Calculate the angle of view for each sensor dimension sᵢ with the formula: aovᵢ = 2 × arctan(sᵢ/(2 × f)).
Input each result in the camera field of view equation:
fovᵢ = 2 × tan(aovᵢ/2) × d
|
\mathrm{with}\left(\mathrm{MultivariatePowerSeries}\right):
a≔\mathrm{PowerSeries}\left(1+3x+7xy+4{x}^{2}\right):
\mathrm{GetCoefficient}\left(a,x\right)
\textcolor[rgb]{0,0,1}{3}
\mathrm{GetCoefficient}\left(a,xy\right)
\textcolor[rgb]{0,0,1}{7}
\mathrm{GetCoefficient}\left(a,x{y}^{2}\right)
\textcolor[rgb]{0,0,1}{0}
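For readers without Maple, the same coefficient extraction can be mimicked in Python with SymPy (a rough analogue for polynomial inputs, not the MultivariatePowerSeries package itself):

```python
import sympy as sp

x, y = sp.symbols('x y')
a = 1 + 3*x + 7*x*y + 4*x**2

def get_coefficient(expr, xdeg, ydeg):
    """Coefficient of x**xdeg * y**ydeg in a polynomial expression."""
    return expr.coeff(x, xdeg).coeff(y, ydeg)

print(get_coefficient(a, 1, 0))   # coefficient of x     -> 3
print(get_coefficient(a, 1, 1))   # coefficient of x*y   -> 7
print(get_coefficient(a, 1, 2))   # coefficient of x*y^2 -> 0
```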
We create a univariate polynomial over power series with
z
as its main variable, corresponding to the expression
x+yz+\frac{{z}^{2}}{1-x-y}
f≔\mathrm{UnivariatePolynomialOverPowerSeries}\left([\mathrm{PowerSeries}\left(x\right),\mathrm{PowerSeries}\left(y\right),\mathrm{GeometricSeries}\left([x,y]\right)],z\right):
The coefficient of
{z}^{0}
is the power series corresponding to
x
\mathrm{GetCoefficient}\left(f,0\right)
[\textcolor[rgb]{0,0,1}{PowⅇrSⅇrⅈⅇs:}\textcolor[rgb]{0,0,1}{x}]
Similarly, the coefficient of
{z}^{2}
is the power series corresponding to
\frac{1}{1-x-y}
\mathrm{GetCoefficient}\left(f,2\right)
[\textcolor[rgb]{0,0,1}{PowⅇrSⅇrⅈⅇs of}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{:}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{\dots }]
|
Antonino Drago*
Department of Physical Science, Naples University “Federico II”, Naples, Italy.
Abstract: Since 1947, a foundation of Quantum Mechanics (QM) on functional analysis has been suggested by Segal: one defines the C*-algebra of the observables, and then the Gelfand-Naimark-Segal theorem faithfully represents this algebra in Hilbert space. In the 1970s Emch reiterated this formulation and improved it. Recently Strocchi improved it even more. First, he suggested an axiomatization of the paradigmatic Dirac-von Neumann formulation of QM, to which he addresses two basic criticisms: a weak linkage with the experimental basis of theoretical physics, and obscurity about the separation mark between classical mechanics and QM. Afterwards, through an analysis of the experimental basis of a physical theory, he suggests an explanation of Segal’s restriction of the operators to be bounded. Eventually, he represents this algebra in Hilbert space and at last, by means of the Weyl algebra, he obtains the symmetries of the dynamics of a particle theory. In fact, several characteristic features of this formulation correspond to those determined by the two choices which are the alternatives to the choices of the dominant formulation. It is a problem-based theory, since it starts from a problem (i.e. the indeterminacy) rather than from axioms; then, it argues through both doubly negated propositions and an ad absurdum proof. Moreover, its theoretical development is similar to that of an alternative classical theory, since it puts the algebra before the geometry; the bounded operators are represented by a polynomial algebra, which pertains to constructive mathematics. Eventually, he obtains the symmetries of the theory. The problems to be overcome in order to accurately re-construct his formulation according to the two alternative choices are listed. It is concluded that, rather than an alternative role, this formulation plays a complementary role to the paradigmatic one.
Keywords: Quantum Mechanics, C*-Algebra Approach, Strocchi’s Formulation, Two Dichotomies, Constructive Mathematics, Non-Classical Logic
As a consequence of Axioms I and II, the states … define positive linear functionals on the algebra of observables … The following axiom relates such functionals to the experimental expectations … Axiom III. Expectations [of an experiment applying an operator A to a state ω] are given by the Hilbert space matrix element ⟨A⟩ω = (Ψω, A Ψω) (ivi, p. 2).
\left[{q}_{i},{q}_{j}\right]=0=\left[{p}_{i},{p}_{j}\right],\text{\hspace{0.17em}}\left[{p}_{i},{q}_{j}\right]=\frac{ih}{2\text{π}}{\delta }_{ij}.
{q}_{i}\psi \left(x\right)={x}_{i}\psi \left(x\right);\text{\hspace{0.17em}}{p}_{j}\psi \left(x\right)=\frac{ih}{2\text{π}}\frac{\partial \psi }{\partial {x}_{j}}\left(x\right)
{\Delta }_{\omega }\left(A\right)+{\Delta }_{\omega }\left(B\right)\ge C>0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\omega
\Delta \left(A\right)+\Delta \left(B\right)>0
Cite this paper: Drago, A. (2018) Is the C*-Algebraic Approach to Quantum Mechanics an Alternative Formulation to the Dominant One?. Advances in Historical Studies, 7, 58-78. doi: 10.4236/ahs.2018.72005.
[1] Bishop, E., & Bridges, D. (1985). Constructive Analysis. Berlin: Springer.
[2] Bridges, D. (1979). Constructive Functional Analysis. London: Pitman.
[3] Bridgman, P. W. (1927). The Logic of Modern Physics. New York: MacMillan.
[4] Drago, A. (1996). Mathematics and Alternative Theoretical Physics: The Method for Linking Them Together. Epistemologia, 19, 33-50.
[5] Drago, A. (2000). Which Kind of Mathematics for Quantum Mechanics? In H. Weyl, C. Garola, & A. Rossi (Eds.), Foundations of Quantum Mechanics. Historical Analysis and Open Questions (pp. 167-193). Singapore: World Scientific.
[6] Drago, A. (2002). Lo sviluppo storico della meccanica quantistica visto attraverso i concetti fondamentali della fisica. Giornale di Fisica, 43, 143-167.
[7] Drago, A. (2004). A New Appraisal of Old Formulations of Mechanics. American Journal of Physics, 72, 407-409.
[8] Drago, A. (2012). Pluralism in Logic. The Square of Opposition, Leibniz’s Principle and Markov’s Principle. In J.-Y. Béziau, & D. Jacquette (Eds.), Around and beyond the Square of Opposition (pp. 175-189). Basel: Birkhäuser.
[9] Drago, A. (2013). The Emergence of Two Options from Einstein’s First Paper on Quanta. In R. Pisano, D. Capecchi, & A. Lukesova (Eds.), Physics, Astronomy and Engineering. Critical Problems in the History of Science and Society (pp. 227-234). Siauliai: Scientia Socialis P.
[10] Drago, A. (2014). A Dozen Formulations of Quantum Mechanics: A Mutual Comparison According to Several Criteria. In P. Tucci (Ed.), Atti SISFA 2014 (pp. 103-112). Pavia: Pavia U.P.
[11] Drago, A. (2016). The Three Formulations of Quantum Mechanics Founded on the Alternative Choices. In S. Esposito (Ed.), Società Italiana degli Storici della Fisica e dell’Astronomia: Atti del XXXV Convegno annuale/Proceedings of the 35th Annual Conference (pp. 251-259). Pavia: Pavia U.P.
[12] Drago, A., & Venezia, A. (2002). A Proposal for a New Approach to Quantum Logic. In C. Mataix, & A. Rivadulla (Eds.), Fisica Quantica y Realidad. Quantum Physics and Reality (pp. 251-266). Madrid: Fac. Filosofia, Univ. Complutense de Madrid.
[13] Emch, G. G. (1984). Mathematical and Conceptual Foundations of 20th Century Physics. Amsterdam: North-Holland.
[14] Monna, A. F. (1973). Functional Analysis in Historical Perspective. Utrecht: Oosthoek.
[15] Pour-El, M. B. (1975). On a Simple Definition of Computable Function of a Real Variable—With Applications to Functions of a Complex Variable. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 21, 1-19.
[16] Pour-El, M. B., & Richards, J. I. (1989). Computability in Analysis and Physics. Berlin: Springer.
[17] Segal, I. (1947). Postulates of General Quantum Mechanics. Annals of Mathematics, 48, 930-948.
[18] Strocchi, F. (2008). An Introduction to the Mathematical Structure of Quantum Mechanics (1st ed.). Singapore: World Scientific.
[19] Strocchi, F. (2010). An Introduction to the Mathematical Structure of Quantum Mechanics (2nd ed.). Singapore: World Scientific.
[20] Strocchi, F. (2012). The Physical Principles of Quantum Mechanics: A Critical Review. The European Physical Journal Plus, 127, 12.
[21] Takamura, H. (2005). An Introduction to the Theory of C*-Algebras in Constructive Mathematics. In L. Crosilla, & P. Schuster (Eds.), From Sets and Types to Topology and Analysis (pp. 280-292). Oxford: Clarendon.
[22] Von Neumann, J. (1936). On an Algebraic Generalization of the Quantum Mechanical Formalism (Part I). Recueil Mathématique (Matematicheskii Sbornik), 1, 415-484.
|
Finite Element Analysis of Thiopave Modified Asphalt Pavement
Xiushan Wang, Junjie Wang, Yangjie Qiu
School of Architecture and Engineering, ZheJiang Sci-Tech University, Hangzhou, China.
With the aim of studying the anti-rutting performance of Thiopave modified asphalt mixture applied to the upper layer of pavement, the strain-hardening creep model in ABAQUS finite element software was used to analyze the rutting under the condition of introducing temperature field. Compared with the calculation results of the rutting of ordinary asphalt pavement, it is found that Thiopave can improve the temperature sensitivity of asphalt mixture. With the increase of temperature, the rutting change of Thiopave modified asphalt pavement is smaller than that of ordinary asphalt. Thiopave also has a certain degree of improvement in the fatigue resistance of asphalt pavements, which can be applied to sections with high traffic volume in high temperature areas.
Thiopave Modified Asphalt Mixture, Temperature Field, ABAQUS Finite Element, Temperature Sensitivity, Fatigue Resistance
Wang, X. , Wang, J. and Qiu, Y. (2019) Finite Element Analysis of Thiopave Modified Asphalt Pavement. World Journal of Engineering and Technology, 7, 48-57. doi: 10.4236/wjet.2019.72B006.
Due to the warming climate and heavy vehicle loads, rutting damage of asphalt pavement has become more and more serious, so finding a road material with excellent anti-rutting performance is especially important. Yang Xiwu, Das et al. [1] [2] conducted comprehensive studies on the road performance of sulfur-modified asphalt mixture and its influencing factors, and found that SEAM (Sulphur Extended Asphalt Modifier) can improve the high-temperature stability of asphalt mixture. Thiopave, an upgraded alternative to SEAM [3] supplemented with a small amount of plasticiser and viscosity reducer, may further enhance the strength and durability of sulfur-modified asphalt mixture [4]; the Thiopave modifier significantly improves the high-temperature stability of asphalt mixtures. At present, numerical simulation of rutting on asphalt pavement has achieved certain results, but there are few studies on the anti-rutting performance of Thiopave modified asphalt mixture applied to the upper layer of pavement. In this paper, the strain-hardening creep model in the ABAQUS finite element software is used to analyze rutting under the condition of an introduced temperature field. Under repeated vehicle loading, the influence of temperature, load magnitude, and number of load repetitions on the deformation of asphalt pavement is studied; the anti-rutting performance of Thiopave asphalt mixture is explored and contrasted with that of ordinary asphalt pavement, providing a technical reference for its practical application.
2. Establish Rutting Calculation Model
2.1. Pavement Structure
For the rutting calculation with the ABAQUS finite element software, it is assumed that the materials of each layer are uniform and isotropic, the asphalt surface layer follows a viscoelastic constitutive relation, and the other layers satisfy Hooke’s law [5]. The three-dimensional model is simplified to two dimensions, and the width and depth of the road model are given finite sizes of 3.75 m and 3 m, respectively. The layered form of the semi-rigid base pavement is shown in Figure 1. When the upper layer adopts AH-70 ordinary asphalt mixture, the structure is an ordinary asphalt pavement; the mesh uses CPE8R elements (eight-node biquadratic plane strain quadrilateral, reduced integration), and the finite element model is shown in Figure 2.
The strain-hardening creep model in ABAQUS is adopted for the rutting calculation. The constitutive equation [5] is Equation (1):
\dot{\epsilon}_{cr} = \left( A q^{n} \left[ (m+1)\, \bar{\epsilon}_{cr} \right]^{m} \right)^{\frac{1}{m+1}}
Figure 1. Pavement structure.
where \dot{\epsilon}_{cr} is the uniaxial equivalent creep strain rate; \bar{\epsilon}_{cr} is the uniaxial equivalent creep strain; q is the stress (MPa); and A, n, m are model parameters, generally determined by laboratory creep tests, with A, n > 0 and −1 < m ≤ 0.
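As a quick numerical illustration of Equation (1), the strain-hardening creep rate can be sketched as below. The parameter values are hypothetical stand-ins, not the laboratory-fitted values of Table 1:

```python
def creep_strain_rate(q, eps_cr, A, n, m):
    """Uniaxial equivalent creep strain rate from the ABAQUS strain-hardening
    law of Equation (1): (A * q**n * ((m+1)*eps_cr)**m) ** (1/(m+1)).
    q is stress in MPa; A, n, m are material parameters with A, n > 0
    and -1 < m <= 0 (here, illustrative values only)."""
    return (A * q**n * ((m + 1) * eps_cr)**m) ** (1.0 / (m + 1))

# Hypothetical parameters for demonstration; real values come from
# indoor creep tests as described in Section 2.3:
rate = creep_strain_rate(q=0.7, eps_cr=1e-3, A=1e-4, n=0.8, m=-0.5)
```

Note how the negative exponent m makes the rate decrease as accumulated creep strain grows, which is the hardening behaviour the model name refers to.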
2.3. Model Parameter
It is assumed that the material parameters of the soil foundation, lime soil, and cement stabilized macadam base remain constant at different temperatures. Compressive resilient modulus tests and creep tests were performed to determine the elastic and creep parameters of the asphalt mixtures [6]. The specific values are shown in Table 1, and the thermal parameters of the materials are shown in Table 2.
2.4. Boundary Conditions and Load Definition
The left and right sides of the model are constrained to zero displacement in the X direction, and the bottom boundary of the model is fixed, as shown in Figure 3. The vehicle load is simplified from a double-circle uniform load to a double-rectangle uniform load. The axle load is the standard axle load BZZ-100 of the current asphalt pavement design specification JTG D50-2017, with a tire ground pressure of 0.7 MPa. Each rectangle is 19.2 cm long and 18.6 cm wide, and the distance between the centers of the two rectangles is 314 mm, as shown in Figure 4.
In this study, the effect of repeated loading on asphalt pavement rutting is simplified to a single loading step to reduce the finite element computation time; the cumulative loading time is calculated with Equation (2).
t=\frac{0.36NP}{n_{w}pBv}
where t is the cumulative wheel-load action time; N is the number of wheel-load repetitions; P is the vehicle axle weight; nw is the number of wheels per axle; p is the tire ground pressure; B is the tire ground width; and v is the traffic speed. The load parameters obtained from Equation (2) are shown in Table 3.
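Equation (2) can be sketched in a few lines; the input values below are hypothetical stand-ins for illustration, not the actual load parameters listed in Table 3:

```python
def cumulative_load_time(N, P, n_w, p, B, v):
    """Equation (2): cumulative wheel-load action time
    t = 0.36 * N * P / (n_w * p * B * v).
    N: load repetitions, P: axle weight, n_w: wheels per axle,
    p: tire ground pressure, B: tire ground width, v: traffic speed.
    Units must be consistent with those used for Table 3."""
    return 0.36 * N * P / (n_w * p * B * v)

# Hypothetical example: 500,000 repetitions of a 100 kN axle on four wheels,
# 0.7 MPa tire pressure, 0.186 m tire width, 40 km/h speed (values assumed):
t = cumulative_load_time(N=500_000, P=100, n_w=4, p=0.7, B=0.186, v=40)
```

The formula is linear in N, so doubling the traffic volume doubles the equivalent loading time applied in the single loading step.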
2.5. Cite Temperature Field
Affected by the natural environment, the temperature of the road surface fluctuates greatly, while the temperature deep in the subgrade fluctuates only slightly, so it can be assumed
Table 1. Material elastic parameters and creep parameters.
Table 2. Material thermal property parameters.
that the temperature there remains constant [7]. According to the principles of heat transfer, the heat balance of the road surface is mainly governed by short-wave solar radiation, long-wave radiation from the road surface and
Figure 4. Rectangular uniform load.
Table 3. Rutting calculation model load parameters.
the atmosphere, and heat convection [8]. The DFLUX and FILM subroutines in ABAQUS are applied to model solar radiation and road-surface convective heat exchange respectively, atmospheric temperature is used to determine the temperature-field boundary conditions, transient and steady-state analysis steps are established, and the asphalt surface temperature field files are obtained [9] [10]. In the rutting calculation model, because both the elastic parameters and the creep parameters depend on temperature, the asphalt temperature field file is imported into the corresponding analysis step to calculate the rutting [11].
3. Analysis of Factors Affecting Deformation of Asphalt Pavement
3.1. The Influence of Temperature on Asphalt Mixture
The influence of temperature on asphalt mixture cannot be ignored. An increase in temperature rapidly reduces the dynamic stiffness modulus of the asphalt mixture, so the pavement’s resistance to rutting deformation decreases and the deformation gradually expands, eventually leading to pavement damage. Permanent deformation of the asphalt mixture at high temperature is the main cause of rutting, which usually occurs in summer when the temperature exceeds 25˚C - 30˚C. The road surface temperature fluctuates strongly with the atmospheric temperature. Figure 5 shows the atmospheric temperature variation over a day in the high-temperature season in a city in Guangdong, and Figure 6 shows the corresponding variation of the asphalt road surface temperature.
As can be seen from Figure 6, the asphalt pavement surface reached 55˚C at 2:00 pm. Taking 25˚C, 35˚C, 45˚C, and 55˚C as representative temperatures, we study their effects on rutting; the results are shown in Figure 7(a) and Figure 7(b).
As shown in Figure 7(a) and Figure 7(b), the maximum rut deformation of ordinary asphalt pavement at 55˚C is more than twice that of Thiopave modified asphalt pavement under the same load. When the temperature rose from 25˚C to 55˚C, the maximum deflection at the wheel center increased by 2.65 mm for the Thiopave asphalt pavement and by 5.97 mm for the ordinary asphalt pavement. The rutting of Thiopave modified asphalt pavement thus grows far less with temperature than that of ordinary asphalt pavement, so we can conclude that the Thiopave modifier may improve the rutting resistance of asphalt mixtures by reducing their temperature sensitivity.
3.2. Effect of Tire Pressure on Asphalt Pavement Rutting
Load is one of the crucial factors in the rutting deformation of asphalt pavement: the more overloaded the vehicles, the more severe the rutting. Keeping the thickness and material of each layer constant, we analyze the rutting caused by different tire pressures under the condition of
Figure 5. Atmospheric temperature variation during a day.
Figure 6. Pavement temperature variation during a day.
Figure 7. Vertical displacement under varying temperature. (a) Thiopave modified asphalt pavement; (b) Ordinary asphalt pavement.
500,000 axle-load repetitions and an atmospheric temperature of 30˚C. The results are shown in Figure 8.
From Figure 8 it can be seen that as the tire pressure increases from 0.7 MPa to 1.2 MPa, the rutting of the Thiopave asphalt pavement increases from 3.7 mm to 5.2 mm, while that of the ordinary asphalt pavement increases from 9 mm to 12.8 mm; the rut generated on the Thiopave modified asphalt pavement is therefore smaller than on the ordinary asphalt pavement. Traffic volume also has an important influence on rutting: the cumulative equivalent axle load is positively correlated with rut deformation, and under cyclic loading the rutting of the pavement gradually increases. Again keeping the thickness and material parameters of each layer constant, we analyze the rutting caused by different numbers of load repetitions with a tire pressure of 0.7 MPa and an atmospheric temperature of 30˚C. The results are shown in Figure 9 and Figure 10.
It can be seen from Figure 9 that under cyclic loading the rut depth grows nonlinearly with the number of load repetitions.
Figure 8. Maximum rut deflection under varying tire pressure.
Figure 9. Rut variation under varying numbers of load repetitions.
It can be seen from Figure 10 that the rutting deformation is small when the number of axle-load repetitions is less than 10,000, and the rutting of Thiopave modified asphalt pavement is close to that of ordinary asphalt pavement. However, when the number of repetitions exceeds 10,000, the rut depth increases steadily and a clear gap emerges between the two pavement structures, with the rut depth of the ordinary asphalt pavement significantly larger than that of the Thiopave modified pavement. At 1,000,000 repetitions, the difference in rut depth between the two structures reaches 7.2 mm. Figure 10 also shows that the upheaval on both sides of the wheel track is barely visible below 10,000 repetitions, when the rutting comes mainly from vertical deflection.
Figure 10. Vertical displacement under varying numbers of load repetitions. (a) Thiopave modified asphalt pavement; (b) Ordinary asphalt pavement.
As the number of load repetitions increases, the upheaval on both sides of the wheel track gradually grows. When the cyclic load reaches 1,000,000 repetitions, both the deflection and the upheaval of the Thiopave modified asphalt pavement are smaller than those of ordinary asphalt pavement, and they also vary less.
a) The rutting deformation of Thiopave modified asphalt pavement increases with temperature and tire pressure, but remains smaller than that of ordinary asphalt pavement.
b) In the early stage of cyclic loading, the rutting of Thiopave modified asphalt pavement grows slowly and remains close to that of ordinary asphalt pavement. As the number of repetitions grows to 1,000,000, the rutting of ordinary asphalt pavement reaches 12 mm, much larger than the 4.8 mm of Thiopave modified asphalt pavement. It can be concluded that the fatigue resistance of Thiopave modified asphalt pavement is better than that of ordinary asphalt pavement; it is therefore recommended for roads with large traffic volumes, to effectively reduce road surface damage caused by rutting.
[1] Yang, X.W. (2010) Study on the Properties of Sulphur Extended Asphalt Mixture and the Mechanism of Modifying. Journal of Chongqing Jiaotong University (Natural Science Version), 29, 194-198.
[2] Das, A.K. and Panda, M. (2017) Investigation on Rheological Performance of Sulphur Modified Bitumen (SMB) Binders. Construction and Building Materials, 149, 724-732. https://doi.org/10.1016/j.conbuildmat.2017.05.198
[3] Hu, X.D., Gao, Y.M., Lin, L.R., Zhong, S. and Dai, X.W. (2013) Performance of Thiopave Modified Asphalt Mixture. Journal of Wuhan Engineering University, 35, 10-13.
[4] Wang, X.S., Qiu, Y.J., Zhu, X.H. and Chang, S. (2018) Performance Evaluation of Thiopave Modified Asphalt Mixture. Highway, 63, 209-211.
[5] Zhang, J.P., Huang, X.M. and Ma, T. (2008) Damage-Creep Characteristics and Model of Asphalt Mixture. Chinese Journal of Geotechnical Engineering, 30, 1867-1871.
[6] Liao, G.Y. Application of ABAQUS Finite Element Software in Road Engineering. Southeast University Press.
[7] Li, F.P., Chen, J. and Yan, E.H. (2006) Study and Application of New Asphalt Pavement Structures in China. Journal of Highway and Transportation Research and Development, No. 3, 10-14.
[8] Shan, J.S. and Guo, Z.Y. (2013) Predictive Method of Temperature Field in Asphalt Pavement. Journal of Jiangsu University (Natural Science Edition), 34, 594-598+604.
[9] Yan, Z.R. (1984) Analysis of the Temperature Field in Layered Pavement System. Journal of Tongji University (Natural Science Edition), No. 3, 76-85.
[10] Shan, J.D. and Du, B.B. (2014) Analysis of Asphalt Pavement Temperature Field and Characteristics of Stress in All time Domain. Highway Engineering, 39, 73-77.
[11] Gu, X.X., Yuan, Q.Q. and Ni, F.J. (2012) Analysis of Factors on Asphalt Pavement Rut Development Based on Measured Load and Temperature Gradient. China Journal of Highway and Transport, 25, 30-36.
How to calculate the bulk modulus with the formula
Important considerations about the bulk modulus formula
Bulk modulus for lithium, water, and other materials
Bulk modulus calculation example
This tool calculates the bulk modulus of an object, given its initial volume, the pressure applied to it (bulk stress), and the volume change caused by that stress.
Bulk modulus is a property relevant to many physical phenomena. For example, for sound waves traveling through a fluid medium, the speed of sound is proportional to the square root of the bulk modulus. In the study of isotropic solid materials, it's closely related to Poisson's ratio, a crucial property in the strain analysis of materials (see the formula in the FAQ section).
This tool uses the widely known bulk modulus formula, which relates bulk modulus to bulk stress and strain. The following sections show how to calculate the bulk modulus utilizing that formula and give some common values, such as the bulk modulus for water, lithium, etc.
With this formula, you calculate the bulk modulus:
B = -ΔP/(ΔV/V₀)
B — Material's bulk modulus;
V₀ — Initial volume of the material;
ΔP — Additional pressure applied to the material, known as pressure stress; and
ΔV — Volume change caused by the pressure.
The ΔV/V₀ term is known as bulk strain, which indicates the fractional or percentage volume change caused by the pressure stress.
The following image helps us better understand the bulk modulus concept.
Graphic representation of an object under bulk stress.
Bulk modulus indicates how much pressure we must apply to a material to produce a given fractional change in its volume.
The bulk modulus formula applies to materials that obey Hooke's law, as this law is the basis for treating bulk stress and strain as linearly related quantities.
The bulk modulus for lithium, water, and other solids and liquids is considered constant (especially for small pressure changes). However, the bulk modulus of gases depends on the initial pressure P₀.
Bulk modulus is also related to Poisson's ratio and Young's modulus. In the FAQ section, we show the equation.
The following is a list of typical values of bulk modulus:
Water: 2.1 GPa (300,000 psi).
Lithium: 11 GPa (1,595,415 psi).
Aluminum: 75 GPa (10,877,828 psi).
Silicone rubber: 2 GPa (290,075 psi).
Steel: 160 GPa (23,206,032 psi).
Now, let's see a bulk modulus calculation example.
A hydraulic press operates at 21 × 10⁶ Pa, and the oil inside it occupies a volume of 0.001155 m³. The oil has a bulk modulus of 5 × 10⁹ Pa. Suppose you're interested in knowing the volume change due to the applied pressure.
Follow these steps to calculate it:
In the bulk modulus formula, solve for ΔV:
ΔV = - \frac{ΔPV_{0}}{B}
In this case, ΔP = 21 × 10⁶ Pa, V₀ = 0.001155 m³ and B = 5 × 10⁹ Pa. Input those values in the formula:
ΔV = - \frac{(21 \times 10^6\ \text{Pa})(0.001155\ \text{m}^3)}{5 \times 10^9\ \text{Pa}} = -0.000004851\ \text{m}^3
That's it. You can also use our calculator and check the results of this bulk modulus calculation example.
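The same computation can be sketched in a few lines of Python (the function name is ours, invented for illustration, not part of any library):

```python
def volume_change(delta_p, v0, bulk_modulus):
    """Volume change from the bulk modulus formula: ΔV = -ΔP · V₀ / B."""
    return -delta_p * v0 / bulk_modulus

# The worked example above: ΔP = 21 × 10⁶ Pa, V₀ = 0.001155 m³, B = 5 × 10⁹ Pa
dV = volume_change(21e6, 0.001155, 5e9)  # ≈ -4.851 × 10⁻⁶ m³
```

The negative sign confirms the oil is compressed — its volume shrinks under the added pressure.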
How do I calculate bulk modulus from Young's modulus?
To calculate the bulk modulus from Young's modulus, use the formula B = E / [3(1 - 2ν)], where:
E — Young's modulus of the material; and
ν — Poisson's ratio of the material.
Take into account that this formula only applies to isotropic solid materials.
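A short sketch of this conversion, using hypothetical steel-like values for illustration:

```python
def bulk_from_young(E, nu):
    """Bulk modulus of an isotropic solid: B = E / (3 * (1 - 2ν)).
    Valid only for isotropic materials with Poisson's ratio ν < 0.5."""
    return E / (3 * (1 - 2 * nu))

# Hypothetical steel-like material: E = 200 GPa, ν = 0.3
B = bulk_from_young(200e9, 0.3)  # ≈ 166.7 GPa
```

Note how B diverges as ν approaches 0.5: a nearly incompressible material needs enormous pressure to change volume at all.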
What is the bulk modulus of air?
The isothermal bulk modulus of air is 101 kPa, while its adiabatic bulk modulus is 142 kPa.
Isothermal bulk modulus corresponds to compression processes with a constant object temperature. In contrast, the adiabatic bulk modulus refers to an absence of heat or mass transfer to that object.
Can bulk modulus be negative?
No, bulk modulus can't be negative. The minus sign in the bulk modulus equation B = -ΔP/(ΔV/V₀) compensates for the invariably negative volume change ΔV under compression, so the result is always a positive bulk modulus.
How to calculate the bulk stress in a solid?
To calculate the bulk stress (ΔP) in a solid, use the formula ΔP = -B(ΔV/V₀) or follow these steps:
Divide the volume change by the initial volume.
Multiply the result by the bulk modulus.
Change the sign of the result, or multiply it by -1.
That's it. You can also use our bulk modulus calculator to get the result.
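The steps above can be sketched as follows (using water's bulk modulus from the list earlier in this article):

```python
def bulk_stress(bulk_modulus, delta_v, v0):
    """Bulk stress from the steps above: ΔP = -B · (ΔV / V₀)."""
    strain = delta_v / v0            # step 1: bulk strain
    return -bulk_modulus * strain    # steps 2-3: multiply by B, flip the sign

# Example: water (B ≈ 2.1 GPa) compressed by 1% of its volume
dP = bulk_stress(2.1e9, -0.01, 1.0)  # ≈ 2.1 × 10⁷ Pa
```

Squeezing water by just 1% of its volume therefore takes about 21 MPa — roughly 200 atmospheres.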
Our work and power calculator can determine how much energy you need to perform a given amount of work.
Adventist Youth Honors Answer Book/Arts and Crafts/Plaster Craft - Wikibooks, open books for an open world
The Plaster Craft Honor is a component of the Artisan Master Award .
1. What is the principal ingredient of plaster of Paris?
Plaster of Paris is based on calcium sulfate which is derived from gypsum.
2. Give the steps in pouring a plaster item and preparing it for painting.
Preparing the mold
Plastic molds generally cannot support themselves once they are filled with wet plaster, so you must prepare a support base to set them on. This support base can be as simple as a sealable plastic bag filled with uncooked rice or sand. Squeeze as much air out of the bag as you can, and then lay the bag flat on the work surface. Place the mold on top of it and press it into the support, wiggling it around until the support conforms to the shape of the mold. Try to get the mold as level as possible. Once the mold is in place and well supported, coat the mold with mold spray (see requirement 3).
Mixing the plaster
The most foolproof way to mix dry plaster powder with water is to weigh them. Use 10 parts plaster to seven parts water by weight. If you know how much your plaster weighs, multiply that weight by 0.7 to get the required weight of the water. It's a lot easier to do this using the metric system, because a milliliter of water weighs one gram. So if you know how many grams of water you need, that's also the number of milliliters of water you need. Let's run through an example:
If you have one kilogram of plaster (that's 1000 grams), you will need
{\displaystyle 1000\times 0.7=700}
milliliters (0.7 liters) of water.
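A tiny sketch of this weight calculation. The water-to-plaster ratio is left as a parameter, since the right value depends on the plaster product — check the manufacturer's instructions; the example ratio below is an assumption:

```python
def water_needed_ml(plaster_g, water_per_plaster):
    """Water needed for a batch of plaster, mixed by weight.
    Because 1 ml of water weighs 1 g, grams of water equal milliliters.
    water_per_plaster is the water:plaster weight ratio for your
    particular product."""
    return plaster_g * water_per_plaster

# Example with an assumed ratio of 0.7 (seven parts water per ten of plaster):
ml = water_needed_ml(1000, 0.7)  # 1 kg of plaster -> 700 ml of water
```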
Pour cold water into a plastic mixing bowl, and then add the plaster powder to it, stirring as you go. Warm water will set faster - maybe too fast! Never add water to plaster - always add the plaster to the water.
The plaster will heat up as it reacts chemically with the water, so never attempt to pour plaster over a body part in an attempt to cast it.
Plaster is also somewhat caustic, so it can cause a chemical burn as well.
Keep powdered and wet plaster away from children and pets.
Stir the plaster and water until well mixed. Allow it to sit for no more than two minutes so the plaster can fully absorb the water and stop bubbling. If desired, this is the time to add pigment for color: add the pigment and mix it in with a power drill (using a paint-mixing attachment). You can also use a potato masher, and for small amounts, just a stick. If using a power drill, do not overdo it, as over-blending will hasten the setting time. Take it easy!
Pouring the plaster
Pour the wet plaster into one corner of the mold and allow the plaster to flow across to the other side. When the mold is full, tap it a few times to help work out any air bubbles. If you need a hanger, place it in the plaster at the top (pay attention!). You can buy hangers or you can use a paper clip. Make sure the hanger is well embedded in the plaster.
Releasing the mold
Once the plaster heats up, you can remove it from the mold. Do this by gently pulling the edges of the mold away from the cast piece. Work your way around until the mold has been separated all around (it's OK to let the mold snap back into place). Do not overflex the mold or you may crease it. Turn the mold upside down over the support base (the bag of rice) and gently shake it until the piece pops out. It should land gently on the support base. Set the piece aside to allow it to finish curing. For fastest curing, set the piece on a grate so that air may circulate over all surfaces.
3. Know how to remove air bubbles from a poured item.
Air bubbles are caused by the surface tension of the water mixed into the plaster. This surface tension can be eliminated by spraying a mold treatment such as Airid onto the mold before pouring the plaster. The mold treatment will reduce the surface tension and prevent bubbles from forming.
You can also mix up a small amount of new plaster and apply it onto any holes in the finished piece with an artist's brush. Let it dry for an hour before painting.
4. Know how the setup time can be increased or decreased for plaster.
You can speed up the setup time by using hot water, or slow it down by using cold water.
5. What precautions should be taken when cleaning the mixing and pouring equipment?
Under no circumstances should you rinse plaster down the sink! Remember that plaster hardens when it comes into contact with water - it will set even when fully immersed. If you pour it down the drain, you are asking for a very expensive plumbing repair.
The easiest way to clean plaster out of the plastic mixing bowl is to just let it set. Then flex the bowl and the plaster will pop out. Make sure you put the plaster in the trash - not in the sink!
To get plaster off a potato masher or paint blender, strike it smartly with a hammer. The plaster should flake right off.
Wipe your molds out with a damp paper towel.
6. When is a sealant applied to a plaster item and why?
Acrylic sealant is sprayed onto a piece after the paint has dried. This will preserve the color and strengthen the surface of the piece. You can select matte sealants or gloss sealants. Matte sealants will seal the item without having much of an effect on the way it looks. Glossy sealants will brighten the colors and make the piece more shiny.
Be sure to check that the sealant you choose is compatible with the paint you have used. Some combinations can cause the colors to run. If in doubt, paint and seal a piece of scrap first, or apply them to the back or bottom of the piece to test for compatibility.
7. What type of paint is best to use on plaster craft items?
Acrylic paint is the most commonly used type of paint in plaster craft. It is water soluble and cleans up easily.
8. Paint three items that will include the following designs and techniques or equivalent: a. Floral to show shading b. Fruit to show highlighting c. Animal to show fine line or detail d. Religious motto to show letter highlighting
Start by applying a basecoat. Allow it to dry and then begin painting the lightest colors first. If you make a mistake, let the paint dry for five minutes and then paint the desired color over the undesired one.
For shading, paint the lighter color onto the piece. Then mix the darker color on your palette and paint it on an adjacent area. Rinse and dry the brush, and then use it to blend the light and dark areas together.
For highlighting a fruit, paint the entire fruit the color of your choice. While the paint is still wet, put a small amount of white paint on a clean brush and paint a small circle on the area you wish to highlight. Work the circle into the wet paint surrounding the highlighted area, blending the paint as you go.
For fine lines, use a brush with a very fine tip. Add a small amount of paint to the brush and draw the tip over the area using a steady hand. Experiment with resting the base of your palm on a table as you paint (do your experiments on something other than your piece!). When you think you've got the hang of it, try it on your piece.
For raised lettering, paint the background first. Let it dry, and then carefully paint the top surface of the letters. For painting free-form letters on a flat surface, practice on some scrap first until you get the hang of it. Plan out what you will letter, being careful to center it on the piece.
9. Mold and paint two additional items of different designs.
The easiest items to cast are open-top molds. It is best not to try 3D or two-part molds until you have a little more experience, though these can certainly be used to fulfill this requirement. The reference links below offer hundreds of molds that you can purchase and reuse. You should also be able to find molds at your local craft stores.
http://www.onestopcandle.com/plaster/basicplasterinstr.php
http://www.milliesplastercraft.com/BasicInstructions.asp
http://spiritcrafts.stores.yahoo.net/ You can find all the molds you need on this one site.
Section 59.56 (03QW): Galois action on stalks—The Stacks project
59.56 Galois action on stalks
In this section we define an action of the absolute Galois group of a residue field of a point $s$ of $S$ on the stalk functor at any geometric point lying over $s$.
Galois action on stalks. Let $S$ be a scheme. Let $\overline{s}$ be a geometric point of $S$. Let $\sigma \in \text{Aut}(\kappa (\overline{s})/\kappa (s))$. Define an action of $\sigma $ on the stalk $\mathcal{F}_{\overline{s}}$ of a sheaf $\mathcal{F}$ as follows
\begin{equation} \label{etale-cohomology-equation-galois-action} \begin{matrix} \mathcal{F}_{\overline{s}} & \longrightarrow & \mathcal{F}_{\overline{s}} \\ (U, \overline{u}, t) & \longmapsto & (U, \overline{u} \circ \mathop{\mathrm{Spec}}(\sigma ), t). \end{matrix} \end{equation}
where we use the description of elements of the stalk in terms of triples as in the discussion following Definition 59.29.6. This is a left action, since if $\sigma _ i \in \text{Aut}(\kappa (\overline{s})/\kappa (s))$ then
\begin{align*} \sigma _1 \cdot (\sigma _2 \cdot (U, \overline{u}, t)) & = \sigma _1 \cdot (U, \overline{u} \circ \mathop{\mathrm{Spec}}(\sigma _2), t) \\ & = (U, \overline{u} \circ \mathop{\mathrm{Spec}}(\sigma _2) \circ \mathop{\mathrm{Spec}}(\sigma _1), t) \\ & = (U, \overline{u} \circ \mathop{\mathrm{Spec}}(\sigma _1 \circ \sigma _2), t) \\ & = (\sigma _1 \circ \sigma _2) \cdot (U, \overline{u}, t) \end{align*}
It is clear that this action is functorial in the sheaf $\mathcal{F}$. We note that we could have defined this action by referring directly to Remark 59.29.8.
Definition 59.56.1. Let $S$ be a scheme. Let $\overline{s}$ be a geometric point lying over the point $s$ of $S$. Let $\kappa (s) \subset \kappa (s)^{sep} \subset \kappa (\overline{s})$ denote the separable algebraic closure of $\kappa (s)$ in the algebraically closed field $\kappa (\overline{s})$.
In this situation the absolute Galois group of $\kappa (s)$ is $\text{Gal}(\kappa (s)^{sep}/\kappa (s))$. It is sometimes denoted $\text{Gal}_{\kappa (s)}$.
The geometric point $\overline{s}$ is called algebraic if $\kappa (s) \subset \kappa (\overline{s})$ is an algebraic closure of $\kappa (s)$.
Example 59.56.2. The geometric point $\mathop{\mathrm{Spec}}(\mathbf{C}) \to \mathop{\mathrm{Spec}}(\mathbf{Q})$ is not algebraic, since $\mathbf{C}$ contains elements transcendental over $\mathbf{Q}$, hence $\mathbf{C}$ is not an algebraic closure of $\mathbf{Q}$.
Let $\kappa (s) \subset \kappa (s)^{sep} \subset \kappa (\overline{s})$ be as in the definition. Note that as $\kappa (\overline{s})$ is algebraically closed the map
\[ \text{Aut}(\kappa (\overline{s})/\kappa (s)) \longrightarrow \text{Gal}(\kappa (s)^{sep}/\kappa (s)) = \text{Gal}_{\kappa (s)} \]
is surjective. Suppose $(U, \overline{u})$ is an étale neighbourhood of $\overline{s}$, and say $\overline{u}$ lies over the point $u$ of $U$. Since $U \to S$ is étale, the residue field extension $\kappa (u)/\kappa (s)$ is finite separable. This implies the following
If $\sigma \in \text{Aut}(\kappa (\overline{s})/\kappa (s)^{sep})$ then $\sigma $ acts trivially on $\mathcal{F}_{\overline{s}}$.
More precisely, the action of $\text{Aut}(\kappa (\overline{s})/\kappa (s))$ determines and is determined by an action of the absolute Galois group $\text{Gal}_{\kappa (s)}$ on $\mathcal{F}_{\overline{s}}$.
Given $(U, \overline{u}, t)$ representing an element $\xi $ of $\mathcal{F}_{\overline{s}}$ any element of $\text{Gal}(\kappa (s)^{sep}/K)$ acts trivially, where $\kappa (s) \subset K \subset \kappa (s)^{sep}$ is the image of $\overline{u}^\sharp : \kappa (u) \to \kappa (\overline{s})$.
Altogether we see that $\mathcal{F}_{\overline{s}}$ becomes a $\text{Gal}_{\kappa (s)}$-set (see Fundamental Groups, Definition 58.2.1). Hence we may think of the stalk functor as a functor
\[ \mathop{\mathit{Sh}}\nolimits (S_{\acute{e}tale}) \longrightarrow \text{Gal}_{\kappa (s)}\textit{-Sets}, \quad \mathcal{F} \longmapsto \mathcal{F}_{\overline{s}} \]
and from now on we usually do think about the stalk functor in this way.
Theorem 59.56.3. Let $S = \mathop{\mathrm{Spec}}(K)$ with $K$ a field. Let $\overline{s}$ be a geometric point of $S$. Let $G = \text{Gal}_{\kappa (s)}$ denote the absolute Galois group. Taking stalks induces an equivalence of categories
\[ \mathop{\mathit{Sh}}\nolimits (S_{\acute{e}tale}) \longrightarrow G\textit{-Sets}, \quad \mathcal{F} \longmapsto \mathcal{F}_{\overline{s}}. \]
Proof. Let us construct the inverse to this functor. In Fundamental Groups, Lemma 58.2.2 we have seen that given a $G$-set $M$ there exists an étale morphism $X \to \mathop{\mathrm{Spec}}(K)$ such that $\mathop{\mathrm{Mor}}\nolimits _ K(\mathop{\mathrm{Spec}}(K^{sep}), X)$ is isomorphic to $M$ as a $G$-set. Consider the sheaf $\mathcal{F}$ on $\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}$ defined by the rule $U \mapsto \mathop{\mathrm{Mor}}\nolimits _ K(U, X)$. This is a sheaf as the étale topology is subcanonical. Then we see that $\mathcal{F}_{\overline{s}} = \mathop{\mathrm{Mor}}\nolimits _ K(\mathop{\mathrm{Spec}}(K^{sep}), X) = M$ as $G$-sets (details omitted). This gives the inverse of the functor and we win. $\square$
Remark 59.56.4. Another way to state the conclusion of Theorem 59.56.3 and Fundamental Groups, Lemma 58.2.2 is to say that every sheaf on $\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}$ is representable by a scheme $X$ étale over $\mathop{\mathrm{Spec}}(K)$. This does not mean that every sheaf is representable in the sense of Sites, Definition 7.12.3. The reason is that in our construction of $\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}$ we chose a sufficiently large set of schemes étale over $\mathop{\mathrm{Spec}}(K)$, whereas sheaves on $\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}$ form a proper class.
Lemma 59.56.5. Assumptions and notations as in Theorem 59.56.3. There is a functorial bijection
\[ \Gamma (S, \mathcal{F}) = (\mathcal{F}_{\overline{s}})^ G \]
Proof. We can prove this using formal arguments and the result of Theorem 59.56.3 as follows. Given a sheaf $\mathcal{F}$ corresponding to the $G$-set $M = \mathcal{F}_{\overline{s}}$ we have
\begin{eqnarray*} \Gamma (S, \mathcal{F}) & = & \mathop{\mathrm{Mor}}\nolimits _{\mathop{\mathit{Sh}}\nolimits (S_{\acute{e}tale})}(h_{\mathop{\mathrm{Spec}}(K)}, \mathcal{F}) \\ & = & \mathop{\mathrm{Mor}}\nolimits _{G\textit{-Sets}}(\{ *\} , M) \\ & = & M^ G \end{eqnarray*}
Here the first identification is explained in Sites, Sections 7.2 and 7.12, the second results from Theorem 59.56.3 and the third is clear. We will also give a direct proof.
Suppose that $t \in \Gamma (S, \mathcal{F})$ is a global section. Then the triple $(S, \overline{s}, t)$ defines an element of $\mathcal{F}_{\overline{s}}$ which is clearly invariant under the action of $G$. Conversely, suppose that $(U, \overline{u}, t)$ defines an element of $\mathcal{F}_{\overline{s}}$ which is invariant. Then we may shrink $U$ and assume $U = \mathop{\mathrm{Spec}}(L)$ for some finite separable field extension of $K$, see Proposition 59.26.2. In this case the map $\mathcal{F}(U) \to \mathcal{F}_{\overline{s}}$ is injective, because for any morphism of étale neighbourhoods $(U', \overline{u}') \to (U, \overline{u})$ the restriction map $\mathcal{F}(U) \to \mathcal{F}(U')$ is injective since $U' \to U$ is a covering of $S_{\acute{e}tale}$. After enlarging $L$ a bit we may assume $K \subset L$ is a finite Galois extension. At this point we use that
\[ \mathop{\mathrm{Spec}}(L) \times _{\mathop{\mathrm{Spec}}(K)} \mathop{\mathrm{Spec}}(L) = \coprod \nolimits _{\sigma \in \text{Gal}(L/K)} \mathop{\mathrm{Spec}}(L) \]
where the maps $\mathop{\mathrm{Spec}}(L) \to \mathop{\mathrm{Spec}}(L \otimes _ K L)$ come from the ring maps $a \otimes b \mapsto a\sigma (b)$. Hence we see that the condition that $(U, \overline{u}, t)$ is invariant under all of $G$ implies that $t \in \mathcal{F}(\mathop{\mathrm{Spec}}(L))$ maps to the same element of $\mathcal{F}(\mathop{\mathrm{Spec}}(L) \times _{\mathop{\mathrm{Spec}}(K)} \mathop{\mathrm{Spec}}(L))$ via restriction by either projection (this uses the injectivity mentioned above; details omitted). Hence the sheaf condition of $\mathcal{F}$ for the étale covering $\{ \mathop{\mathrm{Spec}}(L) \to \mathop{\mathrm{Spec}}(K)\} $ kicks in and we conclude that $t$ comes from a unique section of $\mathcal{F}$ over $\mathop{\mathrm{Spec}}(K)$. $\square$
Remark 59.56.6. Let $S$ be a scheme and let $\overline{s} : \mathop{\mathrm{Spec}}(k) \to S$ be a geometric point of $S$. By definition this means that $k$ is algebraically closed. In particular the absolute Galois group of $k$ is trivial. Hence by Theorem 59.56.3 the category of sheaves on $\mathop{\mathrm{Spec}}(k)_{\acute{e}tale}$ is equivalent to the category of sets. The equivalence is given by taking sections over $\mathop{\mathrm{Spec}}(k)$. This finally provides us with an alternative definition of the stalk functor. Namely, the functor
\[ \mathop{\mathit{Sh}}\nolimits (S_{\acute{e}tale}) \longrightarrow \textit{Sets}, \quad \mathcal{F} \longmapsto \mathcal{F}_{\overline{s}} \]
is isomorphic to the functor
\[ \mathop{\mathit{Sh}}\nolimits (S_{\acute{e}tale}) \longrightarrow \mathop{\mathit{Sh}}\nolimits (\mathop{\mathrm{Spec}}(k)_{\acute{e}tale}) = \textit{Sets}, \quad \mathcal{F} \longmapsto \overline{s}^*\mathcal{F} \]
To prove this rigorously one can use Lemma 59.36.2 part (3) with $f = \overline{s}$. Moreover, having said this the general case of Lemma 59.36.2 part (3) follows from functoriality of pullbacks.
[1] For the doubting Thomases out there.
Comment #415 by Rex on December 29, 2013 at 00:07

Concerning "...where the maps $\mathop{\mathrm{Spec}}(L \otimes_K L) \to \mathop{\mathrm{Spec}}(L)$ come from the ring maps $a \otimes b \mapsto a\sigma(b)$": shouldn't the ring maps be going in the other direction, i.e. $L \to L \otimes_K L$ via $a \mapsto a \otimes \sigma(a)$?

Reply: Thanks for pointing out the mistake. I fixed the direction of the arrow of schemes; in other words, I wrote $\mathop{\mathrm{Spec}}(L) \to \mathop{\mathrm{Spec}}(L \otimes_K L)$. The reason is that the ring map $L \to L \otimes_K L$ does not have a simple formula.
|
Introduction to the International Fisher Effect
The International Fisher Effect (IFE) is an exchange-rate model developed by the economist Irving Fisher in the 1930s. It is based on present and future risk-free nominal interest rates rather than pure inflation, and it is used to predict and understand present and future spot currency price movements. For the model to work in its purest form, it is assumed that risk-free capital must be free to flow between the nations that comprise a particular currency pair.
Fisher Effect Background
The decision to use a pure interest rate model rather than an inflation model or some combination stems from Fisher's assumption that real interest rates are not affected by changes in expected inflation rates because both will become equalized over time through market arbitrage; inflation is embedded within the nominal interest rate and factored into market projections for a currency price. It is assumed that spot currency prices will naturally achieve parity with perfect ordering markets. This is known as the Fisher Effect, not to be confused with the International Fisher Effect. Monetary policy influences the Fisher effect because it determines the nominal interest rate.
Fisher believed the pure interest rate model acted as a leading indicator that predicts spot currency prices 12 months in the future. The minor problem with this assumption is that we can never know the future spot price or the exact interest rate with certainty. This is known as uncovered interest parity. The question for modern studies is: does the International Fisher Effect work now that currencies are allowed to free float? From the 1930s to the 1970s, we didn't have an answer because nations controlled their exchange rates for economic and trade purposes. This raises the question: has credence been given to a model that hasn't really been fully tested? The vast majority of studies concentrated on only one nation and compared that nation to the United States currency.
The Fisher Effect vs. the IFE
The Fisher Effect model says nominal interest rates reflect the real rate of return and expected rate of inflation. So the difference between real and nominal interest rates is determined by expected inflation rates. The approximate nominal rate of return equals the real rate of return plus the expected rate of inflation. For example, if the real rate of return is 3.5% and expected inflation is 5.4%, then the approximate nominal rate of return is 0.035 + 0.054 = 0.089, or 8.9%. The precise formula is:
\begin{aligned} &RR_{\text{nominal}} = \left(1 + RR_{\text{real}}\right)\times\left(1+\text{inflation rate} \right ) - 1\\ &\textbf{where:}\\ &RR_{\text{nominal}}=\text{Nominal rate of return}\\ &RR_{\text{real}}=\text{Real rate of return}\\ \end{aligned}
which, in this example, would equal 9.1%. The IFE takes this example one step further to assume appreciation or depreciation of currency prices is proportionally related to differences in nominal interest rates. Nominal interest rates would automatically reflect differences in inflation by a purchasing power parity or no-arbitrage system.
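The gap between the approximate and precise forms is easy to verify numerically. A minimal sketch using the article's example rates:

```python
real_rate = 0.035   # real rate of return (3.5%)
inflation = 0.054   # expected inflation rate (5.4%)

# Approximate form: simple sum of the two rates
approx_nominal = real_rate + inflation

# Precise Fisher equation: 1 + nominal = (1 + real) * (1 + inflation)
precise_nominal = (1 + real_rate) * (1 + inflation) - 1

print(f"approximate: {approx_nominal:.1%}")   # 8.9%
print(f"precise:     {precise_nominal:.1%}")  # 9.1%
```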
The IFE in Action
For example, suppose the GBP/USD spot exchange rate is 1.5339 and the current interest rate is 5% in the U.S. and 7% in Great Britain. The IFE predicts the country with the higher nominal interest rate (Great Britain in this case) will see its currency depreciate. The expected future spot rate is calculated by multiplying the spot rate by a ratio of the foreign interest rate to the domestic interest rate: 1.5339 x (1.05/1.07) = 1.5052. The IFE expects the GBP to depreciate against USD (it will only cost $1.5052 to purchase one GBP compared to $1.5339 before) so investors in either currency will achieve the same average return (i.e. an investor in USD will earn a lower interest rate of 5% but will also gain from appreciation of the USD).
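The expected future spot rate in this example can be reproduced in a couple of lines (the figures are the article's):

```python
spot = 1.5339    # GBP/USD spot rate
us_rate = 0.05   # U.S. nominal interest rate (domestic)
gb_rate = 0.07   # Great Britain nominal interest rate (foreign)

# IFE: expected future spot = spot * (1 + domestic rate) / (1 + foreign rate)
expected_spot = spot * (1 + us_rate) / (1 + gb_rate)
print(round(expected_spot, 4))  # 1.5052
```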
For the shorter term, the IFE is generally unreliable due to the numerous short-term factors that affect exchange rates and predictions of nominal rates and inflation. Longer-term International Fisher Effects have proven a bit better, but not by much. Exchange rates eventually offset interest rate differentials, but prediction errors often occur. Remember that we are trying to predict the spot rate in the future. The IFE fails particularly when purchasing power parity fails, that is, when the cost of goods can't be exchanged between nations on a one-for-one basis after adjusting for exchange-rate changes and inflation. (For related reading, see: 4 Ways to Forecast Currency Changes.)
Countries don't change interest rates by the same magnitude as in the past, so the IFE isn't as reliable as it once was. Instead, the focus for central bankers in the modern day is not an interest rate target, but rather an inflation target where interest rates are determined by the expected inflation rate. Central bankers focus on their nation's consumer price index (CPI) to measure prices and adjust interest rates according to prices in an economy. The Fisher models may not be practical to implement in your daily currency trades, but their usefulness lies in their ability to illustrate the expected relationship between interest rates, inflation and exchange rates. (For more, see: Using Interest Rate Parity To Trade Forex.)
University of Houston. "Chapter 8 - Theories of FX Determination - Part 2," Pages 1-5.
Wiley. "Understanding the Fisher Equation."
|
When selective availability was lifted in 2000, GPS had about a five-meter (16 ft) accuracy. GPS receivers that use the L5 band can have much higher accuracy, pinpointing to within 30 centimeters (11.8 in), while high-end users (typically engineering and land surveying applications) are able to have accuracy on several of the bandwidth signals to within two centimeters, and even sub-millimeter accuracy for long-term measurements.[14][15][16] Consumer devices, like smartphones, can be accurate to within 4.9 m (or better with assistive services like Wi-Fi positioning also enabled).[17] As of May 2021, 16 GPS satellites are broadcasting L5 signals; the signals are considered pre-operational and are scheduled to reach 24 satellites by approximately 2027.
In 1955, Dutch naval officer Wijnand Langeraar filed a patent application for a radio-based long-range navigation system with the US Patent Office on February 16, 1955, and was granted Patent US2980907A[22] on April 18, 1961.
In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007, which had strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed (a policy that had previously been published in Navigation magazine), and that the C/A (Coarse/Acquisition) code would be available to civilian users.[54][55]
Atmosphere: studying the troposphere delays (recovery of the water vapor content) and ionosphere delays (recovery of the number of free electrons).[100] Recovery of Earth surface displacements due to the atmospheric pressure loading.[101]
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite providing this signal was launched in May 2010.[149] On February 5, 2016, the 12th and final Block IIF satellite was launched.[150] The L5 signal consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."[151]
d_i = \left(\tilde{t}_i - b - s_i\right)c, \quad i = 1, 2, \dots, n

d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}

p_i = \left(\tilde{t}_i - s_i\right)c

p_i = d_i + bc, \quad i = 1, 2, \dots, n

\left(\hat{x}, \hat{y}, \hat{z}, \hat{b}\right) = \underset{(x, y, z, b)}{\arg\min} \sum_i \left(\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + bc - p_i\right)^2
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares (NLLS) problems, generally provide more accurate solutions.[170]
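As a sketch of the least-squares formulation above, here is a small Gauss–Newton/Newton solver in pure Python. The satellite positions, receiver position, and clock bias are synthetic values invented for illustration; the pseudoranges are simulated from them, so the solver should recover the assumed state.

```python
from math import dist

C = 299_792_458.0  # speed of light, m/s

# Synthetic scenario (all values invented for illustration): satellite
# positions in meters, plus an assumed receiver position and clock bias b.
sats = [(15600e3, 7540e3, 20140e3),
        (18760e3, 2750e3, 18610e3),
        (17610e3, 14630e3, 13480e3),
        (19170e3, 610e3, 18390e3)]
true_xyz = (-2430e3, -4702e3, 3546e3)
true_b = 1e-4  # receiver clock bias in seconds

# Simulated pseudoranges: p_i = d_i + b*c
p = [dist(s, true_xyz) + true_b * C for s in sats]

def solve(A, y):
    """Solve a small linear system by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Newton iteration on the residuals r_i = d_i(x, y, z) + b*c - p_i
x, y, z, b = 0.0, 0.0, 0.0, 0.0
for _ in range(10):
    d = [dist(s, (x, y, z)) for s in sats]
    # Jacobian: unit vectors from the satellites to the receiver, plus c for b
    J = [[(x - s[0]) / d[i], (y - s[1]) / d[i], (z - s[2]) / d[i], C]
         for i, s in enumerate(sats)]
    r = [d[i] + b * C - p[i] for i in range(4)]
    dx = solve(J, r)
    x, y, z, b = x - dx[0], y - dx[1], z - dx[2], b - dx[3]

print(round(x), round(y), round(z))  # should recover true_xyz
```

With more than four satellites, the same iteration applies with a least-squares solve in place of the square system.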
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). This corrects the error that arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite–receiver sequence matching) operation is imperfect. CPGPS uses the L1 carrier wave, which has a period of \frac{1\,\mathrm{s}}{1575.42 \times 10^6} = 0.63475\,\mathrm{ns} \approx 1\,\mathrm{ns}, about one-thousandth of the C/A Gold code bit period of \frac{1\,\mathrm{s}}{1023 \times 10^3} = 977.5\,\mathrm{ns} \approx 1000\,\mathrm{ns}, to act as an additional clock signal and resolve the uncertainty. The phase difference error in the normal GPS amounts to 2–3 m (6 ft 7 in – 9 ft 10 in) of ambiguity. CPGPS working to within 1% of perfect transition reduces this error to 3 cm (1.2 in) of ambiguity. By eliminating this error source, CPGPS coupled with DGPS normally realizes 20–30 cm (7.9–11.8 in) of absolute accuracy.
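The two periods quoted above can be reproduced directly; note that the carrier period is exactly 1/1540 of the chip period (1575.42 MHz / 1.023 MHz = 1540), which the text rounds loosely to "about one-thousandth":

```python
L1_HZ = 1575.42e6     # L1 carrier frequency in Hz
CA_CHIP_HZ = 1.023e6  # C/A Gold code chipping rate in Hz

carrier_period_ns = 1e9 / L1_HZ     # ~0.63475 ns
chip_period_ns = 1e9 / CA_CHIP_HZ   # ~977.5 ns
ratio = chip_period_ns / carrier_period_ns

print(round(carrier_period_ns, 5), round(chip_period_ns, 1), round(ratio))
```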
The satellite carrier total phase can be measured with ambiguity as to the number of cycles. Let \phi(r_i, s_j, t_k) denote the phase of the carrier of satellite j measured by receiver i at time t_k. This notation shows the meaning of the subscripts i, j, and k: the receiver (r), satellite (s), and time (t) come in alphabetical order as arguments of \phi. To balance readability and conciseness, let \phi_{i,j,k} = \phi(r_i, s_j, t_k) be a concise abbreviation. We also define three functions, \Delta^r, \Delta^s, \Delta^t, which return differences between receivers, satellites, and time points, respectively. Each function takes as its argument a variable with three subscripts. If \alpha_{i,j,k} is a function of the three integer arguments i, j, and k, then it is a valid argument for the functions \Delta^r, \Delta^s, \Delta^t, with values defined as

\Delta^r(\alpha_{i,j,k}) = \alpha_{i+1,j,k} - \alpha_{i,j,k}

\Delta^s(\alpha_{i,j,k}) = \alpha_{i,j+1,k} - \alpha_{i,j,k}

\Delta^t(\alpha_{i,j,k}) = \alpha_{i,j,k+1} - \alpha_{i,j,k}
If \alpha_{i,j,k} and \beta_{l,m,n} are valid arguments for the three functions and a and b are constants, then a\,\alpha_{i,j,k} + b\,\beta_{l,m,n} is a valid argument as well, with values defined as

\Delta^r(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^r(\alpha_{i,j,k}) + b\,\Delta^r(\beta_{l,m,n})

\Delta^s(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^s(\alpha_{i,j,k}) + b\,\Delta^s(\beta_{l,m,n})

\Delta^t(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^t(\alpha_{i,j,k}) + b\,\Delta^t(\beta_{l,m,n})
Receiver clock errors can be approximately eliminated by differencing the phases measured from satellite 1 with that from satellite 2 at the same epoch.[178] This difference is designated as

\Delta^s(\phi_{1,1,1}) = \phi_{1,2,1} - \phi_{1,1,1}

Differencing between receivers in the same way produces the double difference

\Delta^r(\Delta^s(\phi_{1,1,1})) = \Delta^r(\phi_{1,2,1} - \phi_{1,1,1}) = \Delta^r(\phi_{1,2,1}) - \Delta^r(\phi_{1,1,1}) = (\phi_{2,2,1} - \phi_{1,2,1}) - (\phi_{2,1,1} - \phi_{1,1,1})

and differencing again across epochs gives the triple difference \Delta^t(\Delta^r(\Delta^s(\phi_{1,1,1}))).
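The difference operators and the double-difference expansion can be checked mechanically. A small sketch with toy phase values (indices are 0-based here, whereas the text's subscripts start at 1):

```python
import random

random.seed(0)

# Toy phase measurements phi[i][j][k]: receiver i, satellite j, epoch k
phi = [[[random.random() for k in range(2)] for j in range(2)] for i in range(2)]

def delta_r(f):  # between-receiver difference operator
    return lambda i, j, k: f(i + 1, j, k) - f(i, j, k)

def delta_s(f):  # between-satellite difference operator
    return lambda i, j, k: f(i, j + 1, k) - f(i, j, k)

phase = lambda i, j, k: phi[i][j][k]

# Double difference: apply the satellite difference, then the receiver one
dd = delta_r(delta_s(phase))(0, 0, 0)

# The text's four-term expansion (phi_221 - phi_121) - (phi_211 - phi_111)
expanded = (phi[1][1][0] - phi[0][1][0]) - (phi[1][0][0] - phi[0][0][0])
print(abs(dd - expanded) < 1e-12)  # True
```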
^ (1) "GPS: Global Positioning System (or Navstar Global Positioning System)" Wide Area Augmentation System (WAAS) Performance Standard, Section B.3, Abbreviations and Acronyms.
(2) "GLOBAL POSITIONING SYSTEM WIDE AREA AUGMENTATION SYSTEM (WAAS) PERFORMANCE STANDARD" (PDF). January 3, 2012. Archived from the original (PDF) on April 27, 2017.
^ "Global Positioning System Standard Positioning Service Performance Standard : 4th Edition, September 2008" (PDF). Archived (PDF) from the original on April 27, 2017. Retrieved April 21, 2017.
^ "What is a GPS?". Library of Congress. Archived from the original on January 31, 2018. Retrieved January 28, 2018.
^ "What is GPS?". February 22, 2021. Archived from the original on May 6, 2021. Retrieved May 5, 2021.
^ "GPS.gov: GPS Accuracy". www.gps.gov. Archived from the original on January 4, 2018. Retrieved January 17, 2018.
^ "index.php". Clove Blog. January 10, 2012. Archived from the original on March 10, 2016. Retrieved October 29, 2016.
^ "China launches final satellite in GPS-like Beidou system". phys.org. Archived from the original on June 24, 2020. Retrieved June 24, 2020.
^ "GPS Accuracy". GPS.gov. National Coordination Office for Space-Based Positioning, Navigation, and Timing. Archived from the original on January 4, 2018. Retrieved September 23, 2021.
^ "GPS will be accurate within one foot in some phones next year". The Verge. Archived from the original on January 18, 2018. Retrieved January 17, 2018.
^ "Superaccurate GPS Chips Coming to Smartphones in 2018". IEEE Spectrum: Technology, Engineering, and Science News. September 21, 2017. Archived from the original on January 18, 2018. Retrieved January 17, 2018.
^ Relativistische Zeitdilatation eines künstlichen Satelliten (Relativistic time dilation of an artificial satellite). Astronautica Acta II (in German) (25). Archived from the original on July 3, 2014. Retrieved October 20, 2014.
^ "GEODETIC EXPLORER-A Press Kit" (PDF). NASA. October 29, 1965. Archived (PDF) from the original on February 11, 2014. Retrieved October 20, 2015.
^ E. Steitz, David. "National Positioning, Navigation and Timing Advisory Board Named". Archived from the original on January 13, 2010. Retrieved March 22, 2007.
^ "losangeles.af.mil". September 17, 2007. Archived from the original on May 11, 2011. Retrieved October 15, 2010.
^ Grewal, Mohinder S.; Weill, Lawrence R.; Andrews, Angus P. (2007). Global Positioning Systems, Inertial Navigation, and Integration (2nd ed.). John Wiley & Sons. pp. 92–93. ISBN 978-0-470-09971-1. , https://books.google.com/books?id=6P7UNphJ1z8C&pg=PA92
^ "Archived copy". Archived from the original on October 22, 2011. Retrieved October 27, 2011.
^ Samama, Nel (2008). Global Positioning: Technologies and Performance. John Wiley & Sons. p. 65. ISBN 978-0-470-24190-5. , https://books.google.com/books?id=EyFrcnSRFFgC&pg=PA65
^ Hadas, T.; Krypiak-Gregorczyk, A.; Hernández-Pajares, M.; Kaplon, J.; Paziewski, J.; Wielgosz, P.; Garcia-Rigo, A.; Kazmierski, K.; Sosnica, K.; Kwasniak, D.; Sierny, J.; Bosy, J.; Pucilowski, M.; Szyszko, R.; Portasiak, K.; Olivares-Pulido, G.; Gulyaeva, T.; Orus-Perez, R. (November 2017). "Impact and Implementation of Higher-Order Ionospheric Effects on Precise GNSS Applications: Higher-Order Ionospheric Effects in GNSS". Journal of Geophysical Research: Solid Earth. 122 (11): 9420–9436. doi:10.1002/2017JB014750. hdl:2117/114538.
^ Sośnica, K.; Bury, G.; Zajdel, R. (March 16, 2018). "Contribution of Multi‐GNSS Constellation to SLR‐Derived Terrestrial Reference Frame". Geophysical Research Letters. 45 (5): 2339–2348. Bibcode:2018GeoRL..45.2339S. doi:10.1002/2017GL076850.
^ "Russia Undermining World's Confidence in GPS". April 30, 2018. Archived from the original on March 6, 2019. Retrieved March 3, 2019.
^ Attewill, Fred. (2013-02-13) Vehicles that use GPS jammers are big threat to aircraft Archived February 16, 2013, at the Wayback Machine. Metro.co.uk. Retrieved on 2013-08-02.
^ FCC press release "Spokesperson Statement on NTIA Letter – LightSquared and GPS" Archived April 23, 2012, at the Wayback Machine. February 14, 2012. Accessed 2013-03-03.
^ PTI, K. J. M. Varma (December 27, 2018). "China's BeiDou navigation satellite, rival to US GPS, starts global services". livemint.com. Archived from the original on December 27, 2018. Retrieved December 27, 2018.
|
Gold Melt Calculator
What is a karat and how does it measure the purity of gold?
How to calculate gold melt value using the calculator
Manually calculate gold jewelry melt value
Our gold melt calculator can help you observe the actual monetary value based on your gold's weight and its purity. It will also help you calculate its bidding and asking price based on the spread.
From the following article, you will learn:
How to calculate gold melt value using the calculator;
How to calculate the gold jewelry melt value manually; and
How to calculate gold melt of 18K, 14K, and other karats.
A karat is a unit of comparative measurement for the purity of gold in metal alloys. We consider 24 karats or 24K as the purest form obtainable.
Here's how the common karat values map to purity (purity = karat / 24):

24K – 99.9% (fine gold)
22K – 91.67%
18K – 75%
14K – 58.33%
10K – 41.67%
💡 A carat is different than a karat. A carat measures the weight of gemstones like diamonds, whereas a karat is a unit measuring the purity of a natural element like gold.
Here's how we calculate the gold coin melt value using the calculator:
👑 Quality
Enter the weight of gold, e.g., 10 grams.
Select the purity of the gold, e.g., 24 karat.
The calculator will automatically set the purity, i.e., 99.9%.
Enter the current market price of the gold, e.g., 2000 USD / troy ounce.
The gold melt calculator will then display your gold's possession value, i.e., 642.37 USD.
🏛️ Trade / Exchange
The spread decreases the bid price and increases the ask price based on the given % margin.
The bid price is the price a buyer would be willing to pay.
The asking price is the price that a seller would be willing to accept.
Now that you understand how to use the gold melt value calculator for jewelry, coins, etc., let's learn how to calculate the gold melt value manually.
We use the following formula to calculate the gold melt value:
\footnotesize G = M \times (Q \div 100) \times P

where:

G – Value of gold in possession;
M – Mass of the gold in possession;
Q – Quality (purity) of gold in percentage; and
P – Price per unit mass of gold.

(The price per unit mass and the mass of the gold in possession must be measured on the same weight scale.)
Let's take an example and calculate a melted gold coin's value, with a weight of 1.0909 troy ounces and purity at 22 karat.
At the time of writing this article, the price of gold is 1,915 USD per troy ounce.
\footnotesize \begin{align*} G &= 1.0909 \times (91.67 \div 100) \times 1,915\\ &= 1,915.05 \end{align*}
Thus, the value of our melted gold coin in possession is 1,915.05 USD.
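The worked example translates directly into a small helper function (a sketch; the figures match the article's example):

```python
def melt_value(weight, purity_pct, price_per_unit):
    """Melt value = mass x (purity / 100) x price per unit mass.

    weight and price_per_unit must use the same weight unit
    (troy ounces in this example).
    """
    return weight * (purity_pct / 100) * price_per_unit

# A 1.0909 ozt coin, 22K (91.67% pure), with gold at 1,915 USD per troy ounce
value = melt_value(1.0909, 91.67, 1915)
print(round(value, 2))  # 1915.05
```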
Pure gold melts at 1,945 °F (about 1,063 °C or 1,336 K) and boils at roughly 2,700 °C (about 4,892 °F or 2,970 K). For comparison, an average cooking fire can reach up to 700 degrees Fahrenheit. You'll need a propane torch if you plan on melting gold at home.
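The temperature conversions for the 1,945 °F melting point are easy to double-check (a quick sketch):

```python
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

def celsius_to_kelvin(c):
    return c + 273.15

melt_c = fahrenheit_to_celsius(1945)  # melting point of pure gold in Celsius
melt_k = celsius_to_kelvin(melt_c)
print(round(melt_c), round(melt_k))  # 1063 1336
```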
How pure is 14 karat gold?
58.33%, is the purity of 14 karat gold. In other words, if we melt 14K gold and calculate its weight, we'll have about 58.33% of pure gold, and the rest will be other elements and impurities.
What is 10 karat gold melt value of 1 gram at 2000 USD per troy ounce?
26.79 USD is the value of 10 karat gold weighing 1 gram at 2,000 USD per troy ounce.
We obtain this value by using the following formula:
Value of gold = Weight of gold × (Quality of gold / 100) × Price per unit mass of gold
How do I calculate gold coin melt value?
Here are the steps for calculating gold coin value:
Measure the weight of the gold.
Note the purity of the gold in percentage.
Find the current price per unit mass of gold.
Multiply the weight by the quality (divided by 100) and the current price, which mathematically is: Gold value = Weight × (Quality / 100) × Price per unit mass.
|
The elements of a trapezoid
How to calculate the perimeter of a trapezoid
How to use our trapezoid perimeter calculator
Other trapezoid calculators you may find useful
A trapezoid is a quadrangular shape, part of a broad family of convex quadrilaterals: use our trapezoid perimeter calculator to find its perimeter using any possible combinations of parameters!
A trapezoid is a quadrilateral convex shape:
Quadrilateral means that it has four sides (and four angles); and
Convex means that it contains every segment with initial and final points inside the shape.
Trapezoids have at least a pair of parallel sides. The other sides can be either perpendicular or tilted with respect to the bases.
A characteristic set of elements identifies a trapezoid. Starting from the sides, we have:
Two bases: the parallel sides;
Two legs: the oblique sides;
A trapezoid has four angles, whose sum is always 360°. The two angles associated with the same leg are supplementary: they sum to 180°.
⚠️ Don't mistake a trapezoid for a trapezium: the latter one is a quadrilateral without parallel sides!
It is important to identify one last element, the height of the trapezoid, the distance between the bases. It will be helpful in the calculations for the perimeter.
The formula for a trapezoid's perimeter is extremely simple:

P = a + b + c + d

where:

P is the perimeter;
a and b are the parallel sides (bases); and
c and d are the oblique sides.
But you will rarely have all four sides. You may have combinations of sides and angles, and sometimes the situation may look pretty dire until trigonometry comes to the rescue. Let's check the possible combinations you may find!
Two sides, an angle, and a base

With this combination, you can easily find the height h of the trapezoid with the formulas:

h = c · sin(α)
h = d · sin(δ)
Once you know the value of h, you can compute the value of the other angle, and then the projections of the sides on the base a, whose sum equals the difference of the bases:

a − b = c · cos(α) + d · cos(δ)

Rearranging this formula gives you the missing base!
Two sides, a base, and the height

With this combination, we can first find the projections of the sides on the base a using the reliable Pythagorean theorem. Once you find them, you can either:

Add them to the base b to find the base a; or
Subtract them from the base a to find b.

You now have all the elements to feed into the trapezoid perimeter formula!
Two angles associated to opposite sides, the height, and a base

This case is similar to the previous one; knowing the height and the angles (remember that they satisfy the identities α + β = 180° and γ + δ = 180°), you can find the sides first and then the projections on the base a. From there, you can find the value of the other base.
The two bases, a side, and the relative angle

This combination is our favorite! By subtracting the two bases and the projection of the side on a, we can find the other projection. From here, with the height calculated from the side and angle, we can find the other side.
⚠️ Using this combination, you will see the perimeter field in our calculator greyed out. You can't use our trapezoid perimeter calculator in reverse because of a slightly more complex formula running in the background. 🙄 You can try it in reverse in the other cases, however!
Using our trapezoid perimeter calculator is straightforward! If you have the values of each side of the polygon, insert them in the respective field, and you'll get the value of the perimeter.
🙋 You can use our calculators in reverse too! If you have the perimeter but one of the sides is missing, insert the value anyway, we will give you the answer. 😀
If you don't have the sides but some other combination of sides, angles, or height, click on advanced mode, you will then see all of the other variables. Choose the ones you need and insert your values there!
Apart from the trapezoid perimeter calculator, we have many more calculators that will help you with your geometrical problems (only those, though!): try our:
How to calculate the perimeter of a trapezoid?
The perimeter of a trapezoid formula is:

P = a + b + c + d

where P is the perimeter, a and b are the bases, while c and d are the sides. There are many ways to find the values of those elements: discover them on omnicalculator.com!
What is the perimeter of a trapezoid with sides c=4 and d=3, base a=10 and α=30°?
First, you have to calculate the height of the trapezoid, using:
h = c * sin(α) = 4 * sin(30°) = 2.
Now using the Pythagorean theorem we can find the values of the projections of the sides on the base a:
sqrt(c² - h²) = sqrt(16 - 4) = 3.46 and sqrt(d² - h²) = 2.24.
Subtract these values from the base a to find b:
b = a - sqrt(c² - h²) - sqrt(d² - h²) = 10 - 3.46 - 2.24 = 4.3
and then the perimeter:
P = 10 + 4 + 3 + 4.3 = 21.3
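The FAQ's computation can be verified step by step in a few lines (a sketch using the same numbers):

```python
from math import radians, sin, sqrt

a, c, d = 10, 4, 3    # base and the two oblique sides
alpha = radians(30)   # angle between side c and base a

h = c * sin(alpha)          # height of the trapezoid
proj_c = sqrt(c**2 - h**2)  # projection of c on base a
proj_d = sqrt(d**2 - h**2)  # projection of d on base a
b = a - proj_c - proj_d     # the other base
P = a + b + c + d

print(round(P, 1))  # 21.3
```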
Can you find the perimeter of a trapezoid knowing only the angles and a base?
No. This combination of parameters would leave too much freedom: with only the angles and one base fixed, the other base's length and the height can still vary. It is necessary to know at least one side or the height in addition to these elements.
How do I find the perimeter of an isosceles trapezoid?
In the case of an isosceles trapezoid, you can find the perimeter just by knowing the bases and one side, since the two legs are identical! The formula changes slightly:

P = a + b + 2 × c

where P is the perimeter, a and b the bases, and c the oblique side.
|
You are looking at all articles with the topic "Systems". We found 40 matches.
Computing Physics Systems Systems/Cybernetics
Bremermann's limit, named after Hans-Joachim Bremermann, is a limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe. It is derived from Einstein's mass–energy equivalence and the Heisenberg uncertainty principle, and is c^2/h ≈ 1.36 × 10^50 bits per second per kilogram. This value is important when designing cryptographic algorithms, as it can be used to determine the minimum size of encryption keys or hash values required to create an algorithm that could never be cracked by a brute-force search.

For example, a computer with the mass of the entire Earth operating at Bremermann's limit could perform approximately 10^75 mathematical computations per second. If one assumes that a cryptographic key can be tested with only one operation, then a typical 128-bit key could be cracked in under 10^-36 seconds. However, a 256-bit key (which is already in use in some systems) would take about two minutes to crack. Using a 512-bit key would increase the cracking time to approaching 10^72 years, without increasing the time for encryption by more than a constant factor (depending on the encryption algorithms used).
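These brute-force figures can be reproduced in a few lines (the Earth-mass constant below is an assumed round value of about 5.97 × 10^24 kg):

```python
BREMERMANN = 1.36e50      # bits per second per kilogram
EARTH_MASS_KG = 5.97e24   # assumed mass of the Earth

ops_per_second = BREMERMANN * EARTH_MASS_KG  # roughly 10^75

# Assume one key test per operation, as in the text
t_128 = 2**128 / ops_per_second  # seconds to brute-force a 128-bit key
t_256 = 2**256 / ops_per_second

print(f"128-bit: {t_128:.1e} s")         # well under 10^-36 s
print(f"256-bit: {t_256 / 60:.1f} min")  # about two minutes
```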
The limit has been further analysed in later literature as the maximum rate at which a system with energy spread \Delta E can evolve into an orthogonal, and hence distinguishable, state:

\Delta t = \frac{\pi \hbar}{2 \Delta E}

In particular, Margolus and Levitin have shown that a quantum system with average energy E takes at least time \Delta t = \frac{\pi \hbar}{2E} to evolve into an orthogonal state. However, it has been shown that access to quantum memory in principle allows computational algorithms that require an arbitrarily small amount of energy/time per elementary computation step.
"Bremermann's limit" | 2016-04-22 | 57 Upvotes 21 Comments
The Burning Ship fractal is generated by iterating

z_{n+1} = \left(|\operatorname{Re}(z_n)| + i\,|\operatorname{Im}(z_n)|\right)^2 + c, \quad z_0 = 0

over points c in the complex plane \mathbb{C}, which will either escape or remain bounded. The difference between this calculation and that for the Mandelbrot set is that the real and imaginary components are set to their respective absolute values before squaring at each iteration. The mapping is non-analytic because its real and imaginary parts do not obey the Cauchy–Riemann equations.
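An escape-time sketch of this iteration (the bailout radius and sample points are arbitrary choices for illustration):

```python
def burning_ship_escape(c, max_iter=100, bailout=2.0):
    """Return the iteration count at escape, or max_iter if c stays bounded."""
    z = 0j
    for n in range(max_iter):
        # Take absolute values of both components before squaring
        z = complex(abs(z.real), abs(z.imag)) ** 2 + c
        if abs(z) > bailout:
            return n
    return max_iter

print(burning_ship_escape(-1.75 + 0j))  # 100: stays bounded
print(burning_ship_escape(1 + 1j))      # escapes after very few iterations
```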
Computing Systems Computing/Software Systems/Software engineering
Capability Immaturity Model (CIMM) in software engineering is a parody acronym, a semi-serious effort to provide a contrast to the Capability Maturity Model (CMM). The Capability Maturity Model is a five point scale of capability in an organization, ranging from random processes at level 1 to fully defined, managed and optimized processes at level 5. The ability of an organization to carry out its mission on time and within budget is claimed to improve as the CMM level increases.
The "Capability Im-Maturity Model" asserts that organizations can and do occupy levels below CMM level 1. An original article by Capt. Tom Schorsch USAF as part of a graduate project at the Air Force Institute of Technology provides the definitions for CIMM. He cites Prof. Anthony Finkelstein's ACM paper as an inspiration. The article describes situations that arise in dysfunctional organizations. Such situations are reportedly common in organizations of all kinds undertaking software development, i.e. they are really characterizations of the management of specific projects, since they can occur even in organizations with positive CMM levels.
Kik Piney, citing the original authors, later adapted the model to a somewhat satirical version that attracted a number of followers who felt that it was quite true to their experience.
"Capability Immaturity Model" | 2020-03-02 | 124 Upvotes 19 Comments
|
Section 1.3: Dot Product
The dot product of two vectors is a scalar. Table 1.3.1 summarizes the definition, essential properties, and useful consequences of this product. In the table, a vector A has components {a}_{k}, k=1,\dots ,n, where n can be 2 or 3 or even an integer greater than 3. Likewise for a vector B.
Because the dot product yields a scalar, some texts call it the scalar product of two vectors.
Definition: \mathbf{A}·\mathbf{B}=\underset{k=1}{\overset{n}{∑}}{a}_{k}{b}_{k}
Commutativity: \mathbf{A}·\mathbf{B}=\mathbf{B}·\mathbf{A}
Norm of A: \mathbf{A}·\mathbf{A}=\underset{k=1}{\overset{n}{∑}}{a}_{k}^{2}={∥\mathbf{A}∥}^{2}
Distance between points A and B: ∥\mathbf{A}-\mathbf{B}∥=∥\mathbf{B}-\mathbf{A}∥, where A and B are position vectors to points A and B
Angle between A and B: \mathbf{A}·\mathbf{B}=∥\mathbf{A}∥∥\mathbf{B}∥\mathrm{cos}(\mathrm{θ}), where \mathrm{θ} is the angle between A and B, so that \mathrm{cos}\left(\mathrm{θ}\right)=\frac{\mathbf{A}·\mathbf{B}}{∥\mathbf{A}∥ ∥\mathbf{B}∥}
Orthogonality of A and B: \mathbf{A}·\mathbf{B}=0⇒ A ⊥ B, provided neither A nor B is the zero vector
Direction cosines for A: \mathrm{cos}\left(\mathrm{α}\right)=\frac{\mathbf{A}·\mathbf{i}}{∥\mathbf{A}∥}=\frac{{a}_{1}}{∥\mathbf{A}∥}, \mathrm{cos}\left(\mathrm{β}\right)=\frac{\mathbf{A}·\mathbf{j}}{∥\mathbf{A}∥}=\frac{{a}_{2}}{∥\mathbf{A}∥}, \mathrm{cos}\left(\mathrm{γ}\right)=\frac{\mathbf{A}·\mathbf{k}}{∥\mathbf{A}∥}=\frac{{a}_{3}}{∥\mathbf{A}∥}
Scalar projection of B on A: ∥{\mathbf{B}}_{\mathbf{A}}∥=\frac{\mathbf{B}·\mathbf{A}}{∥\mathbf{A}∥}
Vector projection of B on A: {\mathbf{B}}_{\mathbf{A}}=\left(\frac{\mathbf{B}·\mathbf{A}}{∥\mathbf{A}∥}\right)\frac{\mathbf{A}}{∥\mathbf{A}∥}=\frac{\mathbf{B}·\mathbf{A}}{\mathbf{A}·\mathbf{A}}\mathbf{A}
Component of B orthogonal to A: {\mathbf{B}}_{\perp \mathbf{A}}=\mathbf{B}-{\mathbf{B}}_{\mathbf{A}}
Product rule for differentiation: \left(\mathbf{A}·\mathbf{B}\right)\prime =\mathbf{A}·\mathbf{B}\prime +\mathbf{B}·\mathbf{A}\prime
Table 1.3.1 The dot product, its definition, properties, and consequences
The angles \mathrm{α},\mathrm{β},\mathrm{γ} are called the direction angles.
A common multiple of the direction cosines generates a set of direction numbers.
This Study Guide will use the notation {\mathbf{B}}_{\mathbf{A}} for the vector projection of B upon A, and {\mathbf{B}}_{\perp \mathbf{A}} for the component of B orthogonal to A.
Some texts will call the dot product the "scalar product."
Some texts will write {\mathbf{A}}^{2} for \mathbf{A}·\mathbf{A}. Some texts will write A for ∥\mathbf{A}∥, the magnitude of A.
The product rule for differentiation of a dot product is consistent with that for differentiation of a product of scalars: "first times the derivative of the second plus the second times the derivative of the first", with "times" replaced by "dot".
Implementing the Dot Product in Maple
A DotProduct command appears in four of the five relevant Maple packages listed in Table 1.3.2. Surprisingly, it does not exist in the Student LinearAlgebra package, where the dot product is implemented only with the period or with the heavier dot found in the Common Symbols palette. However, at top level and in all five listed packages, the Context Panel provides an interactive dot-product option.
At top-level and in the LinearAlgebra package, Maple takes the dot product over the complex numbers, so the first vector is conjugated. This only affects vectors whose components are variables.
Table 1.3.2 summarizes the location and implementation of the dot product in Maple.
In each package the dot product may be entered with the period or with the heavier dot (·) from the Common Symbols palette; in particular, the Student LinearAlgebra and Student VectorCalculus packages accept the dot (·).
Table 1.3.2 Implementing the dot product in Maple
Given \mathbf{A}=3 \mathbf{i}+7 \mathbf{j} and \mathbf{B}=-4 \mathbf{i}+5 \mathbf{j}:
Obtain \mathbf{A}·\mathbf{B}.
Obtain \mathrm{θ}, the angle between A and B.
Verify {∥\mathbf{A}∥}^{2}=\mathbf{A}·\mathbf{A}.
Obtain the scalar projection of B on A.
Obtain the vector projection of B on A.
Obtain the component of B orthogonal to A.
Given \mathbf{A}=3 \mathbf{i}+2 \mathbf{j}+7 \mathbf{k} and \mathbf{B}=4 \mathbf{i}-5 \mathbf{j}+6 \mathbf{k}:
Obtain \mathbf{A}·\mathbf{B}.
Obtain \mathrm{θ}, the angle between A and B.
Verify {∥\mathbf{A}∥}^{2}=\mathbf{A}·\mathbf{A}.
For A, obtain direction cosines, angles, and numbers.
Using the law of cosines, verify the equivalence of \mathbf{A}·\mathbf{B} and ∥\mathbf{A}∥∥\mathbf{B}∥\mathrm{cos}(\mathrm{θ}).
In the \mathrm{xy}-plane, obtain all vectors that are orthogonal to \mathbf{A}=3 \mathbf{i}+2 \mathbf{j}.
Given \mathbf{A}=3 \mathbf{i}+2 \mathbf{j}+5 \mathbf{k} and \mathbf{B}=-4 \mathbf{i}+5 \mathbf{j}+\mathrm{λ} \mathbf{k}, find all values of \mathrm{λ} that make B orthogonal to A.
Use vector methods to find the distance between the points A: \left(4,5\right) and B: \left(-3,2\right), and between the points A: \left(4,5,7\right) and B: \left(-3,2,3\right).
Suppose the components of the planar vectors A and B are functions of t, and the derivative of such vectors is defined to be the vector of componentwise derivatives. If the prime denotes differentiation with respect to t, show that \left(\mathbf{A}·\mathbf{B}\right)\prime =\mathbf{A}\prime ·\mathbf{B}+\mathbf{A}·\mathbf{B}\prime and \left(\mathbf{A}·\mathbf{A}\right)\prime =2 \mathbf{A}·\mathbf{A}\prime .
If A is a unit vector whose components are functions of t, show that A and \mathbf{A}\prime are necessarily orthogonal.
Verify this for \mathbf{A}=\mathrm{cos}\left(t\right) \mathbf{i}+\mathrm{sin}\left(t\right) \mathbf{j}.
Verify this for A, the normalization of \mathbf{V}=t \mathbf{i}+{t}^{2} \mathbf{j}.
Find a vector of unit length that is orthogonal to the vectors \mathbf{A}=\mathbf{i}+\mathbf{j}+\mathbf{k} and \mathbf{B}=2 \mathbf{i}+3 \mathbf{j}-\mathbf{k}.
Show that \mathbf{A}·\mathbf{A}=0 is a necessary and sufficient condition for A to be the zero vector.
Use vector methods to prove that an angle inscribed in a semicircle is necessarily a right angle.
Do the terminal points of the position vectors to the points \left(2,-3,7\right), \left(1,-1,10\right), and \left(3,-5,4\right) lie on a straight line?
If A and B are unit vectors with \mathrm{θ} the angle between them, show that \mathrm{sin}\left(\mathrm{θ}/2\right)=\frac{1}{2}∥\mathbf{A}-\mathbf{B}∥.
For \mathbf{A}=a\left(x\right) \mathbf{i}+b\left(x\right) \mathbf{j} and \mathbf{B}=u\left(x\right) \mathbf{i}+v\left(x\right) \mathbf{j}, verify the product rule for differentiating the dot product \mathbf{A}·\mathbf{B}.
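The formulas in Table 1.3.1 are easy to check numerically. A small sketch for the vectors of the first exercise (plain Python; the helper names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

A, B = (3, 7), (-4, 5)   # the vectors of the first exercise

d = dot(A, B)                                        # A·B = 23
theta = math.acos(d / (norm(A) * norm(B)))           # angle between A and B
scalar_proj = d / norm(A)                            # ‖B_A‖ = (B·A)/‖A‖
vector_proj = tuple(d / dot(A, A) * a for a in A)    # B_A = ((B·A)/(A·A)) A
perp = tuple(b - p for b, p in zip(B, vector_proj))  # B_⊥A = B - B_A

print(d, round(math.degrees(theta), 2))
```

Note that `dot(perp, A)` is zero up to rounding, confirming that the component of B orthogonal to A really is orthogonal.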
|
If R is a three-dimensional region, then its volume V or its total mass m can be computed by one of the integrals in Table 8.3.1. If the density \mathrm{δ} in R is 1, the integrals yield the volume of R; otherwise, they yield the total mass in R.
Cartesian: ∫∫{∫}_{R}\mathrm{δ}\left(x,y,z\right) \mathrm{dv}
Cylindrical: ∫∫{∫}_{R}\mathrm{δ}\left(r,\mathrm{θ},z\right) r \mathrm{dv}\prime
Spherical: ∫∫{∫}_{R}\mathrm{δ}\left(\mathrm{ρ},\mathrm{φ},\mathrm{θ}\right) {\mathrm{ρ}}^{2}\mathrm{sin}\left(\mathrm{φ}\right) \mathrm{dv}″
Table 8.3.1 Total volume or mass in three-dimensional region
R
If the triple integrals in Tables 8.3.1 and 8.3.2 are iterated in Cartesian coordinates, \mathrm{dv} is one of the six orderings of the differentials \mathrm{dx}, \mathrm{dy}, \mathrm{dz}; in cylindrical coordinates, \mathrm{dv}\prime is one of the six orderings of \mathrm{dr}, \mathrm{dz}, d\mathrm{θ}; and in spherical coordinates, \mathrm{dv}″ is one of the six orderings of d\mathrm{ρ}, d\mathrm{φ}, d\mathrm{θ}.
Table 8.3.2 lists the integrals whose values are the first moments for a three-dimensional region R.
{M}_{\mathrm{yz}}: Cartesian ∫∫{∫}_{R}x \mathrm{δ} \mathrm{dv}; cylindrical ∫∫{∫}_{R}r \mathrm{cos}\left(\mathrm{θ}\right) \mathrm{δ} r \mathrm{dv}\prime; spherical ∫∫{∫}_{R}\mathrm{ρ} \mathrm{sin}\left(\mathrm{φ}\right)\mathrm{cos}\left(\mathrm{θ}\right) \mathrm{δ} {\mathrm{ρ}}^{2}\mathrm{sin}\left(\mathrm{φ}\right) \mathrm{dv}″
{M}_{\mathrm{xz}}: Cartesian ∫∫{∫}_{R}y \mathrm{δ} \mathrm{dv}; cylindrical ∫∫{∫}_{R}r \mathrm{sin}\left(\mathrm{θ}\right) \mathrm{δ} r \mathrm{dv}\prime; spherical ∫∫{∫}_{R}\mathrm{ρ} \mathrm{sin}\left(\mathrm{φ}\right)\mathrm{sin}\left(\mathrm{θ}\right) \mathrm{δ} {\mathrm{ρ}}^{2}\mathrm{sin}\left(\mathrm{φ}\right) \mathrm{dv}″
{M}_{\mathrm{xy}}: Cartesian ∫∫{∫}_{R}z \mathrm{δ} \mathrm{dv}; cylindrical ∫∫{∫}_{R}z \mathrm{δ} r \mathrm{dv}\prime; spherical ∫∫{∫}_{R}\mathrm{ρ} \mathrm{cos}\left(\mathrm{φ}\right) \mathrm{δ} {\mathrm{ρ}}^{2}\mathrm{sin}\left(\mathrm{φ}\right) \mathrm{dv}″
Table 8.3.2 First moments for calculating a centroid (\mathrm{δ}=\mathrm{constant}) or a center of mass
For a three-dimensional region R, Table 8.3.3 provides the Cartesian coordinates \left(\bar{x},\bar{y},\bar{z}\right) of either the centroid or center of mass.
\bar{x}: centroid \frac{{M}_{\mathrm{yz}}}{V}; center of mass \frac{{M}_{\mathrm{yz}}}{m}
\bar{y}: centroid \frac{{M}_{\mathrm{xz}}}{V}; center of mass \frac{{M}_{\mathrm{xz}}}{m}
\bar{z}: centroid \frac{{M}_{\mathrm{xy}}}{V}; center of mass \frac{{M}_{\mathrm{xy}}}{m}
Table 8.3.3 Centroid or Center of Mass
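As an illustration of how Tables 8.3.1 through 8.3.3 fit together, the following sketch (assuming SymPy is available; the unit hemisphere is an illustrative region, not one of the exercises) computes the centroid height of the solid hemisphere z ≥ 0 in spherical coordinates:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# Unit hemisphere z >= 0 with density delta = 1 (centroid case).
# The spherical volume element is rho**2*sin(phi) drho dphi dtheta.
jac = rho**2 * sp.sin(phi)
bounds = [(rho, 0, 1), (phi, 0, sp.pi / 2), (theta, 0, 2 * sp.pi)]

V = sp.integrate(jac, *bounds)                         # Table 8.3.1: volume
M_xy = sp.integrate(rho * sp.cos(phi) * jac, *bounds)  # Table 8.3.2: z-moment
z_bar = sp.simplify(M_xy / V)                          # Table 8.3.3: z̄ = M_xy/V

print(V, M_xy, z_bar)  # 2*pi/3, pi/4, 3/8
```

The classical result z̄ = 3/8 for a unit hemisphere falls out directly.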
In each of the following examples, find the centroid of the given region R. Then find the center of mass under the assumption that the region has the indicated density \mathrm{δ}.
R
\mathrm{\delta }\left(r,\mathrm{\theta },z\right)=z {r}^{2}\mathrm{sin}\left(\mathrm{\theta }/6\right)
R
\mathrm{\delta }\left(r,\mathrm{\theta },z\right)=r {z}^{2}\mathrm{cos}\left(\mathrm{\theta }/3\right)
R
\mathrm{\delta }\left(x,y,z\right)=3+x+y+z
R
\mathrm{\delta }\left(\mathrm{\rho },\mathrm{\phi },\mathrm{\theta }\right)={\mathrm{\rho }}^{2}
R
\mathrm{\delta }\left(r,\mathrm{\theta },z\right)=\left({r}^{2}+z\right) \mathrm{sin}\left(\mathrm{\theta }/4\right)
R
\mathrm{δ}\left(r,\mathrm{θ},z\right)=z
R
\mathrm{δ}\left(\mathrm{\rho },\mathrm{\phi },\mathrm{\theta }\right)=\sqrt{\mathrm{ρ}}
R
\mathrm{\delta }\left(r ,\mathrm{\theta },z\right)=r z \mathrm{cos}\left(\mathrm{\theta }/6\right)
R
\mathrm{\delta }\left(\mathrm{\rho },\mathrm{\phi },\mathrm{\theta }\right)=2+ \mathrm{\rho } \mathrm{sin}\left(\mathrm{\phi }\right)\mathrm{cos}\left(\mathrm{\theta }\right)
R
\mathrm{\delta }\left(x,y,z\right)=2 {x}^{2}+3 {y}^{2}+4 {z}^{2}
R
\mathrm{\delta }\left(r,\mathrm{\theta },z\right)={z}^{2}r \mathrm{sin}\left(\mathrm{\theta }/3\right)
R
\mathrm{\delta }\left(x,y,z\right)=x {y}^{2}{z}^{3}
|
Raem's Square | Toph
Raem is an eight-year-old boy. One day his family was visiting Dhaka Shishu Park near Shahbag. Besides enjoying many games, Raem saw a toy car shop. Raem wanted to purchase a toy car, but the shopkeeper decided to make Raem an offer: if Raem could give him the correct square of a given number N, then the kind shopkeeper would give Raem the toy for free.
Considering Raem's age, the shopkeeper gave Raem the chance to have a volunteer like you help him find the correct square of the number N.
The input begins with an integer T (1 < T < 1000), which denotes the number of test cases. Then for each test case, there will be an integer N (1 \le N \le 10^6).
For each input, output the square of the number N.
Just multiply an integer with itself, keep the type of the integer as long long in C++.
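The hint's approach can be sketched as follows (Python shown for illustration; the C++ `long long` advice exists because (10^6)^2 = 10^12 overflows a 32-bit int):

```python
def square(n: int) -> int:
    # Python integers are arbitrary precision, so n * n cannot overflow;
    # in C++ the result must be stored in a long long.
    return n * n

def solve(tokens):
    """tokens = [T, N_1, ..., N_T] as strings; returns the T squares."""
    t = int(tokens[0])
    return [square(int(n)) for n in tokens[1:t + 1]]

print(solve(["3", "2", "10", "1000000"]))  # [4, 100, 1000000000000]
```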
CSE-JnU January Cookoff
|
A ball is thrown from a window 25 meters above the ground with an initial velocity of 40 meters per second and an angle of inclination of \frac { \pi } { 6 }. Let the origin be the point on the (level) ground below the window.
Assume the acceleration due to gravity is −10 \frac{\text{meters}}{\text{sec}^2}. Find the position functions x(t) and y(t).
\vec{v}(t)=\langle v_0\cos(\theta),v_0\sin(\theta)+at\rangle=\langle 40\cos\Big(\frac{\pi}{6}\Big),40\sin\Big(\frac{\pi}{6}\Big)-10t\rangle
Integrating each component gives
x(t)=40t\cos\Big(\frac{\pi}{6}\Big)+C_1,\qquad y(t)=40t\sin\Big(\frac{\pi}{6}\Big)-5t^2+C_2,
and the initial conditions x(0)=0, y(0)=25 give C_1=0 and C_2=25.
Find the angle at which the ball hits the ground.
First, determine when the ball hits the ground. This is when y(t) = 0.
The angle is then determined by the slope of the trajectory at impact: think of drawing a slope triangle with legs x^\prime(t) and y^\prime(t) and calculating its angle.
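Carrying the computation through numerically (a sketch; variable names are illustrative):

```python
import math

# Values from the problem statement above.
v0, h0, theta0, g = 40.0, 25.0, math.pi / 6, 10.0

vx = v0 * math.cos(theta0)    # horizontal speed (constant)
vy0 = v0 * math.sin(theta0)   # initial vertical speed = 20 m/s

# Landing time: positive root of h0 + vy0*t - (g/2)*t**2 = 0.
t_land = (vy0 + math.sqrt(vy0**2 + 2 * g * h0)) / g

vy = vy0 - g * t_land         # vertical speed at impact
impact_angle = math.degrees(math.atan2(vy, vx))

print(t_land, impact_angle)   # lands at t = 5 s, at about -40.9 degrees
```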
|
Number of days for global cases to double, during peak period: | Metaculus
The 2019–20 coronavirus outbreak is an ongoing global outbreak of coronavirus disease. It is caused by the SARS-CoV-2 coronavirus, first identified in Wuhan, Hubei, China. As of 7 March 2020, more than 102,000 cases have been confirmed, of which 7,100 were classified as serious. 96 countries and territories have been affected, with major outbreaks in central China, South Korea, Italy, and Iran.
This question asks: what will be the doubling time of COVID-19 cases during the peak growth period in 2020?
The resolution will be based on the following equation that assumes exponential growth:
This question will follow as much as possible the resolution criteria described in the question about the month with the biggest increase of COVID-19 cases. In short, this question will resolve at the end of March 2021 and will use the best available data for the whole world as made available by WHO.
t_1 will be the 20th of January, because the first WHO situation report is for that day and it estimates 282 cases. t_2 will be the last day of the month with the biggest increase of COVID-19 cases. The time unit of t_1 and t_2 will be days.
x will be the total cumulative number of cases globally at the end of the month with the biggest increase. In other words, the number will be counted from the beginning of the outbreak and not during that single month.
For example, assume that February will be the month with the biggest growth; then x = 85403, t_2 will be the 29th of February, and t_2 - t_1 = 40. This may differ from the official estimates because COVID-19 cases did not follow exponential growth in February.
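The resolution equation itself is not reproduced above; assuming it is the standard exponential doubling-time formula T_d = (t_2 − t_1) · ln 2 / ln(x / x_1), the worked example gives roughly a five-day doubling time:

```python
import math

def doubling_time_days(x1: float, x2: float, days: float) -> float:
    """Doubling time T_d assuming exponential growth x2 = x1 * 2**(days / T_d)."""
    return days * math.log(2) / math.log(x2 / x1)

# Worked example from the text: 282 cases on 20 January (t1),
# 85403 cumulative cases by 29 February (t2), t2 - t1 = 40 days.
print(doubling_time_days(282, 85403, 40))  # roughly 4.85 days
```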
This question will resolve ambiguous if the question "Which month of 2020 will see the biggest increase of COVID-19 cases?" will resolve ambiguous.
You may also want to take a look at: How many human infections of the 2019 novel coronavirus (COVID-19) will be estimated to have occurred before 2021?
|
Design of Robust Output Feedback Guaranteed Cost Control for a Class of Nonlinear Discrete-Time Systems
Yan Zhang, Yali Dong, Tianrui Li, "Design of Robust Output Feedback Guaranteed Cost Control for a Class of Nonlinear Discrete-Time Systems", International Journal of Engineering Mathematics, vol. 2014, Article ID 628041, 9 pages, 2014. https://doi.org/10.1155/2014/628041
Yan Zhang,1 Yali Dong,1 and Tianrui Li1
Academic Editor: Yurong Liu
This paper investigates static output feedback guaranteed cost control for a class of nonlinear discrete-time systems in which the delay in the state vector is inconsistent with the delay in the nonlinear perturbations. Based on the output measurement, the controller is designed to ensure the robust exponential stability of the closed-loop system and to guarantee that the system performance achieves an adequate level. By using the Lyapunov-Krasovskii functional method, some sufficient conditions for the existence of a robust output feedback guaranteed cost controller are established in terms of a linear matrix inequality. A numerical example is provided to show the effectiveness of the results obtained.
In control theory and practice, one of the most important open problems is the static output feedback (SOF) problem. The main principle of SOF control is to utilize the measured output to excite the plant. Since the controller can be easily implemented in practice, SOF control has attracted a lot of attention over the past few decades and has been applied to many areas such as economic, communication, and biological systems [1, 2]. The goal of SOF controller design is to ensure asymptotic or exponential stability of the original system [3]. However, in many practical systems, the controller is designed not only to ensure asymptotic or exponential stability of the system but also to guarantee that the system performance achieves an adequate level. One method of dealing with this problem is guaranteed cost control, first introduced by Chang and Peng [4]. This method has the advantage of providing an upper bound on a given performance index, so that the system performance degradation is guaranteed to be no more than this bound. Based on this idea, a lot of significant results have been reported for continuous-time systems in [5–7] and for discrete-time systems in [8].
It is well known that time-delays as well as parameter uncertainties frequently lead to instability of systems. Moreover, the existence of time-delays and uncertainties makes the system more complex [9, 10].
In the past studies of guaranteed cost control, most of the articles considered linear systems [11, 12]. However, in many dynamic systems, nonlinear perturbations appear more and more frequently. Therefore, we must deal not only with the time-varying delays and uncertainties but also with the nonlinearities. Difficulties then arise when one attempts to derive exponential stabilization conditions. Hence, in this case, the methods for linear systems [11, 12] cannot be directly applied to nonlinear systems. This calls for a fresh look at the problem with improved Lyapunov-Krasovskii functionals and a new set of LMI conditions. In this paper, we aim to design a robust static output feedback guaranteed cost controller for a class of nonlinear discrete-time systems with time-varying delays. By constructing a set of improved Lyapunov-Krasovskii functionals, a new criterion for the existence of a robust static output feedback guaranteed cost controller is established and described in terms of a linear matrix inequality. A numerical example is provided to show the effectiveness of the results obtained.
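The paper's LMI conditions are not reproduced in this extraction. As a much simpler illustration of the Lyapunov machinery underlying such criteria (a sketch only, not the paper's method: delay-free, no uncertainty, no output feedback), one can certify exponential stability of a discrete-time system x_{k+1} = A x_k by solving the discrete Lyapunov equation AᵀPA − P = −Q for a positive definite P:

```python
import numpy as np

def discrete_lyapunov(A: np.ndarray, Q: np.ndarray, terms: int = 500) -> np.ndarray:
    """Solve A.T @ P @ A - P = -Q for Schur-stable A via the convergent
    series P = sum_k (A.T)**k @ Q @ A**k."""
    P = np.zeros_like(Q, dtype=float)
    Ak = np.eye(A.shape[0])
    for _ in range(terms):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

A = np.array([[0.5, 0.1], [0.0, 0.8]])   # spectral radius < 1: Schur stable
Q = np.eye(2)
P = discrete_lyapunov(A, Q)

residual = A.T @ P @ A - P + Q
print(np.max(np.abs(residual)))          # ~0: P certifies exponential stability
```

A positive definite P satisfying this equation makes V(x) = xᵀPx a strictly decreasing Lyapunov function along trajectories, which is the delay-free analogue of the Lyapunov-Krasovskii functionals used in the paper.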
Notations. In this paper, a matrix is symmetric if . ) denotes the maximum (minimum) value of the real parts of eigenvalues of . The symmetric terms in a matrix are denoted by . (resp., ), for , means that the matrix is real symmetric positive definite (resp., positive semidefinite). denotes the set of all real nonnegative integers.
Consider the following control system: where is the state vector, is the observation output, and is the control input. , , , , , , and are given constant matrices with appropriate dimensions. , are the time-varying parameter uncertainties that are assumed to satisfy the following admissible condition: where , , and are some given constant matrices with appropriate dimensions. The positive integers and are time-varying delays satisfying where , are known positive integers. and are unknown nonlinear functions, assumed as where , are known positive integers and , are known real matrices. The initial condition with the norm where .
The corresponding cost function is as follows: where , , , and are given symmetric positive definite matrices with appropriate dimensions.
Substituting the output feedback controller into system (1), we have where and .
The objective of this paper is to design an output feedback controller for system (1) and cost function (6) such that the resulting closed-loop system is robustly exponentially stable with an upper bound for cost function (6).
We first give the following definitions, which will be used in the next theorems and proofs.
Definition 1. Given , the closed-loop system (7) is said to be robustly exponentially stable with a decay rate , if there exist scalars such that every solution of the system satisfies the condition:
Definition 2. For system (1) and cost function (6), if there exist a static output feedback control law and a positive constant such that the closed-loop system (7) is robustly exponentially stable with a decay rate and the value (6) satisfies , then is said to be a guaranteed cost index and is said to be a robust output feedback guaranteed cost control law of the system.
The following lemmas are essential in establishing our main results.
Lemma 3 (see [11]). For any , , and positive symmetric definite matrix , we have
Lemma 4 (Schur complement lemma [13]). Given constant matrix , , and with appropriate dimensions satisfying , . Then if and only if .
In this section, by constructing a new set of Lyapunov-Krasovskii functionals, we give a sufficient condition for the existence of robust output feedback guaranteed cost control for system (1).
Theorem 5. For a given scalar , the control is a robust static output feedback guaranteed cost controller for nonlinear system (1), if there exist symmetric positive definite matrices , , , , , and , arbitrary matrix , and scalars , , such that the following LMI holds: where and the guaranteed cost value is given by , where
Proof. We first introduce the new variable . The closed-loop system (7) is reduced to where and .
Associated with (2), the above equality is reduced to where Consider a Lyapunov-Krasovskii functional candidate for the closed-loop system (14) as where Calculating the difference of we have
Combining (21) and (22), we have Similarly, we can get Therefore, from (17)–(24), we have Multiplying both sides of the identity (14), Note that for any , , it follows from (16) to (17) Substituting (26)-(27) into (25), we have Dealing with some of the terms in (28) using Lemma 3, it follows Similarly, we have Adding the following relation to inequality (28) where , , and , and using we can get where and
By Lemma 4, the condition is equivalent to LMI (10). Therefore, from (33) it follows that which implies that , .
We can easily get where , , From (35) and (36), we can get Using the relation , we can get Therefore, the closed-loop system (7) is exponentially stable. Next we will find the guaranteed cost value, from (33), we can get Summing up both sides of (40) from to , we can get Letting , noting that , we can get associated with (36), and we have .
Remark 6. When time-delay in state vector keeps consistent with the delay in nonlinear perturbations and uncertain items disappear, the system (1) induced to at the same time, the closed-loop system (7), and cost function (6) are reduced to Then we give a sufficient condition for the existence of static output feedback control for system (43).
Theorem 7. For a given scalar , the control is a static output feedback guaranteed cost controller for system (43), if there exist symmetric positive definite matrices , , , and , arbitrary matrix , and scalars , , such that the following LMI holds: where and the guaranteed cost value is given by , where .
Proof. The proof of Theorem 7 is similar to that of Theorem 5 and is omitted.
Remark 8. In this paper, we design the controller directly from the LMI without variable transformation [11] which reduces the amount of calculation. Moreover, based on Theorem 5, one can deduce the criteria for linear discrete-time systems with time-delay and nonlinear discrete-time systems with constant time-delay.
Consider the nonlinear uncertain discrete-time system (1) with the following parameters: Given , , , , , and the initial condition using the LMI Toolbox in MATLAB [14], the LMI (10) in Theorem 5 is satisfied with and the controller parameter Moreover, the solution of the closed-loop system satisfies and the guaranteed cost of the closed-loop system is as follows The simulation result is presented in Figure 1, which shows the convergence behavior of the proposed methods.
In this paper, the problem of robust output feedback guaranteed cost control for nonlinear uncertain discrete-time systems has been investigated. For all admissible uncertainties, an output feedback guaranteed cost controller has been designed such that the resulting closed-loop system is robustly exponentially stable and guarantees an adequate level of system performance. A numerical example has been presented to illustrate the efficiency of the result.
I. Yaesh and U. Shaked, “
{H}_{\infty }
optimization with pole constraints of static output-feedback controllers-a non-smooth optimization approach,” IEEE Transactions on Control Systems Technology, vol. 20, no. 4, pp. 1066–1072, 2012. View at: Publisher Site | Google Scholar
J. X. Dong and G. H. Yang, “Static output feedback control synthesis for linear systems with time-invariant parametric uncertainties,” IEEE Transactions on Automatic Control, vol. 52, no. 10, pp. 1930–1936, 2007. View at: Publisher Site | Google Scholar | MathSciNet
D. W. C. Ho and G. Lu, “Robust stabilization for a class of discrete-time non-linear systems via output feedback: the unified LMI approach,” International Journal of Control, vol. 76, no. 2, pp. 105–115, 2003. View at: Publisher Site | Google Scholar | MathSciNet
S. S. L. Chang and T. Peng, “Adaptive guaranteed cost control of systems with uncertain parameters,” IEEE Transactions on Automatic Control, vol. 17, no. 4, pp. 474–483, 1972. View at: Publisher Site | Google Scholar | MathSciNet
J. X. Dong and G. H. Yang, “Robust static output feedback control synthesis for linear continuous systems with polytopic uncertainties,” Automatica, vol. 49, no. 6, pp. 1821–1829, 2013. View at: Publisher Site | Google Scholar | MathSciNet
C. H. Lien, “Non-fragile guaranteed cost control for uncertain neutral dynamic systems with time-varying delays in state and control input,” Chaos, Solitons and Fractals, vol. 31, no. 4, pp. 889–899, 2007. View at: Publisher Site | Google Scholar | MathSciNet
J. Wei, Y. Dong, and Y. Su, “Guaranteed cost control of uncertain T-S fuzzy systems via output feedback approach,” WSEAS Transactions on Systems, vol. 10, no. 9, pp. 306–317, 2011. View at: Google Scholar
W.-H. Chen, Z.-H. Guan, and X. Lu, “Delay-dependent guaranteed cost control for uncertain discrete-time systems with delay,” IEE Proceedings: Control Theory and Applications, vol. 150, no. 4, pp. 412–416, 2003. View at: Publisher Site | Google Scholar
L. Chen, “Unfragile guaranteed-cost
{H}_{\infty }
control of uncertain state-delay sampling system,” Procedia Engineering, vol. 29, pp. 3359–3363, 2012. View at: Google Scholar
F. Qiu, B. Cui, and Y. Ji, “Further results on robust stability of neutral system with mixed time-varying delays and nonlinear perturbations,” Nonlinear Analysis. Real World Applications, vol. 11, no. 2, pp. 895–906, 2010. View at: Publisher Site | Google Scholar | MathSciNet
M. V. Thuan, V. N. Phat, and H. M. Trinh, “Dynamic output feedback guaranteed cost control for linear systems with interval time-varying delays in states and outputs,” Applied Mathematics and Computation, vol. 218, no. 21, pp. 10697–10707, 2012. View at: Publisher Site | Google Scholar | MathSciNet
J. H. Park, “On dynamic output feedback guaranteed cost control of uncertain discrete-delay systems: LMI optimization approach,” Journal of Optimization Theory and Applications, vol. 121, no. 1, pp. 147–162, 2004. View at: Publisher Site | Google Scholar | MathSciNet
S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishan, Linear Matrix Inequalities in System and Control Theory, vol. 15, SIAM, Philadelphia, Pa, USA, 1994. View at: Publisher Site | MathSciNet
P. Gahinet, A. Nemirovskii, A. J. Laub, and M. Chilali, LMI Control Toolbox: for Use with MATLAB, MathWorks, Inc., Natick, Mass, USA, 1995.
|
Find the derivative for each curve below. (Hint for part (c): Use the natural log function to rewrite this equation.)
y = \int _ { 3 } ^ { x } \sqrt { 1 + e ^ { u } } d u
\frac{d}{dx}y=\frac{d}{dx}\int_3^x\sqrt{1+e^u}du
\frac{dy}{dx}=\sqrt{1+e^x}
\left\{ \begin{array} { l } { x ( t ) = \operatorname { cos } 2 t } \\ { y ( t ) = \operatorname { cos } ^ { - 1 } 2 t } \end{array} \right.
\frac{dy}{dx}=\frac{y^\prime(t)}{x^\prime(t)}=\frac{-2/\sqrt{1-4t^2}}{-2\sin(2t)}=\frac{1}{\sin(2t)\sqrt{1-4t^2}}
y = x^x
\ln(y)=x\ln(x)
\frac{1}{y}y^\prime=\ln(x)+1
y^\prime=y(\ln(x)+1)
y^\prime=x^x(\ln(x)+1)
y = x csc(ln x)
y^\prime=\csc(\ln(x))+x\left(-\csc(\ln(x))\cot(\ln(x))\cdot\frac{1}{x}\right)=\csc(\ln(x))\left(1-\cot(\ln(x))\right)
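These derivatives can be checked with a computer algebra system. A sketch assuming SymPy is available:

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)

# (a) Fundamental theorem of calculus: d/dx of the integral from 3 to x.
dy_a = sp.diff(sp.Integral(sp.sqrt(1 + sp.exp(u)), (u, 3, x)), x)

# (c) Logarithmic differentiation of y = x**x.
dy_c = sp.diff(x**x, x)

# (d) Product and chain rules for y = x*csc(ln x).
dy_d = sp.diff(x * sp.csc(sp.log(x)), x)

print(dy_a)   # sqrt(exp(x) + 1)
print(dy_c)   # x**x*(log(x) + 1)
print(sp.simplify(dy_d))
```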
|
Section 59.66 (03SH): Méthode de la trace—The Stacks project
59.66 Méthode de la trace
A reference for this section is [Exposé IX, §5, SGA4]. The material here will be used in the proof of Lemma 59.83.9 below.
Let $f : Y \to X$ be an étale morphism of schemes. There is a sequence
\[ f_!, f^{-1}, f_* \]
of adjoint functors between $\textit{Ab}(X_{\acute{e}tale})$ and $\textit{Ab}(Y_{\acute{e}tale})$. The functor $f_!$ is discussed in Section 59.70. The adjunction map $\text{id} \to f_* f^{-1}$ is called restriction. The adjunction map $f_! f^{-1} \to \text{id}$ is often called the trace map. If $f$ is finite étale, then $f_* = f_!$ (Lemma 59.70.7) and we can view this as a map $f_*f^{-1} \to \text{id}$.
Definition 59.66.1. Let $f : Y \to X$ be a finite étale morphism of schemes. The map $f_* f^{-1} \to \text{id}$ described above and explicitly below is called the trace.
Let $f : Y \to X$ be a finite étale morphism of schemes. The trace map is characterized by the following two properties:
it commutes with étale localization on $X$ and
if $Y = \coprod _{i = 1}^ d X$ then the trace map is the sum map $f_*f^{-1} \mathcal{F} = \mathcal{F}^{\oplus d} \to \mathcal{F}$.
By Étale Morphisms, Lemma 41.18.3 every finite étale morphism $f : Y \to X$ is étale locally on $X$ of the form given in (2) for some integer $d \geq 0$. Hence we can define the trace map using the characterization given; in particular we do not need to know about the existence of $f_!$ and the agreement of $f_!$ with $f_*$ in order to construct the trace map. This description shows that if $f$ has constant degree $d$, then the composition
\[ \mathcal{F} \xrightarrow {res} f_* f^{-1} \mathcal{F} \xrightarrow {trace} \mathcal{F} \]
is multiplication by $d$. The “méthode de la trace” is the following observation: if $\mathcal{F}$ is an abelian sheaf on $X_{\acute{e}tale}$ such that multiplication by $d$ on $\mathcal{F}$ is an isomorphism, then the map
\[ H^ n_{\acute{e}tale}(X, \mathcal{F}) \longrightarrow H^ n_{\acute{e}tale}(Y, f^{-1}\mathcal{F}) \]
is injective. Namely, we have
\[ H^ n_{\acute{e}tale}(Y, f^{-1}\mathcal{F}) = H^ n_{\acute{e}tale}(X, f_*f^{-1}\mathcal{F}) \]
by the vanishing of the higher direct images (Proposition 59.55.2) and the Leray spectral sequence (Proposition 59.54.2). Thus we can consider the maps
\[ H^ n_{\acute{e}tale}(X, \mathcal{F}) \to H^ n_{\acute{e}tale}(Y, f^{-1}\mathcal{F})= H^ n_{\acute{e}tale}(X, f_*f^{-1}\mathcal{F}) \xrightarrow {trace} H^ n_{\acute{e}tale}(X, \mathcal{F}) \]
and the composition is an isomorphism (under our assumption on $\mathcal{F}$ and $f$). In particular, if $H_{\acute{e}tale}^ q(Y, f^{-1}\mathcal{F}) = 0$ then $H_{\acute{e}tale}^ q(X, \mathcal{F}) = 0$ as well. Indeed, multiplication by $d$ induces an isomorphism on $H_{\acute{e}tale}^ q(X, \mathcal{F})$ which factors through $H_{\acute{e}tale}^ q(Y, f^{-1}\mathcal{F})= 0$.
This is often combined with the following.
Lemma 59.66.2. Let $S$ be a connected scheme. Let $\ell $ be a prime number. Let $\mathcal{F}$ be a finite type, locally constant sheaf of $\mathbf{F}_\ell $-vector spaces on $S_{\acute{e}tale}$. Then there exists a finite étale morphism $f : T \to S$ of degree prime to $\ell $ such that $f^{-1}\mathcal{F}$ has a finite filtration whose successive quotients are $\underline{\mathbf{Z}/\ell \mathbf{Z}}_ T$.
Proof. Choose a geometric point $\overline{s}$ of $S$. Via the equivalence of Lemma 59.65.1 the sheaf $\mathcal{F}$ corresponds to a finite dimensional $\mathbf{F}_\ell $-vector space $V$ with a continuous $\pi _1(S, \overline{s})$-action. Let $G \subset \text{Aut}(V)$ be the image of the homomorphism $\rho : \pi _1(S, \overline{s}) \to \text{Aut}(V)$ giving the action. Observe that $G$ is finite. The surjective continuous homomorphism $\overline{\rho } : \pi _1(S, \overline{s}) \to G$ corresponds to a Galois object $Y \to S$ of $\textit{FÉt}_ S$ with automorphism group $G = \text{Aut}(Y/S)$, see Fundamental Groups, Section 58.7. Let $H \subset G$ be an $\ell $-Sylow subgroup. We claim that $T = Y/H \to S$ works. Namely, let $\overline{t} \in T$ be a geometric point over $\overline{s}$. The image of $\pi _1(T, \overline{t}) \to \pi _1(S, \overline{s})$ is $(\overline{\rho })^{-1}(H)$ as follows from the functorial nature of fundamental groups. Hence the action of $\pi _1(T, \overline{t})$ on $V$ corresponding to $f^{-1}\mathcal{F}$ is through the map $\pi _1(T, \overline{t}) \to H$, see Remark 59.65.3. As $H$ is a finite $\ell $-group, the irreducible constituents of the representation $\rho |_{\pi _1(T, \overline{t})}$ are each trivial of rank $1$ (this is a simple lemma on representation theory of finite groups; insert future reference here). Via the equivalence of Lemma 59.65.1 this means $f^{-1}\mathcal{F}$ is a successive extension of constant sheaves with value $\underline{\mathbf{Z}/\ell \mathbf{Z}}_ T$. Moreover the degree of $T = Y/H \to S$ is prime to $\ell $ as it is equal to the index of $H$ in $G$. $\square$
Lemma 59.66.3. Let $\Lambda $ be a Noetherian ring. Let $\ell $ be a prime number and $n \geq 1$. Let $H$ be a finite $\ell $-group. Let $M$ be a finite $\Lambda [H]$-module annihilated by $\ell ^ n$. Then there is a finite filtration $0 = M_0 \subset M_1 \subset \ldots \subset M_ t = M$ by $\Lambda [H]$-submodules such that $H$ acts trivially on $M_{i + 1}/M_ i$ for all $i = 0, \ldots , t - 1$.
Proof. Omitted. Hint: Show that the augmentation ideal $\mathfrak m$ of the noncommutative ring $\mathbf{Z}/\ell ^ n\mathbf{Z}[H]$ is nilpotent. $\square$
Lemma 59.66.4. Let $S$ be an irreducible, geometrically unibranch scheme. Let $\ell $ be a prime number and $n \geq 1$. Let $\Lambda $ be a Noetherian ring. Let $\mathcal{F}$ be a finite type, locally constant sheaf of $\Lambda $-modules on $S_{\acute{e}tale}$ which is annihilated by $\ell ^ n$. Then there exists a finite étale morphism $f : T \to S$ of degree prime to $\ell $ such that $f^{-1}\mathcal{F}$ has a finite filtration whose successive quotients are of the form $\underline{M}_ T$ for some finite $\Lambda $-modules $M$.
Proof. Choose a geometric point $\overline{s}$ of $S$. Via the equivalence of Lemma 59.65.2 the sheaf $\mathcal{F}$ corresponds to a finite $\Lambda $-module $M$ with a continuous $\pi _1(S, \overline{s})$-action. Let $G \subset \text{Aut}(M)$ be the image of the homomorphism $\rho : \pi _1(S, \overline{s}) \to \text{Aut}(M)$ giving the action. Observe that $G$ is finite as $M$ is a finite $\Lambda $-module (see proof of Lemma 59.65.2). The surjective continuous homomorphism $\overline{\rho } : \pi _1(S, \overline{s}) \to G$ corresponds to a Galois object $Y \to S$ of $\textit{FÉt}_ S$ with automorphism group $G = \text{Aut}(Y/S)$, see Fundamental Groups, Section 58.7. Let $H \subset G$ be an $\ell $-Sylow subgroup. We claim that $T = Y/H \to S$ works. Namely, let $\overline{t} \in T$ be a geometric point over $\overline{s}$. The image of $\pi _1(T, \overline{t}) \to \pi _1(S, \overline{s})$ is $(\overline{\rho })^{-1}(H)$ as follows from the functorial nature of fundamental groups. Hence the action of $\pi _1(T, \overline{t})$ on $M$ corresponding to $f^{-1}\mathcal{F}$ is through the map $\pi _1(T, \overline{t}) \to H$, see Remark 59.65.3. Let $0 = M_0 \subset M_1 \subset \ldots \subset M_ t = M$ be as in Lemma 59.66.3. This induces a filtration $0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset \mathcal{F}_ t = f^{-1}\mathcal{F}$ such that the successive quotients are constant with value $M_{i + 1}/M_ i$. Finally, the degree of $T = Y/H \to S$ is prime to $\ell $ as it is equal to the index of $H$ in $G$. $\square$
Comment #3396 by Dongryul Kim on June 19, 2018 at 03:51
At the end of the second paragraph, it is stated that $f_! = f_*$ if $f$ is finite etale. Can we refer to Lemma 03S7 (54.69.5) here? I had a hard time finding out that this is proved later.
OK, yes, good point. I also added some remarks explaining how to define the trace map without using the existence of the functor $f_!$ whose construction comes later in the Stacks project. This seems adequate for now, but if there are any complaints then we can always move the sections around. See changes to latex here.
Is the French title intentional?
Yes, it was intentional. Should I change it?
|
What is the equation of a sphere?
How to derive the equation of a sphere
Expanded form of the sphere equation
Equation of a sphere from the end-points of any diameter
Equation of a sphere from its center and any known point on its surface
How to use this equation of a sphere calculator
Our equation of a sphere calculator will help you write the equation of a sphere in the standard form or expanded form if you know the center and radius of the sphere. Alternatively, you can find the sphere equation if you know its center and any point on its surface or if you know the end-points of any of its diameters. This calculator can also find the center and radius of a sphere if you know its equation.
Are you wondering what is the standard equation of a sphere or how to find the center and radius of a sphere using its equation? Have you perhaps come across a sphere equation that doesn't look like the general equation? Are you curious how knowing the diameter's end-points or the center and a point on the sphere can help you find the sphere's equation? Grab your favorite drink and keep reading this article, and we'll tackle these questions together!
The equation of a sphere in the standard form is given by:
\scriptsize \quad (x-h)^2 + (y-k)^2 + (z-l)^2 = r^2
(x,y,z)
– Coordinates of any point lying on the surface of the sphere;
(h,k,l)
– Coordinates of the center of the sphere; and
r
– Radius of the sphere.
A sphere is the collection of all the points in 3-D space that lie equidistant from the center
(h,k,l)
. In this figure, the center and the origin coincide.
If we know the center and radius of the sphere, we can plug them into this standard form to obtain the equation of the sphere. For example, consider a sphere with a radius of
10
and its center at
(3,7,5)
. By inserting this into the equation above, we get the standard equation of the sphere:
\scriptsize \quad \begin{align*} (x-3)^2 + (y-7)^2 + (z-5)^2 &= 10^2\\ (x-3)^2 + (y-7)^2 + (z-5)^2 &= 100 \end{align*}
Similarly, you can use the standard form of the sphere equation to find the radius and center of the sphere. For example, a sphere with the equation
(x-7)^2 + (y-12)^2 + (z-4)^2 = 36
would have its center at
(7,12,4)
, and its radius is given by:
\qquad \begin{equation*} r = \sqrt{36}=6 \end{equation*}
❗ Note that you have to be careful regarding the signs of the center coordinates in the standard equation. If the sphere equation in the above example was instead
\small(x-7)^2 + (y+12)^2 + (z+4)^2 = 36
, then its center would be
\small(7,-12,-4)
Now that we know the standard equation of a sphere, let's learn how it came to be:
The first thing to understand is that the equation of a sphere represents all the points lying equidistant from a center point. In other words, any point that lies at a distance
r
from the center
(h,k,l)
lies on the sphere. This concept is similar to how the equation of a circle works.
Consider a point
S(x,y,z)
that lies at a distance
r
from the center
(h,k,l)
. Using the distance formula, we get:
\qquad \scriptsize \begin{align*} r &= \sqrt{(x-h)^2 + (y-k)^2 + (z-l)^2}\\ \end{align*}
Squaring on both sides and rearranging this equation, we get:
\scriptsize \begin{align*} \left(\!\!\sqrt{(x-h)^2 + (y-k)^2 + (z-l)^2}\right)^2 &= (r)^2 \\ \implies (x-h)^2 + (y-k)^2 + (z-l)^2 &=r^2 \\ \end{align*}
And, voila! Just like that, we got the standard equation of a sphere! It bears repeating that every point
S
that satisfies this equation lies on the surface of the sphere.
Sometimes, you may come across a sphere equation that appears different from the standard form we've discussed so far. Generally, such equations should look like the following:
\scriptsize x^2+y^2+z^2+Ex+Fy+Gz+H = 0
E
– Sum of all coefficients of the
x
terms;
F
– Sum of all coefficients of the
y
terms;
G
– Sum of all coefficients of the
z
terms; and
H
– Sum of all constant terms.
This equation may seem intimidating at a glance, but if you take a closer look, it is simply the expanded version of the standard form we're used to. Let's use the completing-the-square method to backtrack our way from the expanded version to the simpler, standard form:
Rearrange the equation for clarity:
\scriptsize \qquad (x^2+Ex) + (y^2+Fy) \\\qquad +\ (z^2+Gz)+H=0
Consider the x-terms
(x^2+Ex)
. To express this in the
(a-b)^2 = a^2-2ab+b^2
format, we need to identify
a
and
b
. With
a=x
and
-2ab = Ex
, we get:
\scriptsize \qquad b = \frac{Ex}{-2x} = -\frac{E}{2}
Adding and subtracting the quantity
\textcolor{red}{\left(- \frac{E}{2} \right)^2}
to the equation, we get:
\scriptsize \begin{align*} \left(x^2-(-Ex) \textcolor{red}{+\frac{E^2}{4}} \right) + (y^2+Fy)\\ +\ (z^2+Gz)+H\textcolor{red}{-\frac{E^2}{4}}&=0\\\\ \implies \left(x- \left(-\frac{E}{2}\right)\right)^2 + (y^2+Fy)\\ +\ (z^2+Gz)+H-\frac{E^2}{4}&=0 \end{align*}
Similarly, we complete the squares for the y-terms and z-terms by adding and subtracting the quantities
\textcolor{red}{\left(- \frac{F}{2} \right)^2}
and
\textcolor{red}{\left(- \frac{G}{2} \right)^2}
respectively:
\!\!\scriptsize \begin{align*} \left(x-\left(-\frac{E}{2}\right)\!\!\right)^2 \!+\! \left(y^2-(-Fy) \textcolor{red}{+ \frac{F^2}{4}} \right)\\ +\! \left(z^2-(-Gz) \textcolor{red}{+ \frac{G^2}{4}}\right)\\+\ H-\frac{E^2}{4} \textcolor{red}{-\frac{F^2}{4}-\frac{G^2}{4}}&=0\\\\ \implies\!\! \left(x-\left(-\frac{E}{2}\right)\!\!\right)^2 \!+ \!\left(y-\left(-\frac{F}{2}\right)\!\!\right)^2\\ + \left(z-\left(-\frac{G}{2}\right)\!\!\right)^2\\+\ H-\frac{E^2}{4} -\frac{F^2}{4}-\frac{G^2}{4}&=0 \end{align*}
Rearrange the equation to group all the constant terms on the right-hand side:
\scriptsize \begin{align*} \left(x-\left(-\frac{E}{2}\right)\!\!\right)^2 +\left(y-\left(-\frac{F}{2}\right)\!\!\right)^2 \\\\ + \left(z-\left(-\frac{G}{2}\right)\!\!\right)^2 = \frac{E^2+F^2+G^2}{4} -H \end{align*}
Comparing this equation with the standard form of the sphere equation
(x-h)^2 +(y-k)^2 + (z-l)^2 = r^2\text{:}
\scriptsize\qquad \begin{align*} \left(x-\underbrace{\left(-\frac{E}{2}\right)}_{=h}\right)^2 + \left(y-\underbrace{\left(-\frac{F}{2}\right)}_{=k}\right)^2 \\\\ +\left(z-\underbrace{\left(-\frac{G}{2}\right)}_{=l}\right)^2 = \underbrace{\frac{E^2+F^2+G^2}{4} -H}_{=r^2} \end{align*}
Extracting the center point and radius of the sphere from this equation, we get:
\scriptsize \qquad \begin{align*} h &= -\frac{E}{2}\\\\ k &= -\frac{F}{2}\\\\ l &= -\frac{G}{2}\\\\ r^2 &= \frac{E^2}{4} +\frac{F^2}{4}+\frac{G^2}{4}-H\\\\ \text{or: } r &= \sqrt{\frac{E^2}{4} +\frac{F^2}{4}+\frac{G^2}{4}-H} \end{align*}
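To make these relations concrete, here is a minimal Python sketch (the function name and the sample coefficients are ours, chosen so they match the earlier example sphere centered at (3, 7, 5) with radius 10):

```python
import math

def sphere_from_expanded(E, F, G, H):
    """Return (center, radius) for x^2 + y^2 + z^2 + Ex + Fy + Gz + H = 0."""
    center = (-E / 2, -F / 2, -G / 2)
    r_squared = (E**2 + F**2 + G**2) / 4 - H
    if r_squared <= 0:
        raise ValueError("coefficients do not describe a real sphere")
    return center, math.sqrt(r_squared)

# Expanding (x-3)^2 + (y-7)^2 + (z-5)^2 = 100 gives
# E = -6, F = -14, G = -10, H = 9 + 49 + 25 - 100 = -17
center, radius = sphere_from_expanded(-6, -14, -10, -17)
print(center, radius)  # (3.0, 7.0, 5.0) 10.0
```

The guard on `r_squared` catches coefficient sets whose "radius squared" comes out non-positive, which describe a single point or no real sphere at all.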
In this sphere equation calculator, you can simply select the expanded form option as the type of the equation and enter the corresponding
E
,
F
,
G
, and
H
values to obtain the equation in the standard form, its center and radius, and other relevant information.
Now that you know what the equation of a sphere is, let's discuss how to obtain it in cases where you don't have the necessary parameters readily available. Suppose you only know the end-points of any one of the diameters of the sphere and have no information on the center point or radius. You still have a way to frame its equation.
Sphere's diameter AB, with endpoints
A(x_1,y_1,z_1)
and
B(x_2,y_2,z_2)
.
Say the diameter
AB
of the sphere has the endpoints
A(x_1,y_1,z_1)
and
B(x_2,y_2,z_2)
. Clearly, the center
C(h,k,l)
is given by the mid-point of
AB
:
\qquad \begin{align*} h = \frac{x_1+x_2}{2}\\\\ k = \frac{y_1+y_2}{2}\\\\ l = \frac{z_1+z_2}{2}\\\\ \end{align*}
Once we know the center, we can obtain the radius as the distance
CA
or
CB
through the distance formula:
\scriptsize \quad r = \sqrt{(x_1-h)^2 + (y_1-k)^2+ (z_1-l)^2}
We can then insert this center and radius in the standard sphere equation.
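As a quick illustration of this procedure — midpoint for the center, distance formula for the radius — here is a short Python sketch with made-up endpoint coordinates:

```python
import math

def sphere_from_diameter(a, b):
    """Given diameter endpoints a and b (3-tuples), return (center, radius)."""
    h, k, l = ((a[i] + b[i]) / 2 for i in range(3))  # midpoint of AB
    r = math.dist(a, (h, k, l))                       # distance from A to the center
    return (h, k, l), r

center, r = sphere_from_diameter((1, 2, 3), (5, 6, 7))
# center = (3.0, 4.0, 5.0), r = sqrt(4 + 4 + 4) ≈ 3.46
```

Plugging `center` and `r` into the standard form then gives the sphere's equation directly.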
Sphere with center
(h,k,l)
and a point
P(p_x,p_y,p_z)
on its surface.
Similar to the case above, if we know the center
(h,k,l)
and any point
P(p_x,p_y,p_z)
on the sphere's surface, we can obtain the radius as the distance between them and then insert the center and radius into the standard sphere equation.
|
|The word 'marginal' should make you immediately think of a derivative. In this case, the marginal is just the partial derivative with respect to a particular variable.
|The teacher has also added the additional restriction that you should not leave your answer with negative exponents.
We will compute the partial derivatives of the function
f(x,y)={\frac {2xy}{x-y}}.
Since the numerator contains the product
xy
and the whole expression is a quotient, we will need the quotient rule
{\frac {\partial }{\partial x}}\left({\frac {f(x)}{g(x)}}\right)={\frac {f'(x)g(x)-g'(x)f(x)}{g(x)^{2}}},
for example,
{\frac {\partial }{\partial x}}\left({\frac {x^{2}}{x+1}}\right)={\frac {2x(x+1)-x^{2}}{(x+1)^{2}}}.
We will also need the product rule
{\frac {\partial }{\partial x}}f(x)g(x)=f'(x)g(x)+g'(x)f(x),
for example,
{\frac {\partial }{\partial x}}[x(x+1)]=(x+1)+x.
When differentiating with respect to
x
, we treat
y
as a constant; in particular,
{\frac {\partial }{\partial x}}xy\,=\,y{\frac {\partial }{\partial x}}x\,=\,y.
The partial derivative of
f(x,y)
with respect to
x
(holding
y
constant) is
{\begin{array}{rcl}\displaystyle {{\frac {\partial }{\partial x}}f(x,y)}&=&\displaystyle {{\frac {\partial }{\partial x}}\left({\frac {2xy}{x-y}}\right)}\\&&\\&=&\displaystyle {\frac {2y(x-y)-2xy}{(x-y)^{2}}}\\&&\\&=&\displaystyle {{\frac {-2y^{2}}{(x-y)^{2}}}.}\end{array}}
and the partial derivative of
f(x,y)
with respect to
y
(holding
x
constant) is
{\begin{array}{rcl}\displaystyle {{\frac {\partial }{\partial y}}f(x,y)}&=&\displaystyle {{\frac {\partial }{\partial y}}\left({\frac {2xy}{x-y}}\right)}\\&&\\&=&\displaystyle {\frac {2x(x-y)+2xy}{(x-y)^{2}}}\\&&\\&=&\displaystyle {{\frac {2x^{2}}{(x-y)^{2}}}.}\end{array}}
The second-order partial derivatives follow by differentiating again:
{\begin{array}{rcl}\displaystyle {{\frac {\partial ^{2}f(x,y)}{\partial x^{2}}}\,=\,{\frac {\partial }{\partial x}}\left({\frac {\partial f(x,y)}{\partial x}}\right)}&=&\displaystyle {{\frac {\partial }{\partial x}}\left({\frac {-2y^{2}}{(x-y)^{2}}}\right)}\\&&\\&=&\displaystyle {\frac {0-2(x-y)(-2y^{2})}{(x-y)^{4}}}\\&&\\&=&\displaystyle {{\frac {4xy^{2}-4y^{3}}{(x-y)^{4}}}.}\end{array}}
{\begin{array}{rcl}\displaystyle {{\frac {\partial ^{2}f(x,y)}{\partial y\partial x}}\,=\,{\frac {\partial }{\partial y}}\left({\frac {\partial f(x,y)}{\partial x}}\right)}&=&\displaystyle {{\frac {\partial }{\partial y}}\left({\frac {-2y^{2}}{(x-y)^{2}}}\right)}\\&&\\&=&\displaystyle {\frac {-4y(x-y)^{2}-4y^{2}(x-y)}{(x-y)^{4}}}\\&&\\&=&\displaystyle {\frac {-4y(x^{2}-2xy+y^{2})-4xy^{2}+4y^{3}}{(x-y)^{4}}}\\&=&\displaystyle {{\frac {4xy^{2}-4x^{2}y}{(x-y)^{4}}}.}\end{array}}
{\begin{array}{rcl}\displaystyle {{\frac {\partial ^{2}f(x,y)}{\partial x\partial y}}\,=\,{\frac {\partial }{\partial x}}\left({\frac {\partial f(x,y)}{\partial y}}\right)}&=&\displaystyle {{\frac {\partial }{\partial x}}\left({\frac {2x^{2}}{(x-y)^{2}}}\right)}\\&&\\&=&\displaystyle {\frac {4x(x-y)^{2}-2(x-y)2x^{2}}{(x-y)^{4}}}\\&&\\&=&\displaystyle {\frac {4x(x^{2}-2xy+y^{2})-4x^{3}+4x^{2}y}{(x-y)^{4}}}\\&&\\&=&\displaystyle {{\frac {4xy^{2}-4x^{2}y}{(x-y)^{4}}}.}\end{array}}
{\begin{array}{rcl}\displaystyle {{\frac {\partial ^{2}f(x,y)}{\partial y^{2}}}\,=\,{\frac {\partial }{\partial y}}\left({\frac {\partial f(x,y)}{\partial y}}\right)}&=&\displaystyle {{\frac {\partial }{\partial y}}\left({\frac {2x^{2}}{(x-y)^{2}}}\right)}\\&&\\&=&\displaystyle {\frac {0+2(x-y)(2x^{2})}{(x-y)^{4}}}\\&&\\&=&\displaystyle {{\frac {4x^{3}-4x^{2}y}{(x-y)^{4}}}.}\end{array}}
In summary, the first-order partial derivatives are
\displaystyle {{\frac {\partial }{\partial x}}f(x,y)={\frac {-2y^{2}}{(x-y)^{2}}},\qquad {\frac {\partial }{\partial y}}f(x,y)={\frac {2x^{2}}{(x-y)^{2}}}.}
and the second-order partial derivatives are
{\frac {\partial ^{2}f(x,y)}{\partial x^{2}}}\,=\,{\frac {4xy^{2}-4y^{3}}{(x-y)^{4}}},\qquad {\frac {\partial ^{2}f(x,y)}{\partial x\partial y}}\,=\,{\frac {\partial ^{2}f(x,y)}{\partial y\partial x}}\,=\,{\frac {4xy^{2}-4x^{2}y}{(x-y)^{4}}},\qquad {\frac {\partial ^{2}f(x,y)}{\partial y^{2}}}\,=\,{\frac {4x^{3}-4x^{2}y}{(x-y)^{4}}}.
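A quick numerical sanity check of the first-order results, using central differences in plain Python (the sample point (3, 1) is ours — any point with x ≠ y works):

```python
# Spot-check the partial derivatives of f(x, y) = 2xy/(x - y)
# against central-difference approximations (no external libraries).
def f(x, y):
    return 2 * x * y / (x - y)

def fx(x, y):  # analytic  ∂f/∂x = -2y^2/(x-y)^2
    return -2 * y**2 / (x - y) ** 2

def fy(x, y):  # analytic  ∂f/∂y =  2x^2/(x-y)^2
    return 2 * x**2 / (x - y) ** 2

def central_diff(g, x, y, wrt="x", h=1e-6):
    """Symmetric finite-difference approximation of a partial derivative."""
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 3.0, 1.0   # here fx = -0.5 and fy = 4.5 exactly
assert abs(central_diff(f, x0, y0, "x") - fx(x0, y0)) < 1e-5
assert abs(central_diff(f, x0, y0, "y") - fy(x0, y0)) < 1e-5
```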
|
Relative Strength Index Calculator (RSI)
How to interpret the relative strength index?
What is the RSI formula?
The RSI formula – further calculation
Real example of the RSI indicator and RSI divergence on stocks
The relative strength index calculator (RSI) is an excellent trading tool that can tell you when a stock is overbought and ready for a price decline or oversold and prepared for a price increase. The RSI indicator can help you know when to buy or sell a stock. This article will cover what the relative strength index is, the RSI formula, and how to use this incredible RSI calculator.
The RSI is a trading momentum indicator that averages the price gains and losses during a specific trading period to see if the stock is more likely to keep going up/down or change its direction.
Bonds, commodities, and stocks increase and decrease in price following business cycles of economic growth and shrinkage. We can identify the point where the cycle changes from the positive side to the negative side (or vice versa) by measuring momentum, allowing you to sell the business when it is about to crash.
The daily RSI indicator measures momentum, showing if a reversal in price is about to come or the current trend can last longer. It considers the specified period's daily stock gains and losses. Then, it provides a value between 0 and 100, with the equilibrium level at 50.
In trading, whenever the RSI is over 50, the momentum is positive, and the price will likely increase or continue increasing. Conversely, when the RSI is under 50, it shows that the security is losing momentum, and the price is likely to decrease or continue decreasing. Of course, the further the RSI moves from the equilibrium point, the more likely a trend reversal becomes.
For a 14-day period RSI, the upper and lower levels will be:
RSI = 70, meaning that the stock is overbought for any RSI above this level, and the upward price trend is about to reverse, causing losses if you do not sell.
RSI = 30, indicating that the stock is oversold for any RSI below this level, with the downward price trend ready to increase, generating profits if you buy.
Between October 30th, 2020, and November 12th, 2020, several investors sold their Synnex positions, thinking that the stock was about to crash because it went up too much. The stock made a return of 11.46% during that period.
But a look at the relative strength index on November 12th (RSI = 57.64) showed that the stock still had the momentum to keep raising its price (RSI < 70, not overbought). Thus, an investor using our relative strength index calculator would have kept holding the stock even until the current last trading day, in which the price reached 162.82, or a 23.69% positive gain—almost a double return on investment.
Let's take a look at an example. See the following picture:
Investors calculate the RSI indicator using the average changes in closing prices during the specified period, according to the following formulas:
RS stands for relative strength, and we calculate it with the average gains and average losses during the specified period (typically 14 days when we use RSI for trading).
Note that we take the difference in price between subsequent days, priceₜ₊₁ - priceₜ, and check whether the result is positive or negative. If:
priceₜ₊₁ - priceₜ > 0, we store the difference in the Gains column and a 0 in the Losses column.
priceₜ₊₁ - priceₜ < 0, we store the difference in the Losses column and a 0 in the Gains column.
Afterward, we sum the values from each column:
Sum of gains = 18.71
Sum of losses = 11.56 (note we use absolute values)
and average them over the 14 data points (notice we need 15 prices to get 14 values of price change):
Average gain = 18.71/14 = 1.34
Average loss = 11.56/14 = 0.83
Consequently, and by using our cool relative strength index calculator:
RS = 1.34 / 0.83 = 1.62
RSI = 100 - 100/(1 + 1.62) = 61.81
The chart above shows this same RSI value for the stock at the beginning of the curve.
Congratulations, you got your first RSI for a stock. Now, we will build the curve to analyze RSI divergence with price.
Considering your oldest data point as the 1st closing price, you will get the first RSI on the 15th closing price (see table above). Then, for calculating the 2nd RSI point, you will need an extra closing price.
New average = (Previous average × 13 + current price change) / 14
This procedure applies to gains and losses. You can follow the chart above for October 21st and 22nd. See the following example for October 21st.
New positive average = (1.34 × 13 + 0) / 14 = 1.24
New negative average = (0.83 × 13 + 4.37) / 14 = 1.08
Then, the new RS:
RS = 1.24 / 1.08 ≈ 1.15
And the new RSI:
RSI = 100 - 100 / (1 + 1.15) = 53.49
You can verify these values in our relative strength index calculator and the RSI chart above.
Since this is a momentum indicator, we need to combine it with other trading indicators to get reliable buy/sell signals. In this case, we will use the RSI together with the simple moving average (SMA) of 7, 20, and 50 days.
The SMA's purpose will be to show a price trend reversal based on the change of RSI trend. When prices (light blue line below) cross the SMA, a price trend change might be incoming. The older the SMA we cross, the more significant the new trend change.
Let's check the SPY ETF during March 2020:
The maximum value (338.5 USD) occurred on February 19th, 2020, and from that point, it started to go down, causing millions of dollars in losses for investors. Would it have been possible to avoid all the losses? Together with the 7-SMA, 20-SMA, and 50-SMA, our relative strength index calculator says yes.
Before starting, you must notice that we use RSI (7 days) with upper/lower 80-20 bands.
Check trend line
\#1
in the RSI section. Notice that it has a downward tendency while the price rises: this event is a bearish RSI divergence. On the other hand, if the RSI gains momentum while the price is still going down, we call it a bullish RSI divergence. In either case, it shows that the current price trend is weakening and about to reverse.
See how the price line crosses below lines 7-SMA and 20-SMA at points
Z_1
and
Z_2
, respectively. Note also that the RSI indicator was below 50 at that time. There we had our first sell signal.
Third, we reach the oversold area (see
a
in the RSI chart), meaning that the trend might change soon, but as we said, we need price confirmation too. Now, check point
A
. Price breaches the long-term 50-SMA, indicating the selling will continue. Here we fail to get price confirmation. Not a buy signal yet.
See the price chart, purple horizontal line
B
, where the price arrives the first time, bounces, and then goes through it. The very fact that it breaches line
B
indicates a crash. Here, at 297 USD, we definitely must have sold our positions.
By keeping an eye on the RSI, we noticed that it has higher low points represented by trend line
\#2
. Then, we reach and pass the equilibrium zone, meaning the price has upward momentum. After that, we notice how the RSI breaks the old trend line
\#1
extension at point
c
, indicating that the loss of momentum, which started at the beginning of the year, might have ended. We also see how the price starts increasing again.
We can notice that at point
C
, the short-term 7-SMA crosses the medium-term 20-SMA, confirming the buy signal at 278 USD.
Finally, in the RSI chart, at point
d
, we reach the overbought section, indicating that a reversal in the trend might come, and again, it does. However, at point
D
, the price does not cross the 20-SMA like before, indicating a false selling signal.
In conclusion, by using both indicators, we would have avoided the biggest crash of the last ten years and enjoyed all the bullish market returns since then. This is why we built our relative strength index calculator!
The relative strength index is a momentum indicator that averages price gains and losses over a period. Investors use it with price indicators because it provides reliable stock buy or sell signals.
Is an RSI over 70 good or bad?
An RSI value over 70 indicates that the price trend might be about to reverse and generate losses if you buy. However, it depends on the price timeframe you are checking: An RSI over 70 when using monthly prices shows much more probability of price trend reversal than an RSI over 70 using hourly prices.
Is an RSI below 30 good or bad?
An RSI value below 30 indicates that the price trend might be about to reverse and generate profits if you buy. However, it depends on the price timeframe you are checking: An RSI below 30 when using monthly prices shows much more probability of price trend reversal than an RSI below 30 using hourly prices.
How do I calculate RSI?
To find the relative strength index:
Get the historical values for the last 14 days.
Calculate price change between subsequent day prices. Let's say price on March 5th minus price on March 4th.
Separate the positive price change from the negative ones.
Average positive price changes. Do the same for the negative ones.
Calculate relative strength (RS) by dividing the average of positive price changes by the average of negative price changes.
Obtain RSI by subtracting 100/(1 + RS) from 100.
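The whole recipe, including Wilder's smoothing for subsequent points, can be sketched in a few lines of Python (an illustrative implementation, not the exact code behind the calculator):

```python
def rsi(prices, period=14):
    """Relative strength index from a list of closing prices (oldest first).

    Uses the simple average for the first RSI value and Wilder's smoothing
    (new_avg = (prev_avg*(period-1) + change)/period) for every day after.
    """
    if len(prices) < period + 1:
        raise ValueError("need at least period + 1 prices")
    changes = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(c, 0) for c in changes[:period]]
    losses = [max(-c, 0) for c in changes[:period]]
    avg_gain = sum(gains) / period
    avg_loss = sum(losses) / period
    for c in changes[period:]:
        avg_gain = (avg_gain * (period - 1) + max(c, 0)) / period
        avg_loss = (avg_loss * (period - 1) + max(-c, 0)) / period
    if avg_loss == 0:          # no losses at all: RSI saturates at 100
        return 100.0
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

A strictly rising price series returns 100, a strictly falling one returns 0, and mixed series land in between, matching the interpretation above.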
Closing price of security
Enter at least "Period + 2" price values for building the RSI curve.
|
Staking - Olympus
Staking is the primary value accrual strategy of Olympus. Stakers stake their OHM on the Olympus website to earn rebase rewards. The rebase rewards are minted every 2200 Ethereum blocks (8 hours) as long as there is a corresponding equivalent of 1 DAI in the Treasury to back it. This is guaranteed on the smart-contract level.
Runway displays the number of days Olympus would maintain its current rate of emissions without any inflows to the Treasury or changes to the reward rate. Staking is a passive, long-term strategy. The increase in your stake of OHM translates into a constantly falling cost basis converging on zero. This means even if the market price of OHM drops below your initial purchase price, given a long enough staking period, the increase in your staked OHM balance should eventually outpace the fall in price.
When you stake, you lock OHM and receive an equal amount of sOHM. Your sOHM balance rebases up automatically at the end of every epoch. sOHM is transferable and therefore composable with other DeFi protocols.
When you unstake, you burn sOHM and receive an equal amount of OHM. Unstaking means the user will forfeit the upcoming rebase reward. Note that the forfeited reward is only applicable to the unstaked amount; the remaining staked OHM (if any) will continue to receive rebase rewards.
What is the benefit of staking OHM?
By staking their OHM, users opt in to actively participate in the Olympus network and become eligible participants in governance. By participating in the network, stakers benefit from a rebasing mechanism that ensures their position scales with OHM emissions and the overall growth of the network.
Every 8 hours (2200 Ethereum blocks), the protocol uses two mathematical formulas to calculate the network-wide distribution.
sOHM/gOHM explainer
What is the relationship between staking and reward rate?
The level of OHM staking rewards is determined by the overall reward rate, which was codified by the community (via the OIP-18 vote). The reward yield, which is a function of the reward rate, also depends on how many other individuals are staking their OHM. When more individuals are staking, the reward yield declines; the opposite occurs when the reward rate increases.
Olympus presents our current sOHM (staked OHM) reward yield as an illustrative annual percentage yield (APY) on our app. We do this because sOHM rebases several times a day (about every 8 hours). Given this, rebases have an effect analogous to compounding interest.
The APYs presented by Olympus are a representation of the current rebase rate, number of stakers and existing supply. These calculations are floating and the current rates are not a guarantee of future returns.
APY = ( 1 + rewardYield )^{1095} - 1
rewardYield = OHM_{distributed} / OHM_{totalStaked}
The number of OHM distributed to the staking contract is calculated from OHM total supply using the following equation:
OHM_{distributed} = OHM_{totalSupply} \times rewardRate
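Putting the three formulas together, a small Python sketch (the 0.3% reward yield per rebase is a made-up figure for illustration only; real yields vary):

```python
def reward_yield(ohm_distributed, ohm_total_staked):
    """Yield per rebase: OHM distributed over OHM staked."""
    return ohm_distributed / ohm_total_staked

def apy(reward_yield_per_rebase, rebases_per_year=1095):
    """APY from compounding the per-rebase yield.

    3 rebases per day (one every 8 hours) * 365 days = 1095 rebases/year,
    which is where the exponent 1095 in the formula comes from.
    """
    return (1 + reward_yield_per_rebase) ** rebases_per_year - 1

# Hypothetical: a 0.3% reward yield per rebase compounds to a very large APY
print(f"APY ≈ {apy(0.003):.0%}")
```

This also makes clear why the docs stress that the displayed APY is illustrative: it assumes the current per-rebase yield holds for a full year.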
|
Klein bottle - Wikipedia
Non-orientable mathematical surface
Dissecting a Klein bottle into halves along its plane of symmetry results in two mirror image Möbius strips, i.e. one with a left-handed half-twist and the other with a right-handed half-twist (one of these is pictured on the right). Remember that the intersection pictured is not really there.[5]
Simple-closed curves
The figure 8 immersion
{\begin{aligned}x&=\left(r+\cos {\frac {\theta }{2}}\sin v-\sin {\frac {\theta }{2}}\sin 2v\right)\cos \theta \\y&=\left(r+\cos {\frac {\theta }{2}}\sin v-\sin {\frac {\theta }{2}}\sin 2v\right)\sin \theta \\z&=\sin {\frac {\theta }{2}}\sin v+\cos {\frac {\theta }{2}}\sin 2v\end{aligned}}
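For the curious, this parametrization is easy to evaluate numerically; a small Python sketch (the function name is ours, and the parameter r sets the radius of the central circle):

```python
import math

def klein_figure8(theta, v, r=2.0):
    """Point on the figure-8 immersion of the Klein bottle in R^3.

    theta and v range over [0, 2*pi); r controls the radius of the
    central circle around which the figure-8 cross-section sweeps.
    """
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    common = r + c * math.sin(v) - s * math.sin(2 * v)
    x = common * math.cos(theta)
    y = common * math.sin(theta)
    z = s * math.sin(v) + c * math.sin(2 * v)
    return x, y, z
```

Sampling theta and v over a grid and plotting the resulting (x, y, z) points reproduces the familiar self-intersecting figure-8 picture.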
4-D non-intersecting
{\begin{aligned}x&=R\left(\cos {\frac {\theta }{2}}\cos v-\sin {\frac {\theta }{2}}\sin 2v\right)\\y&=R\left(\sin {\frac {\theta }{2}}\cos v+\cos {\frac {\theta }{2}}\sin 2v\right)\\z&=P\cos \theta \left(1+{\epsilon }\sin v\right)\\w&=P\sin \theta \left(1+{\epsilon }\sin v\right)\end{aligned}}
3D pinched torus / 4D Möbius tube
{\begin{aligned}x(\theta ,\varphi )&=(R+r\cos \theta )\cos {\varphi }\\y(\theta ,\varphi )&=(R+r\cos \theta )\sin {\varphi }\\z(\theta ,\varphi )&=r\sin \theta \cos \left({\frac {\varphi }{2}}\right)\\w(\theta ,\varphi )&=r\sin \theta \sin \left({\frac {\varphi }{2}}\right)\end{aligned}}
Bottle shape
{\begin{aligned}x(u,v)=-&{\frac {2}{15}}\cos u\left(3\cos {v}-30\sin {u}+90\cos ^{4}{u}\sin {u}\right.-\\&\left.60\cos ^{6}{u}\sin {u}+5\cos {u}\cos {v}\sin {u}\right)\\y(u,v)=-&{\frac {1}{15}}\sin u\left(3\cos {v}-3\cos ^{2}{u}\cos {v}-48\cos ^{4}{u}\cos {v}+48\cos ^{6}{u}\cos {v}\right.-\\&60\sin {u}+5\cos {u}\cos {v}\sin {u}-5\cos ^{3}{u}\cos {v}\sin {u}-\\&\left.80\cos ^{5}{u}\cos {v}\sin {u}+80\cos ^{7}{u}\cos {v}\sin {u}\right)\\z(u,v)=&{\frac {2}{15}}\left(3+5\cos {u}\sin {u}\right)\sin {v}\end{aligned}}
Homotopy classes
D^{2}\times S^{1}.
Klein surface
^ Cutting a Klein Bottle in Half – Numberphile on YouTube
|
Directed line segment partitioning and formula
How do I partition a line segment with a given ratio?
How to use this ratios of directed line segments calculator
The ratios of directed line segments calculator will help you calculate the coordinates of the point that partitions a line segment in a given proportion. This article will explore what a directed line segment is, how to partition a line segment in a given ratio with some examples, the segment partition formula, and some frequently asked questions.
A line segment
AB
is a part of a line bound by two endpoints
A
and
B
, with
A \neq B
. A directed line segment
\overrightharpoon{AB}
is a line segment with a definite direction – it is the line segment directed from
A
to
B
.
A directed line segment has both length and direction:
\overrightharpoon{AB}
has the same length as the line segment
AB
, and is along the direction
A\rightarrow B
. Note that
\overrightharpoon{AB} \neq \overrightharpoon{BA}
.
While the line segment
AB
is the same as
BA
, the same is not true for the directed line segment:
\overrightharpoon{AB}
is directed from
A
to
B
, while
\overrightharpoon{BA}
is directed from
B
to
A
.
🔎 Notice the similarities between a directed line segment and a vector? Not all directed line segments are vectors, but you can use a directed line segment to geometrically represent a vector with the same direction if the length of the line segment matches the magnitude of the vector.
In the next section, we shall answer that burning question in your mind: how do you divide a segment into given ratios?
A point
P
lying on the directed line segment
\overrightharpoon{AB}
will divide it into two line segments. There are two ways to divide a line segment:
Internally, when the point
P
lies somewhere within the segment
\overrightharpoon{AB}
; or
Externally, when the point
P
lies somewhere on the extended line segment
\overrightharpoon{AB}
.
Internal partition of
\overrightharpoon{AB}
into the ratio
m:n
by a point
P(p_x,p_y)
on
\overrightharpoon{AB}
.
To partition the line segment
\overrightharpoon{AB}
internally into the ratio
m:n
P(p_x,p_y)
must lie on
\overrightharpoon{AB}
such that it is
\frac{m}{m+n}
A
\frac{n}{m+n}
B
External partition of $\overrightharpoon{AB}$ into the ratio $m:n$ by a point $P(p_x,p_y)$ lying on the extended line segment $\overrightharpoon{AB}$.
On the other hand, to partition the line segment $\overrightharpoon{AB}$ externally into the ratio $m:n$, the point $P(p_x,p_y)$ must lie on the extended line segment $\overrightharpoon{AB}$ such that its distance from $A$ is $\frac{m}{m-n}$ of the segment's length and its distance from $B$ is $\frac{n}{m-n}$ of the segment's length.
Now that you know the concept of breaking a line segment into a ratio, let's put together a formula for a directed line segment divided by any point $P$.
For the internal partition of $\overrightharpoon{AB}$:
\scriptsize P(p_x,p_y) = \left(\frac{mx_2 + nx_1}{m+n}, \frac{my_2 + ny_1}{m+n}\right)
And for the external partition of $\overrightharpoon{AB}$:
\scriptsize P(p_x,p_y) = \left(\frac{mx_2 - nx_1}{m-n}, \frac{my_2 - ny_1}{m-n}\right)
where:
$P$ – Any point that partitions the directed line segment $\overrightharpoon{AB}$;
$p_x,p_y$ – The x- and y-coordinates of the point $P$;
$m,n$ – The ratio $m:n$ into which the point $P$ divides $\overrightharpoon{AB}$;
$x_1,y_1$ – The x- and y-coordinates of the endpoint $A$ of $\overrightharpoon{AB}$; and
$x_2,y_2$ – The x- and y-coordinates of the endpoint $B$ of $\overrightharpoon{AB}$.
❗ Keep in mind that in the case of external division of the line segment, the ratio terms $m$ and $n$ cannot be equal; they must be distinct to avoid a division by zero in the formula.
Now that you know the ratio of line segments formula, let's discuss it further, along with some examples of segment partition calculation.
To find the point P(pₓ, pᵧ) that internally divides the line segment AB into the ratio m:n, follow these steps:
Calculate pₓ using pₓ = (mx₂ + nx₁)/(m + n), where x₁ and x₂ are the x-coordinates of A and B respectively.
Determine pᵧ using pᵧ = (my₂ + ny₁)/(m + n), where y₁ and y₂ are the y-coordinates of A and B respectively.
To find the point P(pₓ, pᵧ) that externally divides the line segment AB into the ratio m:n, follow these steps:
Compute pₓ using pₓ = (mx₂ - nx₁)/(m - n), where x₁ and x₂ are the x-coordinates of A and B respectively.
Find pᵧ using pᵧ = (my₂ - ny₁)/(m - n), where y₁ and y₂ are the y-coordinates of A and B respectively.
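The internal and external steps above can be sketched in a single Python helper (the function and parameter names are my own, not part of the calculator):

```python
def divide_segment(ax, ay, bx, by, m, n, external=False):
    """Point P dividing the directed segment from A(ax, ay) to B(bx, by)
    in the ratio m:n, internally by default or externally on request."""
    if external:
        if m == n:
            # external division by an equal ratio would divide by zero
            raise ValueError("external division requires m != n")
        d = m - n
        return ((m * bx - n * ax) / d, (m * by - n * ay) / d)
    d = m + n
    return ((m * bx + n * ax) / d, (m * by + n * ay) / d)
```

For instance, with A(1, 2), B(4, 6), and the ratio 2:3, it returns (2.2, 3.6) for the internal partition and (-5.0, -6.0) for the external one, matching the worked example below.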
For example, consider a line segment $\overrightharpoon{AB}$ with the endpoints $A(1,2)$ and $B(4,6)$. The direction of this segment would be from $A$ to $B$. To find a point that divides this segment internally in the ratio $2:3$, we can use the internal partition formula as follows:
\scriptsize \begin{align*} P(p_x, p_y) &= \left(\frac{2 \cdot 4 + 3 \cdot 1}{2+3}, \frac{2 \cdot 6 + 3\cdot 2}{2+3}\right) \\\\ &= \left(\frac{11}{5}, \frac{18}{5}\right) \\\\ &= \left(2.2, 3.6\right) \end{align*}
Hence, the point $P(2.2, 3.6)$ divides $\overrightharpoon{AB}$ internally in the ratio $2:3$.
Note that to say that the point $P(2.2,3.6)$ divides $\overrightharpoon{AB}$ in the ratio $2:3$ is the same as saying that the point $P(2.2,3.6)$ is $\frac{2}{5}$ of the segment's length away from the endpoint $A(1,2)$ and $\frac{3}{5}$ away from the endpoint $B(4,6)$.
Now, if we want to divide the same line segment externally in the same ratio, then we shall employ the formula for external partition of the line segment:
\scriptsize \begin{align*} P(p_x, p_y) &= \left(\frac{2 \cdot 4 - 3 \cdot 1}{2-3}, \frac{2 \cdot 6 - 3\cdot 2}{2-3}\right) \\\\ &= \left(\frac{8-3}{-1}, \frac{12-6}{-1}\right) \\\\ &= \left(-5,-6\right) \end{align*}
The point $P(-5,-6)$ divides $\overrightharpoon{AB}$ externally in the ratio $2:3$.
See how easy it is to partition a line segment in a given ratio? 😉 Go ahead and try some practice problems and master this method! You can always verify your results using this calculator to divide line segments.
This ratios of directed line segments calculator helps you find the point that divides a directed line segment in a given ratio, or the ratio in which a given point splits the line segment.
Choose the type of partition between internal and external in the The line is partitioned... field. By default, it is set to internal partition.
Enter the coordinates of the endpoints of the segment. Ensure that you're getting the direction of the line segment correct. In this calculator, the direction is always from $A(x_1,y_1)$ to $B(x_2,y_2)$.
This step depends on whether you want to find the point or the ratio:
If the ratio is the given value, enter it in the corresponding fields, and the coordinates of the point will appear in their respective fields, along with a helpful graph. Otherwise, leave the ratio fields empty.
If the coordinates of the point are the given value, enter these coordinates in the corresponding fields, and the resulting ratio will appear at the bottom.
You now have a simple tool to help you calculate the partitioning of a line segment anytime you need! You can also use our other calculators to find out more interesting math:
How do I find a point that divides a segment in half?
If you know the coordinates of the endpoints of the line segment, you can easily find its mid-point (xₘ, yₘ) using these steps:
Calculate the average of the x-coordinates of the end-points to get the x-coordinate of the mid-point. xₘ = (x₁ + x₂)/2.
Determine the average of the y-coordinates of the end-points to obtain the y-coordinate of the mid-point. yₘ = (y₁ + y₂)/2.
Verify these results using our Midpoint Calculator or Ratios of Directed Line Segments Calculator.
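The two averaging steps above can be sketched in Python (the function name is illustrative):

```python
def midpoint(x1, y1, x2, y2):
    """Midpoint of the segment from (x1, y1) to (x2, y2):
    the average of the x-coordinates and of the y-coordinates."""
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

For A(1, 2) and B(4, 6) this gives (2.5, 4.0).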
How can I divide a line segment into three equal parts?
To divide a line segment AB into three equal parts, you need to find two points P(pₓ, pᵧ) and Q(qₓ, qᵧ) on AB, such that P divides AB in the ratio 1:2 and Q divides it in the ratio 2:1:
Calculate the x-coordinate pₓ of the point P using the formula pₓ = (x₂ + 2x₁)/3, where x₁ and x₂ are the x-coordinates of A and B respectively.
Compute the y-coordinate pᵧ of the point P using pᵧ = (y₂ + 2y₁)/3, where y₁ and y₂ are the y-coordinates of A and B respectively.
Determine the x-coordinate qₓ of the point Q using qₓ = (2x₂ + x₁)/3.
Find the y-coordinate qᵧ of the point Q using qᵧ = (2y₂ + y₁)/3.
Put these coordinates together to get the points P(pₓ, pᵧ) and Q(qₓ, qᵧ).
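Trisection amounts to two applications of the internal section formula, once with ratio 1:2 and once with ratio 2:1. A brief sketch (names are my own):

```python
def internal_divide(ax, ay, bx, by, m, n):
    """Point dividing the segment from A to B internally in the ratio m:n."""
    return ((m * bx + n * ax) / (m + n), (m * by + n * ay) / (m + n))

def trisect(ax, ay, bx, by):
    """The two points splitting segment AB into three equal parts."""
    p = internal_divide(ax, ay, bx, by, 1, 2)  # one-third of the way from A
    q = internal_divide(ax, ay, bx, by, 2, 1)  # two-thirds of the way from A
    return p, q
```

For A(0, 0) and B(3, 6) this gives P(1, 2) and Q(2, 4).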
How to find a point lying one-third of the way from an endpoint?
A point P lying one-third of the way from the endpoint A on the line segment AB will divide it in the ratio 1:2. To find this point, follow these simple steps:
Calculate the x-coordinate pₓ of this point using the formula pₓ = (x₂ + 2x₁)/3, where x₁ and x₂ are the x-coordinates of A and B respectively.
Compute the y-coordinate pᵧ of this point using pᵧ = (y₂ + 2y₁)/3, where y₁ and y₂ are the y-coordinates of A and B respectively.
Put them together to obtain the desired point P(pₓ, pᵧ).
|
Section 59.61 (03R1): Brauer groups—The Stacks project
Section 59.61: Brauer groups (cite)
59.61 Brauer groups
Brauer groups of fields are defined using finite central simple algebras. In this section we review the relevant facts about Brauer groups, most of which are discussed in the chapter Brauer Groups, Section 11.1. For other references, see [SerreCorpsLocaux], [SerreGaloisCohomology] or [Weil].
Theorem 59.61.1. Let $K$ be a field. For a unital, associative (not necessarily commutative) $K$-algebra $A$ the following are equivalent
$A$ is finite central simple $K$-algebra,
$A$ is a finite dimensional $K$-vector space, $K$ is the center of $A$, and $A$ has no nontrivial two-sided ideal,
there exists $d \geq 1$ such that $A \otimes _ K \bar K \cong \text{Mat}(d \times d, \bar K)$,
there exists $d \geq 1$ such that $A \otimes _ K K^{sep} \cong \text{Mat}(d \times d, K^{sep})$,
there exist $d \geq 1$ and a finite Galois extension $K'/K$ such that $A \otimes _ K K' \cong \text{Mat}(d \times d, K')$,
there exist $n \geq 1$ and a finite central skew field $D$ over $K$ such that $A \cong \text{Mat}(n \times n, D)$.
The integer $d$ is called the degree of $A$.
Proof. This is a copy of Brauer Groups, Lemma 11.8.6. $\square$
Lemma 59.61.2. Let $A$ be a finite central simple algebra over $K$. Then
\[ \begin{matrix} A \otimes _ K A^{opp} & \longrightarrow & \text{End}_ K(A) \\ \ a \otimes a' & \longmapsto & (x \mapsto a x a') \end{matrix} \]
is an isomorphism of algebras over $K$.
Proof. See Brauer Groups, Lemma 11.4.10. $\square$
Definition 59.61.3. Two finite central simple algebras $A_1$ and $A_2$ over $K$ are called similar, or equivalent, if there exist $m, n \geq 1$ such that $\text{Mat}(n \times n, A_1) \cong \text{Mat}(m \times m, A_2)$. We write $A_1 \sim A_2$.
By Brauer Groups, Lemma 11.5.1 this is an equivalence relation.
Definition 59.61.4. Let $K$ be a field. The Brauer group of $K$ is the set $\text{Br} (K)$ of similarity classes of finite central simple algebras over $K$, endowed with the group law induced by tensor product (over $K$). The class of $A$ in $\text{Br}(K)$ is denoted by $[A]$. The neutral element is $[K] = [\text{Mat}(d \times d, K)]$ for any $d \geq 1$.
The previous lemma implies that inverses exist and that $-[A] = [A^{opp}]$. The Brauer group of a field is always torsion. In fact, we will see that $[A]$ has order dividing $\deg (A)$ for any finite central simple algebra $A$ (see Lemma 59.62.2). In general the Brauer group is not finitely generated, for example the Brauer group of a non-Archimedean local field is $\mathbf{Q}/\mathbf{Z}$. The Brauer group of $\mathbf{C}(x, y)$ is uncountable.
Lemma 59.61.5. Let $K$ be a field and let $K^{sep}$ be a separable algebraic closure. Then the set of isomorphism classes of central simple algebras of degree $d$ over $K$ is in bijection with the non-abelian cohomology $H^1(\text{Gal}(K^{sep}/K), \text{PGL}_ d(K^{sep}))$.
Sketch of proof. The Skolem-Noether theorem (see Brauer Groups, Theorem 11.6.1) implies that for any field $L$ the group $\text{Aut}_{L\text{-Algebras}}(\text{Mat}_ d(L))$ equals $\text{PGL}_ d(L)$. By Theorem 59.61.1, we see that central simple algebras of degree $d$ correspond to forms of the $K$-algebra $\text{Mat}_ d(K)$. Combined we see that isomorphism classes of degree $d$ central simple algebras correspond to elements of $H^1(\text{Gal}(K^{sep}/K), \text{PGL}_ d(K^{sep}))$. For more details on twisting, see for example [SilvermanEllipticCurves]. $\square$
If $A$ is a finite central simple algebra of degree $d$ over a field $K$, we denote $\xi _ A$ the corresponding cohomology class in $H^1(\text{Gal}(K^{sep}/K), \text{PGL}_ d(K^{sep}))$. Consider the short exact sequence
\[ 1 \to (K^{sep})^* \to \text{GL}_ d(K^{sep}) \to \text{PGL}_ d(K^{sep}) \to 1, \]
which gives rise to a long exact cohomology sequence (up to degree 2) with coboundary map
\[ \delta _ d : H ^1(\text{Gal}(K^{sep}/K), \text{PGL}_ d(K^{sep})) \longrightarrow H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*). \]
Explicitly, this is given as follows: if $\xi $ is a cohomology class represented by the 1-cocycle $(g_\sigma )$, then $\delta _ d(\xi )$ is the class of the 2-cocycle
\begin{equation} \label{etale-cohomology-equation-two-cocycle} (\sigma , \tau ) \longmapsto \tilde g_\sigma ^{-1} \tilde g_{\sigma \tau } \sigma (\tilde g_\tau ^{-1}) \in (K^{sep})^* \end{equation}
where $\tilde g_\sigma \in \text{GL}_ d(K^{sep})$ is a lift of $g_\sigma $. Using this we can make explicit the map
\[ \delta : \text{Br}(K) \longrightarrow H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*), \quad [A] \longmapsto \delta _{\deg A} (\xi _ A) \]
as follows. Assume $A$ has degree $d$ over $K$. Choose an isomorphism $\varphi : \text{Mat}_ d(K^{sep}) \to A \otimes _ K K^{sep}$. For $\sigma \in \text{Gal}(K^{sep}/K)$ choose an element $\tilde g_\sigma \in \text{GL}_ d(K^{sep})$ such that $\varphi ^{-1} \circ \sigma (\varphi )$ is equal to the map $x \mapsto \tilde g_\sigma x \tilde g_\sigma ^{-1}$. The class in $H^2$ is defined by the two cocycle (59.61.5.1).
Theorem 59.61.6. Let $K$ be a field with separable algebraic closure $K^{sep}$. The map $\delta : \text{Br}(K) \to H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*)$ defined above is a group isomorphism.
Sketch of proof. To prove that $\delta $ defines a group homomorphism, i.e., that $\delta (A \otimes _ K B) = \delta (A) + \delta (B)$, one computes directly with cocycles.
Injectivity of $\delta $. In the abelian case ($d = 1$), one has the identification
\[ H^1(\text{Gal}(K^{sep}/K), \text{GL}_ d(K^{sep})) = H_{\acute{e}tale}^1(\mathop{\mathrm{Spec}}(K), \text{GL}_ d(\mathcal{O})) \]
the latter of which is trivial by fpqc descent. If this were true in the non-abelian case, this would readily imply injectivity of $\delta $. (See [SGA4.5].) Rather, to prove this, one can reinterpret $\delta ([A])$ as the obstruction to the existence of a $K$-vector space $V$ with a left $A$-module structure and such that $\dim _ K V = \deg A$. In the case where $V$ exists, one has $A \cong \text{End}_ K(V)$.
For surjectivity, pick a cohomology class $\xi \in H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*)$, then there exists a finite Galois extension $K^{sep}/K'/K$ such that $\xi $ is the image of some $\xi ' \in H^2(\text{Gal}(K'|K), (K')^*)$. Then write down an explicit central simple algebra over $K$ using the data $K', \xi '$. $\square$
Comment #4087 by cristian d.gonzalez-aviles on April 01, 2019 at 09:38
Please, where can I find a proof of the statement
The Brauer group of C(x,y) is uncountable?
Comment #4092 by Pieter Belmans on April 02, 2019 at 04:01
This would follow from the Artin-Mumford sequence: for every 2-torsion point on a plane elliptic curve we get an element of $\mathrm{Br}(\mathbb{C}(x,y))$ (and they are different), but there are uncountably many of such curves because they are parametrised by the $j$-line.
Thanks Pieter! Another way to see this is to show that the quaternion algebras $Q_\lambda = \mathbf{C}(x, y)\langle u, v \rangle/(u^2 - x, v^2 - y + \lambda, uv + vu)$ are all pairwise not isomorphic for $\lambda \in \mathbf{C}$. To do this you can try to show that the quadratic extension of $\mathbf{C}(x, y)$ gotten by adjoining the square root of $y - \mu$ splits $Q_\lambda$ if and only if $\lambda = \mu$.
Comment #5066 by Tom Graber on April 30, 2020 at 19:01
The statement of Thm 03R7 here claims that the map is a group isomorphism, but the sketch of proof gives no hint of how to compare the group law on the two sides.
@#5066: Yes, this is missing. I have added a sentence saying this can be done directly using cocycles. See changes here.
|
Plant Nutrition | Biology for Majors II | Course Hero
Describe how symbiotic relationships help autotrophic plants obtain nutrients
Describe how heterotrophic plants obtain nutrients
Figure 1. Water is absorbed through the root hairs and moves up the xylem to the leaves.
Figure 2. Cellulose, the main structural component of the plant cell wall, makes up over thirty percent of plant matter. It is the most abundant organic compound on earth.
Figure 3. Nutrient deficiency is evident in the symptoms these plants show. This (a) grape tomato suffers from blossom end rot caused by calcium deficiency. The yellowing in this (b) Frangula alnus results from magnesium deficiency. Inadequate magnesium also leads to (c) intervenal chlorosis, seen here in a sweetgum leaf. This (d) palm is affected by potassium deficiency. (credit c: modification of work by Jim Conrad; credit d: modification of work by Malcolm Manners)
In Summary: Nutritional Requirements
\text{N}_2+16\text{ ATP}+8\text{e}^{-}+8\text{H}^{+}\longrightarrow2\text{NH}_{3}+16\text{ ADP}+16\text{Pi}+\text{H}_2
Figure 4. Some common edible legumes—like (a) peanuts, (b) beans, and (c) chickpeas—are able to interact symbiotically with soil bacteria that fix nitrogen. (credit a: modification of work by Jules Clancy; credit b: modification of work by USDA)
Soybeans are able to fix nitrogen in their roots, which are not harvested at the end of the growing season. The belowground nitrogen can be used in the next season by the corn.
Figure 5. Soybean roots contain (a) nitrogen-fixing nodules. Cells within the nodules are infected with Bradyrhyzobium japonicum, a rhizobia or “root-loving” bacterium. The bacteria are encased in (b) vesicles inside the cell, as can be seen in this transmission electron micrograph. (credit a: modification of work by USDA; credit b: modification of work by Louisa Howard, Dartmouth Electron Microscope Facility; scale-bar data from Matt Russell)
Figure 6. Root tips proliferate in the presence of mycorrhizal infection, which appears as off-white fuzz in this image. (credit: modification of work by Nilsson et al., BMC Bioinformatics 2005)
Figure 7. (a) The dodder is a holoparasite that penetrates the host’s vascular tissue and diverts nutrients for its own growth. Note that the vines of the dodder, which has white flowers, are beige. The dodder has no chlorophyll and cannot produce its own food. (b) Saprophytes, like this Dutchman’s pipe (Monotropa hypopitys), obtain their food from dead matter and do not have chlorophyll. (a credit: "Lalithamba"/Flickr; b credit: modification of work by Iwona Erskine-Kellie)
Figure 8. (a) Lichens, which often have symbiotic relationships with other plants, can sometimes be found growing on trees. (b) These epiphyte plants grow in the main greenhouse of the Jardin des Plantes in Paris. (credit: a "benketaro"/Flickr)
Figure 9. A Venus flytrap has specialized leaves to trap insects. (credit: "Selena N. B. H."/Flickr)
|
Zeller's congruence - Wikipedia
Algorithm to calculate the day of the week
Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date.
4 Implementations in software
4.1 Basic modification
4.2 Common simplification
For the Gregorian calendar, Zeller's congruence is
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +K+\left\lfloor {\frac {K}{4}}\right\rfloor +\left\lfloor {\frac {J}{4}}\right\rfloor -2J\right){\bmod {7}},}
for the Julian calendar it is
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +K+\left\lfloor {\frac {K}{4}}\right\rfloor +5-J\right){\bmod {7}},}
h is the day of the week (0 = Saturday, 1 = Sunday, 2 = Monday, ..., 6 = Friday)
q is the day of the month
m is the month (3 = March, 4 = April, 5 = May, ..., 14 = February)
K is the year of the century ($year \bmod 100$)
J is the zero-based century (actually $\lfloor year/100\rfloor$). For example, the zero-based centuries for 1995 and 2000 are 19 and 20 respectively (not to be confused with the common ordinal century enumeration which indicates 20th for both cases).
$\lfloor \cdot \rfloor$ is the floor function or integer part
mod is the modulo operation or remainder after division
In this algorithm January and February are counted as months 13 and 14 of the previous year. E.g. if it is 2 February 2010, the algorithm counts the date as the second day of the fourteenth month of 2009 (02/14/2009 in DD/MM/YYYY format).
For an ISO week date Day-of-Week d (1 = Monday to 7 = Sunday), use $d=((h+5)\bmod 7)+1$.
These formulas are based on the observation that the day of the week progresses in a predictable manner based upon each subpart of that date. Each term within the formula is used to calculate the offset needed to obtain the correct day of the week.
For the Gregorian calendar, the various parts of this formula can therefore be understood as follows:
$q$ represents the progression of the day of the week based on the day of the month, since each successive day results in an additional offset of 1 in the day of the week.
$K$ represents the progression of the day of the week based on the year. Assuming that each year is 365 days long, the same date on each succeeding year will be offset by a value of $365 \bmod 7 = 1$.
Since there are 366 days in each leap year, this needs to be accounted for by adding another day to the day of the week offset value. This is accomplished by adding $\lfloor K/4 \rfloor$ to the offset. This term is calculated as an integer result; any remainder is discarded.
Using similar logic, the progression of the day of the week for each century may be calculated by observing that there are 36524 days in a normal century and 36525 days in each century divisible by 400. Since $36525 \bmod 7 = 6$ and $36524 \bmod 7 = 5$, the term $\lfloor J/4 \rfloor - 2J$ accounts for this.
The term $\lfloor 13(m+1)/5 \rfloor$ adjusts for the variation in the days of the month. Starting from January, the days in a month are {31, 28/29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}. February's 28 or 29 days is a problem, so the formula rolls January and February around to the end so February's short count will not cause a problem. The formula is interested in days of the week, so the numbers in the sequence can be taken modulo 7. Then the number of days in a month modulo 7 (still starting with January) would be {3, 0/1, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3}. Starting in March, the sequence basically alternates 3, 2, 3, 2, 3, but every five months there are two 31-day months in a row (July–August and December–January).[1] The fraction 13/5 = 2.6 and the floor function have that effect; the denominator of 5 sets a period of 5 months.
The overall function, $\bmod 7$, normalizes the result to reside in the range of 0 to 6, which yields the index of the correct day of the week for the date being analyzed.
The reason that the formula differs for the Julian calendar is that this calendar does not have a separate rule for leap centuries and is offset from the Gregorian calendar by a fixed number of days each century.
Since the Gregorian calendar was adopted at different times in different regions of the world, the location of an event is significant in determining the correct day of the week for a date that occurred during this transition period. This is only required through 1929, as this was the last year that the Julian calendar was still in use by any country on earth, and thus is not required for 1930 or later.
The formulae can be used proleptically, but "Year 0" is in fact year 1 BC (see astronomical year numbering). The Julian calendar is in fact proleptic right up to 1 March AD 4 owing to mismanagement in Rome (but not Egypt) in the period since the calendar was put into effect on 1 January 45 BC (which was not a leap year). In addition, the modulo operator might truncate integers to the wrong direction (ceiling instead of floor). To accommodate this, one can add a sufficient multiple of 400 Gregorian or 700 Julian years.
For 1 January 2000, the date would be treated as the 13th month of 1999, so the values would be: $q=1$, $m=13$, $K=99$, $J=19$. So the formula evaluates as
$(1+36+99+24+4-38)\bmod 7 = 126 \bmod 7 = 0 = \text{Saturday}$.
(The 36 comes from $\lfloor 13(13+1)/5\rfloor = \lfloor 182/5\rfloor$, truncated to an integer.)
However, for 1 March 2000, the date is treated as the 3rd month of 2000, so the values become $q=1$, $m=3$, $K=0$, $J=20$, and the formula evaluates as
$(1+10+0+0+5-40)\bmod 7 = -24 \bmod 7 = 4 = \text{Wednesday}$.
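Both worked examples can be checked with a direct Python transcription of the Gregorian formula (the function name is my own). Python's % operator already returns a non-negative result for a positive modulus, so the -2J term needs no adjustment here; the next section covers languages where it does.

```python
def zeller_gregorian(q, month, year):
    """Zeller's congruence for the Gregorian calendar.
    Returns h with 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
    m = month
    if m < 3:            # January and February are counted as
        m += 12          # months 13 and 14 of the previous year
        year -= 1
    K = year % 100       # year of the century
    J = year // 100      # zero-based century
    return (q + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 - 2 * J) % 7
```

With this, 1 January 2000 gives 0 (Saturday) and 1 March 2000 gives 4 (Wednesday), as computed above.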
Implementations in software
Basic modification
Further information: Modulo operation § Variants of the definition
The formulas rely on the mathematician's definition of modulo division, which means that −2 mod 7 is equal to positive 5. Unfortunately, in the truncating way most computer languages implement the remainder function, −2 mod 7 returns a result of −2. So, to implement Zeller's congruence on a computer, the formulas should be altered slightly to ensure a positive numerator. The simplest way to do this is to replace − 2J by + 5J and − J by + 6J. So the formulas become:
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +K+\left\lfloor {\frac {K}{4}}\right\rfloor +\left\lfloor {\frac {J}{4}}\right\rfloor +5J\right){\bmod {7}},}
for the Gregorian calendar, and
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +K+\left\lfloor {\frac {K}{4}}\right\rfloor +5+6J\right){\bmod {7}},}
for the Julian calendar.
One can readily see that, in a given year, March 1 (if that is a Saturday, then March 2) is a good test date; and that, in a given century, the best test year is that which is a multiple of 100.
Common simplification
Zeller used decimal arithmetic, and found it convenient to use J and K in representing the year. But when using a computer, it is simpler to handle the modified year Y and month m, which are Y - 1 and m + 3 during January and February:
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +Y+\left\lfloor {\frac {Y}{4}}\right\rfloor -\left\lfloor {\frac {Y}{100}}\right\rfloor +\left\lfloor {\frac {Y}{400}}\right\rfloor \right){\bmod {7}},}
for the Gregorian calendar (in this case there is no possibility of overflow because $\lfloor Y/4\rfloor \geq \lfloor Y/100\rfloor$), and
{\displaystyle h=\left(q+\left\lfloor {\frac {13(m+1)}{5}}\right\rfloor +Y+\left\lfloor {\frac {Y}{4}}\right\rfloor +5\right){\bmod {7}},}
for the Julian calendar.
The algorithm above is mentioned for the Gregorian case in RFC 3339, Appendix B, albeit in an abridged form that returns 0 for Sunday.
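A sketch of this "common simplification" for the Gregorian calendar in Python, folding K and J into the modified year Y (the function name is my own):

```python
def day_of_week(q, month, year):
    """Gregorian day of week via the 'common simplification':
    0 = Saturday, ..., 6 = Friday, using the modified year Y."""
    m = month
    Y = year
    if m < 3:        # treat January and February as months 13 and 14
        m += 12      # of the previous year, so Y = year - 1 here
        Y -= 1
    return (q + (13 * (m + 1)) // 5 + Y
            + Y // 4 - Y // 100 + Y // 400) % 7
```

It agrees with the original formula on the worked examples: 0 (Saturday) for 1 January 2000 and 4 (Wednesday) for 1 March 2000.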
At least three other algorithms share the overall structure of Zeller's congruence in its "common simplification" type, also using an m ∈ [3, 14] ∩ Z and the "modified year" construct.
Michael Keith published a piece of very short C code in 1990 for Gregorian dates. The month-length component $\lfloor \frac{13(m+1)}{5}\rfloor$ is replaced by $\lfloor \frac{23m}{9}\rfloor + 4$.
J R Stockton provides a Sunday-is-0 version with $\lfloor \frac{13(m-2)}{5}\rfloor + 2$, calling it a variation of Zeller.[2]
Claus Tøndering describes $\lfloor \frac{31(m-2)}{12}\rfloor$ as a replacement.[3]
Both expressions can be shown to progress in a way that is off by one compared to the original month-length component over the required range of m, resulting in a starting value of 0 for Sunday.
^ The every five months rule only applies to the twelve months of a year commencing on 1 March and ending on the last day of the following February.
^ a b Stockton, J R. "Material Related to Zeller's Congruence". "Merlyn", archived at NCTU Taiwan.
^ Tøndering, Claus. "Week-related questions". www.tondering.dk.
Each of these four similar imaged papers deals firstly with the day of the week and secondly with the date of Easter Sunday, for the Julian and Gregorian calendars. The pages link to translations into English.
Zeller, Christian (1882). "Die Grundaufgaben der Kalenderrechnung auf neue und vereinfachte Weise gelöst". Württembergische Vierteljahrshefte für Landesgeschichte (in German). V: 313–314. Archived from the original on January 11, 2015.
Zeller, Christian (1883). "Problema duplex Calendarii fundamentale". Bulletin de la Société Mathématique de France (in Latin). 11: 59–61. Archived from the original on January 11, 2015.
Zeller, Christian (1885). "Kalender-Formeln". Mathematisch-naturwissenschaftliche Mitteilungen des mathematisch-naturwissenschaftlichen Vereins in Württemberg (in German). 1 (1): 54–58. Archived from the original on January 11, 2015.
Zeller, Christian (1886). "Kalender-Formeln". Acta Mathematica (in German). 9: 131–136. doi:10.1007/BF02406733.
The Calendrical Works of Rektor Chr. Zeller: The Day-of-Week and Easter Formulae by J R Stockton, near London, UK. The site includes images and translations of the above four papers, and of Zeller's reference card "Das Ganze der Kalender-Rechnung".
This article incorporates public domain material from the NIST document: Black, Paul E. "Zeller's congruence". Dictionary of Algorithms and Data Structures.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Zeller%27s_congruence&oldid=1081460709"
|
Understanding Quadrilaterals - Practically Study Material
You know that the paper is a model for a plane surface. When you join a number of points without lifting a pencil from the paper (and without retracing any portion of the drawing other than single points), you get a plane curve.
Try to give a few more examples and non-examples for a polygon.
Draw a rough figure of a polygon and identify its sides and vertices.
I. Classification of polygons
We classify polygons according to the number of sides (or vertices) they have.
II. Diagonals
In the figures given below, the shaded regions show the interior and exterior of a closed curve. The interior has a boundary. Does the exterior have a boundary?
III. Convex and concave polygons
Here are some convex polygons and some concave polygons.
Polygons that are convex have no portions of their diagonals in their exteriors.
Polygons that may have portions of their diagonals in their exteriors are called concave polygons.
IV. Regular and irregular polygons
A regular polygon is both ‘equiangular’ and ‘equilateral’. For example, a square has sides of equal length and angles of equal measure. Hence it is a regular polygon. A rectangle is equiangular but not equilateral. Is a rectangle a regular polygon? Is an equilateral triangle a regular polygon? Why?
[Note: Use of indicates segments of equal length].
Pentagon (5 sides), hexagon (6 sides), heptagon (7 sides), etc.
V. Angle sum property
3.3. Sum of the Measures of the Exterior Angles of a Polygon
Draw a polygon on the floor, using a piece of chalk. (In the figure, a pentagon ABCDE is shown.) We want to know the total measure of angles, i.e., $m\angle 1 + m\angle 2 + m\angle 3 + m\angle 4 + m\angle 5$. Start at A. Walk along $\overline{AB}$. On reaching B, you need to turn through an angle of $m\angle 1$ to walk along $\overline{BC}$. When you reach C, you need to turn through an angle of $m\angle 2$ to walk along $\overline{CD}$. You continue to move in this manner, until you return to side AB. You would have in fact made one complete turn. Therefore, $m\angle 1 + m\angle 2 + m\angle 3 + m\angle 4 + m\angle 5 = 360°$. This is true whatever be the number of sides of the polygon. Therefore, the sum of the measures of the exterior angles of any polygon is 360°.
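The walking argument can also be checked numerically: traverse the vertices of a convex polygon counterclockwise and add up the turning angle at each vertex; the total always comes to 360°. A small sketch, with function and variable names of my own choosing:

```python
import math

def exterior_angle_sum(vertices):
    """Sum of the turning (exterior) angles of a convex polygon, in degrees.

    vertices: list of (x, y) points listed in counterclockwise order.
    """
    n = len(vertices)
    total = 0.0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        # change of heading between edge AB and edge BC
        h1 = math.atan2(by - ay, bx - ax)
        h2 = math.atan2(cy - by, cx - bx)
        turn = (h2 - h1) % (2 * math.pi)   # exterior angle at vertex B
        total += math.degrees(turn)
    return total
```

For the square with vertices (1, 0), (0, 1), (-1, 0), (0, -1) each turn is 90°, and the function returns 360° (up to floating-point rounding), as the argument predicts.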
3.4. Kinds of Quadrilaterals
Based on the nature of the sides or angles of a quadrilateral, it gets special names.
I. Trapezium
Trapezium is a quadrilateral with a pair of parallel sides.
II. Kite
Kite is a special type of a quadrilateral. The sides with the same markings in each figure are equal. For example AB = AD and BC = CD.
(i) A kite has 4 sides (It is a quadrilateral).
(ii) There are exactly two distinct consecutive pairs of sides of equal length.
III. Parallelogram
A parallelogram is a quadrilateral. As the name suggests, it has something to do with parallel lines.
IV. Elements of a parallelogram
There are four sides and four angles in a parallelogram. Some of these are equal. There are some terms associated with these elements that you need to remember.
Given a parallelogram ABCD.
\overline{AB} and \overline{DC} are opposite sides. \overline{AD} and \overline{BC} form another pair of opposite sides.
\angle A and \angle C are a pair of opposite angles; another pair of opposite angles would be \angle B and \angle D.
\overline{AB} and \overline{BC} are adjacent sides. This means, one of the sides starts where the other ends. Are \overline{BC} and \overline{CD} adjacent sides too? Try to find two more pairs of adjacent sides.
\angle A and \angle B are adjacent angles. They are at the ends of the same side. \angle B and \angle C are also adjacent. Identify other pairs of adjacent angles of the parallelogram.
You can further strengthen this idea through a logical argument also.
Consider a parallelogram ABCD (as shown in the figure).
Draw any one diagonal, say \overline{AC}. Looking at the angles, \angle 1 = \angle 2 and \angle 3 = \angle 4 (alternate angles, since the opposite sides are parallel). Since in triangles ABC and ADC, \angle 1 = \angle 2, \angle 3 = \angle 4 and \overline{AC} is common, by ASA congruency condition, \Delta \mathrm{ABC}\cong \Delta \mathrm{CDA}. (How is ASA used here?)
V. Angles of a parallelogram
Property: The opposite angles of a parallelogram are of equal measure.
You can further justify this idea through logical arguments.
If \overline{AC} and \overline{BD} are the diagonals of the parallelogram, you find that \angle 1 = \angle 2 and \angle 3 = \angle 4. Studying \Delta \mathrm{ABC} and \Delta \mathrm{ADC} separately will help you to see that, by ASA congruency condition, \Delta \mathrm{ABC}\cong \Delta \mathrm{CDA} (How?). This shows that \angle B and \angle D have the same measure. In the same way you can get \mathrm{m}\angle \mathrm{A}=\mathrm{m}\angle \mathrm{C}.
VI. Diagonals of a parallelogram
The diagonals of a parallelogram, in general, are not of equal length. However, the diagonals of a parallelogram have an interesting property.
Property: The diagonals of a parallelogram bisect each other (at the point of their intersection, of course!)
To argue and justify this property is not very difficult. From Fig, applying ASA criterion, it is easy to see that
\Delta \mathrm{AOB}\cong \Delta \mathrm{COD}
This gives AO = CO and BO = DO
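As a quick numerical check of this property (not part of the textbook's argument), we can place a parallelogram on coordinate axes and verify that the two diagonals share the same midpoint; the vertices below are an arbitrary example.

```python
# Vertices of a parallelogram ABCD, chosen so that AB is parallel and
# equal to DC: taking D = A + C - B guarantees this.
A, B, C = (0, 0), (4, 1), (6, 4)
D = (A[0] + C[0] - B[0], A[1] + C[1] - B[1])  # (2, 3)

def midpoint(P, Q):
    """Midpoint of segment PQ."""
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

# The diagonals are AC and BD; their midpoints coincide,
# which is exactly the "bisect each other" property.
print(midpoint(A, C))  # (3.0, 2.0)
print(midpoint(B, D))  # (3.0, 2.0)
```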
3.5. Some Special Parallelograms
I. Rhombus
We obtain a rhombus (which, you will see, is a parallelogram) as a special case of a kite (which is not a parallelogram).
The sides of a rhombus are all of the same length; this is not the case with the kite.
Since the opposite sides of a rhombus have the same length, it is also a parallelogram. So, a rhombus has all the properties of a parallelogram and also that of a kite.
The most useful property of a rhombus is that of its diagonals.
Property: The diagonals of a rhombus are perpendicular bisectors of one another
Here is an outline justifying this property using logical steps.
ABCD is a rhombus (Fig).
Therefore it is a parallelogram too.
Since diagonals bisect each other, OA = OC and OB = OD.
We have to show that m\angle \mathrm{AOD}=m\angle \mathrm{COD}=90°.
It can be seen that, by SAS congruency criterion, \Delta \mathrm{AOD}\cong \Delta \mathrm{COD} [since AO = CO (Why?), AD = CD (Why?) and OD = OD].
Therefore, m\angle \mathrm{AOD}=m\angle \mathrm{COD}.
Since \angle AOD and \angle COD are a linear pair, m\angle \mathrm{AOD}=m\angle \mathrm{COD}=90°.
Example: RICE is a rhombus (Fig). Find x, y, z. Justify your findings.
Solution:
x = OE = OI (diagonals bisect each other)
y = OR = OC (diagonals bisect each other)
z = side of the rhombus = 13 (all sides of a rhombus are equal)
II. Rectangle
A rectangle is a parallelogram with equal angles (Fig ).
If the rectangle is to be equiangular, what could be the measure of each angle? Let the measure of each angle be x°.
Then, by the angle sum property of a quadrilateral, 4x° = 360°.
Therefore, x° = 90°.
Thus each angle of a rectangle is a right angle.
So, a rectangle is a parallelogram in which every angle is a right angle.
Being a parallelogram, the rectangle has opposite sides of equal length and its diagonals bisect each other.
In a parallelogram, the diagonals can be of different lengths. (Check this); but surprisingly the rectangle (being a special case) has diagonals of equal length.
Property : The diagonals of a rectangle are of equal length.
This is easy to justify. If ABCD is a rectangle (Fig), then looking at triangles ABC and BAD separately [(Fig) and (Fig) respectively], we have
\Delta \mathrm{ABC}\cong \Delta \mathrm{BAD}
This is because AB = BA (Common), BC = AD (Why?) and
m\angle \mathrm{A}=m\angle \mathrm{B}=90°
The congruency follows by SAS criterion.
Thus AC = BD
Thus, in a rectangle the diagonals, besides being equal in length, also bisect each other.
III. Square
A square is a rectangle with equal sides. If BELT is a square, then:
BE = EL = LT = TB,
\angle \mathrm{B},\angle \mathrm{E},\angle \mathrm{L},\angle \mathrm{T} are right angles,
BL = ET and \overline{\mathrm{BL}}\perp \overline{\mathrm{ET}},
OB = OL and OE = OT.
This means a square has all the properties of a rectangle with an additional requirement that all the sides have equal length.
The square, like the rectangle, has diagonals of equal length.
In a rectangle, there is no requirement for the diagonals to be perpendicular to one another. In a square, however, the diagonals:
(i) bisect one another (square being a parallelogram);
(ii) are of equal length (square being a rectangle); and
(iii) are perpendicular to one another.
Hence, we get the following property.
Property: The diagonals of a square are perpendicular bisectors of each other.
We can justify this also by arguing logically:
ABCD is a square whose diagonals meet at O (Fig).
OA = OC (Since the square is a parallelogram)
By SSS congruency condition, we now see that
\Delta \mathrm{AOD}\cong \Delta \mathrm{COD}
Therefore, m\angle \mathrm{AOD}=m\angle \mathrm{COD}
These angles being a linear pair, each is a right angle.
|
Bayesian First Aid: One Sample and Paired Samples t-test - Publishable Stuff
Student’s t-test is a staple of statistical analysis. A quick search on Google Scholar for “t-test” results in 170,000 hits in 2013 alone. In comparison, “Bayesian” gives 130,000 hits while “box plot” results in only 12,500 hits. To be honest, if I had to choose I would most of the time prefer a notched boxplot to a t-test. The t-test comes in many flavors: one sample, two-sample, paired samples and Welch’s. We’ll start with the two simplest; here follow the Bayesian First Aid alternatives to the one sample t-test and the paired samples t-test.
Bayesian First Aid is an attempt at implementing reasonable Bayesian alternatives to the classical hypothesis tests in R. For the rationale behind Bayesian First Aid see the original announcement and the description of the alternative to the binomial test. The development of Bayesian First Aid can be followed on GitHub. Bayesian First Aid is a work in progress and I’m grateful for any suggestion on how to improve it!
A straightforward alternative to the t-test would be to assume normality, add some non-informative priors to the mix and be done with it. However, one of the great things with Bayesian data analysis is that it is easy not to assume normality. One alternative to the normal distribution, one that still fits normally distributed data well but is more robust against outliers, is the t distribution. Hang on, you say, isn't the t-test already using the t distribution? Right, but the t-test uses the t distribution as the distribution of the sample mean divided by the sample SD; here the trick is to assume it as the distribution of the data.
Instead of reinventing the wheel I'll here piggyback on the work of John K. Kruschke who has developed a Bayesian estimation alternative to the t-test called Bayesian Estimation Supersedes the T-test, or BEST for short. The rationale and the assumptions behind BEST are well explained in a paper published in 2013 in the Journal of Experimental Psychology (the paper is also a very pedagogical introduction to Bayesian estimation in general). That paper and more information regarding BEST are available on John Kruschke's web page. He has also made a nice video based on the paper (mostly focused on the two sample version, though):
All information regarding BEST is given in the paper and the video; here is just a short rundown of the model for the one sample BEST: BEST assumes the data ($x$) is distributed as a t distribution, which is more robust than a normal distribution due to its wider tails. Except for the mean ($\mu$) and the scale ($\sigma$), the t has one additional parameter, the degrees-of-freedom ($\nu$), where the lower $\nu$ is, the wider the tails become. When $\nu$ gets larger the t distribution approaches the normal distribution. While it would be possible to fix $\nu$ to a single value, BEST instead estimates $\nu$, allowing the t distribution to become more or less normal depending on the data. Here is the full model for the one sample BEST:
The prior on $\nu$ is an exponential distribution with mean 29 shifted 1 to the right, keeping $\nu$ away from zero. From the JEP 2013 paper: “This prior was selected because it balances nearly normal distributions ($\nu$ > 30) with heavy tailed distributions ($\nu$ < 30)”. The priors on $\mu$ and $\sigma$ are decided by the hyperparameters $M_\mu$, $S_\mu$, $L_\sigma$ and $H_\sigma$. By taking a peek at the data these parameters are set so that the resulting priors are extremely wide. While having a look at the data pre-analysis is generally not considered kosher, in practice this gives the same results as putting $\mathrm{Uniform}(-\infty,\infty)$ distributions on $\mu$ and $\sigma$.
The Model for Paired Samples
Here I use the simple solution. Instead of modeling the distribution of both groups and the paired differences, the Bayesian First Aid alternative uses the same trick as the original paired samples t-test: take the difference between each paired sample and model just the paired differences using the one sample procedure. Thus the alternative to the paired samples t-test is the same as the one sample alternative; the only difference is in how the data is prepared and how the result is presented.
The bayes.t.test Function
The t.test function is used to run all four versions of the t-test. Here I’ll just show the one sample and paired samples alternatives. The bayes.t.test runs the Bayesian First Aid alternative to the t-test and has a function signature that is compatible with t.test function. That is, if you just ran a t-test, say t.test(x, conf.int=0.8, mu=1), just prepend bayes. and it should work out of the box.
The example data I’ll use to show off bayes.t.test is from the 2002 Nature article The Value of Bees to the Coffee Harvest (doi:10.1038/417708a, pdf). In this article David W. Roubik argues that bees are important to the coffee harvest despite that the “self-pollinating African shrub Coffea arabica, a pillar of tropical agriculture, was considered to gain nothing from insect pollinators”. Supporting the argument is a data set of the mean coffee yield (in kg / 10,000 m²) for new world countries in 1961–1980, before the establishment of African honeybees, and in 1981–2001, when African honeybees had more or less been naturalized. This data shows an increased yield after the introduction of bees and, when analyzed using a paired t-test, results in p = 0.04. This is compared with the increase in yield in old world countries, where the bees have been busy buzzing all along, where a paired t-test gives p = 0.232, interpreted as “no change”. The full dataset is given in the table below and in this csv file (to match the analysis in the paper the csv does not include the Caribbean islands)
There are a couple of reasons why it is not proper to use a t-test to analyze this data set. A t-test does not consider the geographical location of the countries, nor is it clear what “population” the sample of countries is drawn from. I also feel tempted to mutter the old cliché “correlation does not imply causation”; surely there must have been many things besides the introduction of bees that changed in Bolivia between 1961 and 2001. Being aware of these objections, I'm nevertheless going to use it to show off the paired bayes.t.test.
Let’s first run the original analysis from the paper:
d <- read.csv("roubik_2002_coffe_yield.csv")
new_yield_80 <- d$yield_61_to_80[d$world == "new"]
new_yield_01 <- d$yield_81_to_01[d$world == "new"]  # column name assumed from the csv's naming pattern
t.test(new_yield_01, new_yield_80, paired = TRUE, alternative = "greater")
## data: new_yield_01 and new_yield_80
## 1470
In the paper they used one tailed t-test hence the unhelpfully wide confidence interval. Now the Bayesian First Aid alternative:
bayes.t.test(new_yield_01, new_yield_80, paired = TRUE, alternative = "less")
## Warning: The argument 'alternative' is ignored by bayes.binom.test
## Bayesian estimation supersedes the t test (BEST) - paired samples
## data: new_yield_01 and new_yield_80, n = 13
## Estimates [95% credible interval]
## mean paired difference: 1422 [381, 2478]
## sd of the paired differences: 1755 [925, 2738]
## The mean difference is more than 0 by a probability of 0.994
## and less than 0 by a probability of 0.006
So here we get estimates for the mean and SD with credible intervals on the side. We also get to know that the probability that the mean increase is more than zero is 99.4%. We could also take a look at the data from the old world:
old_yield_80 <- d$yield_61_to_80[d$world == "old"]
old_yield_01 <- d$yield_81_to_01[d$world == "old"]  # column name assumed from the csv's naming pattern
bayes.t.test(old_yield_01, old_yield_80, paired = TRUE)
## data: old_yield_01 and old_yield_80, n = 15
## mean paired difference: 716 [-1223, 2850]
## sd of the paired differences: 3551 [1104, 5761]
So the trend here is also increasing yields, though the parameter estimate is much less precise and the 95% CI includes zero. Also notable is the much larger SD compared to the new world countries.
Every Bayesian First Aid test has corresponding plot, summary, diagnostics and model.code functions. Here follow examples of these using the new world data.
Using plot we get to look at the posterior distributions directly. We also get a posterior predictive check in the form of a histogram with a smattering of t-distributions drawn from the posterior. If there is a large discrepancy between the model fit and the data then we need to think twice before proceeding. As it is now I would say the post. pred. check looks OK.
fit <- bayes.t.test(new_yield_01, new_yield_80, paired = TRUE)
summary gives us a detailed summary of the posterior. Note that the number of decimal places in this summary is a bit excessive; due to the posterior being approximated using MCMC, the numbers will jump around slightly between runs. If this worries you, increase the number of MCMC iterations by using the argument n.iter when calling bayes.t.test.
## Data
## new_yield_01, n = 13
## Model parameters and generated quantities
## mu_diff: the mean pairwise difference between new_yield_01 and new_yield_80
## sigma_diff: the standard deviation of the pairwise difference
## nu: the degrees-of-freedom for the t distribution fitted to the pairwise difference
## eff_size: the effect size calculated as (mu_diff - 0) / sigma_diff
## diff_pred: predicted distribution for a new datapoint generated
## as the pairwise difference between new_yield_01 and new_yield_80
## Measures
## mean sd HDIlo HDIup %<comp %>comp
## mu_diff 1408.417 525.511 360.853 2451.763 0.005 0.995
## sigma_diff 1747.749 470.010 906.023 2708.192 0.000 1.000
## nu 29.084 28.125 1.001 84.001 0.000 1.000
## eff_size 0.859 0.363 0.163 1.585 0.005 0.995
## diff_pred 1418.054 4250.459 -2762.415 5458.705 0.221 0.779
## 'HDIlo' and 'HDIup' are the limits of a 95% HDI credible interval.
## '%<comp' and '%>comp' are the probabilities of the respective parameter being
## smaller or larger than 0.
## Quantiles
## q2.5% q25% median q75% q97.5%
## mu_diff 366.882 1072.550 1404.144 1737.091 2458.905
## sigma_diff 1000.814 1428.718 1688.636 1991.766 2857.139
## nu 2.325 9.417 20.370 39.545 103.615
## eff_size 0.182 0.613 0.846 1.087 1.608
## diff_pred -2724.139 180.476 1386.223 2600.898 5520.726
diagnostics prints and plots MCMC diagnostics (similar to the example in bayes.binom.test). Finally model.code prints out R and JAGS code that replicates the analysis and that can be directly copy-n-pasted into an R script and modified from there. Here goes:
## Model code for Bayesian estimation supersedes the t test - paired samples ##
require(rjags)

# Setting up the data
x <- new_yield_01
y <- new_yield_80
pair_diff <- x - y
comp_mu <- 0

# The model specification in the JAGS language
model_string <- "model {
  for(i in 1:length(pair_diff)) {
    pair_diff[i] ~ dt( mu_diff , tau_diff , nu )
  }
  diff_pred ~ dt( mu_diff , tau_diff , nu )
  eff_size <- (mu_diff - comp_mu) / sigma_diff

  mu_diff ~ dnorm( mean_mu , precision_mu )
  tau_diff <- 1/pow( sigma_diff , 2 )
  sigma_diff ~ dunif( sigmaLow , sigmaHigh )
  # A trick to get an exponentially distributed prior on nu that starts at 1.
  nu <- nuMinusOne + 1
  nuMinusOne ~ dexp(1/29)
}"

# Setting parameters for the priors that in practice will result
# in flat priors on mu and sigma.
mean_mu = mean(pair_diff, trim=0.2)
precision_mu = 1 / (mad(pair_diff)^2 * 1000000)
sigmaLow = mad(pair_diff) / 1000
sigmaHigh = mad(pair_diff) * 1000

# Initializing parameters to sensible starting values helps the convergence
# of the MCMC sampling. Here using robust estimates of the mean (trimmed)
# and standard deviation (MAD).
inits_list <- list(
  mu_diff = mean(pair_diff, trim=0.2),
  sigma_diff = mad(pair_diff),
  nuMinusOne = 4)

data_list <- list(
  pair_diff = pair_diff,
  comp_mu = comp_mu,
  mean_mu = mean_mu,
  precision_mu = precision_mu,
  sigmaLow = sigmaLow,
  sigmaHigh = sigmaHigh)

# The parameters to monitor.
params <- c("mu_diff", "sigma_diff", "nu", "eff_size", "diff_pred")

# Running the model
model <- jags.model(textConnection(model_string), data = data_list,
                    inits = inits_list, n.chains = 3, n.adapt=1000)
update(model, 500) # Burning some samples to the MCMC gods....
samples <- coda.samples(model, params, n.iter=10000)
If we believe, for example, that robustness is not such a big issue and would like to assume that the data is normally distributed rather than t distributed we just have to make some small adjustments to this script. In the model code we change dt into dnorm and remove the nu parameter resulting in this model:
pair_diff[i] ~ dnorm( mu_diff , tau_diff)
diff_pred ~ dnorm( mu_diff , tau_diff)
We also have to remove the monitoring of the nu parameter.
params <- c("mu_diff", "sigma_diff", "eff_size", "diff_pred")
The bayes.t.test does not include all the functionality of BEST. Check out the BEST package by John K. Kruschke and Mike Meredith for an R and JAGS implementation of BEST including power analysis and more. T-testable data was not that easy to come by and I browsed through a lot of papers before I found the example with the bees. Wouldn't it be nice if people actually published some data, well well… I did find two other fun examples of t-testable data if someone would be interested:
Biomechanics: Rubber bands reduce the cost of carrying loads. The data is in the supplementary material.
Chimpanzees Recruit the Best Collaborators. The data is in the paper.
Posted by Rasmus Bååth Feb 4th, 2014 Bayesian, R, Statistics
|
Section 43.2 (0AZ8): Conventions—The Stacks project
We fix an algebraically closed ground field $\mathbf{C}$ of any characteristic. All schemes and varieties are over $\mathbf{C}$ and all morphisms are over $\mathbf{C}$. A variety $X$ is nonsingular if $X$ is a regular scheme (see Properties, Definition 28.9.1). In our case this means that the morphism $X \to \mathop{\mathrm{Spec}}(\mathbf{C})$ is smooth (see Varieties, Lemma 33.12.6).
I find the choice of the letter \mathbb{C} for the algebraically closed field quite confusing. I happened to be reading parts of this chapter without prior knowledge of the conventions section, and it really puzzled me. Why not just k?
@#4864: Yes, this wasn't a good choice. The idea was that we should not use k, as this is usually a random (not algebraically closed) field.
Related comment: with relatively little work we can extend almost all the discussion in this chapter to non-algebraically closed fields. The only tricky thing is to extend the moving lemma to the case where the ground field is finite, where you have to do a bit of work. Anyway, if we do this, then we can use k.
|
ShapValues - Feature analysis | CatBoost
The rows are sorted in the same order as the order of objects in the input dataset.
Each row contains information related to one object from the input dataset.
<contribution of feature 1><\t><contribution of feature 2><\t> .. <\t><contribution of feature N><\t><expected value of the model prediction>
-0.0001401524197<\t>0.0001269417313<\t>0.004920700379<\t>0.00490749
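As an illustration (not part of the CatBoost documentation), a row in this format can be parsed by splitting on tabs; `<\t>` above stands for a literal tab character. A useful property of SHAP values is that the per-feature contributions plus the expected value reconstruct the model's raw prediction for that object:

```python
# One output row: N feature contributions followed by the expected value.
row = "-0.0001401524197\t0.0001269417313\t0.004920700379\t0.00490749"

values = [float(v) for v in row.split("\t")]
*contributions, expected_value = values

# SHAP decomposition: raw prediction = expected value + sum of contributions.
raw_prediction = expected_value + sum(contributions)
print(raw_prediction)
```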
|
Deck - Tears of Themis Wiki
The Deck Menu is where players can select cards to put together a team to use for Debates. The Deck Menu consists of the Primary Deck and the Support Deck.
1 Primary Deck
2 Support Deck
3 Power Calculations
3.1 Support Deck Attack Rate Calculations
3.2 Support Deck Defense Rate Calculations
Primary Deck
These are the cards that are used during debates.
Support Deck
These cards provide additional stats to the team but are not used during debate stages. Additionally, some cards have skills which activate only when equipped from the Support Deck, such as the Formidable skill.
See also: Cards#Power Calculations, Game mechanics#Damage Calculations
attackCoefficient = 1
defenceCoefficient = 1
hpCoefficient = 1
talentCoefficient = 500
{\displaystyle deckInfluence = primaryDeckTotalInfluence * attackCoefficient * (1 + supportDeckTotalInfluence)}
{\displaystyle deckDefense = primaryDeckTotalDefense * defenceCoefficient * (1 + supportDeckTotalDefense)}
{\displaystyle power = hp * hpCoefficient + skillLevel * talentCoefficient + deckInfluence + deckDefense + primaryDeckTotalSkillValue}
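The three formulas above can be transcribed directly into Python using the coefficient values listed earlier (attack, defence and hp coefficients of 1, talent coefficient of 500); all of the sample inputs below are made-up numbers, not actual game data.

```python
# Coefficient values as listed on this page.
ATTACK_COEFFICIENT = 1
DEFENCE_COEFFICIENT = 1
HP_COEFFICIENT = 1
TALENT_COEFFICIENT = 500

def deck_power(hp, skill_level,
               primary_influence, primary_defense, primary_skill_value,
               support_influence, support_defense):
    """Total power from the primary deck stats plus the support deck rates."""
    deck_influence = primary_influence * ATTACK_COEFFICIENT * (1 + support_influence)
    deck_defense = primary_defense * DEFENCE_COEFFICIENT * (1 + support_defense)
    return (hp * HP_COEFFICIENT + skill_level * TALENT_COEFFICIENT
            + deck_influence + deck_defense + primary_skill_value)

# Made-up example values:
print(deck_power(hp=10000, skill_level=8,
                 primary_influence=3000, primary_defense=2500,
                 primary_skill_value=600,
                 support_influence=0.25, support_defense=0.125))  # → 21162.5
```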
Support Deck Attack Rate Calculations
{\displaystyle levelRate = 0.0202 * (cardLevel -1) * (1 + cardStar * 0.1)}
{\displaystyle skillRate = skillLevel * 0.0666}
cardRateValue
{\displaystyle supportDeckInfluence = (1 + levelRate + skillRate) * cardRateValue}
Support Deck Defense Rate Calculations
{\displaystyle supportDeckDefense = supportDeckInfluence / 2}
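A sketch of the two rate formulas above in Python; cardRateValue is a per-card constant from the game data, so the number used below is purely a placeholder, as are the card level, star and skill level.

```python
def support_influence(card_level, card_star, skill_level, card_rate_value):
    """Attack-rate contribution of one support-deck card."""
    level_rate = 0.0202 * (card_level - 1) * (1 + card_star * 0.1)
    skill_rate = skill_level * 0.0666
    return (1 + level_rate + skill_rate) * card_rate_value

def support_defense(card_level, card_star, skill_level, card_rate_value):
    """Defense rate is half the attack rate."""
    return support_influence(card_level, card_star, skill_level, card_rate_value) / 2

# Placeholder inputs: a level-40, 2-star card at skill level 5,
# with an assumed cardRateValue of 0.02.
print(round(support_influence(40, 2, 5, 0.02), 7))
print(round(support_defense(40, 2, 5, 0.02), 7))
```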
|
24V Wire Size Calculator - AWG for 24 V
Formula to calculate the wire size for a 24V system.
Example: What wire size for a 24V trolling motor is needed
Other interesting wire size tools
Calculating the proper wire size can prevent you from wasting money on unnecessary big cables, so we've created this 24V wire size calculator so you get the optimal size.
As a calculation example, we'll see what wire size for a 24V trolling motor is needed.
You can also look at the FAQ list for other common electrical problems, like the wire size needed for a 20 amp 220-volt circuit.
The equation varies depending on the type of electrical system used.
DC/ single-phase systems
To calculate a 24 V wire size in a direct current (DC) or an alternating current (AC) single-phase system, use this formula:
\footnotesize A = \frac{2IϱL}{V}
V
— Voltage drop between the source and the farthest end of the wire, measured in volts;
I
— Electric current through the wire, in amperes;
ϱ
— Resistivity of the conductor material, in Ω⋅m;
L
— Length of the wire (one-way), in meters;
A
— Cross-sectional area of the wire, in square meters.
For a three-phase AC system, three cables are used instead of one. The calculator accepts the total line voltage and current of the combined three cables.
The equation for a single wire is modified with a √3 factor, which is needed to convert between the system's phase current and line current. For each of the three cables, the area is given by:
\footnotesize A = \frac{\sqrt{3}IϱL}{V}
The factor of 2 disappears, as three-phase systems don't possess a return cable.
🙋 Important:
The result given by the three-phase formula accounts for the area of only one cable. Therefore, you'll need three wires of that size to build your three-phase cable.
This 24V wire size calculator has the three-phase system option enabled, but, for most applications, 24 V systems will work in DC or AC single-phase.
V is the voltage drop, not the source voltage magnitude.
ϱ varies with temperature.
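As an illustration only, the single-phase and three-phase formulas can be sketched in Python. This is not the calculator's exact implementation: the resistivity below is the common textbook value for copper at 20 °C (the calculator additionally corrects ϱ for the operating temperature and rounds up to a standard AWG size), and the example numbers mirror the trolling-motor scenario.

```python
import math

RHO_COPPER_20C = 1.68e-8  # resistivity of copper at 20 °C, in Ω·m (textbook value)

def area_single_phase_mm2(current_a, length_m, drop_v, rho=RHO_COPPER_20C):
    """DC or AC single-phase: A = 2·I·ϱ·L / V (factor 2 for the return cable)."""
    return 2 * current_a * rho * length_m / drop_v * 1e6  # m² → mm²

def area_three_phase_mm2(current_a, length_m, drop_v, rho=RHO_COPPER_20C):
    """Three-phase AC, per cable: A = √3·I·ϱ·L / V (no return cable)."""
    return math.sqrt(3) * current_a * rho * length_m / drop_v * 1e6

# Example: 56 A load, 25 ft (7.62 m) one-way run, 3% drop on a 24 V system.
drop = 0.03 * 24  # 0.72 V
print(round(area_single_phase_mm2(56, 7.62, drop), 2))  # minimum area in mm²
```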
Suppose you're choosing the wire size of a trolling motor operating at 56 amps. The one-way distance (from source to load) is 25 ft, and the maximum operating temperature is 50°C. What is the required size for this operation?
Use our 24V wire size calculator following these steps to know the answer:
As trolling motors and 24 V systems operate in DC, choose "DC/AC Single-phase" as the electrical system.
As recommended in most applications, choose a 3% allowable voltage drop.
Copper is the most common wire material. Therefore, you can choose it as your conductor material.
Type 56 A in the "Current (I)" box.
Type 25 ft in the "One-way distance" box.
Type 50°C in the "Maximum wire temperature" box.
That's it. The required wire size should be 97.95 mm² or 0000 (4/0) AWG.
We've also created other tools to ease your life and solve similar problems:
12-volt wire size calculator
Amp to wire size calculator
220-volt wire size calculator
What wire size for a 20 amp 220-volt circuit should I use?
The wire size for 20 amp 220-volt circuits is 10 AWG for cable lengths below 40 m. This answer assumes using a copper cable at a maximum operating temperature of 100°C and an allowable 3% voltage drop. For longer wires, use our wire size calculator.
What wire size for a 24V trolling motor should I use?
Usually, a 4 AWG will be more than enough for 24 V trolling motors. Even so, to calculate the optimum wire size, follow these steps:
Determine the trolling motor electric current (in amps), cable length L (in meters), conductor resistivity ϱ (in Ω⋅m), and allowable voltage drop V (usually 3% of the source voltage).
Input the values in this formula:
A = (2IϱL) / V
Now you know the wire size for your 24V trolling motor! The formula provides the wire area in m². Multiply it by 1,000,000 to convert it to square millimeters.
These results are solely for educational purposes. Before beginning any electrical project, always get the advice of an experienced electrician.
|
Depth of Field Calculator - Easy to Use
Created by Jasmine J Mah and Kenneth Alambra
What is circle of confusion?
How to adjust depth of field?
How to use the DoF calculator?
How to calculate the depth of field?
Alternate depth of field formula
This depth of field calculator, or DoF calculator, will help you take more fantastic portrait and landscape shots by understanding your camera better when it comes to the depth of field. Take your depth of field photography to the next level with a solid understanding of:
What depth of field is;
What circle of confusion is;
How to adjust the depth of field;
The different depth of field formulas; and
How to calculate the depth of field.
Depth of field is the distance between two planes, a closer one and a farther one, in which we can position objects to have an "acceptably sharp" image formed in a camera. Objects beyond the depth of field will appear blurred or out of focus. On a manual camera, we can set a wide or deep depth of field to capture more details of a scene, or we can have a narrow or shallow depth of field to focus on a particular object while blurring out the background or the foreground as shown in the image comparison below:
We can achieve these depths of field by changing:
Our camera's aperture area;
The lens we use to explore different focal lengths; and
Our distance to our subject or our focusing distance.
But first, how come we see blurred areas in the images we capture? We can explain that using the concept of the circle of confusion.
Imagine an arbitrary point where we focus our camera. As light bounces off this point and travels towards our camera's aperture opening, it spreads out and starts to get blurry. The more it spreads out, the blurrier it gets. The largest circular spot this point can spread into before we consider it out of focus is called the circle of confusion.
The diameter of the circle of confusion, which we also call the circle of confusion limit, defines how deep the depth of field is. We can observe the circle of confusion in two instances: one between the camera and the focusing distance and one beyond the focusing distance. We call the distance from the camera toward the first circle of confusion the depth of field near limit, while the distance from the camera towards the second circle of confusion is the depth of field far limit, as shown in this illustration:
The circle of confusion limit depends on various factors such as the camera's sensor size, the viewer's visual acuity, and the enlargement of the image produced by the camera to print size. We'll learn more about this in the following sections of this text.
Using a small aperture opening, we can achieve a deep depth of field where we can capture an acceptably sharp image of near and far objects, as illustrated below:
When taking pictures, we almost always want our subject to be within these two limits or the depth of field. We may also choose to keep our subject's foreground and background in focus or not.
From our previous example, if we shorten the focusing distance while maintaining the same aperture size, we decrease the depth of field, as shown below:
We still have a deep depth of field; however, the kitchen countertop now lies beyond the depth of field and, therefore, out of focus. Now, let us increase the size of our aperture. Doing so allows light to spread wider, which results in a shorter distance to reach the circle of confusion limit, giving us a narrow depth of field, as we can see in the image below:
With the same large aperture opening, we can also change our camera's focusing distance towards our foreground object (in this case, the electric fan) and make the rest of the scene blurry, as shown in the image below:
By having a short focusing distance towards our subject and using a large aperture opening, we can see that the light spreads faster, resulting in a very narrow depth of field. The same thing happens when we take macro or close-up photos.
As a rule of thumb, we use longer lenses when we want to take shallow depths of field shots. On the other hand, wide-angle lenses and lenses with short focal lengths are great for deep depth of field photography.
💡 If we want to focus on our subject and blur its surroundings, e.g., when taking portrait shots, we need a shallow depth of field. If, instead, we want to capture more objects in our scene, like when capturing an entire landscape view or a massive group photo, a deep depth of field is preferable.
In the next section of this text, we'll discuss how to use this DoF calculator. Then, we'll dive deeper into calculating the depth of field manually.
Here are the steps you can follow when using our DoF calculator:
Select your camera's sensor size from the list. You can enter custom sensor width and height measurements by selecting Custom sensor size from the options.
Enter the focal length of the lens you are using.
Pick the aperture size you prefer to use.
Enter your approximate focusing distance to your subject.
Upon doing these steps, you'll get the depth of field and depth of field limits for your camera's settings. If you think the calculated depth of field is either too narrow or too wide for your liking, you can adjust your camera settings to meet your preference.
Our DoF uses a default value of 0.029 mm for the circle of confusion limit of a 35mm full-frame sensor size. You can click on the Advanced mode button below our calculator to change this value or modify the values of the factors that affect the circle of confusion limit. In the advanced mode, our DoF calculator will also display the corresponding focal ratio of your selected aperture f-stop, the approximate hyperfocal distance, and the hyperfocal near limit of your entered settings. We'll get more into these parameters in the next section of this text.
We have two depth of field calculation formulas that we can use. In the previous section of this text, we mentioned that the depth of field is the distance between the depth of field far limit and the depth of field near limit. We can express that in an equation form like this:
\small{DoF = DoF_{\text{far limit}}\ - DoF_{\text{near limit}}}
That would be easy if we could physically measure the depth of field far and near limits right away. When we cannot measure them, we can calculate them using these formulas:
DoF_{\text{far limit}} = \frac{H\ \times\ u}{H\ -\ (u\ -\ f)}
DoF_{\text{near limit}} = \frac{H\ \times\ u}{H\ +\ (u\ -\ f)}
where:
H - Hyperfocal distance;
u - Focusing distance, or the camera's distance to the subject; and
f - Focal length of the lens used.
Hyperfocal distance is the focusing distance in which we get the maximum depth of field, and we can calculate its value using this equation:
\small{H = f + \frac{f^2}{N\ \times\ C}}
where:
f - Focal length of the lens used;
N - Aperture f-number; and
C - Circle of confusion limit.
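As a sketch of how these formulas chain together (hypothetical helper names; all lengths in mm), we can compute the hyperfocal distance and then the two depth-of-field limits:

```python
def hyperfocal(f, N, C):
    """Hyperfocal distance H = f + f^2 / (N * C); all lengths in mm."""
    return f + f**2 / (N * C)

def dof_limits(f, N, C, u):
    """Return the (near, far) depth-of-field limits for focusing distance u."""
    H = hyperfocal(f, N, C)
    near = H * u / (H + (u - f))
    far = H * u / (H - (u - f))
    return near, far

# Example: a 50 mm lens at f/4, C = 0.029 mm, focused at 1200 mm
near, far = dof_limits(f=50, N=4, C=0.029, u=1200)
dof = far - near  # ≈ 128 mm
```

Note that the simplified formula discussed further below gives a slightly different value (≈ 134 mm for the same settings), since it drops higher-order terms.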
The circle of confusion limit, which we know determines the depth of field, depends on several factors, as shown in the equation below:
\small{C = \frac{\left(\frac{d_{\text{av}}}{d_{\text{sv}}\ \times \text{visual\ acuity}}\right)}{\text{enlargement}}}
where:
d_{\text{av}} - Actual viewing distance of a printed photo version of an image;
d_{\text{sv}} - Standard viewing distance at which a person can observe the said printed photo with a defined visual acuity;
\text{visual\ acuity} - Resolution at which a typical viewer can distinguish details in the printed photo at the standard viewing distance, in line pairs per mm (lp/mm); and
\text{enlargement} - Enlargement factor from the image produced on the film or camera sensor to the printed image.
Enlargement factor is essentially the ratio of the diagonal of the printed image (\small{\text{diagonal}_\text{p}}) to the diagonal of the camera's sensor (\small{\text{diagonal}_\text{s}}). Expressed in equation form:
\small{\text{enlargement} = \frac{\text{diagonal}_{\text{p}}}{\text{diagonal}_{\text{s}}}\times1000}
To calculate these diagonals, we can use the Pythagorean theorem, as shown in the equations below:
\text{diagonal}_{\text{p}} = \sqrt{w_{\text{p}}^2 + h_{\text{p}}^2}
\text{diagonal}_{\text{s}} = \sqrt{w_{\text{s}}^2 + h_{\text{s}}^2}
where:
w_{\text{p}} - Width of print;
h_{\text{p}} - Height of print;
w_{\text{s}} - Width of sensor; and
h_{\text{s}} - Height of sensor.
🙋 Although we provided the formulas needed to find the circle of confusion limit, we usually approximate its value at around 0.01 mm to 0.20 mm. Keep in mind that a smaller sensor size also gives a smaller circle of confusion limit, and therefore a shallower depth of field. But we would also have to shorten our focal length to capture the same shot, which results in an overall deeper depth of field. You can learn more about the impact of using different sensor sizes in our crop factor calculator.
Alternatively, we can also use this simplified depth of field calculation formula:
DoF = \frac{2\ \times\ u^2\times\ N\times\ C}{f^2}
where:
u - Focusing distance;
N - Aperture f-number;
C - Circle of confusion limit; and
f - Focal length of the lens used.
🙋 Please note that this simplified depth of field formula has some limitations in giving accurate results. Nevertheless, you can still use this in approximating different depths of fields.
Understanding depth of field can help you take great pictures, whether of people, still objects, or landscapes. But remember that as we change our camera's settings, especially the aperture size, we may also need to adjust our camera's exposure and shutter speed settings. You can check our exposure calculator and our shutter speed calculator to learn more about these other settings. 📸
What is the depth of field of a 50 mm lens?
Let's say our camera has a circle of confusion, c, of 0.029 mm, and we set our camera with an f-stop of f/4 (focal ratio, N, of 4) and focus at a distance, u, around 1,200 mm. We can approximate DoF using: DoF = u² × 2 × N × C / f² or by following these steps:
Square u: 1,200 × 1,200 = 1,440,000.
Multiply it by 2, N, and c: 1,440,000 × 2 × 4 × 0.029 = 334,080.
Divide that by the square of the focal length: 334,080 / 50² = 133.632 mm ≈ 0.13 m.
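The steps above can be sketched as a small function (hypothetical name; all lengths in mm):

```python
def dof_simplified(u, N, C, f):
    """Approximate depth of field: DoF = 2 * u^2 * N * C / f^2."""
    return 2 * u**2 * N * C / f**2

dof = dof_simplified(u=1200, N=4, C=0.029, f=50)  # ≈ 133.632 mm ≈ 0.13 m
```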
What factors affect the depth of field?
The depth of field depends mainly on the camera's aperture size: the smaller it is, the deeper the depth of field gets. The focal length of the lens used also affects the depth of field: using a long lens narrows it. Finally, moving to a closer focusing distance to your subject results in a narrower or shallower depth of field.
How is the depth of field related to aperture size?
The smaller the aperture is, the deeper the depth of field becomes. That is because light rays are only allowed to scatter in slight deviations due to the small aperture opening for the light to enter. Expanding the aperture opening allows the light rays getting into our camera to spread wider, resulting in more parts of the image blurring.
To get a narrow or shallow depth of field, you can make one or more of these changes to your camera setup:
Widen your aperture opening;
Use a long focal length lens; or
Move to a closer focusing distance to your subject.
Does shutter speed affect the depth of field?
No, shutter speed does not affect the depth of field. However, you may have to widen your aperture to let more light in when you increase your camera's shutter speed, or reduce your aperture opening when shooting with a long shutter speed. In those cases, the changes in aperture size could affect the depth of field. But a change in the shutter speed setting by itself does not affect the depth of field.
Jasmine J Mah and Kenneth Alambra
|
Score functions - Algorithm details | CatBoost
Types of score functions
Finding the optimal tree structure
Second-order score functions
CatBoost score functions
Per-object and per-feature penalties
The common approach to solve supervised learning tasks is to minimize the loss function L:
L\left(f(x), y\right) = \sum\limits_{i} w_{i} \cdot l \left(f(x_{i}), y_{i}\right) + J(f){ , where}
l\left( f(x), y\right) is the value of the loss function at the point (x, y), w_{i} is the weight of the i-th object, and J(f) is the regularization term.
For example, these formulas take the following form for linear regression:
l\left( f(x), y\right) = w_{i} \left( (\theta, x) - y \right)^{2}
(mean squared error)
J(f) = \lambda \left| | \theta | \right|_{l2}
(L2 regularization)
Boosting is a method which builds a prediction model F^{T} as an ensemble of weak learners:
F^{T} = \sum\limits_{t=1}^{T} f^{t}
where each f^{t} is a decision tree. Trees are built sequentially, and each next tree is built to approximate the negative gradients g_{i} of the loss l at the predictions of the current ensemble:
g_{i} = -\frac{\partial l(a, y_{i})}{\partial a} \Bigr|_{a = F^{T-1}(x_{i})}
Thus, it performs a gradient descent optimization of the function L. The quality of the gradient approximation is measured by a score function:
Score(a, g) = S(a, g)
Let's suppose that it is required to add a new tree to the ensemble. A score function is required in order to choose between candidate trees. Given a candidate tree f, let a_{i} = f(x_{i}) denote its prediction on the i-th object, w_{i} — the weight of the i-th object, and g_{i} — the corresponding gradient of l. Let's consider the following score functions:
L2 = - \sum\limits_{i} w_{i} \cdot (a_{i} - g_{i})^{2}
Cosine = \displaystyle\frac{\sum w_{i} \cdot a_{i} \cdot g_{i}}{\sqrt{\sum w_{i}a_{i}^{2}} \cdot \sqrt{\sum w_{i}g_{i}^{2}}}
Let's suppose that it is required to find the structure for a tree f of depth 1. The structure of such a tree is determined by the index j of some feature and a border value c. Let x_{i, j} be the value of the j-th feature on the i-th object, and let a_{left} and a_{right} be the values at the leaves of f: f(x_{i}) = a_{left} if x_{i,j} \leq c and f(x_{i}) = a_{right} if x_{i,j} > c. Now the goal is to find the best j and c in terms of the chosen score function.
For the L2 score function the formula takes the following form:
S(a, g) = -\sum\limits_{i} w_{i} (a_{i} - g_{i})^{2} = - \left( \displaystyle\sum\limits_{i:x_{i,j}\leq c} w_{i}(a_{left} - g_{i})^{2} + \sum\limits_{i: x_{i,j}>c} w_{i}(a_{right} - g_{i})^{2} \right)
Let's denote
W_{left} = \displaystyle\sum_{i: x_{i,j} \leq c} w_{i} \quad \text{and} \quad W_{right} = \displaystyle\sum_{i: x_{i,j} > c} w_{i}
The optimal values for a_{left} and a_{right} are the weighted averages:
a^{*}_{left} =\displaystyle\frac{\sum\limits_{i: x_{i,j} \leq c} w_{i} g_{i}}{W_{left}}
a^{*}_{right} =\displaystyle\frac{\sum\limits_{i: x_{i,j} > c} w_{i} g_{i}}{W_{right}}
After expanding the brackets and removing terms that are constant in the optimization:
j^{*}, c^{*} = argmax_{j, c} W_{left} \cdot (a^{*}_{left})^{2} + W_{right} \cdot (a^{*}_{right})^{2}
The latter argmax can be calculated by brute force search.
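To make the brute-force search concrete, here is a small sketch (illustrative only, not CatBoost's actual implementation) that scans every feature and candidate border and keeps the pair (j, c) maximizing W_left · (a*_left)² + W_right · (a*_right)²:

```python
import numpy as np

def best_split_l2(X, g, w):
    """Brute-force search for the depth-1 split (j*, c*) maximizing
    W_left * a_left**2 + W_right * a_right**2, where a_left and a_right
    are the weighted mean gradients in each leaf."""
    best = (None, None, -np.inf)
    for j in range(X.shape[1]):
        for c in np.unique(X[:, j])[:-1]:  # candidate borders
            left = X[:, j] <= c
            W_l, W_r = w[left].sum(), w[~left].sum()
            if W_l == 0 or W_r == 0:
                continue
            a_l = (w[left] * g[left]).sum() / W_l
            a_r = (w[~left] * g[~left]).sum() / W_r
            score = W_l * a_l**2 + W_r * a_r**2
            if score > best[2]:
                best = (j, c, score)
    return best

# Toy data: feature 0 separates the negative and positive gradients perfectly
X = np.array([[0.0], [1.0], [2.0], [3.0]])
g = np.array([-1.0, -1.0, 1.0, 1.0])
w = np.ones(4)
j_star, c_star, _ = best_split_l2(X, g, w)  # picks j=0, c=1.0
```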
The situation is slightly more complex when the tree depth is bigger than 1:
L2 score function: S is converted into a sum over leaves:
S(a,g) = \sum_{leaf} S(a_{leaf}, g_{leaf})
The next step is to find
j^*, c^* = argmax_{j,c}{S(\bar a, g)}
where \bar a are the optimal values in the leaves after the (j^*, c^*) split.
Depthwise and Lossguide methods: j and c are sets \{j_k\}, \{c_k\}, where k stands for the index of the leaf, therefore the score function S takes the form
S(\bar a, g) = \sum_{l = leaf}S(\bar a(j_l, c_l), g_l)
Since S(leaf) is a convex function, the splits (j_{k1}, c_{k1}) and (j_{k2}, c_{k2}) for different leaves can be searched separately by finding the optimal
j^*, c^* = argmax_{j,c}\{S(leaf_{left}) + S(leaf_{right}) - S(leaf_{before\_split})\}
SymmetricTree method: The same j and c are attempted for each leaf, thus it is required to optimize the total sum over all leaves:
S(a,g) = \sum_{leaf} S(leaf)
Let's apply the Taylor expansion to the loss function at the point
a^{t-1} = F^{t-1}(x)
L(a^{t-1}_{i} + \phi , y) \approx \displaystyle\sum w_{i} \left[ l_{i} + l^{'}_{i} \phi + \frac{1}{2} l^{''}_{i} \phi^{2} \right] + \frac{1}{2} \lambda ||\phi||_{2}^{2}{ , where:}
l_{i} = l(a^{t-1}_{i}, y_{i})
l'_{i} = \frac{\partial l(a, y_{i})}{\partial a}\Bigr|_{a=a^{t-1}_{i}}
l''_{i} = \frac{\partial^{2} l(a, y_{i})}{\partial a^{2}}\displaystyle\Bigr|_{a=a^{t-1}_{i}}
\lambda is the L2 regularization parameter.
Since the first term is constant in optimization, the formula takes the following form after regrouping by leaves:
\sum\limits_{leaf=1}^{L} \left( \sum\limits_{i \in leaf} w_{i} \left[ l_{i} + l^{'}_{i} \phi_{leaf} + \frac{1}{2} l^{''}_{i} \phi_{leaf}^{2} \right] + \frac{1}{2} \lambda \phi_{leaf}^{2} \right) \to min
Then let's minimize this expression for each leaf independently:
\sum\limits_{i \in leaf} w_{i} \left[ l_{i} + l^{'}_{i} \phi_{leaf} + \frac{1}{2} l^{''}_{i} \phi^{2}_{leaf} \right] + \frac{1}{2} \lambda \phi_{leaf}^2 \to min
Differentiating with respect to the leaf value \phi_{leaf} and setting the derivative to zero:
\sum\limits_{i \in leaf} w_{i} \left[ l^{'}_{i} + l^{''}_{i} \phi_{leaf} \right] + \lambda \phi_{leaf} = 0
So, the optimal value of \phi_{leaf} is:
\phi_{leaf}^{*} = - \displaystyle\frac{\sum_{i}w_{i}l^{'}_{i}}{\sum_{i}w_{i}l^{''}_{i}+\lambda}
where the summation is over all i such that the object x_{i} gets to the considered leaf. Then these optimal values of \phi_{leaf} can be used instead of the weighted averages of gradients (a^{*}_{left} and a^{*}_{right} in the example above) in the same score functions.
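A minimal sketch (hypothetical helper name) of this second-order leaf value, taking the plain first and second derivatives of the loss as inputs:

```python
def newton_leaf_value(grads, hessians, weights, l2_reg):
    """Optimal leaf value phi = -sum(w * l') / (sum(w * l'') + lambda)
    for the objects falling into one leaf (second-order / NewtonL2)."""
    num = sum(w * g1 for w, g1 in zip(weights, grads))
    den = sum(w * g2 for w, g2 in zip(weights, hessians)) + l2_reg
    return -num / den

# Squared error l = (a - y)^2: l' = 2 * (a - y), l'' = 2
a, ys = 0.0, [1.0, 3.0]
grads = [2 * (a - y) for y in ys]  # [-2.0, -6.0]
hess = [2.0, 2.0]
phi = newton_leaf_value(grads, hess, [1.0, 1.0], l2_reg=0.0)
# phi = 2.0, i.e. the leaf steps to the mean of the targets
```

With λ = 0 and a squared-error loss, the leaf steps exactly to the weighted mean of the residuals, as this example shows.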
CatBoost provides the following score functions:
Score function: L2
Use the first derivatives during the calculation.
Score function: Cosine (cannot be used with the Lossguide tree growing policy)
Score function: NewtonL2
Use the second derivatives during the calculation. This may improve the resulting quality of the model.
Score function: NewtonCosine (cannot be used with the Lossguide tree growing policy)
CatBoost provides the following methods to affect the score with penalties:
Score' = Score \cdot \prod_{f\in S}W_{f} - \sum_{f\in S}P_{f} \cdot U(f) - \sum_{f\in S}\sum_{x \in L}EP_{f} \cdot U(f, x)
where:
W_{f} is the feature weight;
P_{f} is the per-feature penalty;
EP_{f} is the per-object penalty;
S is the current split;
L is the current leaf; and
U(f) = \begin{cases} 0,& \text{if } f \text{ was used in model already}\\ 1,& \text{otherwise} \end{cases}
U(f, x) = \begin{cases} 0,& \text{if } f \text{ was used already for object } x\\ 1,& \text{otherwise} \end{cases}
Use the corresponding parameter to set the score function during the training:
Python package: score_function
R package: score_function
Command-line interface: --score-function
|
Sum of squares - Wikipedia
In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts:
For partitioning of variance, see Partition of sums of squares
For the "sum of squared deviations", see Least squares
For the "sum of squared differences", see Mean squared error
For the "sum of squared error", see Residual sum of squares
For the "sum of squares due to lack of fit", see Lack-of-fit sum of squares
For sums of squares relating to model predictions, see Explained sum of squares
For sums of squares relating to observations, see Total sum of squares
For sums of squared deviations, see Squared deviations from the mean
For modelling involving sums of squares, see Analysis of variance
For modelling involving the multivariate generalisation of sums of squares, see Multivariate analysis of variance
For the sum of squares of consecutive integers, see Square pyramidal number
For representing an integer as a sum of squares of 4 integers, see Lagrange's four-square theorem
Legendre's three-square theorem states which numbers can be expressed as the sum of three squares
Jacobi's four-square theorem gives the number of ways that a number can be represented as the sum of four squares.
For the number of representations of a positive integer as a sum of squares of k integers, see Sum of squares function.
Fermat's theorem on sums of two squares says which primes are sums of two squares.
The sum of two squares theorem generalizes Fermat's theorem to specify which composite numbers are the sums of two squares.
Pythagorean triples are sets of three integers such that the sum of the squares of the first two equals the square of the third.
A Pythagorean prime is a prime that is the sum of two squares; Fermat's theorem on sums of two squares states which primes are Pythagorean primes.
Pythagorean triangles with integer altitude from the hypotenuse have the sum of squares of inverses of the integer legs equal to the square of the inverse of the integer altitude from the hypotenuse.
Pythagorean quadruples are sets of four integers such that the sum of the squares of the first three equals the square of the fourth.
The Basel problem, solved by Euler in terms of π, asked for an exact expression for the sum of the squares of the reciprocals of all positive integers.
Rational trigonometry's triple-quad rule and triple-spread rule contain sums of squares, similar to Heron's formula.
Squaring the square is a combinatorial problem of dividing a two-dimensional square with integer side length into smaller such squares.
Algebra and algebraic geometry
For representing a polynomial as the sum of squares of polynomials, see Polynomial SOS.
For computational optimization, see Sum-of-squares optimization.
For representing a multivariate polynomial that takes only non-negative values over the reals as a sum of squares of rational functions, see Hilbert's seventeenth problem.
The Brahmagupta–Fibonacci identity says the set of all sums of two squares is closed under multiplication.
The sum of the squared dimensions of a finite group's pairwise nonequivalent complex representations is equal to the cardinality of that group.
Euclidean geometry and other inner-product spaces
The Pythagorean theorem says that the square on the hypotenuse of a right triangle is equal in area to the sum of the squares on the legs. (Unlike a difference of two squares, a sum of two squares cannot be factored over the real numbers.)
The Squared Euclidean distance (SED) is defined as the sum of squares of the differences between coordinates.
Heron's formula for the area of a triangle can be rewritten in terms of the sum of the squares of the triangle's sides (and the sum of the squares of those squares).
The British flag theorem for rectangles equates two sums of two squares
The parallelogram law equates the sum of the squares of the four sides to the sum of the squares of the diagonals
Descartes' theorem for four kissing circles involves sums of squares
The sum of the squares of the edges of a rectangular cuboid equals the square of any space diagonal
|
Inscribed angle - Knowpia
In geometry, an inscribed angle is the angle formed in the interior of a circle when two chords intersect on the circle. It can also be defined as the angle subtended at a point on the circle by two given points on the circle.
The inscribed angle θ is half of the central angle 2θ that subtends the same arc on the circle. The angle θ does not change as its vertex is moved around on the circle.
Equivalently, an inscribed angle is defined by two chords of the circle sharing an endpoint.
The inscribed angle theorem relates the measure of an inscribed angle to that of the central angle subtending the same arc.
The inscribed angle theorem appears as Proposition 20 on Book 3 of Euclid's Elements.
For fixed points A and B, the set of points M in the plane for which the angle AMB is equal to α is an arc of a circle. The measure of ∠ AOB, where O is the center of the circle, is 2α.
The inscribed angle theorem states that an angle θ inscribed in a circle is half of the central angle 2θ that subtends the same arc on the circle. Therefore, the angle does not change as its vertex is moved to different positions on the circle.
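The theorem is easy to verify numerically. This sketch (hypothetical helper names) places A, B, and several vertices V on the unit circle and checks that the inscribed angle is always half the central angle:

```python
import math

def angle_at(vertex, p, q):
    """Angle (radians) at `vertex` between rays vertex->p and vertex->q."""
    ux, uy = p[0] - vertex[0], p[1] - vertex[1]
    vx, vy = q[0] - vertex[0], q[1] - vertex[1]
    dot = ux * vx + uy * vy
    return math.acos(dot / (math.hypot(ux, uy) * math.hypot(vx, vy)))

def on_circle(deg):
    """Point on the unit circle at the given polar angle in degrees."""
    t = math.radians(deg)
    return (math.cos(t), math.sin(t))

A, B, O = on_circle(30), on_circle(150), (0.0, 0.0)
central = angle_at(O, A, B)            # 120 degrees
for v_deg in (200, 250, 300, 340):     # vertex anywhere on the major arc
    inscribed = angle_at(on_circle(v_deg), A, B)
    assert abs(2 * inscribed - central) < 1e-9
```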
Inscribed angles where one chord is a diameter
Case: One chord is a diameter
Let O be the center of a circle, as in the diagram at right. Choose two points on the circle, and call them V and A. Draw line VO and extend it past O so that it intersects the circle at a point B which is diametrically opposite the point V. Draw an angle whose vertex is point V and whose sides pass through points A and B.
Draw line OA. Angle BOA is a central angle; call it θ. Lines OV and OA are both radii of the circle, so they have equal lengths. Therefore, triangle VOA is isosceles, so angle BVA (the inscribed angle) and angle VAO are equal; let each of them be denoted as ψ.
Angles BOA and AOV add up to 180°, since line VB passing through O is a straight line. Therefore, angle AOV measures 180° − θ.
It is known that the three angles of a triangle add up to 180°, and the three angles of triangle VOA are ψ, ψ, and 180° − θ. Therefore,
{\displaystyle 2\psi +180^{\circ }-\theta =180^{\circ }.}
Subtracting {\displaystyle (180^{\circ }-\theta )} from both sides yields
{\displaystyle 2\psi =\theta ,}
where θ is the central angle subtending arc AB and ψ is the inscribed angle subtending arc AB.
Inscribed angles with the center of the circle in their interior
Case: Center interior to angle
Given a circle whose center is point O, choose three points V, C, and D on the circle. Draw lines VC and VD: angle DVC is an inscribed angle. Now draw line VO and extend it past point O so that it intersects the circle at point E. Angle DVC subtends arc DC on the circle.
Suppose this arc includes point E within it. Point E is diametrically opposite to point V. Angles DVE and EVC are also inscribed angles, but both of these angles have one side which passes through the center of the circle, therefore the theorem from the above Part 1 can be applied to them.
Therefore,
{\displaystyle \angle DVC=\angle DVE+\angle EVC.}
Let
{\displaystyle \psi _{0}=\angle DVC,\quad \psi _{1}=\angle DVE,\quad \psi _{2}=\angle EVC,}
so that
{\displaystyle \psi _{0}=\psi _{1}+\psi _{2}.\qquad \qquad (1)}
Draw lines OC and OD. Angle DOC is a central angle, but so are angles DOE and EOC, and
{\displaystyle \angle DOC=\angle DOE+\angle EOC.}
Let
{\displaystyle \theta _{0}=\angle DOC,\quad \theta _{1}=\angle DOE,\quad \theta _{2}=\angle EOC,}
so that
{\displaystyle \theta _{0}=\theta _{1}+\theta _{2}.\qquad \qquad (2)}
From Part One we know that {\displaystyle \theta _{1}=2\psi _{1}} and {\displaystyle \theta _{2}=2\psi _{2}}. Combining these results with equation (2) yields
{\displaystyle \theta _{0}=2\psi _{1}+2\psi _{2}=2(\psi _{1}+\psi _{2})}
therefore, by equation (1),
{\displaystyle \theta _{0}=2\psi _{0}.}
Inscribed angles with the center of the circle in their exterior
Case: Center exterior to angle
The previous case can be extended to cover the case where the measure of the inscribed angle is the difference between two inscribed angles as discussed in the first part of this proof.
Suppose this arc does not include point E within it. Point E is diametrically opposite to point V. Angles EVD and EVC are also inscribed angles, but both of these angles have one side which passes through the center of the circle, therefore the theorem from the above Part 1 can be applied to them.
Therefore,
{\displaystyle \angle DVC=\angle EVC-\angle EVD.}
Let
{\displaystyle \psi _{0}=\angle DVC,\quad \psi _{1}=\angle EVD,\quad \psi _{2}=\angle EVC,}
so that
{\displaystyle \psi _{0}=\psi _{2}-\psi _{1}.\qquad \qquad (3)}
Draw lines OC and OD. Angle DOC is a central angle, but so are angles EOD and EOC, and
{\displaystyle \angle DOC=\angle EOC-\angle EOD.}
Let
{\displaystyle \theta _{0}=\angle DOC,\quad \theta _{1}=\angle EOD,\quad \theta _{2}=\angle EOC,}
so that
{\displaystyle \theta _{0}=\theta _{2}-\theta _{1}.\qquad \qquad (4)}
From Part One we know that {\displaystyle \theta _{1}=2\psi _{1}} and {\displaystyle \theta _{2}=2\psi _{2}}. Combining these results with equation (4) yields
{\displaystyle \theta _{0}=2\psi _{2}-2\psi _{1}}
therefore, by equation (3),
{\displaystyle \theta _{0}=2\psi _{0}.}
By a similar argument, the angle between a chord and the tangent line at one of its intersection points equals half of the central angle subtended by the chord. See also Tangent lines to circles.
The inscribed angle theorem is used in many proofs of elementary Euclidean geometry of the plane. A special case of the theorem is Thales' theorem, which states that the angle subtended by a diameter is always 90°, i.e., a right angle. As a consequence of the theorem, opposite angles of cyclic quadrilaterals sum to 180°; conversely, any quadrilateral for which this is true can be inscribed in a circle. As another example, the inscribed angle theorem is the basis for several theorems related to the power of a point with respect to a circle. Further, it allows one to prove that when two chords intersect in a circle, the products of the lengths of their pieces are equal.
Inscribed angle theorems for ellipses, hyperbolas and parabolas
Inscribed angle theorems exist for ellipses, hyperbolas and parabolas, too. The essential differences are the measurements of an angle. (An angle is considered a pair of intersecting lines.)
|
Smart Grid and Renewable Energy > Vol.10 No.3, March 2019
Decentralized Power Control Strategy in Microgrid for Smart Homes ()
1Punjab Institute of Complementary Sciences, Lahore, Pakistan.
2Higher College of Technology, Dubai, UAE.
3University of Technology & Management, Lahore, Pakistan.
Ahmed, M., Nawaz, A., Ahmed, M. and Farooq, M.S. (2019) Decentralized Power Control Strategy in Microgrid for Smart Homes. Smart Grid and Renewable Energy, 10, 43-53. doi: 10.4236/sgre.2019.103004.
E\left(\%\right)=\left(1-\frac{{E}_{\text{fail}}}{{E}_{\text{demand}}}\right)\times 100\%
{\epsilon }_{\text{e}}
{\epsilon }_{\text{e_conv}}
{\epsilon }_{\text{e_store}}
{\epsilon }_{e}={\epsilon }_{\text{e_conv}}-{\epsilon }_{\text{e_store}}
{\epsilon }_{\text{p_conv}}
{\epsilon }_{\text{e_conv}}=\frac{1}{{t}_{\text{end}}-{t}_{\text{start}}}{\int }_{{t}_{\text{start}}}^{{t}_{\text{end}}}{p}_{\text{conv}}\,\text{d}t
{\epsilon }_{\text{e_co}}=1-\text{D}{t}_{\text{stored}}
{t}_{\text{stored}}
|
What is the mass formula in physics?
How do you calculate mass from density and volume?
Are you trying to find a density to mass calculator? Look no further. Our density to mass calculator is just what you need. It is simple and easy to use.
Keep reading if you would like to learn:
What mass is;
The formula used to find mass;
The SI unit of mass;
The difference between mass and density; and
How to calculate mass from density and volume.
🙋 In physics, we define mass as the amount of matter an object contains. The SI unit of mass is the kilogram (kg).
There are several formulas used to calculate mass in physics. Some are:
m = F/a
m = F/g
m = \rho \times v
m = E/c^2
where:
F represents force;
a represents acceleration;
g is the gravitational acceleration;
\rho is density;
E is energy; and
c represents the speed of light.
The formula you choose depends on the information given. For instance, if you know the force and acceleration instead of, say, the density and volume, this is the formula you will use:
m = F/a
🔎 If you would like to learn more about gravity and acceleration, check out our acceleration due to gravity calculator.
The formula used in physics to calculate mass from density and volume is:
m = \rho \times v
where:
m represents mass;
ρ is density; and
v is volume.
So if you need to calculate the mass of an object whose density is 54\ \text{kg}/\text{m}^3 and volume is 80\ \text{m}^3, here is how you will proceed:
Write down the formula:
m = \rho \times v
Substitute the values for density and volume:
m = 54 \times 80
Do the math to obtain
m = 4,\!320\ \text{kg}
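The same calculation as a one-line sketch (hypothetical function name):

```python
def mass_from_density(density, volume):
    """m = rho * V; with density in kg/m^3 and volume in m^3, mass is in kg."""
    return density * volume

mass = mass_from_density(54, 80)  # 4320 kg
```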
What is the difference between mass and density?
Mass is the amount of stuff (molecules) something contains, whereas density measures how tightly packed the mass of a substance is in a given space.
What is the relationship between force, mass and acceleration?
The force, mass, and acceleration of an object are related because they directly impact each other:
The greater the mass of an object, the greater the force needed to accelerate it.
If we were to increase the mass of an object, but the force remained the same, the acceleration would decrease.
If we were to decrease the force but keep the same mass, the acceleration would also decrease.
|
Flashcard Editor | Theo Chu's Docs
The Flashcard Editor is an unofficial editor for Standard Notes, a free, open-source, and end-to-end encrypted notes app. It is currently in development and not ready for use. 😄
You can find the beta demo at demo.flashcardeditor.com.
It supports \LaTeX/\KaTeX, emoji codes, syntax highlighting, inline HTML, table of contents, footnotes, auto-linking, and more.
Markdown support provided by Unified/Remark
\LaTeX/\KaTeX
provided by hosted KaTeX
Emojis provided by emoji codes
Google Code and GitHub Gist flavored Syntax Highlighting provided by highlight.js stylesheets
Paste this into the box:
https://raw.githubusercontent.com/TheodoreChu/flashcard-editor/develop/demo.ext.json
Alternatively, paste this link:
https://notes.theochu.com/p/FV2A4HJFRN
At the top of your note, click Editor, then click Flashcard Editor - Beta.
Type cd flashcard-editor
Create ext.json as shown here with url: "http://localhost:8002". Optionally, create your ext.json as a copy of sample.ext.json.
Install http-server using sudo npm install -g http-server
Start the server at http://localhost:8002 using npm run server
To build the editor, open another command window and run npm run build. For live builds, use npm run watch. You can also run npm run start and open the editor at http://localhost:8080.
GitHub for the source code of the Flashcard Editor
GitHub Issues for reporting issues concerning the Flashcard Editor
Docs for how to use the Flashcard Editor
Contact for how to contact the developer of the Flashcard Editor
Flashcard Editor To do List for the roadmap of the Flashcard Editor
|
A supernova in the Milky Way before 2050 | Metaculus
A supernova in the Milky Way before 2050
Records of astronomical observations of supernovae date back millennia; the most recent supernova in the Milky Way unquestionably observed by the naked eye was SN1604, in 1604 CE. Since the invention of the telescope, tens of thousands of supernovae have been observed, but all of them in other galaxies, leaving a disappointing gap of more than 400 years without an observation in our own galaxy.
The closest and brightest supernova observed in recent times was SN1987A, in the Large Magellanic Cloud, a dwarf satellite galaxy of the Milky Way. It was the first observed in every band of the electromagnetic spectrum and the first detected via neutrinos. Its proximity allowed detailed observations and tests of models of supernova formation.
Betelgeuse kindled speculation that it would go supernova when it started dimming in late 2019. Later studies suggested that occluding dust is the most likely culprit for the dimming and that the star is unlikely to go supernova anytime soon (see the related Metaculus question).
The rate of supernovae per century in the Milky Way Galaxy is not well constrained, being frequently estimated at between 1 and 10 SNe/century (see the lists of estimates in Dragicevich et al., 1999 and Adams et al., 2013); a recent estimate by Adams et al. (2013) is
4.6^{+7.4}_{-2.7}
SNe/century. Most of these may be core-collapse supernovae, happening in the thin disk and potentially obscured in the visible by gas and dust, but still observable in other parts of the spectrum, by gravitational waves, or by neutrinos.
The observation of a supernova in the Milky Way Galaxy with the current multi-message astronomy technology could hugely improve our understanding of supernovae.
Will we observe a supernova in the Milky Way Galaxy before 2050?
This question resolves positively if one reliable media outlet reports about the observation of a supernova in the Milky Way Galaxy before 2050.
This question should retroactively close 24 hours before the resolution criterion is met.
|
Department of Philosophy, Bartin University, Bartin, Turkey.
Abstract: At the end of the 17th century and during the 18th century, after devastating wars with Russia, the Ottomans realized that they had fallen behind the military technology of the Western powers. In order to catch up with their Western rivals, they decided that they had to reform their education system. To that end, they sent several students abroad for education and started to translate Western books. They established Western-style military academies of engineering. They also invited foreign teachers to give instruction in these institutes and consulted them while preparing the curriculum. Throughout the modernization period, the reform (nizam-ı cedid) planners mainly modelled France, especially for teaching the applied disciplines. Since their main aim was to grasp the technology of the West, their interest focused on the applied part of the sciences and mathematics. For that reason, they had a very keen interest in mathematical instruments. The main objective of this article is to examine the mathematical instruments commonly used in the Ottoman Empire. We examine the subject in two periods: the Classical period (the Medieval Islamic school) and the Modernization period (the Western school). Our main focus in this article will be the Modernization period and the instruments used in the Ottoman military academies.
Keywords: Mathematical Instruments, Ottoman Mathematics, Ottoman Military Engineering Academies, Military Engineering Academies, Compendium of Mathematical Sciences, Chief Instructor Ishaq Efendi
\frac{|\text{AE}|}{100}=\frac{7}{21}
|\text{AE}|=x=33.\overline{3}
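The proportion above can be checked with a quick computation (this check is ours; the segment name AE comes from the lines above):

```python
# Solve |AE| / 100 = 7 / 21 for the segment length |AE|.
AE = 100 * 7 / 21
print(round(AE, 4))  # 33.3333, i.e. 33.3 recurring
```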
Cite this paper: Seyhan, I. (2019) Mathematical Instruments Commonly Used among the Ottomans. Advances in Historical Studies, 8, 36-57. doi: 10.4236/ahs.2019.81003.
|
How to Tell if a Function Is Even or Odd: 8 Steps (with Pictures)
1 Testing the Function Algebraically
2 Testing the Function Graphically
One way to classify functions is as either “even,” “odd,” or neither. These terms refer to the repetition or symmetry of the function. The best way to tell is to manipulate the function algebraically. You can also view the function’s graph and look for symmetry. Once you know how to classify functions, you can then predict the appearance of certain combinations of functions.
Testing the Function Algebraically
Review opposite variables. In algebra, the opposite of a variable is written as a negative. This is true whether the variable in the function is
{\displaystyle x}
or anything else. If the variable in the original function already appears as a negative (or a subtraction), then its opposite will be a positive (or addition). The following are examples of some variables and their opposites:[1]
the opposite of {\displaystyle x} is {\displaystyle -x};
the opposite of {\displaystyle q} is {\displaystyle -q}; and
the opposite of {\displaystyle -w} is {\displaystyle w}.
Replace each variable in the function with its opposite. Do not alter the original function other than the sign of the variable. For example:[2]
{\displaystyle f(x)=4x^{2}-7}
{\displaystyle f(-x)=4(-x)^{2}-7}
{\displaystyle g(x)=5x^{5}-2x}
{\displaystyle g(-x)=5(-x)^{5}-2(-x)}
{\displaystyle h(x)=7x^{2}+5x+3}
{\displaystyle h(-x)=7(-x)^{2}+5(-x)+3}
Simplify the new function. At this stage, you are not concerned with solving the function for any particular numerical value. You simply want to simplify the variables to compare the new function, f(-x), with the original function, f(x). Remember the basic rules of exponents which say that a negative base raised to an even power will be positive, while a negative base raised to an odd power will be negative.[3]
{\displaystyle f(-x)=4(-x)^{2}-7}
{\displaystyle f(-x)=4x^{2}-7}
{\displaystyle g(-x)=5(-x)^{5}-2(-x)}
{\displaystyle g(-x)=5(-x^{5})+2x}
{\displaystyle g(-x)=-5x^{5}+2x}
{\displaystyle h(-x)=7(-x)^{2}+5(-x)+3}
{\displaystyle h(-x)=7x^{2}-5x+3}
Compare the two functions. For each example that you are testing, compare the simplified version of f(-x) with the original f(x). Line up the terms with each other for easy comparison, and compare the signs of all terms.[4]
If the two results are the same, then f(x)=f(-x), and the original function is even. An example is:
{\displaystyle f(x)=4x^{2}-7}
{\displaystyle f(-x)=4x^{2}-7}
These two are the same, so the function is even.
If each term in the new version of the function is the opposite of the corresponding term of the original, then f(x)=-f(-x), and the function is odd. For example:
{\displaystyle g(x)=5x^{5}-2x}
{\displaystyle g(-x)=-5x^{5}+2x}
Notice that if you multiply each term of the first function by -1, you will create the second function. Thus, the original function g(x) is odd.
If the new function does not meet either of these two examples, then it is neither even nor odd. For example:
{\displaystyle h(x)=7x^{2}+5x+3}
{\displaystyle h(-x)=7x^{2}-5x+3}
. The first and last terms are the same in each function, but the middle term is an opposite. Since the terms are neither all identical nor all opposites, this function is neither even nor odd.
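The algebraic comparison above can be automated. The sketch below (the function name is ours) evaluates f(x) against f(-x) at a handful of sample points and classifies the function; it is a numerical heuristic, not a proof:

```python
def classify_parity(f, samples=(1, 2, 3, 0.5)):
    """Classify f as 'even', 'odd', or 'neither' by comparing f(x) with f(-x)."""
    even = all(abs(f(x) - f(-x)) < 1e-9 for x in samples)
    odd = all(abs(f(x) + f(-x)) < 1e-9 for x in samples)
    if even:
        return "even"
    if odd:
        return "odd"
    return "neither"

print(classify_parity(lambda x: 4 * x**2 - 7))          # even
print(classify_parity(lambda x: 5 * x**5 - 2 * x))      # odd
print(classify_parity(lambda x: 7 * x**2 + 5 * x + 3))  # neither
```

The three sample functions are the worked examples from the steps above, and the classifier agrees with the hand calculation in each case.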
Testing the Function Graphically
Graph the function. Using graph paper or a graphing calculator, draw the graph of the function. Choose several numerical values for
{\displaystyle x}
and insert them into the function to calculate the resulting
{\displaystyle y}
value. Plot these points on the graph and, after you have plotted several points, connect them to see the graph of the function.[5]
When plotting points, check positive and corresponding negative values for
{\displaystyle x}
. For example, if working with the function
{\displaystyle f(x)=2x^{2}+1}
, plot the following values:
{\displaystyle f(1)=2(1)^{2}+1=2+1=3}
. This gives the point
{\displaystyle (1,3)}
{\displaystyle f(2)=2(2)^{2}+1=2(4)+1=8+1=9}
{\displaystyle (2,9)}
{\displaystyle f(-1)=2(-1)^{2}+1=2+1=3}
{\displaystyle (-1,3)}
{\displaystyle f(-2)=2(-2)^{2}+1=2(4)+1=8+1=9}
{\displaystyle (-2,9)}
Test for symmetry across the y-axis. When looking at a function, symmetry suggests a mirror image. If you see that the part of the graph on the right (positive) side of the y-axis matches the part of the graph on the left (negative) side of the y-axis, then the graph is symmetrical across the y-axis. If a function is symmetrical across the y-axis, then the function is even.[6]
You can test symmetry by selecting individual points. If the y-value for any selected x is the same as the y-value for -x, then the function is even. The points that were chosen above for plotting
{\displaystyle f(x)=2x^{2}+1}
gave the following results:
(1,3) and (-1,3); (2,9) and (-2,9).
The matching y-values for x=1 and x=-1 and for x=2 and x=-2 indicate that this is an even function. For a true test, selecting two points is not enough proof, but it is a good indication.
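The point check for y-axis symmetry can be scripted in a couple of lines; a brief sketch for f(x) = 2x² + 1:

```python
f = lambda x: 2 * x**2 + 1
for x in (1, 2):
    # An even function satisfies f(x) == f(-x) at every x.
    print(x, f(x), f(-x))  # prints 1 3 3, then 2 9 9
```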
Test for origin symmetry. The origin is the central point (0,0). Origin symmetry means that a positive result for a chosen x-value will correspond to a negative result for -x, and vice versa. Odd functions display origin symmetry.[7]
If you select some sample values for x and their opposite corresponding -x values, you should get opposite results. Consider the function
{\displaystyle f(x)=x^{3}+x}
. This function would provide the following points:
{\displaystyle f(1)=1^{3}+1=1+1=2}
. The point is (1,2).
{\displaystyle f(-1)=(-1)^{3}+(-1)=-1-1=-2}
. The point is (-1,-2).
{\displaystyle f(2)=2^{3}+2=8+2=10}
. The point is (2,10).
{\displaystyle f(-2)=(-2)^{3}+(-2)=-8-2=-10}
. The point is (-2,-10).
Thus, f(x)=-f(-x), and you can conclude that the function is odd.
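The same point-sampling check confirms origin symmetry for f(x) = x³ + x (a sketch using the points chosen above):

```python
f = lambda x: x**3 + x
for x in (1, 2):
    # An odd function satisfies f(-x) == -f(x) at every x.
    print(x, f(x), f(-x))  # prints 1 2 -2, then 2 10 -10
```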
Look for no symmetry. The final example is a function that has no symmetry from side to side. If you look at the graph, it will not be a mirror image either across the y-axis or around the origin. Consider the function
{\displaystyle f(x)=x^{2}+2x+1}
Select some values for x and -x, as follows:
{\displaystyle f(1)=1^{2}+2(1)+1=1+2+1=4}
. The point to plot is (1,4).
{\displaystyle f(-1)=(-1)^{2}+2(-1)+1=1-2+1=0}
. The point to plot is (-1,0).
{\displaystyle f(2)=2^{2}+2(2)+1=4+4+1=9}
. The point to plot is (2,9).
{\displaystyle f(-2)=(-2)^{2}+2(-2)+1=4-4+1=1}
. The point to plot is (-2,1).
These should give you enough points already to note that there is no symmetry. The y-values for opposing pairs of x-values are neither the same nor are they opposites. This function is neither even nor odd.
You may recognize that this function,
{\displaystyle f(x)=x^{2}+2x+1}
, can be rewritten as
{\displaystyle f(x)=(x+1)^{2}}
. Written in this form, it appears to be an even function because there is only one exponent, and that is an even number. However, this sample illustrates that you cannot determine whether a function is even or odd when it is written in a parenthetical form. You must expand the function into individual terms and then examine the exponents.
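Sampling a few points makes the caution above concrete: despite the single even exponent in the factored form, the expanded function x² + 2x + 1 fails both symmetry tests.

```python
f = lambda x: (x + 1) ** 2
# The values at x and -x are neither equal nor opposite:
print(f(1), f(-1))  # 4 0
print(f(2), f(-2))  # 9 1
```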
Determine if the function is even, odd, or neither. G(x)=x^10+x^3
It is neither. A quick way to verify this is to evaluate G(1) = 2 and G(-1) = 0: the two values are neither equal nor opposite.
Is f(x)=4 even or odd?
It's even. Since f(x) = 4 for every x, f(-x) also equals 4, so f(x) = f(-x) everywhere, which is the definition of an even function. (A constant function is always even.)
Log (x-3) is an even or odd function?
It is neither. For f(x) = Log(x - 3), we have f(-x) = Log(-x - 3), whose domain (x < -3) does not overlap the domain of f (x > 3). A function can only be even or odd if its domain is symmetric about 0, so Log(x - 3) is neither even nor odd.
If every appearance of the variable in the function has an even exponent, then the function will be even. If all the exponents are odd, then the overall function will be odd. (A constant term counts as an even exponent, since c = cx⁰.)
This article applies only to functions of a single variable, which can be graphed on a two-dimensional coordinate grid.
↑ http://www.purplemath.com/modules/fcnnot3.htm
↑ https://www.mathsisfun.com/algebra/functions-odd-even.html
In order to tell if a function is even or odd, replace each variable in the equation with its opposite. For example, if the variable in the function is x, replace it with -x instead. Simplify the new function as much as possible, then compare that to the original function. If each term in the new version is the opposite of the corresponding term of the original, the function is odd. If they’re the same, then it’s even. If neither of these is true, the function is neither even nor odd. Keep reading to learn how to test the function on a graph!
|
Density to Weight Calculator
Density to weight equation
How to use the density to weight calculator
This density to weight calculator gives you a quick way to go from the density of an object made from a known substance to its weight, as long as you know its volume.
In this brief article, we will show you:
The density to weight equation;
How to calculate weight from density; and
The difference between density and weight.
So how do we calculate weight using density and volume? By rearranging the equation for density, which is defined as the mass per unit volume:
\rho = \frac{m}{V}\ \implies \boxed{m = \rho V}
\rho
– Density of the material;
m
– Weight (or mass); and
V
– Volume of the object.
So if we know the density and volume of an object, we now know how to calculate its weight using density and volume.
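A minimal sketch of m = ρV in code (the function name is ours; keeping the units consistent is the caller's job):

```python
def mass_from_density(density_kg_m3, volume_m3):
    """m = rho * V, with density and volume in consistent SI units."""
    return density_kg_m3 * volume_m3

# The lead artifact from the walkthrough: 11,340 kg/m^3 and 250 cm^3 = 2.5e-4 m^3.
print(mass_from_density(11_340, 250e-6))  # 2.835 kg
```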
It's pretty straightforward to use the density to weight calculator by following these steps:
Input the density of the material, first making sure that the unit is correct. You can change the units by clicking on the unit and selecting from a wide range of density units. For example, you know that the object is made of lead, so you would enter 11,340 kg/m³ as the density.
Enter the volume of space that the object occupies. If you don't know the volume but do know the object's dimensions, the advanced mode of the calculator allows you to enter the object's length, width, and height. Let's say the lead artifact is 250 cm³.
And that's all there is to it. You now know the object's weight, which in our example is 2.835 kg.
Need to do some more density calculations? Then please check out our other density calculators:
Density mass volume calculator;
How do I calculate weight from density?
To calculate weight from the density of an object, given you know its volume, you would:
Multiply the density by the volume, making sure the volume and density units match.
Enjoy your weight result, all without having to use a set of scales.
What's the difference between density and weight?
The density of an object is its mass (loosely, its weight) per unit volume and does not change with the object's size. It is an intrinsic property of the material the object is made from.
Whereas the weight of an object depends on the object's size. The bigger it is, the more it will weigh.
What is the weight of an object if its density is 19.32 g/cm³?
The answer depends on the volume of the object. Let's say its volume is 60 cm³, then:
Multiply 19.32 g/cm³ by 60 cm³.
The result is 1,159.2 grams.
The weight is proportional to the volume; double the volume, you double the weight.
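The arithmetic of this example (19.32 g/cm³ happens to be the density of gold) works out as:

```python
density_g_cm3 = 19.32  # matches the density given in the question
volume_cm3 = 60
weight_g = density_g_cm3 * volume_cm3
print(round(weight_g, 1))                        # 1159.2
# Doubling the volume doubles the weight:
print(round(density_g_cm3 * 2 * volume_cm3, 1))  # 2318.4
```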
Check this capacitor energy calculator to find the values of energy and electric charge stored in a capacitor.
|
Newton’s Method for the Matrix Nonsingular Square Root
Chun-Mei Li, Shu-Qian Shen, "Newton’s Method for the Matrix Nonsingular Square Root", Journal of Applied Mathematics, vol. 2014, Article ID 267042, 7 pages, 2014. https://doi.org/10.1155/2014/267042
Chun-Mei Li1 and Shu-Qian Shen2
1Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
Two new algorithms are proposed to compute the nonsingular square root of a matrix A. Convergence theorems and stability analysis for these new algorithms are given. Numerical results show that these new algorithms are feasible and effective.
Consider the following nonlinear matrix equation: where is a nonsingular complex matrix. A solution of (1) is called a square root of . The matrix square roots have many applications in boundary value problems [1] and the computation of the matrix logarithm [2, 3].
In the last few years there has been a constantly increasing interest in developing the theory and numerical methods for matrix square roots. The existence and uniqueness of the matrix square root can be found in [4–6]. Here, it is worthwhile to point out that any nonsingular matrix has a square root, and the square root is also nonsingular [4]. A number of methods have been proposed for computing the square root of a matrix [5, 7–16]. The computational methods for the matrix square root can generally be separated into two classes. The first class is the so-called direct methods, for example, the Schur algorithm developed by Björck and Hammarling [7]. The second class is the iterative methods. Matrix iterations , where is a polynomial or a rational function, are attractive alternatives for computing square roots [9, 11–13, 15, 17]. A well-known iterative method for computing the matrix square root is Newton’s method. It has nice numerical behavior, for example, quadratic convergence. Newton’s method for solving (1) was proposed in [18]. Later, some simplified Newton’s methods were developed in [11, 19, 20]. Unfortunately, these simplified Newton’s methods have poor numerical stability.
In this paper, we propose two new algorithms to compute the nonsingular square root of a matrix, which have good numerical stability. We first apply the Samanskii technique proposed in [21] to compute the matrix square root. Convergence theorems and stability analysis for these new algorithms are given in Sections 3 and 4. In Section 5, we use some numerical examples to show that these new algorithms are more effective than the known ones in some aspects. Final conclusions are given in Section 6.
2. Two New Algorithms
In order to compute the square root of matrix , a natural approach is to apply Newton’s method to (1), and this can be stated as follows.
Algorithm 1 (see [11, 19] (Newton’s method for (1))). We consider the following.
Step 0. Given and , set .
Step 1. Let . If , stop.
Step 2. Solve for in Sylvester equation:
Step 3. Update , , and go to Step 1.
Applying the standard local convergence theorem to Algorithm 1 [19, P. 148], we deduce that the sequence generated by Algorithm 1 converges quadratically to a square root of if the starting matrix is sufficiently close to .
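Algorithm 1 can be sketched in a few lines. The following is our illustration, not the authors' code: the function names are ours, the starting matrix X₀ = I is one common choice, and the Kronecker-product Sylvester solve (practical only for small matrices) stands in for the specialized solvers of [22].

```python
import numpy as np

def solve_sylvester(X, R):
    """Solve X H + H X = R by Kronecker vectorization (fine for small n)."""
    n = X.shape[0]
    K = np.kron(np.eye(n), X) + np.kron(X.T, np.eye(n))
    return np.linalg.solve(K, R.flatten(order="F")).reshape((n, n), order="F")

def newton_sqrtm(A, tol=1e-12, max_iter=50):
    """Algorithm 1 sketch: X_{k+1} = X_k + H_k, where X_k H + H X_k = A - X_k^2."""
    X = np.eye(A.shape[0])
    for _ in range(max_iter):
        R = A - X @ X                   # residual of X^2 = A
        if np.linalg.norm(R) / np.linalg.norm(A) < tol:
            break
        X = X + solve_sylvester(X, R)   # Newton update
    return X

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = newton_sqrtm(A)
print(np.allclose(X @ X, A))  # True
```

Starting from the identity keeps every iterate a polynomial in A, so the iteration reduces to the scalar Newton square-root recurrence on each eigenvalue and converges quadratically here.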
In this paper, we propose two new algorithms to compute the nonsingular square root of the matrix . Our idea can be stated as follows. If (1) has a nonsingular solution , then we can transform (1) into an equivalent nonlinear matrix equation: Then we apply Newton’s method to (3) for computing the nonsingular square root of .
By the definition of F-differentiable and some simple calculations, we obtain that if the matrix is nonsingular, then the mapping is F-differentiable at and Thus Newton’s method for (3) can be written as Combining (4), the iteration (5) is equivalent to the following.
Algorithm 2 (Newton’s method for (3)). We consider the following.
Step 2. Solve for in generalized Sylvester equation:
Step 3. Update , , and go to Step 1, where .
By using Samanskii technique [21] to Newton’s method (5), we get the following algorithm.
Algorithm 3 (Newton’s method for (3) with Samanskii technique). We consider the following.
Step 0. Given , , and , set .
Step 2. Let .
Step 3. If , go to Step 6.
Remark 4. In this paper, we only consider the case that . If , then Algorithm 3 is Algorithm 2.
Remark 5. Iteration (5) is more suitable for theoretical analysis such as the convergence theorems and stability analysis in Sections 3 and 4, while Algorithms 2 and 3 are more convenient for numerical computation in Section 5. In actual computations, the Sylvester equation may be solved by the algorithms developed in [22].
Although Algorithms 2 and 3 are also Newton’s methods, they are more effective than Algorithm 1. In particular, Algorithm 3 with has a cubic convergence rate.
In this section, we establish local convergence theorems for Algorithms 2 and 3. We begin with some lemmas.
Lemma 6 (see [23, P. 21]). Let be an (nonlinear) operator from a Banach space into itself and let be a solution of . If is Frechet differentiable at with , then the iteration , , converges to , provided that is sufficiently close to .
Lemma 7 (see [17, P. 45]). Let and assume that is invertible with . If and , then is also invertible, and
Lemma 8. If the matrix is nonsingular, then there exist and such that, for all , it holds that where and , are the F-derivative of the mapping defined by (4) at , .
Proof. Let , and we select .
From Lemma 7 it follows that is nonsingular and for each . Then is well defined, and so does , where . According to (4), we have where .
Theorem 9. If (3) has a nonsingular solution and the mapping is invertible, then there exists a close ball , such that, for all , the sequence generated by Algorithm 2 converges at least quadratically to the solution .
Proof. Let . By Taylor formula in Banach space [24, P. 67], we have
Hence, the F-derivative of at is 0. By Lemma 6, we derive that the sequence generated by the iteration (5) converges to . It is also obtained that the sequence generated by Algorithm 2 converges to .
Let , according to and Lemma 7; for large enough , we have By Lemma 8, we have
By making use of Taylor formula once again, for all , we have Hence, Combining (13)–(16), we have which implies that the sequence generated by Algorithm 2 converges at least quadratically to the solution .
Theorem 10. If (1) has a nonsingular solution and the mapping is invertible, then there exists a close ball , such that, for all , the sequence generated by Algorithm 3 converges at least cubically to the solution .
Hence, the F-derivative of at is 0. By Lemma 6, we derive that the sequence generated by iteration (5) converges to . It is also obtained that the sequence generated by Algorithm 3 converges to .
By making use of Taylor formula once again, for all , we have Hence, Combining (19)–(22) and Theorem 9, we have where . Therefore, the sequence generated by Algorithm 3 converges at least cubically to the solution .
In accordance with [2] we define an iteration to be stable in a neighborhood of a solution , if the error matrix satisfies where is a linear operator that has bounded power; that is, there exists a constant such that, for all and arbitrary of unit norm, . This means that a small perturbation introduced in a certain step will not be amplified in the subsequent iterations.
Note that this definition of stability is an asymptotic property and is different from the usual concept of numerical stability, which concerns the global error propagation, aiming to bound the minimum relative error over the computed iterates.
Now we consider the iteration (5) and define the error matrix ; that is, For the sake of simplicity, we perform a first order error analysis; that is, we omit all the terms that are quadratic in the errors. Equality up to second order terms is denoted with the symbol .
Substituting (25) into (5) we get combining (4) we have which implies that Omitting all terms that are quadratic in the errors, we have By using , we have that is, which means that iteration (5) is self-adaptive; that is to say, the error in the th iteration does not propagate to the ()st iteration. When and have no eigenvalue in common, especially, the matrix equation has a unique solution [17, P. 194]. Therefore, under the condition that and have no eigenvalue in common, the iteration (5) has optimal stability; that is, the operator defined in (24) coincides with the null operator.
In this section, we compare our algorithms with the following.
Algorithm 11 (the Denman-Beavers iteration [9]). Consider
Algorithm 12 (the scaled Denman-Beavers iteration [13]). Consider
Algorithm 13 (the Pade iteration [13]). Consider where is a chosen integer:
Algorithm 14 (the scaled Pade iteration [13]). Consider
All tests are performed by using MATLAB 7.1 on a personal computer (Pentium IV/2.4 G), with machine precision . The stopping criterion for these algorithms is the relative residual error: where is the current, say the th, iteration value.
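The relative residual used as the stopping criterion can be computed as below (a sketch; we assume the Frobenius norm):

```python
import numpy as np

def relative_residual(X, A):
    """Relative residual ||X @ X - A|| / ||A||, Frobenius norm assumed."""
    return np.linalg.norm(X @ X - A) / np.linalg.norm(A)

# diag(2, 3) is an exact square root of diag(4, 9), so the residual vanishes:
print(relative_residual(np.diag([2.0, 3.0]), np.diag([4.0, 9.0])))  # 0.0
```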
Example 1. Consider the matrix We use Algorithms 1, 2, and 3 with and Algorithms 11–14 to compute the nonsingular square root of . We list the iteration steps (denoted by “IT”), CPU time (denoted by “CPU”), and the relative residual error (denoted by “ERR”) in Table 1.
Table 1 (IT = iteration steps, CPU = CPU time, ERR = relative residual error):
Algorithm 1 — IT: 7, CPU: 0.0086
Algorithm 11 — IT: 9, CPU: 0.0172
Algorithm 13 with — IT: 11, CPU: 0.0094
Algorithm 13 with — IT: 9, CPU: 0.0101
Example 2. Consider the matrix We use Algorithms 1, 2, and 3 with the starting matrix and Algorithms 11–14 to compute the nonsingular square root of . We list the numerical results in Table 2.
Table 2 (IT = iteration steps, CPU = CPU time):
Algorithm 11 — IT: 8, CPU: 13.2301
Algorithm 14 with — IT: 9, CPU: 10.3571
From Tables 1 and 2, we can see that Algorithm 2 outperforms Algorithms 1 and 11–13 in both iteration count and approximation accuracy, and that Algorithm 3 outperforms Algorithms 1, 2, and 11–14 on the same measures. Therefore, our algorithms are more effective than the known ones in some aspects.
In this paper, we propose two new algorithms for computing the nonsingular square root of a matrix by applying Newton’s method to the nonlinear matrix equation . Convergence theorems and stability analysis for these new algorithms are given. Numerical examples show that our methods are more effective than the known ones in some aspects.
The authors wish to thank the editor and anonymous referees for providing very useful suggestions as well as Professor Xuefeng Duan for his insightful and beneficial discussion and suggestions. This work was supported by National Natural Science Fund of China (nos. 11101100, 11301107, and 11261014) and Guangxi Provincial Natural Science Foundation (nos. 2012GXNSFBA053006, 2013GXNSFBA019009).
B. A. Schmitt, “On algebraic approximation for the matrix exponential in singularly perturbed boundary value problems,” SIAM Journal on Numerical Analysis, vol. 57, pp. 51–66, 2010. View at: Google Scholar
S. H. Cheng, N. J. Higham, C. S. Kenney, and A. J. Laub, “Approximating the logarithm of a matrix to specified accuracy,” SIAM Journal on Matrix Analysis and Applications, vol. 22, no. 4, pp. 1112–1125, 2001. View at: Publisher Site | Google Scholar | MathSciNet
L. Dieci, B. Morini, and A. Papini, “Computational techniques for real logarithms of matrices,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 3, pp. 570–593, 1996. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
G. W. Cross and P. Lancaster, “Square roots of complex matrices,” Linear and Multilinear Algebra, vol. 1, pp. 289–293, 1974. View at: Google Scholar | MathSciNet
W. D. Hoskins and D. J. Walton, “A faster method of computing square roots of a matrix,” IEEE Transactions on Automatic Control, vol. 23, no. 3, pp. 494–495, 1978. View at: Google Scholar
C. R. Johnson and K. Okubo, “Uniqueness of matrix square roots under a numerical range condition,” Linear Algebra and Its Applications, vol. 341, no. 1–3, pp. 195–199, 2002. View at: Publisher Site | Google Scholar | MathSciNet
Å. Björck and S. Hammarling, “A Schur method for the square root of a matrix,” Linear Algebra and Its Applications, vol. 52-53, pp. 127–140, 1983. View at: Publisher Site | Google Scholar | MathSciNet
S. G. Chen and P. Y. Hsieh, “Fast computation of the nth root,” Computers & Mathematics with Applications, vol. 17, no. 10, pp. 1423–1427, 1989. View at: Publisher Site | Google Scholar | MathSciNet
E. D. Denman and A. N. Beavers Jr., “The matrix sign function and computations in systems,” Applied Mathematics and Computation, vol. 2, no. 1, pp. 63–94, 1976. View at: Google Scholar | MathSciNet
L. P. Franca, “An algorithm to compute the square root of a 3×3 positive definite matrix,” Computers & Mathematics with Applications, vol. 18, no. 5, pp. 459–466, 1989. View at: Publisher Site | Google Scholar | MathSciNet
N. J. Higham, “Newton’s method for the matrix square root,” Mathematics of Computation, vol. 46, no. 174, pp. 537–549, 1986. View at: Publisher Site | Google Scholar | MathSciNet
M. A. Hasan, “A power method for computing square roots of complex matrices,” Journal of Mathematical Analysis and Applications, vol. 213, no. 2, pp. 393–405, 1997. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
N. J. Higham, “Stable iterations for the matrix square root,” Numerical Algorithms, vol. 15, no. 2, pp. 227–242, 1997. View at: Publisher Site | Google Scholar | MathSciNet
Z. Liu, Y. Zhang, and R. Ralha, “Computing the square roots of matrices with central symmetry,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 715–726, 2007. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
Y. N. Zhang, Y. W. Yang, B. H. Cai, and D. S. Guo, “Zhang neural network and its application to Newton iteration for matrix square root estimation,” Neural Computing and Applications, vol. 21, no. 3, pp. 453–460, 2012. View at: Publisher Site | Google Scholar
Z. Liu, H. Chen, and H. Cao, “The computation of the principal square roots of centrosymmetric H-matrices,” Applied Mathematics and Computation, vol. 175, no. 1, pp. 319–329, 2006. View at: Publisher Site | Google Scholar | MathSciNet
J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, Philadephia, Pa, USA, 2008. View at: MathSciNet
P. Laasonen, “On the iterative solution of the matrix equation
{AX}^{2}-I=0
,” Mathematical Tables and Other Aids to Computation, vol. 12, pp. 109–116, 1958. View at: Publisher Site | Google Scholar | MathSciNet
J. M. Ortega, Numerical Analysis, Academic Press, New York, NY, USA, 2nd edition, 1972. View at: MathSciNet
B. Meini, “The matrix square root from a new functional perspective: theoretical results and computational issues,” SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 2, pp. 362–376, 2004. View at: Publisher Site | Google Scholar | MathSciNet
V. Samanskii, “On a modification of the Newton’s method,” Ukrainian Mathematical Journal, vol. 19, pp. 133–138, 1967. View at: Google Scholar
J. D. Gardiner, A. J. Laub, J. J. Amato, and C. B. Moler, “Solution of the Sylvester matrix equation
{AXB}^{T}+{CXD}^{T}=E
,” Association for Computing Machinery: Transactions on Mathematical Software, vol. 18, no. 2, pp. 223–231, 1992. View at: Publisher Site | Google Scholar | MathSciNet
M. A. Krasnoselskii, G. M. Vainikko, P. P. Zabreiko, Y. B. Rutitskii, and V. Y. Stetsenko, Approximate Solution of Operator Equations, Wolters-Noordhoff Publishing, Groningen, The Netherlands, 1972.
D. J. Guo, Nonlinear Functional Analysis, Shandong Science Press, Shandong, China, 2009.
Copyright © 2014 Chun-Mei Li and Shu-Qian Shen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
Variables used in formulas - Objectives and metrics | CatBoost
The following common variables are used in formulas of the described metrics:
t_{i}
is the label value for the i-th object (from the input data for training).
a_{i}
is the result of applying the model to the i-th object.
p_{i}
is the predicted success probability
\left(p_{i} = \frac{1}{1 + e^{-a_{i}}}\right)
N
is the total number of objects.
M
is the number of classes.
c_{i}
is the class of the object for binary classification.
c_{i} = \begin{cases} 0, & t_{i} \leqslant border \\ 1, & t_{i} > border \end{cases}
w_{i}
is the weight of the i-th object. It is set in the dataset description in columns with the Weight type (unless otherwise stated) or in the sample_weight parameter of the Python package. The default is 1 for all objects.
P, TP, TN, FP, and FN are abbreviations for Positive, True Positive, True Negative, False Positive, and False Negative.
P, TP, TN, FP, and FN use weights. For example,
TP = \sum\limits_{i=1}^{N} w_{i} [p_{i} > 0.5] c_{i}
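As an illustration, the weighted TP formula above can be evaluated directly. This is a minimal NumPy sketch; the function name and the example values are my own, not part of CatBoost:

```python
import numpy as np

def weighted_tp(p, c, w, threshold=0.5):
    # TP = sum_i w_i * [p_i > threshold] * c_i
    p, c, w = (np.asarray(v, dtype=float) for v in (p, c, w))
    return float(np.sum(w * (p > threshold) * c))
```

For p = [0.9, 0.2, 0.6, 0.7], c = [1, 0, 1, 0], and w = [1, 1, 2, 1], only the first and third objects count, giving a weighted TP of 3.0.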
Pairs
is the array of pairs specified in the Pairs description or in the pairs parameter of the Python package.
N_{Pairs}
is the number of pairs for the Pairwise metrics.
a_{p}
is the value calculated using the resulting model for the winner object for the Pairwise metrics.
a_{n}
is the value calculated using the resulting model for the loser object for the Pairwise metrics.
w_{pn}
is the weight of the (
p
n
) pair for the Pairwise metrics.
Group
is the array of object identifiers from the input dataset with a common GroupId. It is used to calculate the Groupwise metrics.
Groups
is the set of all arrays of identifiers from the input dataset with a common GroupId. It is used to calculate the Groupwise metrics.
|
DevOps Coding Practices - DEV Community
DevOps in the Enterprise (2 Part Series)
Following on from the previous post in this series The Enterprise DevOps Mindset, this article explores some of the more practical things you can do within your organisation (or startup) to enable faster, less risky releases.
Tiny changes are the best
Small changesets can have massive benefits.
Smaller reviews can:
Allow developers to give better focus and feedback to reviews
Reduce the overhead of any one review
Enable faster reviews
Lessen the cost (and risk) of merge conflicts
Get changes into the hands of users (or QAs) earlier for earlier feedback
Can have lower impact, allowing other devs in the same code base to deploy and launch their features without being impacted by your changes
Contribute to a better, more collaborative design
We've all seen it happen: in order to review, developers are forced to context-switch away from their own work, only to find that the pull request is enormous. They take a cursory glance at it and say... LGTM!
How do we combat it? Well, mostly it comes down to:
Taking the time to think and plan a series of changes or PRs that are individually releasable
Ensuring a pull request is focussed on only 1 thing
Writing tests early, so you have the confidence to merge your change
Take a look at this for an interesting rule-of-thumb with regards to review sizing: 150-150 rule
The author suggests that 150 lines of production code and 150 lines of test code serve as a good maximum for what someone should be expected to review, and review well. Obviously this one is subjective, but it's a good number to at least get a feel for what is "too big".
When developing APIs, as with any piece of software, you want to ensure that you can deploy your change at any time. You don't want to force all consumers of your API to be deployed at the same time as the API (even if there's only one consumer). This would create a lock-step deployment.
Coupling your backend and frontend deployments can cause massive headaches. If you discover an issue with either of them you'll need to rollback both. Creating extra work and delaying all the other pieces of work in both systems.
To avoid this we must only make non-breaking API changes.
Only add endpoints, don't remove them
Never add a new mandatory parameter to your requests (they must always start out optional)
Never remove a field from a response
Never rename a field or endpoint (this will have the same effect as removing it).
Ok ok, I know that sounds a bit dramatic. When I say never, I mean that before making these removals you must consider the impact they will have on production systems.
In order to do that, first update all your consumers to handle your change (not using the field, or sending the new field).. then and only then can you make your API change.
One tool I've found useful for this is Swagger Brake, as well as its plugins for Maven and Gradle.
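The "update all consumers first" flow above can be illustrated on the client side. This is a hedged Python sketch; parse_user and the nickname field are hypothetical, not from any real API:

```python
def parse_user(payload):
    """Read an API response while tolerating additive changes.

    'nickname' is a hypothetical field added in a later API version;
    reading it with .get() keeps both old and new responses valid,
    so the client and server never need a lock-step deployment.
    """
    return {
        "id": payload["id"],                  # long-standing required field
        "name": payload["name"],              # long-standing required field
        "nickname": payload.get("nickname"),  # new optional field: None when absent
    }
```

Because the client handles the field's absence, the server can start sending it whenever it likes, and either side can be rolled back independently.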
Keystone Interfaces
One good way to allow your code to be merged and tested without affecting the user is with a Keystone Interface.
Basically, this entails building all the functionality (with all required tests) for a particular feature (over multiple PRs) then, only once everything is ready, we build the UI (or API endpoint) to expose the functionality. This is obviously simplest when the feature can be exposed via a single button but can work well for all kinds of features. In this case, the button becomes the Keystone that is placed at the end and allows us access to the functionality.
This method is very powerful. As Martin Fowler states, it can even be used to hide whole pages of functionality, we can just use a link to that page as the Keystone. It can work just as well in an API where all the functionality is slowly fleshed out, including DB schema and migrations, and then once it's ready and tested, raise a PR to include the change in an existing endpoint. For internal APIs, where you control all consumers you probably don't even need to go to this much effort... Just add the new endpoint.
I've previously felt odd about this particular technique, as it seems strange to have code in your codebase that doesn't get used yet (see Dead Code). However, as long as the feature is still being worked on, this approach can have some serious benefits and avoid some of the complexity and effort of more involved techniques like feature toggles.
Martin Fowler also mentions Dark Launching, which is the idea of calling the code for a new feature without the user being able to tell. If done correctly, this could allow you to test out new functionality and its performance impact in production without overly affecting the user.
You might wish to expand some existing functionality to display more information to the user. You could flesh this extra info out, collecting it and gathering production stats on performance or errors (obviously you need to ensure the errors don't affect the user). Once you're happy, add the code to expose the new functionality to the user.. a bit like a Keystone Interface again.
Once you have considered all other avenues (such as Dark Launching and Keystone Interfaces), in order to avoid launching your feature until you're ready you might need to use a Feature Toggle.
It's often still useful to think about your functionality in terms of Keystones. If your feature allows it, create a Keystone and put your feature toggle around that. Once you have finished building out and testing your feature (with the feature toggled on for you), you can switch on the toggle, exposing your feature-toggled Keystone to the world.
Not all changes allow you to toggle your code at the UI level, but the deeper down in the code you place your toggle, the more intertwined with the rest of the code it will be, and the harder it will be to test and gain confidence in it.
So place your toggles at the highest level of abstraction you can in your code and try to avoid littering several of the same toggles throughout your code.
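To make that placement advice concrete, here's a minimal sketch (all names are illustrative) of a single toggle wrapped around the Keystone rather than scattered through lower layers:

```python
# Hypothetical in-memory toggle store; a real system would read this
# from config or a feature-flag service.
FEATURE_TOGGLES = {"new_report_page": False}

def is_enabled(name):
    return FEATURE_TOGGLES.get(name, False)

def render_navigation():
    # The toggle wraps only the Keystone (the link that exposes the
    # feature); the feature's code underneath ships fully built and tested.
    links = ["home", "profile"]
    if is_enabled("new_report_page"):
        links.append("reports")
    return links
```

Flipping the single toggle exposes the feature everywhere at once, and there is only one conditional to delete once the rollout is done.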
If you can't see it, it didn't happen
Lastly, but possibly the most important point is that you must have the ability to see what is going on in production. That means alerting, logging and monitoring.
It should go without saying that if we want to reduce the time taken to become aware of a production issue then we need to be monitoring production for issues!
In the modern software space, you will be inundated with tools that can get you visibility of your production system.
Let's take a look at some of the types of tools to watch out for.
Alerting for errors
HTTP call logging
There are so many things that you COULD track and as you build out your system you will likely need to know more and more about how your system is really used.
The key thing is that your chosen solution allows you to continually expand out your knowledge of the system as required.
Some tools that I've seen be used with success:
Sentry (I can't recommend this enough!)
Again there are plenty of things you could track, hopefully, some (like performance or resource usage) are already handled by your platform.
As with frontend monitoring, it doesn't matter so much what you choose, just have something in place that can expand as your visibility requirements increase.. and they will.
Here are a few backend monitoring tools that I've seen work well:
Splunk (Really great at alerting, querying and data transformation)
ELK (Elasticsearch & Kibana)
Alerts should raise incidents in your incident management system (with severity determined by the log level).
You can even easily calculate your system uptime as:
\%\space upTime = 100 - \frac{numOfErrors * recoveryTime * 100} {time}
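That formula translates directly into code. A small sketch; the function name and sample numbers are mine:

```python
def uptime_percent(num_errors, recovery_time, window):
    # % upTime = 100 - (numOfErrors * recoveryTime * 100) / time
    # recovery_time and window must use the same unit (e.g. hours).
    return 100 - (num_errors * recovery_time * 100) / window
```

Two incidents taking 0.5 hours each to recover, over a 100-hour window, give 99.0% uptime.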
For a really good breakdown of what we are actually looking for when monitoring production, take a look at:
Monitoring Production Methodologically
Continuous Delivery is all about automating everything in your process. Everything, right up to the point where you make a manual decision to deploy to production.
The more that is automated, the easier your release process becomes, the more releases that you will be willing to do.
Whilst it might be difficult for highly controlled industries (e.g. Government and Finance), at some point we could even automate the decision to deploy to production. We call this continuous deployment. This is just a natural progression of continuous delivery but isn't required to make serious strides towards faster, easier releases.
The middle ground is having a tool with all the codified criteria required to approve a release to production, then just clicking the button yourself. But given everything we've discussed about how delaying increases risk, if you've got a tool to do the work, once you have confidence in that tool, why not just let it release when it's ready?
For some tips from Atlassian on how to automate visibility of what you are sending into production, see: 6 steps to better release management in Jira Software
Despite everything, you may find that the organisation mandates a whole raft of other checks, post-merge. Don't be disheartened! Once you and your team are following these methodologies the next steps are to begin slowly automating (or even better, negotiating) away the additional post-merge steps.
There are many techniques out there to help us release faster.
Prevent your work from blocking other developers
Commit and merge in small bite-sized chunks
Trust your tests (and if you can't, fix them!)
If you can't see what's happening in production, it's probably broken
And most importantly, using these techniques and others, find a way to release to production on a regular basis. Even if you are in a heavily regulated industry, even if you are required to make no customer-facing changes, you can still gain massive benefits from deploying your (dark launched, toggled off, inactive) code to production on a regular basis.
|
\begin{array}{l}\stackrel{˙}{x}\left(t\right)=A\left(t\right)x\left(t\right)+\text{}B\left(t\right)u\left(t\right)+G\left(t\right)w\left(t\right)\text{ (state equation)}\\ y\left(t\right)=C\left(t\right)x\left(t\right)+D\left(t\right)u\left(t\right)+H\left(t\right)w\left(t\right)+v\left(t\right)\text{ (measurement equation)}\\ \end{array}
\begin{array}{l}E\left[w\left(t\right)\right]=E\left[v\left(t\right)\right]=0\\ E\left[w\left(t\right){w}^{T}\left(t\right)\right]=Q\left(t\right)\\ E\left[w\left(t\right){v}^{T}\left(t\right)\right]=N\left(t\right)\\ E\left[v\left(t\right){v}^{T}\left(t\right)\right]=R\left(t\right)\end{array}
\stackrel{^}{x}
P\left(t\right)=E\left[\left(x-\stackrel{^}{x}\right){\left(x-\stackrel{^}{x}\right)}^{T}\right]
\begin{array}{l}L\left(t\right)=\left(P\left(t\right){C}^{T}\left(t\right)+\overline{N}\left(t\right)\right){\overline{R}}^{-1}\left(t\right),\\ \stackrel{˙}{P}\left(t\right)=A\left(t\right)P\left(t\right)+P\left(t\right){A}^{T}\left(t\right)+\overline{Q}\left(t\right)-L\left(t\right)\overline{R}\left(t\right){L}^{T}\left(t\right),\\ \stackrel{˙}{\stackrel{^}{x}}\left(t\right)=A\left(t\right)\stackrel{^}{x}\left(t\right)+B\left(t\right)u\left(t\right)+L\left(t\right)\left(y\left(t\right)-C\left(t\right)\stackrel{^}{x}\left(t\right)-D\left(t\right)u\left(t\right)\right),\end{array}
\begin{array}{l}\overline{Q}\left(t\right)=G\left(t\right)Q\left(t\right){G}^{T}\left(t\right),\\ \overline{R}\left(t\right)=R\left(t\right)+H\left(t\right)N\left(t\right)+{N}^{T}\left(t\right){H}^{T}\left(t\right)+H\left(t\right)Q\left(t\right){H}^{T}\left(t\right),\\ \overline{N}\left(t\right)=G\left(t\right)\left(Q\left(t\right){H}^{T}\left(t\right)+N\left(t\right)\right).\end{array}
\stackrel{^}{x}
\stackrel{^}{y}
P=\underset{t\to \infty }{\mathrm{lim}}E\left[\left(x-\stackrel{^}{x}\right){\left(x-\stackrel{^}{x}\right)}^{T}\right].
\begin{array}{l}x\left[n+1\right]\text{ }=\text{ }A\left[n\right]\text{ }x\left[n\right]\text{ }+\text{ }B\left[n\right]\text{ }u\left[n\right]\text{ }+\text{ }G\left[n\right]\text{ }w\left[n\right],\\ y\left[n\right]\text{ }=\text{ }C\left[n\right]\text{ }x\left[n\right]\text{ }+\text{ }D\left[n\right]\text{ }u\left[n\right]\text{ }+\text{ }H\left[n\right]\text{ }w\left[n\right]\text{ }+\text{ }v\left[n\right],\end{array}
\begin{array}{l}E\left[w\left[n\right]\right]=E\left[v\left[n\right]\right]=0,\\ E\left[w\left[n\right]{w}^{T}\left[n\right]\right]=Q\left[n\right],\\ E\left[v\left[n\right]{v}^{T}\left[n\right]\right]=R\left[n\right],\\ E\left[w\left[n\right]{v}^{T}\left[n\right]\right]=N\left[n\right].\end{array}
\stackrel{^}{x}\left[n+1|n\right]=A\left[n\right]\stackrel{^}{x}\left[n|n-1\right]+B\left[n\right]u\left[n\right]+L\left[n\right]\left(y\left[n\right]-C\left[n\right]\stackrel{^}{x}\left[n|n-1\right]-D\left[n\right]u\left[n\right]\right),
\begin{array}{l}L\left[n\right]=\left(A\left[n\right]P\left[n\right]{C}^{T}\left[n\right]+\overline{N}\left[n\right]\right){\left(C\left[n\right]P\left[n\right]{C}^{T}\left[n\right]+\overline{R}\left[n\right]\right)}^{-1},\\ M\left[n\right]=P\left[n\right]{C}^{T}\left[n\right]{\left(C\left[n\right]P\left[n\right]{C}^{T}\left[n\right]+\overline{R}\left[n\right]\right)}^{-1},\\ Z\left[n\right]=\left(I-M\left[n\right]C\left[n\right]\right)P\left[n\right]{\left(I-M\left[n\right]C\left[n\right]\right)}^{T}+M\left[n\right]\overline{R}\left[n\right]{M}^{T}\left[n\right],\\ P\left[n+1\right]=\left(A\left[n\right]-\overline{N}\left[n\right]{\overline{R}}^{-1}\left[n\right]C\left[n\right]\right)Z\left[n\right]{\left(A\left[n\right]-\overline{N}\left[n\right]{\overline{R}}^{-1}\left[n\right]C\left[n\right]\right)}^{T}+\overline{Q}\left[n\right]-\overline{N}\left[n\right]{\overline{R}}^{-1}\left[n\right]{\overline{N}}^{T}\left[n\right],\end{array}
\begin{array}{l}\overline{Q}\left[n\right]=G\left[n\right]Q\left[n\right]{G}^{T}\left[n\right],\\ \overline{R}\left[n\right]=R\left[n\right]+H\left[n\right]N\left[n\right]+{N}^{T}\left[n\right]{H}^{T}\left[n\right]+H\left[n\right]Q\left[n\right]{H}^{T}\left[n\right],\\ \overline{N}\left[n\right]=G\left[n\right]\left(Q\left[n\right]{H}^{T}\left[n\right]+N\left[n\right]\right),\\ \text{and}\\ P\left[n\right]=E\left[\left(x-\stackrel{^}{x}\left[n|n-1\right]\right){\left(x-\stackrel{^}{x}\left[n|n-1\right]\right)}^{T}\right],\\ Z\left[n\right]=E\left[\left(x-\stackrel{^}{x}\left[n|n\right]\right){\left(x-\stackrel{^}{x}\left[n|n\right]\right)}^{T}\right],\end{array}
\stackrel{^}{x}\left[n|n\right]
\stackrel{^}{x}\left[n|n-1\right]
\begin{array}{l}\stackrel{^}{x}\left[n|n\right]=\stackrel{^}{x}\left[n|n-1\right]+M\left[n\right]\left(y\left[n\right]-C\left[n\right]\stackrel{^}{x}\left[n|n-1\right]-D\left[n\right]u\left[n\right]\right),\\ \stackrel{^}{y}\left[n|n\right]=C\left[n\right]\stackrel{^}{x}\left[n|n\right]+D\left[n\right]u\left[n\right].\end{array}
\stackrel{^}{x}\left[n|n-1\right]
\stackrel{^}{x}\left[n|n-1\right]
\stackrel{^}{y}\left[n|n-1\right]
\stackrel{^}{y}\left[n|n-1\right]=C\left[n\right]\stackrel{^}{x}\left[n|n-1\right]+D\left[n\right]u\left[n\right]
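For intuition, one cycle of the discrete-time filter above can be sketched in NumPy under simplifying assumptions: N = 0, H = 0, G = I (so the barred matrices reduce to Q and R, with the cross-term vanishing), and the simple rather than Joseph covariance form. This is an illustration, not MathWorks code:

```python
import numpy as np

def kf_step(xhat, P, y, u, A, B, C, D, Q, R):
    """One measurement-update/time-update cycle of a discrete Kalman filter.

    Simplified relative to the general block equations: assumes N = 0,
    H = 0, G = I, so the barred quantities reduce to Q and R.
    """
    S = C @ P @ C.T + R                          # innovation covariance
    M = P @ C.T @ np.linalg.inv(S)               # measurement-update gain M[n]
    x_post = xhat + M @ (y - C @ xhat - D @ u)   # x^[n|n]
    Z = (np.eye(P.shape[0]) - M @ C) @ P         # updated covariance Z[n] (simple form)
    x_next = A @ x_post + B @ u                  # x^[n+1|n] (time update)
    P_next = A @ Z @ A.T + Q                     # P[n+1]
    return x_next, P_next, x_post
```

In the scalar case with A = C = 1, B = D = 0, Q = 0, R = 1 and prior P = 1, a measurement y = 1 against a prior estimate of 0 yields the expected posterior estimate of 0.5.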
The Kalman Filter block differs from the kalman (Control System Toolbox) command in the following ways:
Model Source is Dialog: LTI State-Space Variable and Variable is an identified state-space model (idss) with a nonzero K matrix.
\stackrel{^}{x}
\stackrel{^}{y}
\stackrel{˙}{\stackrel{^}{x}}\left(t\right)=A\left(t\right)\stackrel{^}{x}\left(t\right)+B\left(t\right)u\left(t\right)
\stackrel{^}{x}\left[n+1|n\right]=A\left[n\right]\stackrel{^}{x}\left[n|n-1\right]+B\left[n\right]u\left[n\right]
\stackrel{^}{x}
\stackrel{^}{y}
\overline{R}>0
\overline{Q}-\overline{N}{\overline{R}}^{-1}{\overline{N}}^{T}\ge 0
\left(A-\overline{N}{\overline{R}}^{-1}C,\overline{Q}-\overline{N}{\overline{R}}^{-1}{\overline{N}}^{T}\right)
\begin{array}{l}\overline{Q}=GQ{G}^{T}\\ \overline{R}=R+HN+{N}^{T}{H}^{T}+HQ{H}^{T}\\ \overline{N}=G\left(Q{H}^{T}+N\right)\end{array}
kalman (Control System Toolbox) | extendedKalmanFilter | unscentedKalmanFilter | particleFilter
|
Partial Common Ownership - Docs
A system of property rights that blends free market mechanics with common ownership.
The Geo Web uses a novel property rights regime referred to as partial common ownership
^1
to fairly and efficiently administer its digital land market. Partial common ownership is a market-based solution with an important twist on private property rights:
Land holders must publicly declare a "For Sale Price" for each of their land parcels
Land holders pay a license fee to the network based on a percentage of their "For Sale Price"
Any market participant can force transfer of a parcel by paying the current land holder their "For Sale Price"
This system can be conceptualized as creating leases with equity buyout mechanisms.
There are multiple ways to implement these basic rules in a smart contract with different technical, user experience, and market tradeoffs.
The Geo Web currently uses streaming payments for license fees along with a more owner-forgiving continuous auction mechanism.
Here's an overview of how it works in practice:
1) Alice is the first Geo Web user to claim a plot of land in a park. She does so without any upfront fee (excluding blockchain transaction costs).
2) Alice sets her "For Sale Price" at 1 ETHx (ETHx is a wrapped version of ETH that enables streaming payments). Alice is required to authorize a network fee payment stream equivalent to 0.1 ETHx per year (based on a 10% annual rate for this example). The stream increments her payment every second. She must keep the ETHx balance in her wallet above 0 to maintain the stream and her parcel license.
3) Alice is an artist so she anchors a piece of her augmented reality art to the parcel. Now, whenever she or any Geo Web user walks through the park they see her art installation.
4) Six months later, Alice's art installation has become a bit of a local landmark. Bob decides that he wants to buy the parcel and change the art in the park.
5) Bob places a fully collateralized on-chain bid that values the parcel at 1.5 ETHx and preauthorizes his corresponding license fee stream.
6) Alice now has 7 days to accept or reject Bob's bid. Alice cannot change her For Sale Price. No other party can place a bid during this period.
7) If Alice wants to keep control of the parcel, she can reject the bid within the window by paying a penalty fee to the Geo Web treasury equal to 5% of the bid's value (think of this like paying for back taxes because the previous For Sale Price was too low). Bob's collateral is returned in full and his preauthorized stream is cancelled.
8) If Alice wants to accept the bid, she will be paid 1 ETHx (her For Sale price) from Bob's collateral. Her payment stream will be closed, Bob's will be opened, his remaining collateral will be returned to him, and the license for the parcel will transfer to Bob.
9) Bob now has exclusive rights to anchor content within the parcel's boundaries and must maintain a valid license fee payment stream.
The Geo Web implements and enforces these rules via the Digital Land Registry smart contracts. It's an elegant system that enables a unified global digital land market with minimal centralization and overhead.
The system offers numerous benefits for the network's health versus a system of pure private property rights:
The system encourages allocative efficiency. Essentially, it's easier for land to make its way into the hands of those that will put it into higher productive use because they are willing to pay more.
This dynamic increases the aggregate utility of the network and helps drive a virtuous network effects cycle.
The requirement for a public "For Sale Price" eliminates long, inefficient negotiations and monopoly holdouts.
This is a pervasive issue in the World Wide Web's domain name market. This dynamic would severely hamper adoption due to the low substitutability of two land parcels.
An individual holding a parcel of land means all other participants (i.e. "the network") are excluded from doing the same. This system assesses direct cost to the land holder to compensate the rest of the network for their exclusion—making for a more fair system.
Network fees are used to fund protocol and ecosystem development which, in turn, drives value and users.
While the novelty of this property rights system will require effective messaging and additional user education, it is the optimal system for taking the Geo Web from bootstrap to scale.
The fee rate for the Geo Web testnet is set at 10%. Mainnet rate will likely be set slightly lower to encourage early adoption and investment.
A minimum "For Sale Price" will be enforced for mainnet to limit spam land grabs.
The mainnet Geo Web land market will be initiated with a fair launch auction. Claims during this period will require a one-time payment as determined by a Dutch auction (starts at a high value and linearly decreases to 0). This is to limit gas price wars for highly-desirable locations and create a playing field in which those who genuinely value the land the most have a fair opportunity to claim it.
If a licensor's payment stream runs dry, their corresponding parcel(s) will also be placed in a Dutch auction. The auction price will linearly decrease from the previous "For Sale Price" to zero over two weeks. Any user can claim the land during the auction at the current price. The previous land holder receives the proceeds from the auction.
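The two numeric rules above, the annual license fee as a percentage of the For Sale Price and the linear Dutch auction decay over two weeks, can be sketched as follows. Function names and the 10% rate are illustrative, and the real contracts stream fees per second rather than per year:

```python
def annual_network_fee(for_sale_price, rate=0.10):
    # 10% annual rate matches the example above; the actual rate is a
    # network parameter.
    return for_sale_price * rate

def auction_price(for_sale_price, elapsed_days, duration_days=14.0):
    # Price decays linearly from the previous "For Sale Price" to zero
    # over two weeks; any user can claim at the current price.
    if elapsed_days >= duration_days:
        return 0.0
    return for_sale_price * (1.0 - elapsed_days / duration_days)
```

So a lapsed parcel that was priced at 1 ETHx can be claimed for 0.5 ETHx one week into its auction, and for free once the two weeks are up.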
^1
This system is also known as Harberger Taxes (economist Arnold Harberger initially outlined the basic scheme), Self-Assessed Licenses Sold at Auction (SALSA), Common Ownership Self-assessed Tax (COST), and Depreciating Licenses. Most recently it's gained attention through the book Radical Markets by Eric A. Posner and Glen E. Weyl. Vitalik Buterin has helped popularize the idea in blockchain/crypto circles as well.
|
Viscoelastic Lubrication With Phan-Thein-Tanner Fluid (PTT) | J. Tribol. | ASME Digital Collection
F. Talay Akyildiz,
F. Talay Akyildiz
Department of Mathematics, Ondokuz Mayis University, Samsun, 55139, Turkey
Hamid Bellout
Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL 60115
e-mail: bellout@math.niu.edu
Contributed by the Tribology Division for publication in the ASME JOURNAL OF TRIBOLOGY. Manuscript received by the Tribology Division May 6, 2003; revised manuscript received September 11, 2003. Associate Editor: J. A. Tichy.
Akyildiz, F. T., and Bellout, H. (April 19, 2004). "Viscoelastic Lubrication With Phan-Thein-Tanner Fluid (PTT) ." ASME. J. Tribol. April 2004; 126(2): 288–291. https://doi.org/10.1115/1.1651536
We analyze the lubrication flow of a viscoelastic fluid to account for the time-dependent nature of the lubricant. The material obeys the constitutive equation for the Phan-Thein-Tanner (PTT) fluid. An explicit expression for the velocity field is obtained. This expression shows the effect of the Deborah number De = λU/L, where λ is the relaxation time. Using this velocity field, we derive the generalized Reynolds equation for PTT fluids. This equation reduces to the Newtonian case as De → 0. Finally, the effect of the Deborah number on the pressure field is explored numerically in detail and the results are documented graphically.
non-Newtonian fluids, lubrication, fluid dynamics, finite element analysis, Lubrication Flow, Phan-Thein-Tanner Model(PTT), Thin Film, Finite Difference
Flow (Dynamics), Fluids, Lubrication, Pressure
|
MultivariatePowerSeries/ApproximatelyEqual - Maple Help
Determine equality up to some precision
ApproximatelyEqual(p, q, deg)
ApproximatelyEqual(u, v, deg)
p, q - power series generated by this package
u, v - univariate polynomials over power series generated by this package
deg - (optional) the precision up to which to compare the inputs
The command ApproximatelyEqual(p,q) returns true if the two power series are equal up to the minimum of their currently computed precisions, otherwise false.
The command ApproximatelyEqual(p,q,deg) returns true if the two power series are equal up to precision deg, otherwise false. This calling sequence will compute any coefficients needed that haven't been computed so far.
The command ApproximatelyEqual(u,v) returns true if the two univariate polynomials over power series are equal up to the currently computed precision of each coefficient power series, otherwise false.
The command ApproximatelyEqual(u,v,deg) returns true, if the two univariate polynomials over power series are equal by comparing each power series coefficient up to precision deg, otherwise false.
\mathrm{with}\left(\mathrm{MultivariatePowerSeries}\right):
Define a power series.
a≔\mathrm{Inverse}\left(\mathrm{PowerSeries}\left(1+x+y\right)\right)
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{PowⅇrSⅇrⅈⅇs of}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{:}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{\dots }]
Compute its linear truncation with the Truncate command.
\mathrm{Truncate}\left(a,1\right)
\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}
We do the same twice more.
b≔\mathrm{GeometricSeries}\left([x,y]\right)
\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{PowⅇrSⅇrⅈⅇs of}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{:}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{\dots }]
\mathrm{Truncate}\left(b,1\right)
\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}
c≔\mathrm{Inverse}\left(\mathrm{SumOfAllMonomials}\left([x,y]\right)\right)
\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{PowⅇrSⅇrⅈⅇs:}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{\dots }]
\mathrm{Truncate}\left(c,1\right)
\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}
The power series a, b, and c all have the terms up to homogeneous degree 1 computed. As we see above, these are the same for a and c but different for b.
\mathrm{ApproximatelyEqual}\left(a,b\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{ApproximatelyEqual}\left(a,c\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
However, the homogeneous degree 2 parts of a and c are different.
\mathrm{ApproximatelyEqual}\left(a,c,2\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
In order to test this, we needed to compute the terms of homogeneous degree 2, as we can see by calling Truncate again.
\mathrm{Truncate}\left(a\right)
{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}
\mathrm{Truncate}\left(c\right)
\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}
We define two univariate polynomials over power series, both linear in their main variable z. The constant coefficient in z is 0 in both. The coefficient of z is also the same in both, even though this is not immediately obvious from their definitions.
f≔\mathrm{UnivariatePolynomialOverPowerSeries}\left([\mathrm{PowerSeries}\left(0\right),\mathrm{GeometricSeries}\left([x,y]\right)],z\right):
g≔\mathrm{UnivariatePolynomialOverPowerSeries}\left([\mathrm{PowerSeries}\left(0\right),\mathrm{Inverse}\left(\mathrm{PowerSeries}\left(1-x-y\right)\right)],z\right):
\mathrm{ApproximatelyEqual}\left(f,g\right)
true
\mathrm{ApproximatelyEqual}\left(f,g,10\right)
true
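Maple's power-series objects are lazy, but the comparison logic shown above can be sketched eagerly in plain Python (not Maple): represent a bivariate series as a dict mapping exponent pairs (i, j) of x^i y^j to coefficients, and call two series approximately equal to degree d when their truncations to degree d agree. All helper names here are illustrative, not part of any library.

```python
def truncate(series, deg):
    """Keep only the terms of total degree <= deg (drop zero coefficients)."""
    return {e: c for e, c in series.items() if sum(e) <= deg and c != 0}

def approximately_equal(s, t, deg):
    """Two series are 'approximately equal' when their truncations agree."""
    return truncate(s, deg) == truncate(t, deg)

def invert(denom, deg):
    """Power series of 1/denom up to total degree deg; assumes the constant
    term of denom is 1.  Fixed-point iteration: inv <- 1 - (denom - 1)*inv."""
    inv = {(0, 0): 1}
    for _ in range(deg):
        prod = {}
        for (i, j), coeff in denom.items():
            if (i, j) == (0, 0):
                continue  # skip the constant term: we multiply by denom - 1
            for (k, l), b in inv.items():
                key = (i + k, j + l)
                prod[key] = prod.get(key, 0) + coeff * b
        inv = {(0, 0): 1}
        for e, c2 in prod.items():
            inv[e] = inv.get(e, 0) - c2
        inv = truncate(inv, deg)
    return inv

# a = power series of 1/(1 + x + y); terms are {(i, j): coeff of x^i y^j}
a = invert({(0, 0): 1, (1, 0): 1, (0, 1): 1}, 2)
# c = inverse of the sum of all monomials 1/((1-x)(1-y)), i.e. (1-x)(1-y)
c = {(0, 0): 1, (1, 0): -1, (0, 1): -1, (1, 1): 1}

same_deg1 = approximately_equal(a, c, 1)   # True: both truncate to 1 - x - y
same_deg2 = approximately_equal(a, c, 2)   # False: degree-2 parts differ
```

This mirrors the Maple results: a and c agree to homogeneous degree 1 but not degree 2.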
The MultivariatePowerSeries[ApproximatelyEqual] command was introduced in Maple 2021.
|
Aqueous Deposition of Metals on Multiwalled Carbon Nanotubes to be Used as Electrocatalyst for Polymer Exchange Membrane Fuel Cells | J. Electrochem. En. Conv. Stor | ASME Digital Collection
Y. Verde,
Y. Verde
, Kabah km 3, Cancún, Quintana Roo 77500, México
e-mail: yverde@itcancun.edu.mx
A. Keer,
A. Keer
, Avanzados S.C. Miguel de Cervantes 120, Complejo Industrial, Chihuahua, Chihuahua 31109, México
M. Miki-Yoshida,
M. Miki-Yoshida
F. Paraguay-Delgado,
F. Paraguay-Delgado
G. Alonso-Nuñez,
G. Alonso-Nuñez
M. Avalos
Centro de Ciencias de la Materia Condensada-UNAM
, A. Postal 2681, Ensenada, B.C. 22800, México
Verde, Y., Keer, A., Miki-Yoshida, M., Paraguay-Delgado, F., Alonso-Nuñez, G., and Avalos, M. (September 8, 2006). "Aqueous Deposition of Metals on Multiwalled Carbon Nanotubes to be Used as Electrocatalyst for Polymer Exchange Membrane Fuel Cells." ASME. J. Fuel Cell Sci. Technol. May 2007; 4(2): 130–133. https://doi.org/10.1115/1.2714565
The polymer exchange membrane fuel cell (PEMFC) is considered the new power source technology for portable applications. Pt and Pt–alloy nanoparticles supported on carbon black have traditionally been used to make electrodes due to their high activity for H2 oxidation and O2 reduction under PEMFC conditions. Recently, ammonium hexachloroplatinate (IV) ((NH4)2PtCl6) has been shown to be a good precursor of metallic Pt by thermal decomposition. In addition, multi-walled carbon nanotubes (MWCNTs) present convenient physical and chemical properties to be employed as a support for electrocatalysts. MWCNTs were synthesized by spray pyrolysis using a precursor solution of ferrocene dissolved in benzene or toluene. Ammonium hexachloroplatinate, ammonium hexachlororuthenate, and ammonium hexachloropalladate were used as the Pt, Ru, and Pd precursors, respectively. An aqueous solution reaction, followed by a two-stage thermal process, was used to support Pt, Ru, and Pd nanoparticles separately on the MWCNTs. The results suggest that the deposition takes place on anchored sites formed during the aqueous reaction, due to the in situ oxidation of the external wall of the nanotube. Very good dispersion and particle sizes between 3 nm and 12 nm were obtained for each metal. Such characteristics are advantageous for the use of CNT-supported electrocatalysts in PEMFC and direct methanol fuel cell (DMFC) electrodes.
proton exchange membrane fuel cells, catalysts, platinum, nanoparticles, oxidation, carbon nanotubes, pyrolysis, particle size, electrochemical analysis, electrochemical electrodes, ammonium compounds, platinum compounds, ruthenium, palladium, dissolving, organic compounds, coating techniques, electrocatalyst, carbon nanotubes, fuel cell, HRTEM
Carbon nanotubes, Electrodes, Fuel cells, Membranes, Metals, Multi-walled carbon nanotubes, Nanoparticles, Oxidation, Particle size, Polymers, Proton exchange membrane fuel cells, Pyrolysis, Nanotubes, Sprays, Catalysts, Carbon black pigments, Benzene, Platinum alloys
Renewable Energy Systems Based on Hydrogen for Remote Applications
Polymer Electrolyte Membrane Fuel Cells for Communication Applications
PEFC Cathode Alloy Catalysts Prepared by a Novel Process Using a Chelating Agent
Pt∕C Obtained From Carbon With Different Treatments and (NH4)2PtCl6 as a Pt Precursor
Carbon Nanotube Membranes for Electrochemical Energy Storage and Production
Laubernds
Graphite Nanofibers as an Electrode for Fuel Cell Applications
Musameh
Carbon Nanotube/Teflon Composite Electrochemical Sensors and Biosensors
Platinum Deposition on Carbon Nanotubes via Chemical Modification
High Dispersion and Electrocatalytic Properties of Platinum on Well-Aligned Carbon Nanotube Arrays
Proton Exchange Membrane Fuel Cells With Carbon Nanotube Based Electrodes
Deposition and Electrocatalytic Properties of Platinum Nanoparticles on Carbon Nanotubes for Methanol Electrooxidation
Electrostatic Layer by Layer Assembled Carbon Nanotube Multilayer Film and Its Electrocatalytic Activity for O2 Reduction
Aguilar-Elguézabal
Paraguay-Delgado
Miki-Yoshida
Study of Carbon Nanotubes Synthesis by Spray Pyrolysis and Model of Growth
Roman-Martinez
Cazorla-Amoros
Linares-Solano
Salinas-Martinez de Lecea
Metal-Support Interaction in Pt∕C Catalysts. Influence of the Support Surface Chemistry and the Metal Precursor
Sepúlveda-Escribano
Rodríguez-Reinoso
Platinum Catalysts Supported on Carbon Blacks With Different Surface Chemical Properties
|
NXX Headquarters - Tears of Themis Wiki
This article is about the NXX HQ game mechanic. For details about the location and organization, see NXX Investigation Team.
NXX Headquarters is unlocked after Main Story 04-02. Players can gather resources and also level up their headquarters for additional stat gains.
Headquarters Level
In-game explanation of NXX HQ Level.
The NXX Headquarters can be leveled up to upgrade the various features in the headquarters.
Headquarters EXP is gained through upgrading cards. Certain tasks are only available for certain rarities of cards, and may give different amounts of points depending on rarity.
Task | R | SR | SSR
Acquire card | 3 | 8 | 15
Enhance a card to Lv. 40 | 2 | 2 | 3
Enhance a card to Lv. 100 | - | 5 | 5
Evolve a card 1 time(s) | 2 | 2 | 3
Evolve a card 2 time(s) | - | 4 | 5
Upgrade a card 1 time(s) | 1 | 3 | 5
Gaining Headquarters levels will grant additional upgrades to the Resource Requisition, File Room, and Study Room. The Study Room is locked until the player finishes Main Story Episode 4-15.
Level | EXP to Next Level | Unlocks
1 | 30 | Resource Requisition [I] (2000 Stellin and Oracle of Justice II x6); File Room [I] (Level B cases, 4 simultaneous Case Analyses); Study Room [I] (20 Max CL)
2 | 30 | Resource Requisition Upgrade [II] (2400 Stellin and Oracle of Justice II x8)
3 | 30 | File Room Upgrade [II] (Level A cases, 4 simultaneous Case Analyses)
4 | 60 | Study Room Upgrade [II] (30 Max CL)
5 | 60 | Resource Requisition Upgrade [III] (2800 Stellin and Oracle of Justice II x10)
6 | 60 | File Room Upgrade [III] (Level S cases, 4 simultaneous Case Analyses)
7 | 90 | Resource Requisition Upgrade [IV] (3200 Stellin and Oracle of Justice II x12)
8 | 90 | Study Room Upgrade [III] (40 Max CL)
9 | 90 | Resource Requisition Upgrade [V] (3600 Stellin and Oracle of Justice II x14)
10 | 120 | File Room Upgrade [IV] (Level S cases, 5 simultaneous Case Analyses)
11 | 120 | Resource Requisition Upgrade [VI] (4000 Stellin and Oracle of Justice II x16)
12 | 120 | Study Room Upgrade [IV] (50 Max CL)
13 | 150 | File Room Upgrade [V]
14 | 150 | Study Room Upgrade [V] (60 Max CL)
15 | 150 | File Room Upgrade [VI] (Level S cases, 5 simultaneous Case Analyses, further increased chances for higher-tier cases)
16 | 200 | Study Room Upgrade [VI] (70 Max CL)
17 | 200 | Study Room Upgrade [VII] (80 Max CL)
18 | 200 | File Room Upgrade [VII] (Level S cases, 5 simultaneous Case Analyses, further increased chances for higher-tier cases)
19 | 200 | Study Room Upgrade [VIII] (90 Max CL)
20 | - | Study Room Upgrade [IX] (100 Max CL)
NXX-OS
Players can view the System Log, Case Files, and Personnel Files for the NXX Organization here. These include ongoing NXX cases, as well as the player's completed cases from the Main Story.
Resource Requisition
In-game explanation of Resource Requisitions.
To ensure confidentiality of NXX investigations, please use an alias or shell company when applying for funding or resources.
A resource request automatically goes out every 4 hours for Stellin, Oracle of Justice II, or both depending on player choice. The base amount returned on each request increases as HQ level increases. Selecting both returns half the base amount for each of Stellin and Oracle of Justice II.
HQ Level | Requisition Level | Base Stellin | Base Oracle of Justice II
2 | II | 2400 | 8
5 | III | 2800 | 10
7 | IV | 3200 | 12
11 | VI | 4000 | 16
Once a request finishes, the resource request rewards will take up a slot until a player claims it. A player starts off with 3 slots, meaning a player can go 12 hours until all slots are filled and no more resource requests will automatically start.
Additional slots are unlocked at Headquarters Lv. 2, Lv. 7, and Lv. 11, pushing this maximum to 24 hours, increasing rewards, and unlocking additional resource requisition options. Purchasing the Monthly Pass will also unlock an additional slot. A maximum of 7 slots are available at any given point in time.
File Room
In-game explanation of the File Room.
Data from the Big Lab shows a marked increase in the incidence of missing persons and mental illness cases in Stellis over the past three years. Careful analysis of these cases is necessary to discern the root cause.
Players can obtain Research Materials and Impression items through the File Room. The case levels available and the number of available case analysis slots go up as HQ level increases.
HQ Level | File Room Level | Effect
1 | I | Unlock Level B cases. Conduct up to 4 simultaneous Case Analyses.
3 | II | Unlock Level A cases. Conduct up to 4 simultaneous Case Analyses.
6 | III | Unlock Level S cases. Conduct up to 4 simultaneous Case Analyses.
10 | IV | Unlock Level S cases. Conduct up to 5 simultaneous Case Analyses.
13 | V | Unlock Level S cases. Conduct up to 5 simultaneous Case Analyses. Increased chances for higher-tier cases.
15 | VI | Unlock Level S cases. Conduct up to 5 simultaneous Case Analyses. Further increased chances for higher-tier cases.
Players start out with 4 case analysis slots; cases can range from Level S to Level C and take 4-6 hours to complete. The case level determines the quality and quantity of items given. Players can refresh one case analysis at a time, for a total of up to three refreshes per day. The level will remain the same, but the item drop may change. Purchasing the Monthly Pass will unlock an additional case analysis slot.
Level | Research Materials | Impressions | Other Items (1x skill-up item or 2x Impressions) | Cards Required | Time
S | 60 | Impression III | Gilded Poker Cards, Senior Attorney's Badge | Level 40+ cards | 6 hrs
A | 30 | Impression III | | |
B | 20 | Impression II | | |
C | 10 | Impression I | | Level 1+ cards | 4 hrs
Additionally, players must assign Cards for each analysis. The higher level and rarity of the cards, the higher the chance of performing a Deep Analysis. A successful Deep Analysis grants 3x the rewards for that case analysis.
Study Room
Unlocked after Main Story Episode 4-15. Research Materials gained from the File Room can be used to increase Confidence Level (CL). A player's Study Room has an initial cap of 20 CL, which can be increased as Headquarters Level increases.
Confidence Level
In-game explanation of the Study Room.
Leveling up Confidence Level requires Research Materials from the File Room. Every level grants a Confidence Point (CP) and increases the player's base HP Level.
The required EXP to the next level is (previous level's required EXP) + ceiling((CL + 1)/10) × 40. The HP gained is (previous level's HP) + ceiling(CL/10) × 100. For example, at level 21, it is 1160 + ceiling(22/10) × 40 = 1160 + 3 × 40 = 1280 EXP needed to the next level, and 3000 + ceiling(21/10) × 100 = 3000 + 3 × 100 = 3300 HP.
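These formulas can be sketched in a few lines of Python; the helper names are illustrative, and the base values (1160 EXP and 3000 HP at the previous level) come from the level-21 worked example:

```python
import math

def next_level_exp(prev_required, cl):
    # EXP required to go from Confidence Level `cl` to `cl + 1`
    return prev_required + math.ceil((cl + 1) / 10) * 40

def hp_at(prev_hp, cl):
    # HP at Confidence Level `cl`, given the previous level's HP
    return prev_hp + math.ceil(cl / 10) * 100

exp_21 = next_level_exp(1160, 21)   # 1160 + 3*40 = 1280
hp_21 = hp_at(3000, 21)             # 3000 + 3*100 = 3300
```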
Talents
Confidence Points can be used to learn Talents. Leveling up a talent costs 1 CP. Talents are buffs that are helpful during debates, for example boosting a player's Influence (Attack) and Defense.
Talents are arranged in a skill tree and have a minimum CL Lv. required to unlock. Some talents also require a previous entry in the skill tree to be raised before the talent can be raised.
CL Lv. | Talent Effect
1 | Boost the Influence of Empathy cards by 4/7/10%
1 | Boost the Influence of Intuition cards by 4/7/10%
1 | Boost the Influence of Logic cards by 4/7/10%
5 | Boost Defense of Empathy cards by 8/14/20%
5 | Boost Defense of Intuition cards by 8/14/20%
5 | Boost Defense of Logic cards by 8/14/20%
10 | Boost damage dealt to Empathy arguments by 4/7/10%
10 | Boost damage dealt to Intuition arguments by 4/7/10%
10 | Boost damage dealt to Logic arguments by 4/7/10%
10 | Reduces the time of your next logical thinking by 20/35/50% after reorganizing your cards
10 | Reduces the time of your next logical thinking by 8/14/20% after refuting an argument
10 | Grants a 20% chance to reduce the time of your next logical thinking by 16/28/40% after trumping an argument
15 | Reduce damage taken from Empathy argument attacks by 4/7/10%
15 | Reduce damage taken from Intuition argument attacks by 4/7/10%
15 | Reduce damage taken from Logic argument attacks by 4/7/10%
20 | Recover HP by 1/2/3% after reorganizing your cards
20 | After refuting an argument, recover HP equal to 1/2/3% of damage dealt
20 | 5/10/15% chance to recover HP by 1% after trumping an argument
30 | Boost damage dealt to Core arguments by 8/15%
35 | Increase your Influence by 1% for 2 turns after trumping an argument
40 | Recover HP by 10% when HP reaches 0
Logical Thinking Talents
Recommended Talent Order
Order | Talent | Notes
1 | Boost the Influence of <Attribute> cards | Prioritize offensive skills first.
2 | Boost damage dealt to <Attribute> cards | Prioritize offensive skills first.
3 | Reduce the time of your next logical thinking after refuting an argument | Reducing the time of your next logical thinking allows you to attack twice, meaning you take less damage from the enemy. On a lesser note, the talent can also slightly speed up Auto runs by reducing enemy turn animation time. The main draw is the next talent this unlocks.
4 | After refuting an argument, recover HP equal to 1/2/3% of damage dealt | Allows the player to heal after refuting an argument. Another defensive skill, helpful if a player needs to heal up HP during close fights. For stronger enemies, this can gain you extra turns where you otherwise would've been defeated.
5 | Boost Defense of <Attribute> cards; Reduce damage taken from <Attribute> arguments | The order these can be leveled is flexible. Unlike the other logical thinking skills, which may only happen a few times during a fight, defense talents are passive and always take effect without a trigger. When you reach CL 29, though, prioritize #6.
6 | Boost damage dealt to Core arguments | Prioritize this skill by saving a point at CL 29 and then using both points when it unlocks at CL 30.
7 | Everything else | -
Retrieved from "https://tot.wiki/w/index.php?title=NXX_Headquarters&oldid=15153"
|
A Projection-Based Approach for Constructing Piecewise Linear Pareto Front Approximations | J. Mech. Des. | ASME Digital Collection
Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received December 22, 2015; final manuscript received June 7, 2016; published online July 21, 2016. Assoc. Editor: Kazuhiro Saitou.
Kumar Singh, H., Shankar Bhattacharjee, K., and Ray, T. (July 21, 2016). "A Projection-Based Approach for Constructing Piecewise Linear Pareto Front Approximations." ASME. J. Mech. Des. September 2016; 138(9): 091404. https://doi.org/10.1115/1.4033991
Real-life design problems often require simultaneous optimization of multiple conflicting criteria resulting in a set of best trade-off solutions. This best trade-off set of solutions is referred to as Pareto optimal front (POF) in the outcome space. Obtaining the complete POF becomes impractical for problems where evaluation of each solution is computationally expensive. Such problems are commonly encountered in several fields, such as engineering, management, and scheduling. A practical approach in such cases is to construct suitable POF approximations, which can aid visualization, decision-making, and interactive optimization. In this paper, we propose a method to generate piecewise linear Pareto front approximation from a given set of N Pareto optimal outcomes. The approximations are represented using geometrical linear objects known as polytopes, which are formed by triangulating the given M-objective outcomes in a reduced
(M−1)-objective space. The proposed approach is hence referred to as projection-based Pareto interpolation (PROP). The performance of PROP is demonstrated on a number of benchmark problems and practical applications with linear and nonlinear fronts to illustrate its strengths and limitations. While being novel and theoretically interesting, PROP also improves on the computational complexity required in generating such approximations when compared with the existing Pareto interpolation (PAINT) algorithm.
Approximation-based design, Design optimization, Metamodeling, Multiobjective optimization, Optimization
Approximation, Paints, Performance, Design, Optimization
Constructing a Pareto Front Approximation for Decision Making
Introduction to Multiobjective Optimization: Interactive Approaches
An Evidential Reasoning Approach for Multiple-Attribute Decision Making With Uncertainty
Ruzika
Visualizing the Pareto Frontier
, J. Branke, K. Deb, K. Miettinen, R. Slowinski, eds.
Reference Point Approximation Method for the Solution of Bicriterial Nonlinear Optimization Problems
Norm-Based Approximation in Multicriteria Programming
Piecewise Quadratic Approximation of the Non-Dominated Set for Bi-Criteria Programs
Pareto Navigation–Interactive Multiobjective Optimisation and Its Application in Radiotherapy Planning
,” Ph.D. thesis, Department of Mathematics, Technical University of Kaiserslautern, Kaiserslautern, Germany.
Pareto Navigator for Interactive Nonlinear Multiobjective Optimization
An Algorithm for Approximating Convex Pareto Surfaces Based on Dual Techniques
Hybrid Adaptive Methods for Approximating a Nonconvex Multidimensional Pareto Frontier
A Computationally Inexpensive Approach in Multiobjective Heat Exchanger Network Synthesis
2nd International Conference on Applied Operational Research (ICAOR)
PAINT: Pareto Front Interpolation for Nonlinear Multiobjective Optimization
Demonstrating the Applicability of PAINT to Computationally Expensive Real-Life Multiobjective Optimization
, p. 1109.3411.
PAINT–SiCon: Constructing Consistent Parametric Representations of Pareto Sets in Nonconvex Multiobjective Optimization
Approximating Nondominated Sets in Continuous Multiobjective Optimization Problems
IEEE Congress Evol. Comput.
Viennet
Multicriteria Optimization Using a Genetic Algorithm for Determining a Pareto Set
Eyvindson
Towards Constructing a Pareto Front Approximation for Use in Interactive Forest Management Planning
Implementation of DSS Tools into the Forestry Practice: Reviewed Conference Proceedings
, Technical University in Zvolen, Zvolen, Slovakia, pp. 83–91.
Computational-Fluid-Dynamics-Based Design Optimization for Single-Element Rocket Injector
Wastewater Treatment: New Insight Provided by Interactive Multiobjective Optimization
|
Augustin is in line to choose a new locker at school. The locker coordinator has each student reach into a bin and pull out a locker number. There is one locker at the school that all the kids dread! This locker, #831, is supposed to be haunted, and anyone who has used it has had strange things happen to him or her! When it is Augustin's turn to reach into the bin and select a locker number, he is very nervous. He knows that there are 535 lockers left and locker #831 is one of them. What is the probability that Augustin reaches in and pulls out the dreaded locker #831? Should he be worried? Explain.
\frac{\text{Number of haunted lockers}}{\text{Total number of lockers left to be picked}}
\frac{1}{535}\approx0.0019
Should Augustin be worried?:
No, because his probability of choosing the dreaded locker is less than 1%.
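A two-line Python check of the arithmetic confirms the answer:

```python
# One dreaded locker among the 535 remaining lockers
haunted = 1
lockers_left = 535
p = haunted / lockers_left
print(round(p, 4))  # 0.0019
```

Since 0.0019 is well below 0.01, the probability is indeed under 1%.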
|
A New Mistuning Identification Method Based on the Subset of Nominal System Modes Method | J. Eng. Gas Turbines Power | ASME Digital Collection
Christian U. Waldherr,
Christian U. Waldherr
Stuttgart D-70569,
e-mail: waldherr@itsm.uni-stuttgart.de
Patrick Buchwald,
Patrick Buchwald
Waldherr, C. U., Buchwald, P., and Vogt, D. M. (January 16, 2020). "A New Mistuning Identification Method Based on the Subset of Nominal System Modes Method." ASME. J. Eng. Gas Turbines Power. February 2020; 142(2): 021016. https://doi.org/10.1115/1.4045517
The mistuning problem of quasi-periodic structures has been the subject of numerous scientific investigations for more than 50 years. Researchers have developed reduced-order models to reduce the computational costs of mistuning investigations involving finite element models. One question that also has high practical relevance is the identification of mistuning based on modal properties. In this work, a new identification method based on the subset of nominal system modes method (SNM) is presented. In contrast to existing identification methods, where usually the blade stiffness of each sector is scaled by a scalar value,
N
identification parameters are used to adapt the modal blade stiffness of each sector. The input data for the identification procedure consist solely of the mistuned natural frequencies of the investigated mode family as well as of the corresponding mistuned mode shapes in the form of one degree-of-freedom per sector. The reduction basis consists of the tuned mode shapes of the investigated mode family. Furthermore, the proposed identification method allows for the inclusion of centrifugal effects like stress stiffening and spin softening without additional computational effort. From this point of view, the presented method is also appropriate to handle centrifugal effects in reduced-order models using a minimum set of input data compared to existing methods. The power of the new identification method is demonstrated on the example of an axial compressor blisk. Finite element calculations including geometrical mistuning provide the database for the identification procedure. The correct functioning of the identification method including measurement noise is also validated to show the applicability to a case of application where real measurement data are available.
Blades, Compressors, Eigenvalues, Errors, Mode shapes, Noise (Sound), Probability, Resonance, Stiffness, Traveling waves, Vibration, Modal assurance criterion, Stress, Damping, Finite element model, Degrees of freedom, Finite element analysis, Scalars, Density, Excitation
Vibration Characteristics of Mistuned Shrouded Blade Assemblies
Experimental Mistuning Identification in Bladed Disks Using a Component-Mode Based Reduced-Order Model
Sternschüss
On the Reduction of Quasi-Cyclic Disk Models With Variable Rotation Speeds
Kashangaki
T. A. I.
Structural Damage Detection of Space Truss Structures Using Best Achievable Eigenvectors
Structural Damage Detection Using Constrained Eigenstructure Assignment
Mistuning Identification and Model Updating of an Industrial Blisk
An Inverse Approach to Identify Tuned Aerodynamic Damping, System Frequencies, and Mistuning—Part I: Theory Under Rotating Conditions
Proceedings of the 15th International Symposium on Unsteady Aerodynamics
, Aeroacoustics and Aeroelasticity of Turbomachines (ISUAAAT15), Oxford, UK, Sept. 24–27.
Mistuning Identification of Integrally Bladed Disks With Cascaded Optimization and Neural Networks
Identification of Mistuning and Model Updating of an Academic Blisk Based on Geometry and Vibration Measurements
Numerical Methods for Turbomachinery Aeromechanical Predictions
Evaluation of Forced Response Methods on an Embedded Compressor Rotor Blade
Proceedings of the First Global Power and Propulsion Forum
, Zurich, Switzerland, Jan. 16–18, GPPF-2017-183.
Mårtensson
Uncertainty of Forced Response Numerical Predictions of an Industrial Blisk-Comparison With Experiments
An Experimental Investigation of Vibration Localization in Bladed Disks—Part I: Free Response
Proceedings of the First International Modal Analysis Conference and Exhibit
|
How do I find diameter of a cylinder? The formula for diameter of a cylinder
Use this diameter of a cylinder calculator as an easy-to-use tool that calculates the diameter of a cylinder for you! The calculator can also come in handy if you're interested in the volume of a cylinder. If you're curious about the formula for the diameter of a cylinder and how to find the diameter of a cylinder, come along!
To find the diameter of a cylinder, you can utilize the following formula:
d = 2\sqrt{\frac{V}{\pi h}},
where:
d is the diameter of a cylinder;
V is the volume of a cylinder; and
h is the height of a cylinder.
For instance, if the volume of a cylinder is 60 cm³ and the height is 8 cm, the diameter of the cylinder in centimeters would be:
d = 2\sqrt{\frac{60}{\pi \times 8}} \approx 3.09
Looks cumbersome? Don't worry; the diameter of a cylinder calculator will do all the computations for you!
Now that you know how to find the diameter of a cylinder, check out other cylinder related tools, similar to the diameter of a cylinder calculator, that are also convenient and easy to use:
How do I calculate the diameter of a cylinder given the volume and height?
To calculate the diameter of a cylinder given the volume and height:
Multiply the height of a cylinder by pi;
Divide the volume of a cylinder by the number computed in Step 1;
Square root the number computed in Step 2;
Multiply your result so far by 2; and
Ta-da! You have calculated the diameter of a cylinder given the volume and height.
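The four steps translate directly into Python; the variable names are illustrative, using the example values V = 60 cm³ and h = 8 cm:

```python
import math

volume, height = 60, 8          # V = 60 cm³, h = 8 cm
step1 = height * math.pi        # 1. multiply the height by pi
step2 = volume / step1          # 2. divide the volume by that result
step3 = math.sqrt(step2)        # 3. take the square root
diameter = 2 * step3            # 4. multiply by 2
print(round(diameter, 2))       # 3.09
```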
How do I find the volume of a cylinder with diameter and height?
To find the volume of a cylinder with a diameter, you need to:
Divide the diameter of a cylinder by two, and then square the number;
Take the result you computed in step 1, and multiply it by the height of a cylinder and pi; and
Voilà! You have calculated the volume of a cylinder.
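The reverse calculation can be checked the same way in Python, feeding back the diameter from the worked example (about 3.0902 cm); the variable names are illustrative:

```python
import math

diameter, height = 3.0902, 8                # values from the worked example
radius_squared = (diameter / 2) ** 2        # 1. halve the diameter, then square
volume = radius_squared * height * math.pi  # 2. multiply by height and pi
print(round(volume))                        # 60
```

As expected, the volume round-trips back to roughly 60 cm³.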
Exponent calculator helps you find the result of any base raised to a positive or negative exponent.
|