LP Modes
Author: the photonics expert Dr. Rüdiger Paschotta
Definition: linearly polarized modes of optical fibers with radially symmetric index profiles in the approximation of weak guidance
More general term: modes of optical fibers
DOI: 10.61835/3z8
The transverse refractive index profiles of many optical fibers are radially symmetric, i.e., the refractive index depends only on the radial coordinate <$r$> and not on the azimuthal coordinate <$\varphi$>. Also, the index profiles of nearly all fibers (except for photonic crystal fibers) exhibit only a small index contrast, so that the fiber can be assumed to be only weakly guiding. In this situation, the calculation of the fiber modes is greatly simplified: one obtains the linearly polarized LP modes.
In cases with stronger guidance, one needs to distinguish TE and TM modes, where only either the electric or the magnetic field is exactly perpendicular to the fiber axis. There are also hybrid modes of HE and EH type, which have non-zero longitudinal components of both the electric and the magnetic field. That case of not weakly guiding fibers applies, for example, to nanofibers, where a glass/air interface provides the waveguiding.
Calculation of LP Modes
The wave equation for the complex electric field profile <$E(r,\varphi)$> in cylindrical coordinates is:
$$\frac{{{\partial ^2}E}}{{\partial {r^2}}} + \frac{1}{r}\frac{{\partial E}}{{\partial r}} + \frac{1}{{{r^2}}}\frac{{{\partial ^2}E}}{{\partial {\varphi ^2}}} + \frac{{{\partial ^2}E}}{{\partial {z^2}}} + {k^2}E = 0$$
where <$k = 2\pi n / \lambda$> is the wavenumber resulting for the local refractive index <$n$> and the vacuum wavelength <$\lambda$>. In a fiber, that quantity is usually spatially varying – and
under our specific assumptions only dependent on the radial coordinate <$r$>.
Looking for modes of the form <$E(r,\varphi) \exp(-i\beta z)$> with phase constant <$\beta$> (the imaginary part of the complex propagation constant, still to be determined for a given vacuum wavelength <$\lambda$>), we obtain:
$$\frac{{{\partial ^2}E}}{{\partial {r^2}}} + \frac{1}{r}\frac{{\partial E}}{{\partial r}} + \frac{1}{{{r^2}}}\frac{{{\partial ^2}E}}{{\partial {\varphi ^2}}} + ({k^2} - {\beta ^2})E = 0$$
Due to the radial symmetry, we can use the ansatz
$$E(r,\varphi ) = {F_{\ell m}}(r)\;\cos (\ell \varphi )$$
where <$\ell$> needs to be an integer (because otherwise the <$\varphi$>-dependent factor would not be continuous). We can use the same ansatz with a factor <$\sin(\ell \varphi)$> instead of <$\cos(\ell \varphi)$>, or in fact some linear combination of those, or use <$\exp(\pm i \ell \varphi)$>, but the resulting radial equation is in any case
$${F_{\ell m}}''(r) + \frac{{{F_{\ell m}}'(r)}}{r} + \left( {{n^2}(r)\;{k^2} - \frac{{{\ell ^2}}}{{{r^2}}} - {\beta ^2}} \right){F_{\ell m}}(r) = 0$$
For a given wavelength, the radial equation has solutions which converge towards zero for <$r \rightarrow \infty$> only for certain discrete values of <$\beta$>; only such solutions can represent guided modes of the fiber. The <$\beta$> values corresponding to guided modes are denoted <$\beta_{\ell m}$>, where <$\ell$> is the selected azimuthal index (see above) and the index <$m$> starts from 1 (for the highest possible <$\beta$> value) and ranges up to some maximum value, which tends to decrease with increasing <$\ell$>. Once <$\ell$> gets too high, there are no solutions at all.
Unfortunately, there are no analytical solutions for the eigenvalues (even for step-index profiles), but one can employ numerical methods. One can then find all guided modes by starting with <$\ell = 0$>, finding all <$\beta$> values for that case, and repeating the search for increasing <$\ell$> until no further solutions appear.
Calculating LP Modes
With the RP Fiber Power software, a wide range of properties of LP modes can be calculated. There are convenient Power Forms for cases with a given refractive index profile or for germanosilicate fibers with a given doping profile. These forms provide many features directly out of the box, and more sophisticated tasks can be handled with a little script code.
Calculation for Step-index Fibers
For the more specific case of step-index fibers (where the refractive index is constant within the fiber core), analytical solutions for the core and cladding parts of the radial equation can be found. The core part involves a Bessel function <$J_{\ell}(u \: r / r_\textrm{core})$>, and the cladding part a modified Bessel function <$K_{\ell}(w \: r / r_\textrm{core})$>, where
$$u = {r_{{\rm{core}}}}\sqrt {n_{{\rm{core}}}^{\rm{2}}{k^2} - {\beta ^2}} $$
$$w = {r_{{\rm{core}}}}\sqrt {{\beta ^2} - n_{{\rm{cl}}}^{\rm{2}}{k^2}} $$
The prefactors for the core and cladding part must be balanced such that the function is continuous at the core/cladding interface.
One recognizes easily that
$${u^2} + {w^2} = r_{{\rm{core}}}^{\rm{2}}\left( {n_{{\rm{core}}}^{\rm{2}} - n_{{\rm{cl}}}^{\rm{2}}} \right){k^2} = {\left( {k\;{r_{{\rm{core}}}}\;{\rm{NA}}} \right)^2}$$
where NA is the numerical aperture.
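As an illustration of the numerical mode search described above, the following Python sketch (not code from the article; the function names and grid resolution are my own choices) locates all guided LP modes of a step-index fiber from one standard form of the weak-guidance eigenvalue equation, <$u \, J_{\ell+1}(u) \, K_\ell(w) = w \, K_{\ell+1}(w) \, J_\ell(u)$> with <$w = \sqrt{V^2 - u^2}$>:

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

def lp_mode_u_values(ell, V, grid_points=2000):
    """Return the u parameters of the guided LP(ell, m) modes of a step-index
    fiber with V number V, in order of increasing u (i.e., m = 1, 2, ...).

    Roots of f(u) = u J_{ell+1}(u) K_ell(w) - w K_{ell+1}(w) J_ell(u), with
    w = sqrt(V^2 - u^2), are bracketed on a fine grid and refined with
    Brent's method.
    """
    def f(u):
        w = np.sqrt(max(V**2 - u**2, 1e-12))
        return u * jv(ell + 1, u) * kv(ell, w) - w * kv(ell + 1, w) * jv(ell, u)

    us = np.linspace(1e-6, V * (1 - 1e-9), grid_points)
    vals = [f(u) for u in us]
    roots = []
    for a, b, fa, fb in zip(us[:-1], us[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:   # sign change brackets an eigenvalue
            roots.append(brentq(f, a, b))
    return roots

def count_guided_lp_modes(V):
    """Count guided modes, including the two orientations for each ell > 0 mode
    (but not counting the two polarization directions)."""
    total, ell = 0, 0
    while True:
        n = len(lp_mode_u_values(ell, V))
        if n == 0:
            return total
        total += n if ell == 0 else 2 * n
        ell += 1
```

For <$V$> below 2.405 the search finds only LP[01]; around <$V = 5$>, it reproduces the six-mode structure of the example discussed below (two modes with <$\ell = 0$> plus orientation-degenerate pairs for <$\ell = 1$> and <$\ell = 2$>).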
Properties of the LP Modes
Step-index Fibers
All guided modes have <$\beta$> values which lie between the plane-wave wavenumbers of the core and the cladding. Modes with <$\beta$> values close to the lower limit (the cladding wavenumber) have a
small <$w$> parameter, leading to a slow decay of the radial function in the cladding.
One may calculate the effective refractive index of a fiber as its <$\beta$> value divided by the vacuum wavenumber. For guided modes, that effective index lies between the refractive indices of core
and cladding.
The lowest-order mode (LP[01]) has an intensity profile which is similar to that of a Gaussian beam, particularly in cases with not too high V number. Particularly for higher <$m$> values, the radial functions can oscillate in the fiber core, whereas they decay in the cladding. Figure 1 shows the radial functions for an example case. Here, we have two modes with <$\ell = 0$> (LP[01], LP[02]) and one mode each for <$\ell = 1$> and <$\ell = 2$>. Note that for each non-zero <$\ell$> value there are two linearly independent solutions, having a <$\cos(\ell \varphi)$> and <$\sin(\ell \varphi)$> dependence, respectively, or alternatively a dependence on <$\exp(\pm i \ell \varphi)$>. Taking this into account, we have a total of 1 + 1 + 2 + 2 = 6 modes in our example case.
Figure 1: Radial functions of the fiber modes for a step-index fiber.
The higher the V number of the fiber, the more guided modes exist. For <$V$> below 2.405, there is only a single guided mode (apart from the two polarization directions), so that we have a single-mode fiber. For large <$V$>, the number of modes is approximately <$V^2 / 2$> (counting modes of both polarization directions). Figure 2 shows the complex amplitude profiles of all modes of a step-index fiber with a higher V number of 11.4.
Figure 2: Electric field amplitude profiles for all the guided modes of a step-index fiber.
The two colors indicate different signs of the electric field values. This diagram (as all others) has been produced with the software RP Fiber Power.
In this example, the LP[23] and LP[04] modes are relatively close to their cut-off: they would cease to exist for an only slightly longer wavelength. In such a case, the <$w$> parameter becomes quite small, so that the field penetrates more into the cladding. Such modes can be more sensitive to bend losses, for example. However, only for modes with <$\ell = 0$> does the power propagating in the core vanish at the cut-off.
Fibers with Other Refractive Index Profiles
For arbitrary radial index profiles, the guided modes can still be calculated as LP modes, even though their shapes may deviate substantially from those of a step-index fiber. One usually requires a numerical method for finding the radial solutions for guided modes, at least for the core part; a modified Bessel function can still be used for the cladding part, where the refractive index is constant. For the core part, one may start near <$r = 0$>, propagate the field up to the core/cladding interface (using a Runge–Kutta algorithm, for example), and connect it with the modified Bessel function for the cladding part. The mismatch of the derivatives at the interface can be minimized by numerically refining the <$\beta$> value; one needs a numerical strategy for finding all <$\beta$> values where that mismatch vanishes.
The numerical calculations are not entirely trivial due to various technical details. At least if a high computation speed is required, one has to carefully determine the required numerical step
sizes depending on the parameters for each mode. The same applies to the parameters for numerical root finding.
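As a concrete sketch of this shooting approach (illustrative only; the parameter values, tolerances and function names are my assumptions, and a step-index profile is used so that the result can be checked against the analytical Bessel-function solution):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, kv
from scipy.optimize import brentq

def interface_mismatch(beta, ell, k, n_of_r, r_core, n_cl):
    """Integrate the radial equation from near r = 0 out to the core/cladding
    interface and return the difference between the logarithmic derivative
    F'/F found there and that of the cladding solution K_ell(w r / r_core)."""
    w = r_core * np.sqrt(beta**2 - (n_cl * k)**2)

    def rhs(r, y):
        F, dF = y
        d2F = -dF / r - (n_of_r(r)**2 * k**2 - ell**2 / r**2 - beta**2) * F
        return [dF, d2F]

    r0 = 1e-4 * r_core                          # start near r = 0
    y0 = [r0**ell, ell * r0**max(ell - 1, 0)]   # regular solution behaves as r^ell
    sol = solve_ivp(rhs, (r0, r_core), y0, rtol=1e-10, atol=1e-12)
    F, dF = sol.y[0, -1], sol.y[1, -1]
    dK = -0.5 * (kv(ell - 1, w) + kv(ell + 1, w))   # = K_ell'(w)
    return dF / F - (w / r_core) * dK / kv(ell, w)

# Illustrative parameters: a step-index fiber with V ≈ 2 (single-mode regime)
n_core, n_cl, lam = 1.45, 1.44, 1.55e-6
k = 2 * np.pi / lam
r_core = 2.9e-6
n_of_r = lambda r: n_core if r < r_core else n_cl

# Refine beta between the cladding and core plane-wave wavenumbers
beta01 = brentq(interface_mismatch, n_cl * k * (1 + 1e-9), n_core * k * (1 - 1e-9),
                args=(0, k, n_of_r, r_core, n_cl))
n_eff = beta01 / k   # effective index of the LP01 mode
```

For a multimode profile, the same mismatch function would be scanned over the full <$\beta$> range to bracket every root, for each <$\ell$> in turn.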
Of course, the whole method can no longer be applied for index profiles which are not radially symmetric; one then has to resort to two-dimensional numerical methods, which are much more complicated to handle and require substantially more computation time.
Figure 3 shows the calculated mode functions for an example case.
Figure 3: Radial functions of the fiber modes calculated for a case with a smooth refractive index profile, determined from the concentration of GeO[2] in the core of a silica fiber.
Propagation Velocity and Chromatic Dispersion
The phase velocity of a mode is simply the vacuum velocity of light divided by the effective refractive index (see above). The group velocity is the inverse of the derivative of the <$\beta$> value
with respect to the angular frequency. For numerical calculations of the group velocity, one thus needs to calculate a mode for at least two different (closely spaced) wavelengths. In order to take
into account the material dispersion, one needs to use wavelength-dependent refractive indices.
The group velocity dispersion is the second derivative of the <$\beta$> value with respect to the angular frequency. Numerically, one requires the <$\beta$> values for at least three different
wavelengths. Note that for small wavelength spacings one requires a very high accuracy of the calculated <$\beta$> values.
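A minimal sketch of such a finite-difference evaluation, assuming some effective-index model <$n_\textrm{eff}(\lambda)$> is available (e.g. from a mode solver); since equal wavelength spacings give slightly unequal angular-frequency spacings, a parabola is fitted through the three points rather than using plain central differences:

```python
import numpy as np

c = 299792458.0  # vacuum speed of light in m/s

def group_velocity_and_gvd(n_eff, lam0, dlam=1e-12):
    """Estimate the group velocity 1 / (dbeta/domega) and the group velocity
    dispersion d^2(beta)/d(omega)^2 from the phase constants
    beta = n_eff(lam) * omega / c at three closely spaced vacuum wavelengths."""
    lams = np.array([lam0 - dlam, lam0, lam0 + dlam])
    omegas = 2 * np.pi * c / lams
    betas = np.array([n_eff(l) * w / c for l, w in zip(lams, omegas)])
    # Fit a parabola in (omega - omega_center) and differentiate it at the center
    a2, a1, _ = np.polyfit(omegas - omegas[1], betas, 2)
    return 1.0 / a1, 2.0 * a2
```

For a wavelength-independent effective index, this reproduces a group velocity of <$c / n_\textrm{eff}$> and zero dispersion; with a realistic wavelength-dependent <$n_\textrm{eff}(\lambda)$>, it automatically includes both waveguide and material dispersion.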
Orbital Angular Momentum
Mode functions based on an ansatz with <$\exp(\pm i \ell \varphi )$> (see above) are associated with an orbital angular momentum of <$\pm \ell \: h/2\pi$> per photon.
Optimization of Refractive Index Profiles
By optimizing the refractive index profile of a fiber, one can improve a number of important parameters of the LP modes. For example, one may achieve the desired mode sizes and number of modes, but also strongly modify the group velocity and chromatic dispersion. For minimizing mode coupling effects, one may take care that the <$\beta$> values of relevant modes do not get too close. Flexible software for calculating fiber modes can be an essential tool for such optimizations.
Questions and Comments from Users
By changing the <$\ell$> value, we can get different solutions of Maxwell's equations. How can we change the value of <$m$>? Where does the value of <$m$> come from?
The author's answer:
For a given <$\ell$> value, we obtain multiple solutions satisfying the boundary conditions, and the index <$m$> is simply used to enumerate those.
Do LP modes act like meridional or skew rays?
The author's answer:
Modes are not rays. Rays can in principle be regarded as complicated superpositions of modes, although that is not necessarily useful.
However, the point of interest in this context may be whether the light has substantial spatial overlap with the fiber core. The overlap of mode fields with the fiber core can be calculated. A substantial overlap is generally obtained for LP modes with <$\ell = 0$>, particularly for the fundamental mode LP[01]. So in a way those LP modes behave more like meridional rays, while those with larger <$\ell$> are more like skew rays.
How do we calculate the maximum value of <$m$> for each <$\ell$>?
The author's answer:
That generally requires numerical methods. As it is only a one-dimensional differential equation, it is not that difficult to solve. That solving has to be done for many <$\beta$> values, following a
suitable strategy.
|
{"url":"https://www.rp-photonics.com/lp_modes.html","timestamp":"2024-11-11T23:23:11Z","content_type":"text/html","content_length":"35976","record_id":"<urn:uuid:6b41a8da-f8e7-460c-afba-4bc46db78063>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00213.warc.gz"}
|
[Solved] In a local boutique, you intend to buy a | SolutionInn
In a local boutique, you intend to buy a handbag with an original price of $38, a jacket with an original price of $189, and a scarf with an original price of $23. Currently, the store is running a
promotion for 30% off the entire store. In addition, as a store loyalty cardmember, you are entitled to an additional 10% off all sale prices, once sale prices are calculated. The state charges a 5%
sales tax on all purchases. What is the final purchase price of the items including all discounts and sales tax? Round your answer to the nearest cent
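The discounts compound multiplicatively (the 10% applies to the already-reduced sale price, and the 5% tax to the final discounted total), which a few lines of Python make explicit:

```python
def final_price(original_prices, discounts=(0.30, 0.10), tax=0.05):
    """Apply successive percentage discounts to the order total, then add sales tax."""
    total = sum(original_prices)
    for d in discounts:
        total *= 1 - d        # each discount applies to the already-reduced total
    return total * (1 + tax)

price = final_price([38, 189, 23])    # handbag, jacket, scarf: 250 -> 175 -> 157.50 -> 165.375
# rounded to the nearest cent: $165.38
```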
|
{"url":"https://www.solutioninn.com/study-help/questions/in-a-local-boutique-you-intend-to-buy-a-handbag-1220940","timestamp":"2024-11-11T20:58:10Z","content_type":"text/html","content_length":"103558","record_id":"<urn:uuid:d9edc4e3-2ba7-45dd-88c7-47c530512865>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00056.warc.gz"}
|
VSEPR Geometry - MCAT Physical
Example Questions
Example Question #1 : Vsepr Geometry
Which statement best describes VSEPR theory?
Possible Answers:
The repulsion between atoms helps determine the polarity of molecules
Molecular shapes are determined according to which orbitals are energetically accessible for bonding
The repulsion between protons in an atom's nucleus determines its size
The repulsion between electrons helps determine the geometry of covalent molecules
Covalent bonds are formed by overlapping valence electron shells
Correct answer:
The repulsion between electrons helps determine the geometry of covalent molecules
The idea of VSEPR (valence shell electron pair repulsion) theory is that valence electron pairs repel each other, arranging themselves into positions that minimize their repulsions by maximizing the
distance between them. The positions of these electron pairs then determine the overall geometry of the molecule. Molecular geometry is thus determined by the arrangement of electrons and nuclei such
that the electrons are as far from one another as possible, while remaining as close to the positively charged nucleus as possible.
Example Question #2 : Vsepr Geometry
Which of the following compounds has a molecular tetrahedral geometry?
Correct answer:
Of the available answer choices, only
Example Question #2 : Vsepr Geometry
What is the molecular geometry of sulfur hexafluoride?
Correct answer:
Sulfur hexafluoride (SF[6]) is an example of octahedral geometry, as it follows the AX[6]E[0] format: A refers to sulfur, X to fluorine, and E to the lone-pair electrons.
Square planar has an AX[4]E[2] format, while tetrahedral and trigonal bipyramidal follow AX[4] and AX[5] formats, respectively.
Example Question #4 : Vsepr Geometry
Which of the following is not the correct geometric configuration for the given molecule?
Correct answer:
Recall the following relationships between geometry and number of pairs of electrons on the central atom.
2: linear
3: trigonal planar
4: tetrahedral
5: trigonal bipyriamidal
6: octahedral
To visualize the geometry, we need to think of how many electron pairs are on the central atom. Drawing Lewis dot diagrams may be helpful here. None of the answer choices has lone central electron
pairs, with the exception of water, so the number of atoms bound to the central atom is the same as the number of central electron pairs.
The only one that does not match up with the correct geometry is SF[6], which is actually octahedral since it has six central electron pairs. In a water molecule, the central oxygen has six valence
electrons, plus one from each bond with hydrogen, for a total of eight central electrons and four central electron pairs. So, this geometry is a variation on the tetrahedral form (bent), in which two
central electron pairs are not bound.
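The AXE bookkeeping used in these explanations amounts to a lookup table from (bonded atoms, lone pairs) on the central atom to a geometry name; the sketch below covers only the common cases appearing on this page and is illustrative rather than exhaustive:

```python
# VSEPR molecular geometry keyed by (bonded atoms X, lone pairs E) on the central atom
AXE_GEOMETRY = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (6, 0): "octahedral",
    (4, 2): "square planar",
}

def molecular_geometry(bonded_atoms, lone_pairs):
    """Look up the VSEPR molecular geometry for an AX(n)E(m) molecule."""
    return AXE_GEOMETRY[(bonded_atoms, lone_pairs)]

molecular_geometry(6, 0)  # SF6: octahedral
molecular_geometry(2, 2)  # H2O: bent
```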
Example Question #23 : Compounds, Molecules, And Bonds
Which of the following molecules will have the smallest bond angles?
Correct answer:
In order to determine which molecule will have the smallest bond angle(s), make sure to factor in both the number of atoms around the central atom as well any lone pairs on the central atom.
Example Question #3 : Vsepr Geometry
The geometry of a certain molecule with the general formula
Correct answer:
Octahedral geometry always corresponds to the
Example Question #1 : Vsepr Geometry
Correct answer:
The central atom, sulfur, is surrounded by four electron groups (oxygen atoms), two of which are double bonded. Also note that the lone pairs are on the oxygen atoms, not the central atom. Thus the
molecular geometry is tetrahedral.
Example Question #5 : Vsepr Geometry
According to the VSEPR theory, what is the angle between the two lone pairs in
Correct answer:
According to VSEPR theory, the electron pairs will repel each other as much as possible. Therefore, in the octahedral shape, the lone pairs will be on opposite ends of the molecule, or 180° apart.
|
{"url":"https://www.varsitytutors.com/mcat_physical-help/vsepr-geometry","timestamp":"2024-11-05T03:20:11Z","content_type":"application/xhtml+xml","content_length":"162546","record_id":"<urn:uuid:b9e34d9e-b5ed-45f8-8f5c-2c64835564cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00767.warc.gz"}
|
The dimensions - math word problem (83845)
The dimensions
The dimensions of a rectangular piece of paper are 22 cm × 14 cm. It is rolled once across the breadth and once across the length to form right circular cylinders of biggest possible surface area.
Find the difference in volumes of the two cylinders that will be formed.
Correct answer: 616/π cm³ ≈ 196 cm³
Showing 1 comment:
Dr Math
Difference of volumes = π·(11/π)²·14 − π·(7/π)²·22 = 616/π ≈ 196 cm³
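The arithmetic in the comment can be reproduced directly: rolling the sheet turns one side into the circumference (so r = side / (2·π)) and the other side into the height. A short check in Python:

```python
import math

def rolled_cylinder_volume(circumference, height):
    """Volume of the cylinder obtained by rolling a rectangle so that one side
    becomes the circumference (radius = circumference / (2*pi)) and the other
    side becomes the height."""
    r = circumference / (2 * math.pi)
    return math.pi * r**2 * height

v1 = rolled_cylinder_volume(22, 14)   # circumference 22 cm, height 14 cm
v2 = rolled_cylinder_volume(14, 22)   # circumference 14 cm, height 22 cm
difference = abs(v1 - v2)             # 616 / pi, about 196.08 cm^3
```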
|
{"url":"https://www.hackmath.net/en/math-problem/83845","timestamp":"2024-11-12T19:20:40Z","content_type":"text/html","content_length":"78301","record_id":"<urn:uuid:9fbaff97-c74c-4003-92bd-aacd16f1d216>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00351.warc.gz"}
|
Is it correct and consistent to check two vertex points for equality?
I am writing a Typescript game where there are vertexes drawn to a canvas, and some edges coming from them. I want to determine if some of edges from different vertexes intersect. For this, I am
using a generic doLinesIntersect function to determine if two edges intersect.
However, the problem is that it shows all lines as intersecting, because if two edges share a vertex, they compute as always intersecting, but I do not want this. I was able to get it "working" by
checking manually to see if they share a vertex like so:
function doLinesTrulyIntersect(aX1: number, aY1: number, aX2: number, aY2: number, bX1: number, bY1: number, bX2: number, bY2: number): boolean {
const sharesVertexes = (aX1 === bX1 && aY1 === bY1) ||
(aX1 === bX2 && aY1 === bY2) ||
(aX2 === bX1 && aY2 === bY1) ||
(aX2 === bX2 && aY2 === bY2);
return !sharesVertexes && doLinesIntersect(
aX1, aY1, aX2, aY2,
bX1, bY1, bX2, bY2
);
}
Basically, if two edges share a vertex, then they do not intersect.
But, will this always work? The logic is that since if two edges have the same vertex, then (because they're the same point), floating point equality testing should work, right? It's the exact same
bits under the hood, after all.
However, I know that in general testing two floating points for equality is incorrect and inconsistent, but does that apply in this case?
If an endpoint X in one edge and an endpoint Y in another edge are set equal to the same point P, then X equals Y. Floating-point entities are not mystical entities that vary spontaneously, and
testing for equality reports two things are equal if and only if they are equal.
When you see cautions about comparing floating-point numbers for equality, the actual problem is about comparing any kind of approximations, not anything particular to floating-point arithmetic.
Suppose there is some ideal number N and we have two approximations A and B to N. Will A and B be equal? Quite possibly not, because preparing two approximations in different ways will often produce
different results. To estimate the percentage of the US population that has blue eyes, you might go out and sample 1,000 people and come up with some percentage A, and another person might go out and
sample 1,000 other people and come up with some percentage B. Even though both of these are approximations of N, they are quite likely not equal.
The reason this arises as a problem in floating-point arithmetic is that floating point was primarily designed to approximate real-number arithmetic, and it is primarily used for that. Note that it
is floating-point arithmetic that approximates real-number arithmetic. Floating-point numbers are actual real numbers (a subset of them). There is nothing approximate or otherwise wonky about the
numbers: Each floating-point number is exactly one real number. When you set a floating-point variable to a floating-point number, it retains that number. It does not change or become approximate.
It is the process of calculating with floating-point numbers that introduces approximations and rounding errors: Most floating-point operations, including addition and other arithmetic, scientific
functions, and converting to a floating-point format from decimal (including decimal numerals in source code or program input) are operations that may round their results to fit in the floating-point format.
Note that the above is actually true of any numerical format. Calculations with fixed-point numbers also produce rounded results, when the exact mathematical result does not fit in the format. And
calculations with integers are also adjusted to fit in the integer format. 7/3 in integer arithmetic produces 2, an error greater than 1 part in 10, whereas 7./3. in “double precision” floating-point
has an error less than 1 part in a quadrillion. So integer arithmetic can be much worse for rounding errors than floating-point arithmetic. The reason people caution about floating-point rounding and
not integer rounding is due to the predominant ways we use these formats, not due to their intrinsic susceptibility to errors.
Further, once you are working with approximations, all of your results are approximate. You can get very large relative errors by subtracting two nearly equal numbers. People warn about comparison
because it is one operation that novices may misuse. But anytime you are working with approximations, in any format, you should be aware of what errors can happen and how large they can be.
All of the above is to give you context for determining what operations you can rely on. If you set two endpoints to the same point and later compare them, they will compare as equal.
Some things that could go wrong in the problem you describe are:
• You do not set the two endpoints to the same point in the first place.
• Something changes the value(s) of one or both endpoints.
• There is a comparison between something computed and an endpoint, rather than between the two endpoints directly. For example, if we have an edge XY and an edge YZ with common endpoint Y, and
some software calculates where they intersect, those calculations might produce a point Y' that is slightly different from Y.
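The distinction the answer draws, copies versus independently computed approximations, can be demonstrated in a few lines (shown in Python for brevity; JavaScript/TypeScript numbers are the same IEEE-754 doubles, so the behavior is identical):

```python
# Copies of the same floating-point value always compare equal: values do not drift.
p = (0.1, 0.2)    # a vertex shared by two edges
x = p
y = p
assert x == y     # bit-identical copies

# Independently *computed* approximations of the same ideal value may differ.
a = 0.1 + 0.2     # rounding happens in the addition ...
b = 0.3           # ... and differently in the decimal-to-binary conversion
assert a != b     # both approximate 3/10, yet they are not equal
```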
|
{"url":"https://coderapp.vercel.app/answer/78704621","timestamp":"2024-11-06T21:52:02Z","content_type":"text/html","content_length":"105450","record_id":"<urn:uuid:2bd855c0-3a41-49ee-aa54-c1838e6a3295>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00649.warc.gz"}
|
Break-Even Point Formula, Methods to Calculate, Importance
There is no net loss or gain at the break-even point (BEP), but the company is operating at a profit from that point onward.
Assume a company has $1 million in fixed costs and a gross margin of 37%. In this breakeven point example, the company must generate $2.7 million in revenue to cover its fixed and variable costs.
Break-even analysis compares income from sales to the fixed costs of doing business.
Upon selling 500 units, the payment of all fixed costs is complete, and the company will report a net profit or loss of $0. To find the total units required to break even, divide the total fixed
costs by the unit contribution margin. The break-even point is the volume of activity at which a company's total revenue equals the sum of all variable and fixed costs. The hard part of running a business is when customer sales or product demand remains the same while the price of variable costs increases, such as the price of raw materials. When that happens, the break-even point also goes up because of the additional expense.
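Both relations, units from the unit contribution margin and revenue from the gross margin, are one-line formulas. A sketch using the article's own numbers ($1 million of fixed costs and a 37% gross margin); the function names are mine, not a standard API:

```python
def breakeven_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units to sell so that total contribution (price - variable cost) covers fixed costs."""
    return fixed_costs / (price_per_unit - variable_cost_per_unit)

def breakeven_revenue(fixed_costs, gross_margin):
    """Revenue at which gross profit (revenue * gross margin) equals fixed costs."""
    return fixed_costs / gross_margin

breakeven_revenue(1_000_000, 0.37)    # about 2.7 million, as in the example above
breakeven_units(50_000, 15, 10)       # 10,000 units at a $5 contribution margin each
```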
Do you own a business?
The selling price is $15 per pizza, and the monthly sales are 1,500 pizzas. As we can see from the sensitivity table, the company operates at a loss until it begins to sell products in quantities in
excess of 5k. For instance, if the company sells 5.5k products, its net profit is $5k.
Who Calculates BEPs?
Or, if using Excel, the break-even point can be calculated using the “Goal Seek” function. If a company has reached its break-even point, the company is operating at neither a net loss nor a net gain
(i.e. “broken even”). An unprofitable business eventually runs out of cash on hand, and its operations can no longer be sustained (e.g., compensating employees, purchasing inventory, paying office
rent on time).
Consider the following example in which an investor pays a $10 premium for a stock call option, and the strike price is $100. The breakeven point would equal the $10 premium plus the $100 strike
price, or $110. On the other hand, if this were applied to a put option, the breakeven point would be calculated as the $100 strike price minus the $10 premium paid, amounting to $90.
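The option breakeven points in this example follow the same pattern, the strike shifted by the premium paid; a small hypothetical helper:

```python
def option_breakeven(strike, premium, option_type):
    """Breakeven underlying price at expiry for a long option position, ignoring fees."""
    if option_type == "call":
        return strike + premium    # price must rise past the strike by the premium paid
    if option_type == "put":
        return strike - premium    # price must fall below the strike by the premium paid
    raise ValueError("option_type must be 'call' or 'put'")

option_breakeven(100, 10, "call")  # 110
option_breakeven(100, 10, "put")   # 90
```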
If customer demand and sales are higher for the company in a certain period, its variable costs will also move in the same direction and increase (and vice versa). Break-even analysis assumes that
the fixed and variable costs remain constant over time. However, costs may change due to factors such as inflation, changes in technology, and changes in market conditions. It also assumes that there
is a linear relationship between costs and production.
Traders can use break-even analysis to set realistic profit targets, manage risk, and make informed trading decisions. While the breakeven point is a valuable tool for decision-making, it has several
limitations. One major downside is its reliance on the assumption that costs can be neatly divided into fixed and variable categories. For example, semi-variable costs, which have both fixed and
variable components, can complicate the accuracy of the breakeven calculation which then changes the breakeven point in units. The total variable costs will therefore be equal to the variable cost
per unit of $10.00 multiplied by the number of units sold.
1. In corporate accounting, the breakeven point (BEP) is the moment a company’s operations stop being unprofitable and starts to earn a profit.
2. For instance, if the company sells 5.5k products, its net profit is $5k.
3. Break-even analysis, or the comparison of sales to fixed costs, is a tool used by businesses and stock and option traders.
4. As the owner of a small business, you can see that any decision you make about pricing your product, the costs you incur in your business, and sales volume are interrelated.
Barbara is the managerial accountant in charge of a large furniture factory’s production lines and supply chains. She isn’t sure the current year’s couch models are going to turn a profit and what to
measure the number of units they will have to produce and sell in order to cover their expenses and make at $500,000 in profit. Calculating breakeven points can be used when talking about a business
or with traders in the market when they consider recouping losses or some initial outlay. Options traders also use the technique to figure out what price the underlying must reach for a trade to
expire in the money. A breakeven point calculation often also includes the costs of any fees, commissions, taxes, and, in some cases, the effects of inflation. If the breakeven is $176 and the
stock is trading at a market price of $170, for example, the trader has a profit of $6 per share (breakeven of $176 minus the current market price of $170).
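To make the arithmetic concrete, here is a minimal sketch of the breakeven calculation in Python. The $50,000 fixed-cost and $25.00 price figures are illustrative assumptions of my own; only the $10.00 variable cost per unit and the $500,000 profit target come from the text above.

```python
def breakeven_units(fixed_costs, price_per_unit, variable_cost_per_unit, target_profit=0.0):
    """Units that must be sold to cover fixed costs plus an optional profit target."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price per unit must exceed variable cost per unit")
    return (fixed_costs + target_profit) / contribution_margin

# Break even: $50,000 fixed costs / ($25.00 - $10.00) contribution margin
units = breakeven_units(50_000, 25.00, 10.00)                        # ~3,333 units
# Units needed to also earn Barbara's $500,000 profit target:
units_with_profit = breakeven_units(50_000, 25.00, 10.00, 500_000)   # ~36,667 units
```

Note that the profit target is simply folded into the numerator: every unit above the plain breakeven contributes its full margin to profit.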
Algebra practice workbook with solutions
Author Message
Roun Toke Posted: Wednesday 27th of Dec 20:33
Hi dudes, I’m really stuck on algebra practice workbook with solutions and would sure appreciate help to get me started with rational equations, algebra formulas and adding
exponents. My test is due soon. I have even thought of hiring an algebra tutor, but they are dear. So any help would be greatly valued.
IlbendF Posted: Thursday 28th of Dec 20:30
Being a teacher, this is a comment I usually hear from students. Algebra practice workbook with solutions is not one of the most liked topics amongst students. I never
encourage my students to get ready-made answers from the internet; however, I do advise them to use Algebrator. I have developed a liking for this tool over time. It helps
students learn math in an easy-to-understand way.
Vild Posted: Friday 29th of Dec 18:27
I might be able to help if you can send more details about your problems. Alternatively you may also check out Algebrator which is a great piece of software that helps to solve
math problems. It explains everything systematically and makes the topics seem very easy. I must say that it is indeed worth every single penny.
Hiinidam Posted: Saturday 30th of Dec 14:19
I remember having problems with lcf, complex fractions and equivalent fractions. Algebrator is a really great piece of math software. I have used it through several math classes
- Basic Math, Remedial Algebra and Algebra 2. I would simply type in the problem from a workbook and, by clicking on Solve, a step-by-step solution would appear. The program is
highly recommended.
Is Life Insurance a Good Asset? - Insurance Network for Fiduciary Advisors
Many arguments have been presented regarding the inclusion of life insurance as an asset class in a long-term investment portfolio. However, none have been more convincingly expressed than those by
Christian Kaplan, CFA, from Equitable. This article is inspired by Mr. Kaplan’s thought leadership and while his detailed explanations can get quite complicated, we have attempted to make this as
simple as possible.
The Real Question
As insurance advisors, we run hundreds of life insurance illustrations every week, and we are frequently asked: “What is the policy Internal Rate of Return?” – meaning, the return of the amount
invested in premiums (negative cash flows) to the eventual death benefit received (positive cash flows).
Whether that IRR makes life insurance a “good investment” depends on two questions:
1. A “good investment” compared to what; and
2. When is the insured expected to die?
The Internal Rate of Return page of a life insurance illustration shows the IRR at each possible age of death, with the life expectancy (age 84 in our example) marked.
The answer to the question “Is life insurance a good investment?” is “maybe.” In the above example, the IRR on net death benefit is 5.85%. This is based on annual premiums of $9,344 on a Male age 50
at Standard Plus Non-Tobacco rates and a death benefit of $1,000,000 received at age 84, which is the calculated Life Expectancy using the Society of Actuaries Valuation Basic Table (VBT) from 2008.
The problem with this answer is that “Life Expectancy” (the average period that a person may expect to live) simply means that 50% of the people will die before age 84 and 50% of the people will die
after 84. The reason for saying “maybe” is obvious. The insured could die sooner or later and when death occurs will change the cash flows and, by consequence, the Internal Rate of Return.
Analysis to Answer the Question
To have a better sense of the true value of life insurance as an asset class, Mr. Kaplan argues that one must dive into some complicated statistical analysis, which I will spare you in this article.
In summary, however, one must calculate the “Expected Rate of Return,” which considers the likelihood of the insured passing away before or after life expectancy. To do that, a mathematical
calculation needs to occur around the probability of mortality and its impact on the standard deviation of returns. For a normal distribution, 68.2% of all data points lie within plus or minus one
standard deviation of the mean.
If you apply this logic to a mortality curve (life expectancy), this means that 68.2% of people will die (in our example) between the ages of 74 and 94. Further math determines that the arithmetic
mean of the returns during that period is 6.08%, with a standard deviation of 1.89%. Therefore, we can comfortably conclude that there is a high probability, greater than two-thirds,
that the returns will be between 4.19% and 7.97%.
Please note this analysis does not factor in policy return variations (our example uses an IUL run at 5%), carrier default risk, or changes in the carrier's policy costs. With that said, with a
guarantee provision in the contract and the use of a highly rated carrier, much of this risk is reduced dramatically. So now we return to the original question, “is life
insurance a good investment?” and again we ask, “compared to what?”
Uncorrelated to Other Capital Markets
Life insurance proceeds are completely uncorrelated to other capital markets, 100% liquid for face value, and paid in cash at the time of death. Furthermore, life insurance proceeds are income tax
free, which means that at a 40% income tax rate, the tax-equivalent return expectation – 68.2% of the time in our example – would be between 6.98% and 13.28%. Given that, this seems like a very
attractive asset class.
Sharpe Ratio & Life Insurance
Portfolio managers, however, would prefer to get into a discussion of expected portfolio returns, risk free rates and portfolio standard deviations to calculate a Sharpe Ratio and then compare life
insurance to other asset classes such as stocks and bonds or a portfolio of a mix of stocks and bonds. The Sharpe Ratio is a measure of the risk-adjusted return and has become the industry standard
for this calculation. The Sharpe Ratio, developed by Nobel laureate William F. Sharpe, is the average return earned in excess of the risk-free rate per unit of volatility or total risk. The higher
the Sharpe Ratio, the more attractive the risk-adjusted return.
So, let’s assume we are now speaking with portfolio managers and talk about the Sharpe Ratio and life insurance. If we assume that the Expected Rate of Return (after taxes) at Life Expectancy is
5.85% (as per the illustration provided) and that the Risk-Free Rate as measured by the 10-Year Treasury is 2.4%, using the standard deviation previously calculated (1.89%), the Sharpe Ratio of Life
Insurance is 1.82. By comparison, the proxy for the stock market is the SPY (SPDR S&P 500 ETF) which has a Sharpe Ratio of .51 and the proxy for the bond market is the LQD (iShares IBoxx $ Inv Grade
Corp Bonds) which has a Sharpe Ratio of .68. Now let’s bring this statistics and finance class to closure. What does all this mean? How is the Sharpe Ratio affected in scenarios that do and do not
include life insurance? Consider a client with the following two asset allocation options:
1. A portfolio of 70% stocks and 30% bonds; and
2. A portfolio of 50% stocks, 25% bonds and 25% permanent life insurance.
The estimated Sharpe Ratio for Portfolio 1 is .56 versus Portfolio 2 which is .88. Portfolio 2 has a markedly improved Sharpe Ratio – 57% higher. By including life insurance as an asset class in the
portfolio, the investment manager NOW has a means to improve risk-adjusted returns!
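The bands and ratios quoted above can be reproduced with a few lines of Python. All inputs (6.08% mean return, 1.89% standard deviation, 5.85% expected return, 2.4% risk-free rate, 40% tax rate) come from the article; the function names are my own.

```python
def one_sigma_band(mean, sd):
    """Range covering roughly 68.2% of outcomes for a normal distribution."""
    return mean - sd, mean + sd

def tax_equivalent(rate, tax_rate):
    """Pre-tax yield a taxable asset would need to match a tax-free return."""
    return rate / (1.0 - tax_rate)

def sharpe_ratio(expected_return, risk_free, sd):
    """Excess return over the risk-free rate per unit of volatility."""
    return (expected_return - risk_free) / sd

low, high = one_sigma_band(6.08, 1.89)          # 4.19% .. 7.97% per year
low_te = tax_equivalent(low, 0.40)              # ~6.98%
high_te = tax_equivalent(high, 0.40)            # ~13.28%
sharpe = sharpe_ratio(5.85, 2.4, 1.89)          # ~1.8, versus ~0.51 for SPY
```

The tax-equivalent step is just division by one minus the tax rate, which is why the 40% bracket pushes the 4.19%–7.97% band up to roughly 6.98%–13.28%.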
For the client interested in managing an investment portfolio for wealth transfer purposes and with an appropriate time horizon, life insurance should have a seat at the portfolio management table as
an asset class.
Universal Laws
Over time, people have discovered many important laws about the universe we live in. Most of these laws remain scattered and are never brought together in a way that lets people really
and truly benefit from the BIGGER PICTURE. I have decided to try to bring enlightenment to the broader world by doing so. Read the interesting list of laws below and perhaps add your own in the comments.
Murphy’s Law: If there are two or more ways to do something, and one of those ways can result in a catastrophe, then someone will do it. (Edward A Murphy)
Ellison’s Law: Once the business data have been centralized and integrated, the value of the database is greater than the sum of the preexisting parts. (Larry Ellison)
Kurzweil’s Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes). (Ray Kurzweil)
Rule of 1950: The probability that automated decisions systems will be adopted is approximately one divided by one plus the number of individuals involved in the approval process who were born in
1950 or before squared. (Frank Demmler)
Hofstadter’s Law: It always takes longer than you think, even when you take Hofstadter’s Law into account. (Douglas Hofstadter)
Finagle’s Law: Anything that can go wrong, will. (?Larry Niven)
Spector’s Law: The time it takes your favorite application to complete a given task doubles with each new revision. (Lincoln Spector)
Church-Turing Thesis: Every function which would naturally be regarded as computable can be computed by the universal Turing machine.
Nathan’s First Law: Software is a gas; it expands to fill its container. (Nathan Myhrvold)
Amdahl’s Law: The speed-up achievable on a parallel computer can be significantly limited by the existence of a small fraction of inherently sequential code which cannot be parallelised. (Gene Amdahl)
Tesler’s Law of Conservation of Complexity: You cannot reduce the complexity of a given task beyond a certain point. Once you’ve reached that point, you can only shift the burden around. (Larry Tesler)
Hoare’s Law: Inside every large problem is a small problem struggling to get out. (Charles Hoare)
Moore’s Law: Transistor die sizes are cut in half every 24 months. Therefore, both the number of transistors on a chip and the speed of each transistor double every 18 (or 12 or 24) months. (Gordon Moore)
Ninety-ninety Law: The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time. (Tom Cargill)
Pesticide Paradox: Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. (Boris Beizer)
Conway’s Law: If you have four groups working on a compiler, you’ll get a 4-pass compiler. (Melvin Conway)
Fitts’s Law: The movement time required for tapping operations is a linear function of the log of the ratio of the distance to the target divided by width of the target. (Paul Fitts)
Cope’s Law: There is a general tendency toward size increase in evolution. (Edward Drinker Cope)
Kerckhoffs’s Principle: Security resides solely in the key. (Auguste Kerckhoffs)
Pareto Principle: 20% of the people own 80% of the country’s assets. (Corollary: 20% of the effort generates 80% of the results.) (Vilfredo Pareto)
Augustine’s Second Law of Socioscience: For every scientific (or engineering) action, there is an equal and opposite social reaction. (Norman Augustine)
Law of the Conservation of Catastrophe: The solutions to one crisis pave the way for some equal or greater future disaster. (William McNeill)
Red Queen Principle: For an evolutionary system, continuing development is needed just in order to maintain its fitness relative to the system it is co-evolving with. (Leigh van Valen)
Heisenbug Uncertainty Principle: Most production software bugs are soft: they go away when you look at them. (Jim Gray)
Ellison’s Law: The userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work. (Carl Ellison)
Joy’s Law: Computing power of the fastest microprocessors, measured in MIPS, increases exponentially in time. (Bill Joy)
Godwin’s Law: As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one. (Mike Godwin)
Lloyd’s Hypothesis: Everything that’s worth understanding about a complex system, can be understood in terms of how it processes information. (Seth Lloyd)
Law of False Alerts: As the rate of erroneous alerts increases, operator reliance, or belief, in subsequent warnings decreases. (George Spafford)
Hick’s Law: The time to choose between a number of alternative targets is a function of the number of targets and is related logarithmically. (W E Hick)
Gilder’s Law: Bandwidth grows at least three times faster than computer power. (George Gilder)
Zawinski’s Law: Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can. (Jamie Zawinski)
Parkinson’s Law: Work expands so as to fill the time available for its completion. (C Northcote Parkinson)
Dilbert Principle: The most ineffective workers are systematically moved to the place where they can do the least damage: management. (Scott Adams)
Clarke’s First Law: When a distinguished but elderly scientist states that something is possible he is almost certainly right. When he states that something is impossible, he is very probably wrong.
(Arthur C Clarke)
Grosch’s Law: The cost of computing systems increases as the square root of the computational power of the systems. (Herbert Grosch)
Sturgeon’s Law: Ninety percent of everything is crap. (Theodore Sturgeon)
Brooks’ Law: Adding manpower to a late software project makes it later. (Frederick P Brooks Jr)
Flon’s axiom: There does not now, nor will there ever, exist a programming language in which it is the least bit hard to write bad programs. (Lawrence Flon)
Metcalfe’s Law: The value of a network grows as the square of the number of its users. (Robert Metcalfe)
Rock’s Law: The cost of semiconductor fabrication equipment doubles every four years. (Arthur Rock)
Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic. (Arthur C Clarke)
Fisher’s Fundamental Theorem: The more highly adapted an organism becomes, the less adaptable it is to any new change. (R A Fisher)
Hartree’s Law: Whatever the state of a project, the time a project-leader will estimate for completition is constant. (Douglas Hartree)
Osborn’s Law: Variables won’t; constants aren’t. (Don Osborn)
Wirth’s Law: Software gets slower faster than hardware gets faster. (Nicklaus Wirth)
Weibull’s Power Law: The logarithm of failure rates increases linearly with the logarithm of age. (Waloddi Weibull)
Tesler’s Theorem: Artificial Intelligence is whatever hasn’t been done yet. (Larry Tesler)
Occam’s Razor: The explanation requiring the fewest assumptions is most likely to be correct. (William of Occam)
Deutsch’s Seven Fallacies of Distributed Computing: Reliable delivery; Zero latency; Infinite bandwidth; Secure transmissions; Stable topology; Single administrator; Zero cost. (Peter Deutsch)
Hanlon’s Law: Never attribute to malice that which can be adequately explained by stupidity. (?Robert Heinlein)
Benford’s Law: Passion is inversely proportional to the amount of real information available. (Gregory Benford)
Ellison’s Law: The two most common elements in the universe are hydrogen and stupidity. (Harlan Ellison)
Grove’s Law: Telecommunications bandwidth doubles every century. (Andy Grove)
Jakob’s Law of the Internet User Experience: Users spend most of their time on other websites. (Jakob Nielsen)
Lister’s Law: People under time pressure don’t think faster. (Timothy Lister)
Sixty-sixty Law: Sixty percent of software’s dollar is spent on maintenance, and sixty percent of that maintenance is enhancement. (Robert Glass)
Peter Principle: In a hierarchy, every employee tends to rise to his level of incompetence. (Laurence J Peter)
Clarke’s Second Law: The only way of discovering the limits of the possible is to venture a little way past them into the impossible. (Arthur C Clarke)
Weinberg’s Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. (Gerald M Weinberg)
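Some of these laws are quantitative. As an illustrative sketch (not from the original post), Amdahl's Law can be written as a one-line formula: the speedup on n processors is capped by the sequential fraction of the work.

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's Law: overall speedup given the parallelisable fraction p."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# With 95% parallel code, speedup can never exceed 1 / 0.05 = 20x,
# no matter how many processors are added:
s10 = amdahl_speedup(0.95, 10)      # ~6.9x
s1000 = amdahl_speedup(0.95, 1000)  # ~19.6x
```

As n grows, the p/n term vanishes and the speedup converges to 1/(1-p), which is the "small fraction of inherently sequential code" limiting the result.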
[1] C.S. Seelamantula, N. Pavillon, C. Depeursinge, M. Unser, "Local Demodulation of Holograms Using the Riesz Transform with Application to Microscopy," Journal of the Optical Society of America A,
vol. 29, no. 10, pp. 2118-2129, October 2012.
[2] U. Kamilov, E. Bostan, M. Unser, "Wavelet Shrinkage with Consistent Cycle Spinning Generalizes Total Variation Denoising," IEEE Signal Processing Letters, vol. 19, no. 4, pp. 187-190, April 2012.
[3] J.P. Ward, H. Kirshner, M. Unser, "Is Uniqueness Lost for Under-Sampled Continuous-Time Auto-Regressive Processes?," IEEE Signal Processing Letters, vol. 19, no. 4, pp. 183-186, April 2012.
[4] P.D. Tafti, M. Unser, "On Regularized Reconstruction of Vector Fields," IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3163-3178, November 2011.
[5] U.S. Kamilov, P. Pad, A. Amini, M. Unser, "MMSE Estimation of Sparse Lévy Processes," IEEE Transactions on Signal Processing, vol. 61, no. 1, pp. 137-147, January 1, 2013.
[6] M. Unser, N. Chenouard, "A Unifying Parametric Framework for 2D Steerable Wavelet Transforms," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 102-135, 2013.
[7] A. Bourquard, N. Pavillon, E. Bostan, C. Depeursinge, M. Unser, "A Practical Inverse-Problem Approach to Digital Holographic Reconstruction," Optics Express, vol. 21, no. 3, pp. 3417-3433,
February 11, 2013.
[8] A. Kazerouni, U.S. Kamilov, E. Bostan, M. Unser, "Bayesian Denoising: From MAP to MMSE Using Consistent Cycle Spinning," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 249-252, March 2013.
[9] J.P. Ward, K.N. Chaudhury, M. Unser, "Decay Properties of Riesz Transforms and Steerable Wavelets," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 984-998, 2013.
[10] A. Amini, U.S. Kamilov, E. Bostan, M. Unser, "Bayesian Estimation for Continuous-Time Sparse Stochastic Processes," IEEE Transactions on Signal Processing, vol. 61, no. 4, pp. 907-920, February
15, 2013.
[11] M. Nilchian, C. Vonesch, P. Modregger, M. Stampanoni, M. Unser, "Fast Iterative Reconstruction of Differential Phase Contrast X-Ray Tomograms," Optics Express, vol. 21, no. 5, pp. 5511-5528,
March 11, 2013.
[12] S. Lefkimmiatis, J.P. Ward, M. Unser, "Hessian Schatten-Norm Regularization for Linear Inverse Problems," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1873-1888, May 2013.
[13] E. Bostan, U.S. Kamilov, M. Nilchian, M. Unser, "Sparse Stochastic Processes and Discretization of Linear Inverse Problems," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2699-2710,
July 2013.
[14] A. Amini, P. Thévenaz, J.P. Ward, M. Unser, "On the Linearity of Bayesian Interpolators for Non-Gaussian Continuous-Time AR(1) Processes," IEEE Transactions on Information Theory, vol. 59, no. 8, pp. 5063-5074, August 2013.
[15] N. Chenouard, M. Unser, "3D Steerable Wavelets in Practice," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4522-4533, November 2012.
[16] A. Amini, U.S. Kamilov, M. Unser, "The Analog Formulation of Sparsity Implies Infinite Divisibility and Rules Out Bernoulli-Gaussian Priors," Proceedings of the 2012 IEEE Information Theory
Workshop (ITW'12), Lausanne VD, Swiss Confederation, September 3-7, 2012, pp. 687-691.
[17] E. Bostan, U. Kamilov, M. Unser, "Reconstruction of Biomedical Images and Sparse Stochastic Modeling," Proceedings of the Ninth IEEE International Symposium on Biomedical Imaging: From Nano to
Macro (ISBI'12), Barcelona, Kingdom of Spain, May 2-5, 2012, pp. 880-883.
[18] U. Kamilov, A. Amini, M. Unser, "MMSE Denoising of Sparse Lévy Processes via Message Passing," Proceedings of the Thirty-Seventh IEEE International Conference on Acoustics, Speech, and Signal
Processing (ICASSP'12), 京都市 (Kyoto), Japan, March 25-30, 2012, pp. 3637-3640.
[19] A. Amini, U. Kamilov, M. Unser, "Bayesian Denoising of Generalized Poisson Processes with Finite Rate of Innovation," Proceedings of the Thirty-Seventh IEEE International Conference on
Acoustics, Speech, and Signal Processing (ICASSP'12), 京都市 (Kyoto), Japan, March 25-30, 2012, pp. 3629-3632.
[20] U. Kamilov, E. Bostan, M. Unser, "Generalized Total Variation Denoising via Augmented Lagrangian Cycle Spinning with Haar Wavelets," Proceedings of the Thirty-Seventh IEEE International
Conference on Acoustics, Speech, and Signal Processing (ICASSP'12), 京都市 (Kyoto), Japan, March 25-30, 2012, pp. 909-912.
How do you solve 6x - 5 < 6/x? | HIX Tutor
Answer 1
$x \in \left(-\infty, -\frac{2}{3}\right) \cup \left(0, \frac{3}{2}\right)$
Right from the start, you know that #x# cannot be equal to zero, since it is the denominator of a fraction. So you need to have #x!=0#.
Be careful here: you cannot simply multiply both sides of the inequality by #x#, because the sign of #x# is unknown, and multiplying by a negative number flips the direction of the inequality. Instead, move everything to one side and combine over a common denominator
#6x - 5 - 6/x < 0#
#(6x^2 - 5x - 6)/x < 0#
To determine where this quotient is negative, first find the roots of the numerator by using the quadratic formula
#6x^2 - 5x - 6 = 0#
#x_(1,2) = (-(-5) +- sqrt((-5)^2 - 4 * 6 * (-6)))/(2 * 6)#
#x_(1,2) = (5 +- sqrt(169))/12#
#x_(1,2) = (5 +- 13)/12 = {(x_1 = (5 + 13)/12 = 3/2), (x_2 = (5 - 13)/12 = -2/3) :}#
You can thus rewrite the inequality as
#(6(x-3/2)(x+2/3))/x < 0#
Now check the sign of the quotient on each interval determined by the critical values #-2/3#, #0#, and #3/2#:
For #x < -2/3#: both #(x-3/2) < 0# and #(x+2/3) < 0#, so the numerator is positive, and dividing by #x < 0# makes the quotient negative: the inequality holds.
For #-2/3 < x < 0#: the numerator is negative and #x < 0#, so the quotient is positive: the inequality fails.
For #0 < x < 3/2#: the numerator is negative and #x > 0#, so the quotient is negative: the inequality holds.
For #x > 3/2#: the numerator is positive and #x > 0#, so the quotient is positive: the inequality fails.
Keeping in mind that you also need #x!=0#, the solution set for this inequality is #x in (-oo, -2/3) uu (0, 3/2)#.
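An inequality like this is easy to sanity-check with exact rational arithmetic from the Python standard library, by testing one sample point from each interval cut out by the critical values #-2/3#, #0#, and #3/2# (an illustrative sketch, not part of the original answer):

```python
from fractions import Fraction as F

def holds(x):
    """Evaluate the original inequality 6x - 5 < 6/x exactly."""
    return 6 * x - 5 < F(6) / x

# One sample point from each interval determined by -2/3, 0 and 3/2.
print(holds(F(-1)))      # x < -2/3      -> True
print(holds(F(-1, 2)))   # -2/3 < x < 0  -> False
print(holds(F(1)))       # 0 < x < 3/2   -> True
print(holds(F(2)))       # x > 3/2       -> False
```

The True/False/True/False pattern confirms that the inequality holds exactly on the intervals #(-oo, -2/3)# and #(0, 3/2)#.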
Answer 2
To solve (6x - 5 < \frac{6}{x}), follow these steps:
1. Do not multiply both sides by (x) directly: its sign is unknown, and multiplying by a negative number would flip the inequality. Instead, move everything to one side: (6x - 5 - \frac{6}{x} < 0).
2. Combine over the common denominator: (\frac{6x^2 - 5x - 6}{x} < 0).
3. Factor the numerator: (\frac{(2x - 3)(3x + 2)}{x} < 0).
4. Determine the critical points: (x = \frac{3}{2}), (x = -\frac{2}{3}), and (x = 0).
5. Plot these critical points on a number line and test a point from each interval.
6. The quotient is negative on ((-\infty, -\frac{2}{3})) and on ((0, \frac{3}{2})).
7. The solution is (x \in (-\infty, -\frac{2}{3}) \cup (0, \frac{3}{2})).
How do band splitters work?
Hey guys!
I want to make my own, but I'm having trouble figuring out how to set up the filters. What aspects are there to take into consideration to make sure the split signal is as close to the original as possible?
If anyone has a simple example patch I would really appreciate that.
@schitz It is a complex subject and will probably become a long thread once it gets going.
Any crossover points between bands will have a cut-off slope which means that 2 speakers will be producing a small band of frequencies at the same time.
That means that phase correlation between the bands is important, but I remember that the steeper the slope the harder that is to achieve.
And of course as flat a total response as possible.
That might not be your intended purpose but it is relevant.
And for filters generally @katjav is a great read........ all of her site is littered with fft gems.......
The difference of a modified signal to its dry one is the opposite:
see this post:
If I'm not mistaken, using a Butterworth filter instead makes a quick and dirty digital Linkwitz-Riley filter https://en.wikipedia.org/wiki/Linkwitz–Riley_filter
[years later EDIT: Also see this patch: https://forum.pdpatchrepo.info/topic/7006/multiband-compressor/5 ]
@schitz said:
close to the original as possible
@whale-av said:
phase correlation
That's the difficult part:
I would love to have a chat and see examples of linear-phase
and minimal-phase filters in Pd!
anyone ported this to Pd yet?
2 speakers
would make the topic infinitely complicated, as different positions and responses of the speakers come into play.
@lacuna said:
anyone ported this to Pd yet?
@solipp's pp.fft-split~ in Audiolab might be close enough for @schitz's purpose
this answer might be a bit terrifying, but i am afraid that what you'd have to do is to not change anything in the split signals.
(or in other words: only do what happens in an ideal 3-way speaker)
otherwise any correctly set up bandsplitter - based on a Butterworth or FFT - will lead to the usual "noticeable" filter artefacts.
for things like multiband compression and such you might want to try using phase-linear filters instead. it is not like it would be required, but it is an alternative.
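to make the "sum back to the original" requirement concrete: one way to guarantee a perfectly flat summed response is to derive the top band by subtraction instead of with a second filter. here is a tiny Python sketch (not Pd, just to show the idea; a toy one-pole lowpass, not a proper Linkwitz-Riley crossover):

```python
def split_two_band(samples, a=0.1):
    """Split into (low, high) bands with a one-pole lowpass; the high band
    is the input minus the low band, so low[i] + high[i] == samples[i]
    and the summed response is flat by construction."""
    low, high, y = [], [], 0.0
    for x in samples:
        y += a * (x - y)        # one-pole lowpass; cutoff set by a
        low.append(y)
        high.append(x - y)      # complementary band
    return low, high

sig = [0.0, 1.0, 0.5, -0.25, 0.0, 0.75]
low, high = split_two_band(sig)
print(all(abs(l + h - s) < 1e-12 for l, h, s in zip(low, high, sig)))  # True
```

the trade-off is that the subtracted band has a gentle, not brick-wall, slope, which matches the phase-correlation discussion above.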
Mathematics Notes For Class 10 (KPK) | Tehkals.com
Updated: 21 Jul 2020
If you need learning materials for different subjects, kindly visit our website. If you find any errors, broken links, or objectionable material, kindly contact us through chat or through the Contact Us page; we will be glad to help you.
These notes are according to the syllabus of the KPK textbook. Other board notes will also be uploaded from time to time.
Mathematics Notes for Class 10
Unit # 1 Quadratic Equations
Quadratic Equation MCQs
Unit # 2 Theory of Quadratic Equations
Chapter 2 Class 10 MCQs
Unit # 3 Variations
Unit # 4 Partial Fraction
Unit # 5 Sets and Functions
Unit # 6 Basic Statistics
Unit # 7 Introduction to Trigonometry
Unit # 13 Practical Geometry Circles
Class 10 Other Subjects
Long Division Worksheets Year 4 - Divisonworksheets.com
Long Division Worksheets Year 4 – Your child can learn and refresh their division skills with the help of division worksheets. You can create your own, choose from numerous ready-made options, download them at no cost, and customize them however you like. They are excellent for kindergarteners, first graders, and even second graders.
Practice with huge numbers
Do some practice on worksheets that contain huge numbers. It is common to see only two, three, or four divisors in the worksheets. With this method, the child won't have to worry about forgetting how to divide a huge number or making mistakes with their times tables. You can find worksheets on the internet or download them to your computer to aid your child's practice.
Use worksheets for multi-digit division to assist children with their practice and to deepen their understanding of the subject. Division is a fundamental mathematical skill required for complex calculations and many other things in our daily lives. These worksheets offer interactive questions and activities to reinforce the concept.
It is not easy for students to divide huge numbers. The worksheets are usually built on the standard algorithm and provide step-by-step instructions, but following the steps mechanically can leave students without the underlying understanding. One strategy for teaching long division is to use base-ten blocks. Students must be at ease with the concept of long division once they've learned the steps.
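For anyone curious how the standard algorithm the worksheets drill actually works, the digit-by-digit procedure can be written out in a few lines of Python (an illustrative sketch for non-negative dividends and a positive divisor):

```python
def long_division(dividend, divisor):
    """Standard long division, digit by digit: bring down the next digit,
    divide, record the quotient digit, and carry the remainder forward."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)      # bring down a digit
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

print(long_division(5063, 8))   # (632, 7), since 8 * 632 + 7 = 5063
```

This mirrors exactly what a child writes on paper: each loop iteration is one column of the worked problem.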
Students can practice division of large numbers with a variety of worksheets and practice questions. These worksheets also express fractional results as decimals; some can be used to work to the nearest hundredth. This is particularly useful when you have to divide large sums of money.
Sort the numbers into smaller groups.
Putting a number into small groups could be difficult. While it may sound great on paper but many facilitators of small groups do not like this method. It’s a natural reflection of how the body grows
and is a great way to aid in the Kingdom’s continual development. It inspires others to reach for the lost and seek out new leadership to lead the way.
It can be helpful for brainstorming. You can make groups with people with similar experience and personality traits. You could develop creative ideas using this method. Once you’ve created your
groups, introduce everyone to you. This is a great activity to spark creativity and stimulate new thinking.
The most fundamental use of division in arithmetic is to break huge numbers down into smaller ones. This is extremely useful when you have to make equal quantities of things for multiple groups. For example, a class of 30 students could be broken into five groups of six; adding the groups back together gives the original 30 students.
Be aware that you can divide numbers with two kinds of numbers: the divisor, as well as the quotient. When you multiply two numbers, the result will be “ten/five,” but the same results are achieved
when you divide them in two ways.
Dividing large numbers by powers of ten
It is possible to divide large numbers by powers of ten, which makes comparisons much easier. Decimals are a regular part of shopping: they appear on receipts, food labels, and price tags. Even petrol pumps use them to display the price per gallon and the amount of fuel dispensed through the nozzle.
There are two methods to divide a large number by a power of ten: move the decimal point to the left, or multiply repeatedly by 10^-1. The second approach uses the associative property of powers of ten. Once you have learned this property, you can break a division by a large power of ten into divisions by smaller powers of ten.
The first technique involves mental computation. A pattern can be observed by dividing 2.5 by successive powers of ten: the decimal point shifts one place to the left for each power of ten. You can apply this principle to solve any such problem.
Mentally dividing large numbers by powers of ten is another method. The next step is to write large numbers in scientific notation. In scientific notation, large numbers are written with positive exponents; for example, shifting the decimal point five places to the left lets you write 450,000 as 4.5 × 10^5. To divide a huge number by a power of ten, you can also break the power into smaller powers of ten and divide step by step.
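The decimal-shifting pattern described above is easy to demonstrate programmatically (an illustrative sketch):

```python
# Dividing by successive powers of ten moves the decimal point
# one place to the left each time.
n = 450_000
for p in range(6):
    print(f"{n} / 10^{p} = {n / 10**p}")
# The last line shows 450000 / 10^5 = 4.5, matching the
# scientific-notation form 4.5 x 10^5.
```

Running it prints the value shrinking by one decimal place per step, from 450000.0 down to 4.5.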
Gallery of Long Division Worksheets Year 4
Long Division No Remainder Worksheet 4 Free Printable Worksheets For
4 Grade Division Worksheets Harry Carrol s English Worksheets
4th Grade Long Division Worksheets
Fourier...? An introduction
The English translation is currently under construction, and some passages may still need polishing.
After reading this article, you should have an overview of the Fourier transform and the fast Fourier transform (FFT) algorithm. No prior knowledge is assumed: understanding is built on the foundation of school-level physics and mathematics.
It is important for you to think about the content yourself and learn actively, using your intuition to understand the material. Because of this, there will be small questions to think about, which look like this:
It is OK if you can't answer some questions, but you should always try to fully understand the correct answer.
Additionally at times there will be some notes:
These notes are mathematical additions which could be helpful.
These notes reference more in-depth material for further reading, but they are not essential for understanding the article.
The contents of the appendix are also recommended; they are essential for understanding some parts of the article. The sections about complex numbers and complex exponentials are especially recommended.
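As a small preview of where the article is headed, once complex exponentials are in hand the "slow" discrete Fourier transform fits in a few lines of Python (an illustrative sketch, not part of the article itself):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform:
    X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N).
    Costs O(N^2) operations, which is exactly what the FFT improves on."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A unit impulse contains every frequency with equal magnitude:
print([round(abs(v), 6) for v in dft([1, 0, 0, 0])])  # [1.0, 1.0, 1.0, 1.0]
```

The FFT computes the same result, but reuses shared sub-sums so the cost drops to O(N log N).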
Conditional random field
Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. CRFs fall into the sequence
modeling family. Whereas a discrete classifier predicts a label for a single sample without considering "neighboring" samples, a CRF can take context into account; e.g., the linear chain CRF (which
is popular in natural language processing) predicts sequences of labels for sequences of input samples.
CRFs are a type of discriminative undirected probabilistic graphical model. They are used to encode known relationships between observations and construct consistent interpretations and are often
used for labeling or parsing of sequential data, such as natural language processing or biological sequences^[1] and in computer vision.^[2] Specifically, CRFs find applications in POS Tagging,
shallow parsing,^[3] named entity recognition,^[4] gene finding and peptide critical functional region finding,^[5] among other tasks, being an alternative to the related hidden Markov models (HMMs).
In computer vision, CRFs are often used for object recognition^[6] and image segmentation.
Lafferty, McCallum and Pereira^[1] define a CRF on observations ${\displaystyle {\boldsymbol {X}}}$ and random variables ${\displaystyle {\boldsymbol {Y}}}$ as follows:
Let ${\displaystyle G=(V,E)}$ be a graph such that
${\displaystyle {\boldsymbol {Y}}=({\boldsymbol {Y}}_{v})_{v\in V}}$, so that ${\displaystyle {\boldsymbol {Y}}}$ is indexed by the vertices of ${\displaystyle G}$. Then ${\displaystyle ({\
boldsymbol {X}},{\boldsymbol {Y}})}$ is a conditional random field when the random variables ${\displaystyle {\boldsymbol {Y}}_{v}}$, conditioned on ${\displaystyle {\boldsymbol {X}}}$, obey the
Markov property with respect to the graph: ${\displaystyle p({\boldsymbol {Y}}_{v}|{\boldsymbol {X}},{\boldsymbol {Y}}_{w},w\neq v)=p({\boldsymbol {Y}}_{v}|{\boldsymbol {X}},{\boldsymbol {Y}}_{w},w\sim v)}$, where ${\displaystyle {\mathit {w}}\sim v}$ means that ${\displaystyle w}$ and ${\displaystyle v}$ are neighbors in ${\displaystyle G}$.
What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets ${\displaystyle {\boldsymbol {X}}}$ and ${\displaystyle {\boldsymbol {Y}}}$,
the observed and output variables, respectively; the conditional distribution ${\displaystyle p({\boldsymbol {Y}}|{\boldsymbol {X}})}$ is then modeled.
For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MRF and the same arguments hold.^[7] However, there exist
special cases for which exact inference is feasible:
• If the graph is a chain or a tree, message passing algorithms yield exact solutions. The algorithms used in these cases are analogous to the forward-backward and Viterbi algorithms for the case of HMMs.
• If the CRF only contains pair-wise potentials and the energy is submodular, combinatorial min cut/max flow algorithms yield exact solutions.
If exact inference is impossible, several algorithms can be used to obtain approximate solutions. These include:
• Alpha expansion
• Mean field inference
Parameter learning
Learning the parameters ${\displaystyle \theta }$ is usually done by maximum likelihood learning for ${\displaystyle p(Y_{i}|X_{i};\theta )}$. If all nodes have exponential family distributions and
all nodes are observed during training, this optimization is convex.^[7] It can be solved for example using gradient descent algorithms, or Quasi-Newton methods such as the L-BFGS algorithm. On the
other hand, if some variables are unobserved, the inference problem has to be solved for these variables. Exact inference is intractable in general graphs, so approximations have to be used.
In sequence modeling, the graph of interest is usually a chain graph. An input sequence of observed variables ${\displaystyle X}$ represents a sequence of observations and ${\displaystyle Y}$
represents a hidden (or unknown) state variable that needs to be inferred given the observations. The ${\displaystyle Y_{i}}$ are structured to form a chain, with an edge between each ${\displaystyle
Y_{i-1}}$ and ${\displaystyle Y_{i}}$. As well as having a simple interpretation of the ${\displaystyle Y_{i}}$ as "labels" for each element in the input sequence, this layout admits efficient
algorithms for:
• model training, learning the conditional distributions between the ${\displaystyle Y_{i}}$ and feature functions from some corpus of training data.
• decoding, determining the probability of a given label sequence ${\displaystyle Y}$ given ${\displaystyle X}$.
• inference, determining the most likely label sequence ${\displaystyle Y}$ given ${\displaystyle X}$.
The conditional dependency of each ${\displaystyle Y_{i}}$ on ${\displaystyle X}$ is defined through a fixed set of feature functions of the form ${\displaystyle f(i,Y_{i-1},Y_{i},X)}$, which can be
thought of as measurements on the input sequence that partially determine the likelihood of each possible value for ${\displaystyle Y_{i}}$. The model assigns each feature a numerical weight and
combines them to determine the probability of a certain value for ${\displaystyle Y_{i}}$.
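The scoring scheme just described can be made concrete with a brute-force sketch (hypothetical feature functions and names, not from any CRF library): features of the form f(i, y_{i-1}, y_i, x) are weighted and summed into a score, and the partition function is computed by enumerating every label sequence, which is viable only for toy sizes; real implementations use the forward-backward algorithm instead.

```python
import math
from itertools import product

def crf_log_prob(y, x, labels, feats, weights):
    """log p(y | x) for a linear-chain CRF with feature functions
    f(i, y_prev, y_i, x); y_prev is None at position 0."""
    def score(seq):
        return sum(w * f(i, seq[i - 1] if i > 0 else None, seq[i], x)
                   for i in range(len(x))
                   for f, w in zip(feats, weights))
    # Partition function Z(x): sum of exp(score) over all label sequences.
    log_z = math.log(sum(math.exp(score(seq))
                         for seq in product(labels, repeat=len(x))))
    return score(tuple(y)) - log_z

# Hypothetical single feature that rewards the label 'A' at every position.
feats = [lambda i, y_prev, y_i, x: 1.0 if y_i == 'A' else 0.0]
p = math.exp(crf_log_prob(('A', 'A'), (0, 1), ('A', 'B'), feats, [2.0]))
print(p)
```

Because the scores are normalized by Z(x), the probabilities of all possible label sequences sum to one, which is what makes the model a proper conditional distribution.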
Linear-chain CRFs have many of the same applications as conceptually simpler hidden Markov models (HMMs), but relax certain assumptions about the input and output sequence distributions. An HMM can
loosely be understood as a CRF with very specific feature functions that use constant probabilities to model state transitions and emissions. Conversely, a CRF can loosely be understood as a
generalization of an HMM that makes the constant transition probabilities into arbitrary functions that vary across the positions in the sequence of hidden states, depending on the input sequence.
Notably, in contrast to HMMs, CRFs can contain any number of feature functions, the feature functions can inspect the entire input sequence ${\displaystyle X}$ at any point during inference, and the
range of the feature functions need not have a probabilistic interpretation.
Higher-order CRFs and semi-Markov CRFs
CRFs can be extended into higher order models by making each ${\displaystyle Y_{i}}$ dependent on a fixed number ${\displaystyle k}$ of previous variables ${\displaystyle Y_{i-k},...,Y_{i-1}}$. In
conventional formulations of higher order CRFs, training and inference are only practical for small values of ${\displaystyle k}$ (such as k ≤ 5),^[8] since their computational cost increases
exponentially with ${\displaystyle k}$.
However, another recent advance has managed to ameliorate these issues by leveraging concepts and tools from the field of Bayesian nonparametrics. Specifically, the CRF-infinity approach^[9]
constitutes a CRF-type model that is capable of learning infinitely-long temporal dynamics in a scalable fashion. This is effected by introducing a novel potential function for CRFs that is based on
the Sequence Memoizer (SM), a nonparametric Bayesian model for learning infinitely-long dynamics in sequential observations.^[10] To render such a model computationally tractable, CRF-infinity
employs a mean-field approximation ^[11] of the postulated novel potential functions (which are driven by an SM). This allows for devising efficient approximate training and inference algorithms for
the model, without undermining its capability to capture and model temporal dependencies of arbitrary length.
There exists another generalization of CRFs, the semi-Markov conditional random field (semi-CRF), which models variable-length segmentations of the label sequence ${\displaystyle Y}$.^[12] This
provides much of the power of higher-order CRFs to model long-range dependencies of the ${\displaystyle Y_{i}}$, at a reasonable computational cost.
Finally, large-margin models for structured prediction, such as the structured Support Vector Machine can be seen as an alternative training procedure to CRFs.
Latent-dynamic conditional random field
Latent-dynamic conditional random fields (LDCRF) or discriminative probabilistic latent variable models (DPLVM) are a type of CRFs for sequence tagging tasks. They are latent variable models that are
trained discriminatively.
In an LDCRF, like in any sequence tagging task, given a sequence of observations x = ${\displaystyle x_{1},\dots ,x_{n}}$, the main problem the model must solve is how to assign a sequence of labels
y = ${\displaystyle y_{1},\dots ,y_{n}}$ from one finite set of labels Y. Instead of directly modeling P(y|x) as an ordinary linear-chain CRF would do, a set of latent variables h is "inserted"
between x and y using the chain rule of probability:^[13]
${\displaystyle P(\mathbf {y} |\mathbf {x} )=\sum _{\mathbf {h} }P(\mathbf {y} |\mathbf {h} ,\mathbf {x} )P(\mathbf {h} |\mathbf {x} )}$
This allows capturing latent structure between the observations and labels.^[14] While LDCRFs can be trained using quasi-Newton methods, a specialized version of the perceptron algorithm called the
latent-variable perceptron has been developed for them as well, based on Collins' structured perceptron algorithm.^[13] These models find applications in computer vision, specifically gesture
recognition from video streams^[14] and shallow parsing.^[13]
1. ^ ^a ^b Lafferty, J., McCallum, A., Pereira, F. (2001). "Conditional random fields: Probabilistic models for segmenting and labeling sequence data". Proc. 18th International Conf. on Machine Learning. Morgan Kaufmann. pp. 282–289.
2. ^ He, X.; Zemel, R.S.; Carreira-Perpinñán, M.A. (2004). "Multiscale conditional random fields for image labeling". IEEE Computer Society. CiteSeerX 10.1.1.3.7826.
3. ^ Sha, F.; Pereira, F. (2003). shallow parsing with conditional random fields.
4. ^ Settles, B. (2004). "Biomedical named entity recognition using conditional random fields and rich feature sets" (PDF). Proceedings of the International Joint Workshop on Natural Language
Processing in Biomedicine and its Applications. pp. 104–107.
5. ^ Chang KY; Lin T-p; Shih L-Y; Wang C-K (2015). Analysis and Prediction of the Critical Regions of Antimicrobial Peptides Based on Conditional Random Fields. PLoS ONE.
6. ^ ^a ^b J.R. Ruiz-Sarmiento; C. Galindo; J. Gonzalez-Jimenez (2015). "UPGMpp: a Software Library for Contextual Object Recognition.". 3rd. Workshop on Recognition and Action for Scene
Understanding (REACTS).
7. ^ ^a ^b Sutton, Charles; McCallum, Andrew (2010). "An Introduction to Conditional Random Fields". arXiv:1011.4088v1 [stat.ML].
8. ^ Lavergne, Thomas; Yvon, François (September 7, 2017). "Learning the Structure of Variable-Order CRFs: a Finite-State Perspective". Proceedings of the 2017 Conference on Empirical Methods in
Natural Language Processing. Copenhagen, Denmark: Association for Computational Linguistics. p. 433.
9. ^ Chatzis, Sotirios; Demiris, Yiannis (2013). "The Infinite-Order Conditional Random Field Model for Sequential Data Modeling". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35
(6): 1523–1534. doi:10.1109/tpami.2012.208.
10. ^ Gasthaus, Jan; Teh, Yee Whye (2010). "Improvements to the Sequence Memoizer" (PDF). Proc. NIPS.
11. ^ Celeux, G.; Forbes, F.; Peyrard, N. (2003). "EM Procedures Using Mean Field-Like Approximations for Markov Model-Based Image Segmentation". Pattern Recognition. 36 (1): 131–144. doi:10.1016/
12. ^ Sarawagi, Sunita; Cohen, William W. (2005). "Semi-Markov conditional random fields for information extraction" (PDF). In Lawrence K. Saul, Yair Weiss, Léon Bottou (eds.). Advances in Neural Information Processing Systems 17. Cambridge, MA: MIT Press. pp. 1185–1192.
13. ^ ^a ^b ^c Xu Sun; Takuya Matsuzaki; Daisuke Okanohara; Jun'ichi Tsujii (2009). Latent Variable Perceptron Algorithm for Structured Classification. IJCAI. pp. 1236–1242.
14. ^ ^a ^b Morency, L. P.; Quattoni, A.; Darrell, T. (2007). "Latent-Dynamic Discriminative Models for Continuous Gesture Recognition". 2007 IEEE Conference on Computer Vision and Pattern
Recognition (PDF). p. 1. doi:10.1109/CVPR.2007.383299. ISBN 1-4244-1179-3.
15. ^ T. Lavergne, O. Cappé and F. Yvon (2010). Practical very large scale CRFs Archived 2013-07-18 at the Wayback Machine. Proc. 48th Annual Meeting of the ACL, pp. 504-513.
One Layer Neural Network From Scratch — Classification
Classification is a supervised learning method. In supervised learning, we have labels for our data; algorithms learn from these labels and make predictions accordingly. Classification aims to divide
the dataset into classes. For example, consider data on people applying for credit. Deciding whom to grant credit based on past granted/not-granted credits is called binary classification.
In this article, we will build a single-layer artificial neural network from scratch for binary classification. But before that, we need to understand what artificial neural networks are and the
mathematics behind them.
Artificial Neural Networks
Artificial Neural Networks are an artificial intelligence method inspired by the human brain. They consist of interconnected neurons and layers, similar to the human brain. Each neuron performs
specific mathematical operations, and we reach the final result in the output layer. They are very beneficial for problems involving non-linear and complex inputs and outputs.
Neural Network Structure for Classification Model
The input layer takes two inputs, X1 and X2. The bias (b) and the weights (W1, W2) are parameters that get updated during the training phase. The hidden layer computes z = W1*X1 + W2*X2 + b.
This value of z goes into the sigmoid activation function, which is defined as sigmoid(z) = 1 / (1 + e^(-z)).
The sigmoid function maps any input into the range (0, 1), which helps us obtain outputs for classification problems. It uses 0.5 as the threshold value: if a value is above
0.5, it is assigned class 1; otherwise it is assigned class 0, thus classifying the data.
Forward Propagation
In artificial neural networks, our main goal is to find the most optimized weight and bias parameters. Since we don’t know the optimal values when we start training, we initialize these parameters
randomly and perform forward computations from left to right to calculate the output. This output becomes the model’s prediction. Then, we need to assess how good this prediction is. Here, for
classification tasks, the commonly used loss function is the Log Loss function.
Log Loss Function
Log Loss, also known as Cross-Entropy Loss, is a metric commonly used for evaluating classification models that make predictions based on probability values. It measures the difference between the
predicted probabilities and the true labels of the data. The lower the Log Loss value, the more successful the model is.
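To make the definition concrete, here is a minimal sketch of log loss computed directly from its formula; the labels and the two sets of predicted probabilities below are made up for illustration:

```python
import numpy as np

def log_loss(y_true, y_pred):
    # Mean of -[y*log(p) + (1 - y)*log(1 - p)] over all samples.
    return float(np.mean(-(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))))

y_true = np.array([1.0, 0.0, 1.0, 1.0])
confident = np.array([0.9, 0.1, 0.8, 0.95])   # mostly correct and confident
uncertain = np.array([0.4, 0.6, 0.35, 0.5])   # wrong or unsure
print(log_loss(y_true, confident))  # low
print(log_loss(y_true, uncertain))  # noticeably higher
```

The better-calibrated predictions produce the lower loss, which is exactly the behavior described above.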
Backward Propagation
In the backpropagation phase, our goal is to update the parameters in a way that minimizes the error. We use gradient descent to update the parameters.
Gradient Descent
Gradient descent aims to reach the global minimum using an initially chosen random value. During this process, parameters get updated.
1. The derivative of the loss function is calculated with respect to each parameter. The loss function is the Log Loss mentioned above.
2. Each parameter is then updated by subtracting the learning rate times its gradient: parameter = parameter - learning_rate * gradient.
The learning rate specifies the magnitude of the steps taken to approach the minimum point. If the learning rate is too small, the process may take too long, whereas if it is too large, the minimum
point may be overshot. Therefore, selecting this parameter well is crucial. As a result, the parameter updates are W = W - learning_rate * dW and b = b - learning_rate * db.
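The update rule can be sketched in one dimension. The function f(w) = (w - 3)**2 and the learning rate below are made-up values chosen so the descent is easy to follow:

```python
# Minimize the made-up function f(w) = (w - 3)**2, whose gradient is 2*(w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)
    w = w - learning_rate * gradient
print(round(w, 6))  # 3.0: the parameter has stepped down to the minimum
```

With this learning rate each step shrinks the distance to the minimum by a constant factor; a much larger rate would overshoot and a much smaller one would crawl, as described above.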
Building an Artificial Neural Network from Scratch
1.Let’s define the sigmoid activation function that we will use.
def sigmoid(z):
return 1/(1 + np.exp(-z))
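A quick sanity check of this function (self-contained, so it repeats the definition; the input values are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Large negative inputs approach 0, large positive inputs approach 1,
# and z = 0 maps exactly to the 0.5 classification threshold.
print(sigmoid(np.array([-10.0, 0.0, 10.0])))
```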
2.Let’s determine the model structure. For this, we need to learn the input and output dimensions. The X parameter given to the function specifies the independent variables, and the Y parameter
specifies the dependent variable (target variable).
def layer_sizes(X, Y):
    n_x = X.shape[0]
    n_y = Y.shape[0]
    return (n_x, n_y)
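Note the data layout this implies: features are stored along the rows and samples along the columns. A sketch with made-up shapes (it repeats the function so it runs on its own):

```python
import numpy as np

def layer_sizes(X, Y):
    n_x = X.shape[0]
    n_y = Y.shape[0]
    return (n_x, n_y)

X = np.zeros((2, 400))  # 2 features (rows) x 400 samples (columns)
Y = np.zeros((1, 400))  # 1 target value per sample
print(layer_sizes(X, Y))  # (2, 1)
```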
3. Next, we initialize the parameters mentioned in the forward propagation section. Since matrix multiplication will be performed, we pass the input and output dimensions as
parameters to the function; the weight matrix and bias are sized according to these dimensions.
def initialize_parameters(n_x, n_y):
    W = np.random.randn(n_y, n_x) * 0.01
    b = np.zeros((n_y, 1))
    parameters = {"W": W,
                  "b": b}
    return parameters
4. After defining the parameters, we can perform forward propagation. In forward propagation, the result of the equation z = w1*x1 + w2*x2 + b is passed into the sigmoid function to obtain the
model's prediction. The np.matmul() function computes the matrix product of two arrays.
def forward_propagation(X, parameters):
    W = parameters["W"]
    b = parameters["b"]
    Z = np.matmul(W, X) + b
    A = sigmoid(Z)
    return A
5. The next thing we need to do is to define the log loss (cost) function we mentioned above. In fact, what we’re doing is just putting the formula into code.
def compute_cost(A, Y):
    m = Y.shape[1]
    logprobs = -np.multiply(np.log(A), Y) - np.multiply(np.log(1 - A), 1 - Y)
    cost = 1/m * np.sum(logprobs)
    return cost
6. After we’ve done our cost function, it’s time to do backward propagation. Let’s take another look at what we need to calculate.
Our goal in backward propagation was to minimize the cost function by updating the weights. For this, we need to calculate the derivatives after the upper alpha parameter. Values we give as
• A: Output of forward propagation function
• X: Input data
• Y: Target
As an output, our function returns gradient values for weights and bias.
def backward_propagation(A, X, Y):
    m = X.shape[1]
    dZ = A - Y
    dW = 1/m * np.dot(dZ, X.T)
    db = 1/m * np.sum(dZ, axis=1, keepdims=True)
    grads = {"dW": dW,
             "db": db}
    return grads
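One way to gain confidence in these gradients is a numerical gradient check. The sketch below uses randomly generated data (the shapes and seed are arbitrary) and compares the analytic dW against a central-difference estimate of the cost's derivative:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X, W, b):
    return sigmoid(W @ X + b)

def cost(A, Y):
    m = Y.shape[1]
    return float(np.sum(-(Y * np.log(A) + (1 - Y) * np.log(1 - A))) / m)

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))
Y = (rng.random((1, 5)) > 0.5).astype(float)
W = rng.normal(size=(1, 2)) * 0.01
b = np.zeros((1, 1))

# Analytic gradient, as derived above: dW = (1/m) * dZ @ X.T
A = forward(X, W, b)
dZ = A - Y
dW = dZ @ X.T / X.shape[1]

# Numerical gradient of the cost w.r.t. W[0, 0] via central differences.
eps = 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
num = (cost(forward(X, Wp, b), Y) - cost(forward(X, Wm, b), Y)) / (2 * eps)
print(abs(dW[0, 0] - num) < 1e-6)  # True: the two gradients agree
```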
7. Now that we have the gradients, let's write the function that updates the old parameters. The function takes the old parameters, the gradients obtained from backward propagation, and the
learning rate, which determines the step size. With these values in hand, all we have to do is apply the update rule W = W - learning_rate * dW, b = b - learning_rate * db.
def update_parameters(parameters, grads, learning_rate=1.2):
    W = parameters["W"]
    b = parameters["b"]
    dW = grads["dW"]
    db = grads["db"]
    W = W - learning_rate * dW
    b = b - learning_rate * db
    parameters = {"W": W,
                  "b": b}
    return parameters
8. We have written all our auxiliary functions; now it's time to combine them to create our artificial neural network model. First, we get the dimensions of the features and the target variable. The
num_iterations parameter here plays the role of the epoch count in neural networks: an epoch is one training cycle in which all samples in the dataset are shown to the network and the network is updated based
on this data. The function returns the parameters learned by the model, which are used to make predictions.
def nn_model(X, Y, num_iterations=10, learning_rate=1.2, print_cost=False):
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[1]
    parameters = initialize_parameters(n_x, n_y)
    for i in range(0, num_iterations):
        A = forward_propagation(X, parameters)
        cost = compute_cost(A, Y)
        grads = backward_propagation(A, X, Y)
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters
9. Our neural network model is complete. Now we can move on to the function that performs prediction on new data. This function uses the learned parameters to predict which class the given data
belongs to. Since this is binary classification, the class will be 1 if the predicted probability is greater than 0.5, and 0 otherwise.
def predict(X, parameters):
    A = forward_propagation(X, parameters)
    predictions = A > 0.5
    return predictions
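Putting the pieces together, here is a hypothetical end-to-end run. The toy dataset (class 1 when x1 + x2 > 0), the seed, and the hyperparameter values are all assumptions, and the helper functions are condensed so the sketch runs on its own:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def nn_model(X, Y, num_iterations=500, learning_rate=1.2):
    n_x, m = X.shape
    W = np.random.randn(1, n_x) * 0.01
    b = np.zeros((1, 1))
    for _ in range(num_iterations):
        A = sigmoid(W @ X + b)                            # forward propagation
        dZ = A - Y                                        # backward propagation
        W -= learning_rate * (dZ @ X.T) / m               # parameter updates
        b -= learning_rate * np.sum(dZ, axis=1, keepdims=True) / m
    return {"W": W, "b": b}

def predict(X, parameters):
    return sigmoid(parameters["W"] @ X + parameters["b"]) > 0.5

# Toy linearly separable data: class 1 when x1 + x2 > 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(2, 200))
Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)

parameters = nn_model(X, Y)
accuracy = float(np.mean(predict(X, parameters) == Y))
print(accuracy)  # high on this linearly separable toy set
```

Because the data is separable by a single line, this one-layer model is enough; non-linear class boundaries would need the hidden layers discussed at the end of the article.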
That’s it. A more complex model can be created by adding hidden layers to these. When we use libraries such as Tensorflow and PyTorch, these operations are performed in the background. In the next
article we will look at how we can add hidden layers. Thank you for reading.
{"url":"https://buse-koseoglu13.medium.com/one-layer-neural-network-from-scratch-classification-b6c71481f992?source=read_next_recirc-----332f1d2fedd5----0---------------------87f7317e_08ed_4d89_a1b6_00c5e5af2d3e-------","timestamp":"2024-11-05T23:19:32Z","content_type":"text/html","content_length":"149266","record_id":"<urn:uuid:9293b368-4ad1-47c3-b4b7-777dcf11b66f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00112.warc.gz"}
CAT 2014: Understand your CAT Scores; Scaled Score and Percentile to final Percentile
The CAT 2014 score card reflects two types of scores, namely scaled scores and percentile scores, but does not reflect the raw scores earned by you.
The CAT 2014 result, declared on Dec 27, 2014, has brought with it a different type of score card compared to the score cards of earlier CAT results. The CAT 2014 score card reflects two types of scores,
namely scaled scores and percentile scores, but does not reflect the raw scores you earned in CAT 2014. The scaled scores and percentile scores are arrived at by converting the raw scores to
scaled scores and then converting those to percentile scores. The process is known as normalization of scores.
The test takers of CAT 2014 would remember that the raw scoring pattern declared for the CAT 2014 exam by the CAT Centre 2014 was that 3 marks would be awarded for every correct answer and, in case of a
wrong answer, a penalty of 1/3 mark would be imposed. Since the maximum number of questions in CAT 2014 was 100, a test taker could score no more than 300 raw marks, and that only by
answering every question correctly, which in practice no one could.
The top 0.1 percent of scorers were scaled into the top slot, and the score scaling was done accordingly for the rest of the candidates, section-wise. After this exercise, the scores were converted to percentiles.
The CAT 2014 score card therefore had no need to declare the raw scores, and as such only scaled scores are reflected on it.
Normalization process across different test sessions
Raw scores of CAT 2014 have been converted to scaled scores. CAT Centre 2014 declares: "In order to ensure fairness and equity in comparison of performances of the candidates across different test
sessions, the scores of the candidates shall be subjected to a process of Normalization. The Normalization process to be implemented shall adjust for location and scale differences of score
distributions across different forms and the scaled scores obtained by this process shall be converted into percentiles for purposes of shortlisting."
GATE pattern adopted
The scoring pattern is on the lines of the GATE exam, as announced by CAT Centre 2014. On the question of transparency of the normalization process across multiple sessions, CAT Centre 2014 states: "The
process of Normalization is an established practice for comparing candidate scores across multiple Forms and is similar to those being adopted in other large educational selection tests conducted in
India such as Graduate Aptitude Test in Engineering (GATE)."
Understand the scoring pattern
For the examinations conducted in multiple sessions, suitable normalization process is applied to take into account any variation in the difficulty levels of the question sets across the different
sessions. The normalization is done based on the fundamental assumption that in all multi-session GATE papers, the distribution of abilities of candidates is the same across all the sessions.
According to the GATE committee, this assumption is justified since the number of candidates appearing in multi-session subjects in GATE 2014 is large and the procedure of allocation of session to
candidates is random. Further it is also ensured that for the same multi-session subject, the number of candidates allotted in each session is of the same order of magnitude.
Know the GATE formula
Based on the above, the committee arrived at the following formula for calculating the normalized marks for the CE, CS, EC, EE and ME subjects. From GATE 2014 onward (and year 2014-15 of the 2-year
validity period of GATE 2013 scores), a candidate's GATE score is computed by the following formula.
S = Sq + (St - Sq) × (M - Mq) / (Mt - Mq)

where S = score (normalized) of a candidate; M = marks obtained by a candidate (normalized marks in case of the multiple-session subjects CE, CS, EC, EE and ME); Mq = qualifying marks for general-category candidates in that subject (usually 25 or μ + σ, whichever is higher); μ = average (i.e., arithmetic mean) of the marks of all candidates in that subject; σ = standard deviation of the marks of all candidates in that subject; Mt = average marks of the top 0.1% candidates (for subjects with 10,000 or more appeared candidates) or of the top 10 candidates (for subjects with fewer than 10,000 appeared candidates); St = 900 = score assigned to Mt; Sq = 350 = score assigned to Mq.

A candidate's percentile denotes the percentage of candidates scoring lower than that particular candidate. It is calculated as: Percentile = (1 - All India rank / No. of candidates in that subject) × 100.
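The percentile definition translates directly to code; the rank and candidate count below are invented for illustration:

```python
def percentile(all_india_rank, n_candidates):
    # Percentage of candidates scoring lower than this candidate.
    return (1 - all_india_rank / n_candidates) * 100

print(round(percentile(500, 100000), 2))  # 99.5
```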
Score scaling: CAT 2014 adopted the process
On a similar note, the formula to calculate the percentile in CAT 2014 can be devised. Experts with MBAUniverse.com have decoded the pattern of normalization and calculation of percentile in CAT
2014 as follows:

Normalized marks (M̂ij) of the j-th candidate in the i-th slot are given by

M̂ij = (Mgt - Mgq) / (Mti - Miq) × (Mij - Miq) + Mgq

where Mij is the actual marks obtained by the j-th candidate in the i-th slot, Mgt is the average marks of the top 0.1% candidates in all slots, Mgq is the sum of the mean and standard deviation of the marks of all candidates in all slots, Mti is the average marks of the top 0.1% candidates in the i-th slot, and Miq is the sum of the mean and standard deviation of the marks of all candidates in the i-th slot.
Score Scaling example
Total aspirants = 2,00,000; slots = 4; aspirants in each slot = 50,000 (0.1% of it = 50, scaled as top scorers).
Mij = 202 out of 300 (the actual marks obtained by the j-th candidate, in the 3rd slot)
Mgt = (195 + 202 + 210 + 205) / 4 = 203 (the average marks of the top 0.1% candidates in all slots); here 195, 202, 210 and 205 are the averages of the top 0.1% aspirants of the four slots
Mgq = 125 (the sum of the mean and standard deviation of the marks of all candidates in all slots)
Mti = 210 (the average marks of the top 0.1% candidates in the 3rd slot)
Miq = 128 (the sum of the mean and standard deviation of the marks of all candidates in the 3rd slot)
According to the given formula, normalized marks = (203 - 125) × (202 - 128) / (210 - 128) + 125 ≈ 195.4.
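The normalization formula and the worked example above can be checked with a short script; the variable names mirror the article's notation:

```python
def normalized_marks(m_ij, m_gt, m_gq, m_ti, m_iq):
    # GATE-style normalization as decoded in the article:
    # M^ij = (Mgt - Mgq) / (Mti - Miq) * (Mij - Miq) + Mgq
    return (m_gt - m_gq) / (m_ti - m_iq) * (m_ij - m_iq) + m_gq

print(round(normalized_marks(202, 203, 125, 210, 128), 1))  # 195.4
```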
The relative high percentile
The above score should fetch a percentile of about 99.5. Likewise, given the relative difficulty levels across the sections in different slots of the CAT 2014 examination, CAT
2014 takers might have scored high or uneven sectional percentiles.
Score Scaling & Percentile: Key points to note
CAT Centre 2014 has shared that the scaled scores of Section 1 and Section 2 are not additive. The overall scaled score is based on the total raw score earned by the candidate.
Percentile refers to percentage of candidates who receive score less than or equal to the score obtained by the candidate. According to this formula a candidate may score high scaled scores but his/
her percentile may be lower than the scaled scores.
It is also possible that you might have scored low sectional scaled score but high sectional percentile but when it comes to over all percentile, despite the good overall scaled scores, the
percentile may be low.
Related Links
CAT 2014: Result creates history with 16 candidate scoring 100 Percentile; lone girl topper since 2009
CAT 2014: Result declared; 5 important dos to follow for final admission round
CAT 2014: Result declared check now; Know the instructions and steps to follow
CAT 2014: Result declared check now; View, Download and take the print now
CAT 2014 Result Check now: Site getting updated; Instructions how to access the result, published soon
Stay tuned to MBAUniverse.com for more updates on CAT 2014 result
{"url":"https://www.mbauniverse.com/article/id/8248/CAT-2014","timestamp":"2024-11-11T13:44:02Z","content_type":"text/html","content_length":"141749","record_id":"<urn:uuid:88af6a39-fbb6-47ed-b044-9001615ffc33>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00452.warc.gz"}
A Treatise on Land-surveying
its place be again noted. Then the required distance will be equal to the difference of the readings on the rod, in feet, multiplied by the distance at which a foot was intercepted between the lines.
One of the horizontal hairs may be made movable, and its distance from the other, when the space between them exactly covers an object of known height, can be very precisely measured by counting the
number of turns and fractions of a turn, of a screw by which this movable hair is raised or lowered. A simple proportion will then give the distance.
On sloping ground a double correction is necessary to reduce the slope to the horizon and to correct the oblique view of the rod. The horizontal distance is, in consequence, approximately equal to
the observed distance multiplied by the square of the cosine of the slope of the ground.
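This double correction can be sketched as follows; the observed distance and slope below are example values, not from the text:

```python
import math

def horizontal_distance(observed, slope_deg):
    # Horizontal distance is approximately the observed distance
    # multiplied by the square of the cosine of the slope.
    return observed * math.cos(math.radians(slope_deg)) ** 2

print(round(horizontal_distance(100.0, 10.0), 2))  # 96.98
```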
The latter of the above two corrections will be dispensed with by holding the rod perpendicular to the line of sight, with the aid of a right angled triangle, one side of which coincides with the rod
at the height of the telescope, and the other side of which adjoining the right angle, is caused, by leaning the rod, to point to the telescope.
Other contrivances have been used for the same object, such as a Binocular Telescope with two eye-pieces inclined at a certain angle; a Telescope with an object-glass cut into two movable parts; &c.
(376) Ranging out lines. This is the converse of Surveying lines. The instrument is fixed over the first station with great precision, its telescope being very carefully adjusted to move in a
vertical plane. A series of stakes, with nails driven in their tops, or otherwise well defined, are then set in the desired line as far as the power of the instrument extends. It is then taken
forward to a stake three or four from the last one set, and is fixed over it, first by the plumb and then by sighting backward and forward to the first and last stake. The line is then continued as
before. A good object for a long sight is a board painted like a target, with black and white concentric rings, and made to slide in grooves cut in the tops of two stakes set in the ground about in the
line. It
is moved till the vertical hair bisects the circles (which the eye can determine with great precision) and a plumb-line dropped from their centre, gives the place of the stake. "Mason & Dixon's Line"
was thus ranged.
If a Transit be used for ranging, its "Second Adjustment" is most important to ensure the accuracy of the reversal of its Telescope. If a Theodolite be used, the line is continued by turning the
vernier 180°, or by reversing the telescope in its Ys, as noticed in Arts. (325) and (362).
(377) Farm Surveying, &c. A large farm can be most easily and accurately surveyed, by measuring the angles of its main boundaries (and a few main diagonals, if it be very large,) with a Theodolite or
Transit, as in Arts. (366) or (371), and filling up the interior details, as fences, &c., with the Compass and Chain.
The angles measured, as in Art. (366), will be the interior angles of the field, as noted in the figure.
The accuracy of the work will be proved, as alluded to in Art. (257), if the sum of all the interior angles be equal to the product of 180° by the number of sides of the figure less two. Thus in the
figure, the sum of all the interior angles = 540° = 180° × (5-2). The sum of the exterior angles would of course equal 180° x (5+ 2) = 1260°.
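This closure check is easy to express in code; the five measured interior angles below are hypothetical:

```python
def interior_angle_sum(n_sides):
    # Sum of the interior angles of a closed figure: 180 * (n - 2) degrees.
    return 180 * (n_sides - 2)

# Hypothetical measured interior angles of a five-sided field:
measured = [102.5, 98.0, 121.5, 110.0, 108.0]
print(sum(measured) == interior_angle_sum(len(measured)))  # True: the work proves
```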
If the Transit be used, the farm should be kept on the right hand, and then the angles measured will be the supplements of the interior angles. If the angles to the right be called positive, and
those to the left negative, their algebraic sum should equal 360°.
If the boundary lines be surveyed by "Traversing," as in Art. (373), the reading, on getting back to the last station and looking back to the first line, should be 360°, or 0°.
The content of any surface surveyed by "Traversing" with the Transit can be calculated by the Traverse Table, as in Chapter VI of Part III, by the following modification. When the angle of
deflection of any side from the first side, or Meridian, is less than 90°, call this angle the Bearing, find its Latitude and Departure, and call them both plus. When the angle is between 90° and
180°, call the difference between the angle and 180° the Bearing, and call its Latitude minus and its Departure plus. When the angle is between 180° and 270°, call its difference from 180° the
Bearing, and call its Latitude minus and its Departure minus. When the angle is more than 270°, call its difference from 360° the Bearing, and call its Latitude plus and its Departure minus. Then use
these as in getting the content of a Compass-survey. The signs of the Latitudes and Departures follow those of the cosines and sines in the successive quadrants.
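These quadrant rules can be sketched as a small function that reduces a deflection angle to a Bearing and applies the stated signs; the side length and angle below are example values:

```python
import math

def latitude_departure(deflection_deg, length):
    # Reduce the deflection angle from the first side (the "meridian") to a
    # bearing in [0, 90] degrees and apply the quadrant signs described above.
    a = deflection_deg % 360
    if a <= 90:
        bearing, lat_sign, dep_sign = a, 1, 1
    elif a <= 180:
        bearing, lat_sign, dep_sign = 180 - a, -1, 1
    elif a <= 270:
        bearing, lat_sign, dep_sign = a - 180, -1, -1
    else:
        bearing, lat_sign, dep_sign = 360 - a, 1, -1
    r = math.radians(bearing)
    return lat_sign * length * math.cos(r), dep_sign * length * math.sin(r)

# A side of length 100 deflected 150 degrees: latitude minus, departure plus.
lat, dep = latitude_departure(150, 100)
print(round(lat, 2), round(dep, 2))
```

The results match cos(150°) and sin(150°) directly, which is the point of the closing remark: the signs follow those of the cosines and sines in the successive quadrants.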
Town-Surveying would be performed as directed in Art. (261), substituting "angles" for "Bearings." "Traversing" is the best method in all these cases.
Inaccessible areas would be surveyed nearly as in Art. (134), except that the angles of the lines enclosing the space would be measured with the instrument, instead of with the chain.
(378) Platting. Any of these surveys can be platted by any of the methods explained and characterized in Chapter IV, of the preceding Part. A circular Protractor, Art. (264), may be regarded as a
Theodolite placed on the paper.
"Platting Bearings," Art. (265), can be employed when the survey has been made by "Traversing." But the method of "Latitudes and Departures," Art. (285), is by far the most accurate.
PART V.
TRIANGULAR SURVEYING;
By the Fourth Method.
(379) TRIANGULAR SURVEYING is founded on the Fourth Method of determining the position of a point, by the intersection of two known lines, as given in Art. (8). By an extension of the principle, a
field, a farm, or a country, can be surveyed by measuring only one line, and calculating all the other desired distances, which are made sides of a connected series of imaginary Triangles, whose
angles are carefully measured. The district surveyed is covered with a sort of net-work of such triangles, whence the name given to this kind of Surveying. It is more commonly called "Trigonometrical
Surveying;" and sometimes "Geodesic Surveying," but improperly, since it does not necessarily take into account the curvature of the earth, though always adopted in the great surveys in which that is necessary.
(380) Outline of operations. A base line, as long as possible, (5 or 10 miles in surveys of countries), is measured with extreme accuracy.
From its extremities, angles are taken to the most distant objects visible, such as steeples, signals on mountain tops, &c.
The distances to these and between these are then calculated by the rules of Trigonometry.
The instrument is then placed at each of these new stations, and angles are taken from them to still more distant stations, the calculated lines being used as new base lines.
This process is repeated and extended till the whole district is embraced by these "primary triangles" of as large sides as possible.
One side of the last triangle is so located that its length can be obtained by measurement as well as by calculation, and the agreement of the two proves the accuracy of the whole work.
Within these primary triangles, secondary or smaller triangles are formed, to fix the position of the minor local details, and to serve as starting points for common surveys with chain and compass, &
c. Tertiary triangles may also be required.
The larger triangles are first formed, and the smaller ones based on them, in accordance with the important principle in all surveying operations, always to work from the whole to the parts, and from
greater to less.
Each of these steps will now be considered in turn, in the following order:
1. The Base; articles (381), (382).
2. The Triangulation; articles (383) to (390).
3. Modifications of the method; articles (391) to (395).
(381) Measuring a Base. Extreme accuracy in this is necessary, because any error in it will be multiplied in the subsequent work. The ground on which it is located must be smooth and nearly level,
and its extremities must be in sight of the chief points in the neighborhood. Its point of beginning must be marked by a stone set in the ground with a bolt let into it. Over this a Theodolite or
Transit is to be set, and the line "ranged out" as directed in Art. (376). The measurement may be made with chains, (which should be formed like that of a watch,) &c. but best with rods. We will
notice in turn their Materials, Supports, Alinement, Levelling, and Contact.
As to Materials, iron, brass and other metals have been used, but are greatly lengthened and shortened by changes of temperature. Wood is affected by moisture. Glass rods and tubes are preferable on
both these accounts. But wood is the most convenient. Wooden rods should be straight-grained white pine, &c.; well seasoned, baked, soaked in boiling oil, painted and varnished. They may be trussed,
or framed like a mason's plumb-line level, to prevent their bending. Ten or fifteen feet is a convenient length. Three are required, which may be of different colors, to prevent
{"url":"https://books.google.co.ls/books?id=lr0UAAAAYAAJ&pg=PA260&vq=square+chains&dq=editions:ISBN1357875746&output=html_text&source=gbs_toc_r&cad=3","timestamp":"2024-11-12T05:16:14Z","content_type":"text/html","content_length":"29434","record_id":"<urn:uuid:09453890-4f1b-4ad3-bef1-60a9697379ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00388.warc.gz"}
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
This algebra software has an exceptional ability to accommodate individual users. While offering help with algebra homework, it also forces the student to learn basic math. The algebra tutor part of
the software provides easy to understand explanations for every step of algebra problem solution.
Sandy Ketchum, AL
The most thing that I like about Algebrator software, is that I can save the expressions in a file, so I can save my homework on the computer, and print it for the teacher whenever he asked for it,
and it looks much prettier than my hand writing.
Paul D'Souza, NC
I am a 9th grade Math Teacher. I use the Algebrator application in my class room, to assist in the learning process. My students have found the easy step by step instructions, and the explanations on
how the formula works to be a great help.
Lee Wyatt, TX
Moving from town to town is hard, especially when you have to understand every teacher's way of teaching. With the Algebrator it feels like there's only one teacher, and a good one too. Now I don't
have to worry about coping with Algebra. I am searching for help in other domains too.
Jessica Short, NJ
WOW! I had no idea how easy this really was going to be. I just plug in my math problems and learn how to solve them. Algebrator is worth every cent!
Dan Jadden, CO
Search phrases used on 2011-01-16:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
them?
• dividing fractional exponents
• simplifying algerbra equations
• mixed review of adding subtracting multiplying and dividing fractions
• maths test paper freem grade 1 to 5
• math quizes of angles
• system of equations DEFINITIONS
• mcdougal littell US history worksheet answers
• adding and subtracting games grade 1
• factoring cubed variables
• linear inequalities in ga toolbox matlab
• partial Differential Equation Calculator
• Free Graphing Linear Equations
• ks2 free fraction worksheets
• reducing exponents in square roots
• solving quadratic binomials
• calculate solution of linear differential equation
• defenition of mathtrivia
• lesson plan permutations 6th grade
• advanced algebra answers
• ti-84 graphing calculator find intersection graph
• solve nonlinear differential equations
• writing quadratic function in vertex form
• online fraction calculator
• graphing calculator boolean equation drawing
• ti 83 linear equation
• online trig calculator
• creative "rational expressions" lesson
• skills practice workbook answers
• principles of mathematics + 6th grade
• simple math trivia
• algebra 2 calculate ordered triple equation
• lcm with exponents calculator free
• CAT exam practice questions on geometry download
• 5th grade algebra worksheets
• holt pre algebra page 10 worksheet answers
• second order non constant coefficient partial differential equation
• adding rational expression calculator
• gcse scientific notation exam question
• graphing claculator find slope
• imperfect square roots
• answers to a glencoe McGraw Algebra 2 worksheet
• multiplying monomials glencoe/mcgraw 9-1
• printable: world's hardest math problem
• help with two variable equations
• how to multiply add subtract and divide fractions
• define parabola from 3 points
• free algebra for 3rd graders printables
• radical algebraic equation solvers
• physics square roots calculator
• mathematics formulas for 9th std[algebra]
• ratios into percents
• algebra 1 solver
• multiplying and dividing real numbers
• 3rd gradefraction word problems
• glencoe texas math 5th grade
• multivariable solver ti -89
• CPM teacher edition
• power is fraction
• multiplying rational expression calculator
• balancing chemical equations power point
• graphing systems of linear inequalities wkst
• How to enter inequalities on a TI-83 graphing calculator
• difference between standard and vertex form quadratic
• algebra with pizzazz direct variation
• simplifying by factoring
• hardest math problem
• least common denominator calculator fractions
• Radical equations and inequalities real world word problems
• difference of two squares when no square numbers
• polynomial factoring calculator online
• 5th grade prealgebra worksheets
• calculator quadratic formula square root
• Middle School Math with pizzazz answers
• yr 11 australia maths algebra practise test
• cubed polynomial
• linear equations converter
• bitesize ks2 math fraction exercise
• "year 8" and test papers
• quadratic word problems and answers
• fraction and decimal games printouts
• simplifying integer exponents free solver and simplifier
• algebra distance between two points radical form
• scott foresman math chapter 8 3rd grade
• motivational phrases for taks test
• algebra editor that works problems
• linear equation simplify a fraction
• holt physics book answers
• 6th grade algebra sample questions
• simplifying exponents with square roots
• pre algebra quotation marks
• "nj pass exam"
• write an application to solve quadratic equatons fo the form+java
{"url":"https://softmath.com/math-book-answers/multiplying-fractions/printable-algebra-textbooks.html","timestamp":"2024-11-10T10:43:00Z","content_type":"text/html","content_length":"36577","record_id":"<urn:uuid:28347a64-c6cf-45fd-88d9-89c90e98c91b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00219.warc.gz"}
|
Answered! Write a C program that is able to view, add, or multiply two matrices input by the user. After inputting the two…
Write a C program that is able to view, add, or multiply two matrices input by the user. After inputting the two matrices, the user should be presented with a menu of possible operations:
◦ View the matrices
◦ Add the matrices
◦ Multiply the matrices
◦ Quit the program
The user should be able to make a choice, see the outcome of the operation, and then make another choice from the menu. The program should continue to operate in this manner until the user selects the quit action, at which point the program should exit.
1. Collect user input for two matrices:
◦ The size of each matrix in rows and columns
◦ The data stored in the matrices
The user should be able to input up to a 5×5 matrix. Your program should be able to support any size up to a 5×5 (e.g. 2×3, 1×2, 4×4, 5×3, etc). The matrices should hold double type floating point numbers and be stored in a single or multidimensional array.
2. Display a menu of available options that includes, at least, the following:
◦ View the matrices
◦ Add the matrices
◦ Multiply the matrices
◦ Quit the program
The program should loop indefinitely until the user selects the option to quit. Each time an operation is selected, your program should perform the selected operation, display the results, and then present the user with the menu again.
3. The results of the performed operation should only be displayed and not stored, meaning they should not affect the values of the matrices input by the user. Thus performing the same operation repeatedly will return the same results. 4. Before performing an operation, you must ensure that the operation is valid. If the user has input matrices which are not compatible with the selected operation, print an error message indicating the problem and then present them with the menu again. Do not simply quit the program.
Expert Answer
The program is as follows:
#include <stdio.h>

int main(void) {
    int row1, col1;
    int row2, col2;
    double sum;
    int i, j, k;
    double first[5][5], second[5][5];
    int choice;

    /* Read and validate the dimensions of the first matrix. */
    do {
        printf("Enter the number of rows and columns of the first matrix\n");
        scanf("%d%d", &row1, &col1);
        if (row1 > 5 || col1 > 5)
            printf("The maximum value for row or column is 5\n");
    } while (row1 > 5 || col1 > 5);
    printf("Enter the elements of the first matrix\n");
    for (i = 0; i < row1; i++)
        for (j = 0; j < col1; j++)
            scanf("%lf", &first[i][j]);

    /* Read and validate the dimensions of the second matrix. */
    do {
        printf("Enter the number of rows and columns of the second matrix\n");
        scanf("%d%d", &row2, &col2);
        if (row2 > 5 || col2 > 5)
            printf("The maximum value for row or column is 5\n");
    } while (row2 > 5 || col2 > 5);
    printf("Enter the elements of the second matrix\n");
    for (i = 0; i < row2; i++)
        for (j = 0; j < col2; j++)
            scanf("%lf", &second[i][j]);

    /* Menu loop: repeat until the user chooses to quit. */
    do {
        printf("1. View the matrices\n");
        printf("2. Add the matrices\n");
        printf("3. Multiply the matrices\n");
        printf("4. Quit the program\n");
        scanf("%d", &choice);
        switch (choice) {
        case 1:
            printf("First Matrix:\n");
            for (i = 0; i < row1; i++) {
                for (j = 0; j < col1; j++)
                    printf("%g ", first[i][j]);
                printf("\n");
            }
            printf("Second Matrix:\n");
            for (i = 0; i < row2; i++) {
                for (j = 0; j < col2; j++)
                    printf("%g ", second[i][j]);
                printf("\n");
            }
            break;
        case 2:
            if (row1 == row2 && col1 == col2) {
                printf("Sum of Matrices:\n");
                for (i = 0; i < row1; i++) {
                    for (j = 0; j < col1; j++)
                        printf("%g ", first[i][j] + second[i][j]);
                    printf("\n");
                }
            } else {
                printf("Rows and columns of the matrices are not compatible for addition\n");
            }
            break;
        case 3:
            if (col1 == row2) {
                printf("Product of Matrices:\n");
                for (i = 0; i < row1; i++) {
                    for (j = 0; j < col2; j++) {
                        sum = 0;
                        for (k = 0; k < col1; k++)
                            sum += first[i][k] * second[k][j];
                        printf("%g ", sum);
                    }
                    printf("\n");
                }
            } else {
                printf("Rows and columns of the matrices are not compatible for multiplication\n");
            }
            break;
        }
    } while (choice != 4);
    return 0;
}
iTDVP and canonical forms
I'm working on developing an iTDVP algorithm using TeNPy. I'm able to recycle most of the aspects from the finite TDVP code, but there are some issues when trying to insert the updated tensors to form an updated MPS. Essentially, we have a 1-site and 0-site effective Hamiltonian that we use to update the singular values and the tensor with form "Th". Let's denote the updated tensors Th' and s'.
First, s' is no longer necessarily diagonal, and also Th' is no longer in its canonical form. To test this, I would expect that since Th = A-s, that Th' = A'-s', and so contracting Th' with Th'* from
the left, should result in s' contracted with s'* from the left (essentially the test in the function "MPS.norm_test()"). However, this test fails, and the difference grows with the time evolution
time dt.
I'm hoping maybe I can get help with the following question. Is this loss of canonical form actually a property of the iTDVP evolution procedure, or is it a bug in my code? Also, if it is a property
of the iTDVP procedure, how do I then bring these tensors into the appropriate form to then create an updated MPS? Making s' diagonal can be done using an SVD, and absorbing U, V appropriately using
the gauge freedom we have, but it's not as clear to me what to do with Th'.
Thank you for your time.
Re: iTDVP and canonical forms
What is your Th?
Re: iTDVP and canonical forms
Let AL denote the left canonical form of the iMPS, and AR denote the right canonical form. Let C be the two-leg center tensor. (C should be "s" in your notation.) Also the mixed-form tensor AC = AL-C
= C-AR. (I believe AC is your "Th"?) Then, iTDVP updates C to C' and AC to AC'. To find AL' and AR' from AC' and C', you can use Eq. 139-142 in this https://arxiv.org/pdf/1810.07006.pdf.
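For concreteness, the recipe from those equations can be sketched in a few lines of NumPy (this is not TeNPy code; the tensor shapes and names below are assumptions): take the isometric factors of the left polar decompositions of AC' (reshaped into a (d·D, D) matrix) and of C', and combine them to get a left-canonical AL'.

```python
import numpy as np

def left_isometry(mat):
    # Isometric factor of the left polar decomposition, via SVD: A = U S Vh -> U Vh
    u, _, vh = np.linalg.svd(mat, full_matrices=False)
    return u @ vh

def al_from_ac_c(AC, C):
    """Recover AL' from the updated AC' and C' (cf. Eqs. 139-142 of
    arXiv:1810.07006): AL' = U_AC U_C^dagger, with U_AC, U_C the isometric
    parts of the polar decompositions of AC' (reshaped) and C'."""
    d, Dl, Dr = AC.shape
    u_ac = left_isometry(AC.reshape(d * Dl, Dr))
    u_c = left_isometry(C)
    return (u_ac @ u_c.conj().T).reshape(d, Dl, Dr)

# Sanity check: AL' is left-isometric even when AC' is not exactly AL'·C'.
rng = np.random.default_rng(0)
AC = rng.normal(size=(2, 4, 4)) + 1j * rng.normal(size=(2, 4, 4))
C = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
AL = al_from_ac_c(AC, C)
m = AL.reshape(-1, 4)
print(np.allclose(m.conj().T @ m, np.eye(4)))  # → True
```

The point of using polar decompositions (rather than, say, solving AL' = AC' C'⁻¹) is that the result is exactly isometric by construction, which restores the canonical form that the time step destroyed.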
Project Euler #159: Digital root sums of factorisations. | HackerRank
[This problem is a programming version of Problem 159 from projecteuler.net]
A composite number can be factored many different ways.
For instance, not including multiplication by one, 24 can be factored in 7 distinct ways:
24 = 2×2×2×3 = 2×3×4 = 2×2×6 = 4×6 = 3×8 = 2×12 = 24.
Recall that the digital root of a number, in base 10, is found by adding together the digits of that number, and repeating that process until a number is arrived at that is less than 10.
Thus the digital root of 467 is 8.
We shall call a Digital Root Sum (DRS) the sum of the digital roots of the individual factors of our number.
The chart below demonstrates all of the DRS values for 24.
The maximum Digital Root Sum of 24 is 11.
The function mdrs(n) gives the maximum Digital Root Sum of n. So mdrs(24) = 11.
Find the sum of mdrs(n) for 1 < n < N.
The first line of each test file contains an integer T, the number of testcases.
T lines follow, each containing one integer N.
Output T lines, one for each testcase.
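As a starting point, mdrs can be written as a direct recursion over factorisations with factors in non-decreasing order — far too slow for the full contest constraints (where a sieve-style DP over divisors is the usual approach), but enough to verify the worked example above:

```python
from functools import lru_cache

def digital_root(n):
    # Base-10 digital root of a positive integer: 1 + (n - 1) mod 9.
    return 1 + (n - 1) % 9

@lru_cache(maxsize=None)
def mdrs(n, smallest=2):
    # Maximum Digital Root Sum over factorisations of n with factors >= smallest.
    best = digital_root(n)  # the trivial "factorisation" n = n
    d = smallest
    while d * d <= n:
        if n % d == 0:
            best = max(best, digital_root(d) + mdrs(n // d, d))
        d += 1
    return best

print(mdrs(24))  # → 11
```

The `smallest` parameter ensures each factorisation is enumerated exactly once (factors never decrease), and memoization keeps repeated subproblems cheap.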
Normally Distributed Data Set
The ends are lower than the rest of the histogram. The "favorite color" histogram is not normally distributed, as all colors are liked fairly equally. If the bars roughly follow a symmetrical bell or hill shape, like the example below, then the distribution is approximately normally distributed. Normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent than data far from the mean. A variable that is normally distributed has a histogram (or "density function") that is bell-shaped, with only one peak, and is symmetric around the mean.
In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed. Data can be "distributed" (spread out) in different ways. The blue curve is a normal distribution; the histogram follows it closely, but not perfectly (which is usual). A tool that will generate a normally distributed dataset based on a specified population mean and standard deviation. The main purpose of a histogram is to illustrate the general distribution of a set of data. Of the three data sets, the one that most closely resembles a normal distribution is the "IQ test results". An important class of distributions or density curves in statistics is the normal distribution. All normal distributions have the same overall shape. The normal distribution is a theoretical distribution of values for a population, often referred to as a bell curve when plotted on a graph. The standard normal distribution (z distribution) is a normal distribution with a mean of 0 and a standard deviation of 1. It is for this reason that it is included among the lifetime distributions commonly used for reliability and life data analysis. A normal distribution is a type of continuous probability distribution in which most data points cluster toward the middle of the range. If you want to test your data for normal distribution, simply copy your data into the table on DATAtab, click on descriptive statistics and then select the normality test.
Minitab can be used to generate random data. In this example, we use Minitab to create a random set of data that is normally distributed. These data on housefly wing lengths provide an excellent example of normally distributed data from the field of biometry. The normal distribution is a continuous probability distribution that is symmetrical around its mean; most of the observations cluster around the central peak. Converting Normal to Standard Normal: to convert X to Z, use the formula Z = (X − μ)/σ. Let's think about what this does. We have a normally distributed random variable X with mean μ and standard deviation σ. The standard deviation of a dataset is simply the number (or distance) that constitutes a complete step away from the mean. Adding or subtracting the standard deviation moves you one step along the axis. The normal distribution describes a symmetrical plot of data around its mean value, where the width of the curve is defined by the standard deviation. The most common graphical tool for assessing normality is the Q-Q plot. In these plots, the observed data is plotted against the expected quantiles of a normal distribution. To generate data there, you'd want to name your column (whatever you'd like) and select "Normal Distribution" under "Math" in the drop-down menu.
A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. A normal distribution is a common probability distribution. It has a shape often referred to as a bell curve. Many everyday data sets typically follow a normal distribution. The area under the bell-shaped curve of the normal distribution can be shown to be equal to 1, and therefore the normal distribution is a probability distribution. The Standard Normal curve, shown here, has mean 0 and standard deviation 1. If a dataset follows a normal distribution, then about 68% of the observations will fall within one standard deviation of the mean. The statistical way to check if the data is normally distributed is to perform the Anderson-Darling test of normality.
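The standardization Z = (X − μ)/σ and the one-standard-deviation rule are easy to check numerically. A small NumPy sketch with simulated data (the IQ-like mean of 100 and standard deviation of 15 are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=100.0, scale=15.0, size=100_000)  # simulated IQ-like scores

# Standardize: z = (x - mean) / sd gives a dataset with mean 0 and sd 1.
z = (data - data.mean()) / data.std()

# For normal data, roughly 68.3% of values lie within one standard deviation.
share = float(np.mean(np.abs(z) < 1))
print(round(share, 3))
```

With 100,000 samples the empirical share lands very close to the theoretical 68.3%, which is the quickest sanity check that simulated data really is normal.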
This distribution is known as the normal distribution (or, alternatively, the Gauss distribution or bell curve), and it is a continuous distribution having the familiar bell shape. The normal distribution model always describes a symmetric, unimodal, bell-shaped curve. However, these curves can look different depending on the details of the model. The normal distribution is a continuous distribution that is specified by the mean (μ) and the standard deviation (σ). Many of the statistical tests detailed in subsequent pages of this module rely on the assumption that any continuous data approximates a normal distribution. The mean for the standard normal distribution is zero, and the standard deviation is one. The transformation z = (x − μ)/σ produces the standard normal distribution.
How to Use Index Match Instead of Vlookup - Excel Campus
Bottom Line: Learn how to use the INDEX and MATCH functions as an alternative to VLOOKUP.
Skill Level: Beginner
Download the Excel File
Here is the Excel file that I used in the video. I encourage you to follow along and practice writing the formulas.
Advantages of Using INDEX MATCH instead of VLOOKUP
It's best to first understand why we might want to learn this new formula. There are two main advantages that INDEX MATCH have over VLOOKUP.
#1 – Lookup to the Left
The first advantage of using these functions is that INDEX MATCH allows you to return a value in a column to the left. With VLOOKUP you're stuck returning a value from a column to the right.
Yes, you can technically use the CHOOSE function with VLOOKUP to lookup to the left, but I wouldn't recommend it (performance test).
#2 – Separate Lookup and Return Columns
Another benefit is that you specify a single column for both the lookup and return ranges, instead of the entire table array that VLOOKUP requires.
Why is this a benefit? Because VLOOKUP formulas tend to break when columns in a table array get inserted or deleted. The calculation can also slow down if there are other formulas or dependencies in
the table array.
With INDEX MATCH there's less maintenance required for your formulas when changes are made to your worksheet.
Wrapping Your Head Around INDEX MATCH
An INDEX MATCH formula uses both the INDEX and MATCH functions. It can look like the following formula:
=INDEX($C$2:$C$8,MATCH(A12,$A$2:$A$8,0))
This can look complex and overwhelming when you first see it!
To understand how the formula works, we'll start from the inside and learn the MATCH function first. Then I'll explain how INDEX works. Finally, we will combine the two in one formula.
The MATCH Function is Similar to VLOOKUP
The MATCH function is really like VLOOKUP's twin sister (or brother). Its job is to look through a range of cells and find a match. The difference is that it returns a row or column number, NOT the
value of a cell.
The following image shows the Excel definition of the MATCH function, and then my simple definition. This simple definition just makes it easier for me to remember the three arguments.
The MATCH function's arguments are also similar to VLOOKUP's. MATCH's lookup_array argument is a single row/column. Therefore, we don't need the column index number argument that VLOOKUP requires.
=MATCH(lookup_value, lookup_array, [match_type])
=VLOOKUP(lookup_value, table_array, col_index_number, [range_lookup])
Let's dive into an example to see how MATCH works.
An Example of the Match Function
We'll use my Starbucks menu example to learn MATCH. In this case, we want to use the MATCH function to return the row number for “Caffe Mocha” from the list of items in column A.
Here are instructions on how to write the MATCH formula.
1. lookup_value – The “what” argument
In the first argument, we tell MATCH what we are looking for. In this example, we are looking for “Caffe Mocha” in column A. I have entered the text “Caffe Mocha” in cell A12 and referenced the cell
in the formula.
2. lookup_array – The “where” argument.
Next, we need to tell MATCH where to look for the lookup value. I selected the range $A$2:$A$8, which contains the list of items. MATCH will look through this column from top-to-bottom until it finds
a match.
I made this an absolute reference (F4 on the keyboard) so that the range does not change if we copy the formula down. It's good to get in the habit of doing this after selecting the lookup_array range.
Note: You can also specify a row for this argument. In that case, MATCH would look across the row from left-to-right to find a match.
3. [match_type] – The “closest/exact” match argument
Here we specify if the function should look for an exact MATCH, or a value that is less than or greater than the lookup_value.
MATCH defaults to 1 – Less than. So we always need to specify a 0 (zero) for an exact match. This is similar to specifying FALSE or 0 (zero) for the last argument in VLOOKUP.
When your MATCH is looking up text you will generally want to look for an exact match.
If you are looking up numbers with the MATCH function then the “Less than” or “Greater than” match types can be very useful for tax and commission rate calculations.
The Result
The MATCH function returns a 4. This is because it finds the lookup value in the 4th row of the lookup_array (A2:A8).
It's important to note that this is NOT the row number of the sheet. The row/column number that MATCH returns is relative to the lookup_array (range).
Now that we have a basic understanding of how MATCH works, let's see how INDEX fits in.
The Index Function
The INDEX function is like a roadmap for the spreadsheet. It returns the value of a cell in a range based on the row and/or column number you provide it.
There are three arguments to the INDEX function.
=INDEX(array, row_num, [column_num])
The third argument [column_num] is optional, and not needed for the VLOOKUP replacement formula.
So, let’s look at the Starbucks menu again and answer the following question using the INDEX function.
“What is the price of the Caffe Mocha, size Grande?”
1. array – The “where” argument.
This argument tells the INDEX where to look in the spreadsheet. I specified $C$2:$C$8 because this range is the column of prices that I want to return a value from.
Again, it's good practice to make this range an absolute reference so you can copy the formula down.
2. row_num – The “row number” argument
Next, we specify the row number of the value we want to return within the array (range). This is the row number of the array, NOT the row number of the sheet.
For now, we can hard code this number by typing a 4 into the formula.
The Result
The result is $3.95, the value in the 4th cell of the array (range).
Important Note: The number formatting from the array range does not automatically get applied to the cell that contains the formula. If you see a 4 returned by INDEX, this means you need to apply a
number format with decimal places to the cell(s) with the formula.
INDEX is pretty simple on its own. Let's see how to combine it with MATCH.
Combining INDEX and MATCH
By combining the INDEX and MATCH functions, we have a comparable replacement for VLOOKUP.
To write the formula combining the two, we use the MATCH function for the row_num argument.
In the example above I used a 4 for the row_num argument for INDEX. We can just replace that with the MATCH formula we wrote:
=INDEX($C$2:$C$8,MATCH(A12,$A$2:$A$8,0))
The MATCH function returns a 4 to the row_num argument in INDEX. INDEX then returns the value of that cell, the 4th row in the array (range).
The result is $3.95, the price of the Caffe Mocha size Grande.
Here is a simple guide to help you write the formula until you've practiced enough to memorize it.
Again, you can think of MATCH as the VLOOKUP. It just returns a row number to INDEX. INDEX then returns the value of the cell in a separate column.
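As a mental model, the division of labor between the two functions can be sketched in a few lines of Python (the menu data below is illustrative, not the article's workbook):

```python
def match(lookup_value, lookup_array):
    # Exact match (match_type 0): 1-based position of the value, like Excel's MATCH.
    for pos, value in enumerate(lookup_array, start=1):
        if value == lookup_value:
            return pos
    raise LookupError("#N/A")

def index(array, row_num):
    # 1-based element access, like Excel's INDEX with a one-column array.
    return array[row_num - 1]

# Hypothetical menu: a lookup column of items and a return column of prices.
items = ["Caffe Latte", "Caffe Americano", "Cappuccino", "Caffe Mocha"]
prices = [3.65, 2.95, 3.45, 3.95]

print(index(prices, match("Caffe Mocha", items)))  # → 3.95
```

Note that the two lists are completely independent: `prices` could sit to the left of `items`, or columns could be inserted between them, and the lookup would still work — exactly the advantage the article describes.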
VLOOKUP versus INDEX MATCH
Could we have accomplished the same thing with VLOOKUP in our example? Yes. But again, the advantage of using the INDEX MATCH formulas is that it's less susceptible to breaking when the spreadsheet changes.
Inserting and Deleting Columns
If, for example, we were to add a new cup size to our coffee menu and insert a column between Tall and Grande, our Vlookup formula would return the wrong result. This happens because Grande is now
the 4th column, but the index number for VLOOKUP is still 3.
You can also use the MATCH function with VLOOKUP to prevent these types of errors. However, VLOOKUP still can't perform a lookup to the left.
Look to the Left
In this case, the items could be to the right of the prices and INDEX MATCH would still work.
In fact, we don't even have to rewrite the formula if we move the columns around.
Matching Both Row and Column Numbers
I just want to quickly note that you can use two MATCH functions inside INDEX for both the row and column numbers.
In this example, the return range spans multiple rows and columns C4:E8.
We can use MATCH for lookups both vertically or horizontally. So, the MATCH function can be used twice inside INDEX to perform a two-way lookup. Looking up both the item name and price column.
I've added drop-downs (data validation lists) for both the Item and Size to make this an interactive price calculator. When the user selects the item and size, the INDEX MATCH MATCH formula will
automatically perform the lookups and return the correct price.
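The two-way lookup follows the same pattern, with one MATCH for the row (item) and one for the column (size); a minimal sketch with made-up menu data:

```python
def match(lookup_value, lookup_array):
    # Exact match: 1-based position, like Excel's MATCH with match_type 0.
    return next(i for i, v in enumerate(lookup_array, start=1) if v == lookup_value)

def index(table, row_num, col_num):
    # 1-based cell access within a rectangular range, like Excel's INDEX.
    return table[row_num - 1][col_num - 1]

items = ["Caffe Latte", "Cappuccino", "Caffe Mocha"]   # illustrative data
sizes = ["Tall", "Grande", "Venti"]
prices = [[2.95, 3.65, 3.95],
          [3.15, 3.45, 3.75],
          [3.45, 3.95, 4.15]]

price = index(prices, match("Caffe Mocha", items), match("Grande", sizes))
print(price)  # → 3.95
```

This mirrors =INDEX(C4:E8, MATCH(item, ...), MATCH(size, ...)): each MATCH resolves one coordinate, and INDEX reads the cell where they cross.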
The Most Common Error with INDEX MATCH Formulas
The most common error you will probably see when combining INDEX and MATCH functions is the #REF error.
This is usually caused when the return range in INDEX is a different size from the lookup range in MATCH. In the image below, you can see that the MATCH range includes row 8, while the INDEX range
only goes up to row 7. When the specified criteria can't be found because of the misalignment, this will cause the formula to return an error that says #REF.
To fix the error, you can simply expand the smaller range to match the larger. In this case, we would change the INDEX range to end at cell D8 instead of D7.
INDEX MATCH will also return the #N/A error when a value is not found, just like VLOOKUP.
Using Excel Tables to Reduce Errors
The example above was easy enough to spot and fix because the data set is so small. When working with larger data sets the mismatch can occur more often because there are blank cells in the data. One
workaround for that is to reference Tables instead of ranges. Here is a tutorial that explains more about Excel Tables and Structured Reference Formulas.
INDEX MATCH can be difficult to understand at first. I encourage you to practice with the example file and you'll have this formula committed to memory in no time.
The new XLOOKUP function is an alternative to VLOOKUP and INDEX MATCH that is easier to write. It will just be limited by availability and backward compatibility in the near future.
Please leave a comment below with questions or suggestions.
Thank you! 🙂
51 comments
• Please send a new link to dơnload Excel File.
□ Hi Hai,
I’m sorry about that. I believe the download is working now. Let us know if you experience any issues. Thanks!
• link to workbook not working
□ never mind working now
☆ Thanks for letting us know Jim. And I apologize for the inconvenience.
• Link to the Excel file is broken
□ Hi Don,
I’m sorry about that. It should be working now.
• Hi John,
the file cannot be downloaded.
□ Hi Guilian,
I’m sorry about that. It should be working now.
• Excellent explanation. Keep it up.!!!!
□ Thank you, Mohammed! I appreciate your support. 🙂
• Jon
I think I’ve mentioned this before, but will say it again: It is just wrong and unfair to compare an INDEX/MATCH combo formula with a naked VLOOKUP!
By doing so, you’re comparing apples with oranges! And raising the performance issue up front is NOT justification for continuing the unbalanced comparison, because using VLOOKUP with CHOOSE
eliminates the following perceived disadvantages of VLOOKUP you’ve then raised in that unbalanced comparison: (1) the look right only (2) wide range vs single column only, and (3) risk of
integrity comprise when columns are inserted within VLOOKUP’s table_array argument.
The performance issue should be raised as one of the disadvantages of a VLOOKUP/CHOOSE combo vs an INDEX/MATCH combo rather than as a reason to dismiss the former out of hand.
VLOOKUP/CHOOSE and INDEX/MATCH are almost identical in functionality – it’s the degraded performance of the former that elevates the benefit of using the latter, but little else.
□ Hi Col,
I kindly disagree. With Excel, there are always many different ways to solve a problem. Part of our job as analysts is to try and find the fastest and most efficient way to produce a result.
Will we always get this right the first time? Absolutely not. Part of the fun is continuing to iterate, learn, and improve.
VLOOKUP and INDEX MATCH can both produce the same result. So, I believe it is absolutely a fair comparison. As does Microsoft.
As I mentioned in the article, VLOOKUP and MATCH are very similar functions. They really just return a different value/property of the matching cell (item in the array).
The new XLOOKUP function is really a combination of VLOOKUP and INDEX MATCH, giving us the best of both worlds. Microsoft points out some of the same issues I mentioned in this article in
their announcement of XLOOKUP. Check out the section titled “Why release a new lookup function?”
Here is a quote from their Facebook page on XLOOKUP compared to VLOOKUP and INDEX MATCH.
VLOOKUP CHOOSE can be used as an alternative, I just don't recommend it because of poor performance and complexity. Most users have never seen the notation for arrays in formulas with curly braces.
Don't get me wrong, I'm still a fan of VLOOKUP! I actually voted for it as my favorite in an interview Microsoft did with me on VLOOKUP versus INDEX MATCH. I'll try to find the video. I'll also do a follow-up post on why I like VLOOKUP. This article is highlighting scenarios where it doesn't work, and an alternative solution with INDEX MATCH.
I hope that helps. Thanks! 🙂
• Hi Jon,
Thanks for that overview. When I go to download the excel file I get taken to an error page.
Can you fix the link and let me know when it is OK or email me the file
□ Hi Jeff,
I’m sorry about that. The download should be working now. Let us know if you still have issues.
Thank you for the nice feedback! 🙂
• Again well presented, thank you for this useful tool.
□ Thank you, Mark! 🙂
• Hi Jon,
Thank you so much for the training!
• I’ve always struggled to understand Index Match. This breaks it down and it explains it step by step. Thank you!!
• VLOOKUP can easily find the sales person, but it has no way to handle the month name automatically. The trick is to to use the MATCH function in place of a static column index.
• Hi Jon, nicely explained.
Using a lot of big VLOOKUPs recently, I will give this option a go.
I would be interested in any comment you have about whether either option is better to manage size of workbook? Basically looking at options to help manage some of the fluctuations in size when
retaining calculated cells.
□ Vlookup creates bigger file size. Also, using tables and named ranges ensures that you do not have to update the formula when adding new columns or rows of data.
• Thanks vary much, Jon. The ability to look left and the ability to do a lookup on two arguments are a knock-out, as far as I can see.
Are there any situations where Vlookup is still better? (large data sets maybe?)
• I’m trying to employ this and didn’t have any luck. I attempted to use VLookup and it didn’t work. My question is, do the worksheets need to be in the same workbook for these to work?
□ I just tested Index and Match using totally different work book as data source and it worked. Hope you’ll get it sorted.
• This problem is difficult to explain, but I will try. I’m creating a database to keep track of stock option spread trades that have numerous legs. Some of the trades are simple and only having
two legs, with one position being long, and the other being short. But, some of the trades have 8-10 legs, and this creates a problem in calculating the value of each leg. For this example, lets
open a credit spread trade and go long an option, and go short an option. This is trade number 500 and each leg is recorded on a separate row. I want to invest $1000 per leg, so we need to take
the ABS difference of the two fill prices and divide it $1000. This will tell us the quantity purchased. Eventually, we’ll close the short leg, and open a new short leg. And each time, we need to
calculate a quantity by performing the above calculation with the long leg. This cycle of closing the short leg and opening a new short leg can continue until the long leg is finally closed,
which ends the trade. So, let’s say we have 8-legs in this trade, the first row contains the long leg, and the 7 rows below contain the short legs. Each row has an ID of 500 to identify all
8-rows as trade number 500. Each time a short leg is closed, and a new short leg is opened, we need to scan the table for ID 500, then scan those rows to find the long leg. We then need to go
over 5 columns to locate the long leg fill price to use in the new quantity calculation. Hope all that makes sense! Thanks, Jeff
• Thanks for the explanation, it worked for me. left lookup is one of the best feature of index match function.
• Hi Jon,
Thank you so much for your explanation, I would like to ask and follow your opinion on my issue. I am trying to build a workbook for my small business so i can track inventory. I’ve done the
codes for the items (35 Items in total) and with VLOOKUP I made the received tab and sold tab. The problem now is every customer is on a separate tab which means in my sold tab I will need to
read and sum the total amounts of every customer for different item…What function would be the best to sort this?
I will really appreciate if you can help me out.
• Can "" be used somehow with index match
• Having a real problem with Excel: for years, I’ve been able to create formulas using VLOOKUP and the INDEX/MATCH combo with data from 2 workbooks. Now, in Excel 2019, my formulas do not recognize
other workbooks when I try to select them for the array.
Any ideas as to why this is happening? I know that it won’t work if each workbook is in multiple instances of Excel, but both of my workbooks were opened using the same instance.
• Great video!! I love the Match Row and Column example and the use of the drop down menu. I am wondering though…
If I added a new size column called maxi, and input prices for the first two drinks down the menu and left the next three blank, how can I get Excel to return a “N/A” value?
Currently, if I leave the maxi white chocolate mocha cell blank, and select it from the drop down lists, excel is returning a value of “0” in the price cell. I expect there will be some sort of
true false statement, I’ve already trialled a few combinations but can’t seem to get a formula to work.
□ It will only return #N/A if it can't find the drink a/o size, so you'd have to add something into your formula to check if the price is 0 and return #N/A if it is. Something like =IF(formula=0,"#N/A",formula) where 'formula' is your existing index/match.
• Thanks, Great Job.
• […] The INDEX MATCH functions look up and return the value of a cell in a selected array, using 1 or 2 reference values. INDEX MATCH is like VLOOKUP, but INDEX MATCH is more robust and flexible.
• Thanks for the very helpful instruction, Jon. Cheers.
• I was looking for the perfect video on YouTube to clear up all my questions on Excel. I found many, but none close to Excel Campus… awesome work, keep it up! I have a suggestion: I would like to see more videos on VBA and macros.
• Nice job on these training videos. I'm going to be focusing on pivot tables, but I thought it was best to get the basics of VLOOKUP, INDEX & MATCH first. These videos were perfect: easy step-by-step instructions. Nice job, Jon :).
• THANK YOU! I finally understand this from your explaining through comparison to vlookup.
• Nice
• Hello. What happens when you use these commands in the same row? For example: my column B has the stock ticker; column C tells you if it is a Buy or a Sell; and column D tells you how many stocks
were bought or sold. I want to know (using index and match or the correct commands) how many stocks from column B were bought or sold; for results, I am using a table that has the tickers on a
column and Buy / Sell on rows. Thanks!
• Really great article explaining INDEX + MATCH. Thanks for writing this.
• I am happy, Nice presentation
• I watched many videos on Youtube to guide me through Index/Match to no avail. Thank you for breaking it down so simply and understandably.
• What’s the point of index match? I thought it was the formula for when you have rows with different data but the same unique identifier and wanted to match the information. From the videos I’ve
seen it just looks like a formula for using ctrl + F to find a value and then read what’s in its row?
• Thanks for your Excel documents.
• I have used your formula and it works great.
I do have an issue though.
• I am making a score sheet for the Super Bowl. When I write the formula it works, but if I put in a double digit, it gives me "#N/A".
If I insert a column and use "=RIGHT(Cell#,1)" it still does not work. Any suggestions?
• Thanks for the post. Another big advantage is that INDEX/MATCH references the column directly, whereas VLOOKUP just takes a column number for the lookup value. VLOOKUP has the issue where, if you insert a new column into the lookup table, your function is now wrong; with INDEX/MATCH the reference adjusts, so your function still works.
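The column-insertion point in this comment can be illustrated outside Excel. The Python sketch below (table contents invented for illustration) contrasts a positional lookup, which silently returns the wrong field after a column is inserted, with a name-based lookup, which keeps working; this mirrors VLOOKUP's fixed column number versus an INDEX/MATCH reference.

```python
rows = [
    {"item": "apple", "price": 1.20, "stock": 30},
    {"item": "pear",  "price": 0.90, "stock": 12},
]

def positional_lookup(key, table, col_index):
    """VLOOKUP-style: return the col_index-th field of the matching row."""
    for row in table:
        values = list(row.values())
        if values[0] == key:
            return values[col_index]
    return None

def named_lookup(key, table, col_name):
    """INDEX/MATCH-style: return the field by its column name."""
    for row in table:
        if row["item"] == key:
            return row[col_name]
    return None

print(positional_lookup("pear", rows, 1))   # 0.9
print(named_lookup("pear", rows, "price"))  # 0.9

# "Insert a column" (category) before price: the positional answer is now
# a different field, while the named lookup is unaffected.
rows2 = [{"item": "pear", "category": "fruit", "price": 0.90}]
print(positional_lookup("pear", rows2, 1))   # fruit
print(named_lookup("pear", rows2, "price"))  # 0.9
```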
• It’s a great and simple lesson! Bravo!!
• Good morning from Africa,
I am trying to use some of these functions for a payslip…
Looking up on ID, I am searching whether e.g. Bonus, Overtime, or Loan is relevant for this ID (if not, it should be left blank), or displaying in the same row the item which is relevant to the ID.
I am just not able to combine functions for my needs.
Maybe someone can help?
Warm regards
• Wow. I have been burned more than once with VLookup. I really like the Index Match alternative. The way you presented it (simple understandable subject matter) and the terminology used to convey
it and remember it, is pretty good.
My only 'druther' would have been to include one more extension to the formula by adding an error message for the user when #N/A is the result. Like: when #N/A happens, show "name/item not found" or something like that.
THANK YOU VERY MUCH. This WILL get used in my new budget worksheet.
• Great explanations and very helpful. I really appreciate it.
|
{"url":"https://www.excelcampus.com/functions/index-match-formula/","timestamp":"2024-11-04T05:42:11Z","content_type":"text/html","content_length":"285706","record_id":"<urn:uuid:9d7e9b54-fc51-45ac-8eb5-d9b4533b5e0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00729.warc.gz"}
|
Benim, Robert W
Assistant Teaching Professor
Research Areas
selected publications
courses taught
• APPM 1350 - Calculus 1 for Engineers
Primary Instructor - Fall 2020 / Spring 2021 / Fall 2021 / Spring 2022 / Fall 2022 / Spring 2023 / Fall 2023 / Spring 2024 / Fall 2024
Topics in analytical geometry and calculus including limits, rates of change of functions, derivatives and integrals of algebraic and transcendental functions, applications of differentiation
and integration. Students who have already earned college credit for calculus 1 are eligible to enroll in this course if they want to solidify their knowledge base in calculus 1. For more
information about the math placement referred to in the "Enrollment Requirements", contact your academic advisor. Degree credit not granted for this course and APPM 1345 or ECON 1088 or MATH 1081
or MATH 1300 or MATH 1310 or MATH 1330.
• APPM 1351 - Calculus 1 Work Group
Secondary Instructor - Fall 2024
Provides problem-solving assistance to students enrolled in APPM 1350. Student groups work in a collaborative learning environment. Student participation is essential.
• APPM 1360 - Calculus 2 for Engineers
Primary Instructor - Spring 2021 / Spring 2022 / Spring 2023
Continuation of APPM 1350. Focuses on applications of the definite integral, methods of integration, improper integrals, Taylor's theorem, and infinite series. Degree credit not granted for this
course and MATH 2300.
• APPM 2350 - Calculus 3 for Engineers
Primary Instructor - Spring 2024
Covers multivariable calculus, vector analysis, and theorems of Gauss, Green, and Stokes. Degree credit not granted for this course and MATH 2400.
• APPM 3170 - Discrete Applied Mathematics
Primary Instructor - Fall 2023 / Spring 2024 / Fall 2024
Introduces students to ideas and techniques from discrete mathematics that are widely used in science and engineering. Mathematical definitions and proofs are emphasized. Topics include formal
logic notation, proof methods; set theory, relations; induction, well-ordering; algorithms, growth of functions and complexity; integer congruences; basic and advanced counting techniques,
recurrences and elementary graph theory. Other selected topics may also be covered.
• APPM 3310 - Matrix Methods and Applications
Primary Instructor - Fall 2020 / Summer 2021 / Fall 2021 / Spring 2022 / Summer 2022 / Fall 2022
Introduces linear algebra and matrices with an emphasis on applications, including methods to solve systems of linear algebraic and linear ordinary differential equations. Discusses vector space
concepts, decomposition theorems, and eigenvalue problems. Degree credit not granted for this course and MATH 2130 and MATH 2135.
|
{"url":"https://experts.colorado.edu/display/fisid_167716","timestamp":"2024-11-03T16:30:10Z","content_type":"text/html","content_length":"23514","record_id":"<urn:uuid:7c8836dd-1a8a-4d37-875f-b8950be180be>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00592.warc.gz"}
|
Soft Decision Trees
In this chapter, we develop the foundation of a new theory for decision trees based on new modeling of phenomena with soft numbers. Soft numbers represent the theory of soft logic that addresses the
need to combine real processes and cognitive ones in the same framework. At the same time, soft logic develops a new concept of modeling and dealing with uncertainty: the uncertainty of time and
space. It is a language that can speak in two reference frames and also suggests a way to combine them. In classical probability, for continuous random variables, there is no distinction between probabilities involving strict and non-strict inequality. Moreover, a probability involving equality collapses to zero, without distinguishing among the values we would like the random variable to take for comparison. This chapter presents soft probability, incorporating soft numbers into probability theory. Soft numbers are a set of new numbers that are linear combinations of multiples of "ones" and multiples of "zeros." In this chapter, we develop a probability involving equality as a "soft zero" multiple of a probability density function (PDF). Based on soft probability, we introduce an approach to implementing the C4.5 algorithm as an example of a soft decision tree.
Original language English
Title of host publication Machine Learning for Data Science Handbook
Subtitle of host publication Data Mining and Knowledge Discovery Handbook, Third Edition
Publisher Springer International Publishing
Pages 143-170
Number of pages 28
ISBN (Electronic) 9783031246289
ISBN (Print) 9783031246272
State Published - 1 Jan 2023
ASJC Scopus subject areas
• General Computer Science
• General Mathematics
Dive into the research topics of 'Soft Decision Trees'. Together they form a unique fingerprint.
|
{"url":"https://cris.bgu.ac.il/en/publications/soft-decision-trees","timestamp":"2024-11-03T09:18:22Z","content_type":"text/html","content_length":"55618","record_id":"<urn:uuid:abd2b829-d275-4e8f-8a3a-465ebb47cb47>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00602.warc.gz"}
|
IFNA Function: Definition, Formula Examples and Usage
IFNA Function
Are you looking to use the IFNA function in Google Sheets to streamline your spreadsheet work? Look no further, because I’ve got you covered.
The IFNA function is a handy tool that allows you to specify a value or action to take if a formula returns the #N/A error. This error occurs when a formula or function can’t find the data it needs
to complete a calculation. By using the IFNA function, you can specify what to do in these situations and avoid errors in your spreadsheet. This can be especially useful if you’re working with large
amounts of data and need to ensure that your calculations are accurate. So, if you want to take your spreadsheet skills to the next level, definitely give the IFNA function a try!
Definition of IFNA Function
The IFNA function in Google Sheets is a logical function that allows you to specify a value or action to take if a formula returns the #N/A error. It has the following syntax: IFNA(value,
value_if_na). The value argument is the formula or function that may return the #N/A error. The value_if_na argument is the value or action that you want to occur if the formula returns the #N/A
error. For example, you could use the IFNA function to replace the #N/A error with a 0, or to display a custom error message instead of the #N/A error. The IFNA function can be useful for cleaning up
and streamlining your spreadsheet work, especially when working with large amounts of data.
Syntax of IFNA Function
The syntax of the IFNA function in Google Sheets is as follows:
=IFNA(value, value_if_na)
The value argument is the formula or function that may return the #N/A error. This can be any formula or function that you want to use in your spreadsheet.
The value_if_na argument is the value or action that you want to occur if the formula returns the #N/A error. This can be any value, such as a number, text, or a reference to another cell. It can
also be another formula or function, as long as it doesn’t return the #N/A error itself.
For example, you could use the following formula to replace the #N/A error with a 0:
=IFNA(A1/B1, 0)
This formula would divide the value in cell A1 by the value in cell B1, and if the result is the #N/A error, it would display a 0 instead.
Examples of IFNA Function
Here are three examples of how you can use the IFNA function in Google Sheets:
1. Replace the #N/A error with a 0:
=IFNA(A1/B1, 0)
This formula would divide the value in cell A1 by the value in cell B1, and if the result is the #N/A error, it would display a 0 instead.
2. Display a custom error message instead of the #N/A error:
=IFNA(A1/B1, "Error: Divide by zero")
This formula would divide the value in cell A1 by the value in cell B1, and if the result is the #N/A error, it would display the text “Error: Divide by zero” instead.
3. Use the IFNA function with the VLOOKUP function:
=IFNA(VLOOKUP(A1, B1:C10, 2, FALSE), "Not found")
This formula would use the VLOOKUP function to search for the value in cell A1 in the range B1:C10, and if the value is found, it would return the value in the second column of the matching row.
If the value is not found, the IFNA function would display the text “Not found” instead of the #N/A error.
Use Case of IFNA Function
Here are a few real-life examples of using the IFNA function in Google Sheets:
1. In a budget spreadsheet, you might use the IFNA function to replace the #N/A error with a 0 when a cell is left blank. For example, you could use the following formula:
=IFNA(A1, 0)
This would display a 0 in the cell if it is left blank, instead of the #N/A error.
2. In a customer database, you might use the IFNA function with the VLOOKUP function to return a custom error message if a customer’s name is not found in the database. For example:
=IFNA(VLOOKUP(A1, B1:D1000, 2, FALSE), "Customer not found")
This formula would use the VLOOKUP function to search for the customer’s name in cell A1 in the range B1:D1000, and if the name is found, it would return the customer’s phone number from the
second column of the matching row. If the name is not found, the IFNA function would display the text “Customer not found” instead of the #N/A error.
3. In a product inventory spreadsheet, you might use the IFNA function to check if a product is in stock. For example:
=IFNA(VLOOKUP(A1, B1:C1000, 2, FALSE), "Out of stock")
This formula would use the VLOOKUP function to search for the product name in cell A1 in the range B1:C1000, and if the product is found, it would return the quantity in stock from the second
column of the matching row. If the product is not found, the IFNA function would display the text “Out of stock” instead of the #N/A error.
Limitations of IFNA Function
The IFNA function in Google Sheets has a few limitations that you should be aware of:
1. The IFNA function only works with formulas or functions that return the #N/A error. If a formula or function returns a different error or a result that you don’t expect, the IFNA function will
not be able to handle it.
2. The value_if_na argument in the IFNA function must be a value or a formula that does not itself return the #N/A error. If the value_if_na argument returns the #N/A error, the IFNA function will
not be able to handle it and will return the #N/A error itself.
3. The IFNA function does not support additional logical tests, such as AND, OR, and NOT. If you need to use multiple logical tests in your formula, you will need to use the IF function instead of
the IFNA function.
4. The IFNA function is not available in all versions of Google Sheets. If you are using an older version of Google Sheets, you may not have access to the IFNA function.
Despite these limitations, the IFNA function can still be a very useful tool for streamlining your spreadsheet work and avoiding errors in your calculations.
Commonly Used Functions Along With IFNA
Here are some commonly used functions that are often used in combination with the IFNA function in Google Sheets:
1. VLOOKUP: The VLOOKUP function is used to search for a value in a table and return a corresponding value from a different column in the same row. This is often used in combination with the IFNA
function to handle cases where the value is not found in the table. For example:
=IFNA(VLOOKUP(A1, B1:D1000, 2, FALSE), "Not found")
This formula would use the VLOOKUP function to search for the value in cell A1 in the range B1:D1000, and if the value is found, it would return the value in the second column of the matching
row. If the value is not found, the IFNA function would display the text “Not found” instead of the #N/A error.
2. INDEX: The INDEX function is used to return the value of a cell from a specific row and column within a range. This is often used in combination with the IFNA function to handle cases where the
cell is not found within the range. For example:
=IFNA(INDEX(B1:D1000, A1, 2), "Not found")
This formula would use the INDEX function to return the value in the second column of the row specified by the value in cell A1 in the range B1:D1000. If the cell is not found within the range,
the IFNA function would display the text “Not found” instead of the #N/A error.
3. MATCH: The MATCH function is used to find the position of a value within a range and return its relative position. This is often used in combination with the IFNA function to handle cases where
the value is not found in the range. For example:
=IFNA(MATCH(A1, B1:D1000, 0), "Not found")
This formula would use the MATCH function to search for the value in cell A1 in the range B1:D1000, and if the value is found, it would return its relative position within the range. If the value
is not found, the IFNA function would display the text “Not found” instead of the #N/A error.
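For readers who think in code, the IFNA-plus-MATCH pattern above can be modeled in a few lines of Python. This is only an analogy (the `NA` sentinel and function names are inventions for illustration), not actual Google Sheets behavior:

```python
NA = object()  # sentinel standing in for the #N/A error

def match(value, rng):
    """Rough analogue of MATCH(value, range, 0): 1-based position or NA."""
    try:
        return rng.index(value) + 1
    except ValueError:
        return NA

def ifna(value, value_if_na):
    """Analogue of IFNA(value, value_if_na)."""
    return value_if_na if value is NA else value

names = ["ann", "bob", "cid"]
print(ifna(match("bob", names), "Not found"))  # 2
print(ifna(match("zoe", names), "Not found"))  # Not found
```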
The IFNA function in Google Sheets is a powerful tool that allows you to specify a value or action to take if a formula returns the #N/A error. This can be especially useful when you are working with large
amounts of data and need to ensure that your calculations are accurate. The IFNA function has the following syntax: IFNA(value, value_if_na). The value argument is the formula or function that may
return the #N/A error, and the value_if_na argument is the value or action that you want to occur if the formula returns the #N/A error.
Some common use cases for the IFNA function include replacing the #N/A error with a 0, displaying a custom error message instead of the #N/A error, and using the IFNA function with other functions
such as VLOOKUP and INDEX to handle cases where a value is not found. The IFNA function has a few limitations, such as not supporting additional logical tests and not being available in all versions
of Google Sheets, but it can still be a very useful tool for streamlining your spreadsheet work and avoiding errors in your calculations.
If you’re looking to take your spreadsheet skills to the next level, definitely give the IFNA function a try! It can save you a lot of time and hassle by allowing you to handle errors and missing
data in a more efficient and organized way.
Video: IFNA Function
In this video, you will see how to use the IFNA function. We suggest you watch the video to understand the usage of the IFNA formula.
|
{"url":"https://sheetsland.com/ifna-function/","timestamp":"2024-11-11T01:07:23Z","content_type":"text/html","content_length":"50768","record_id":"<urn:uuid:933fa64e-4a44-44d3-8283-902d586bd29f>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00183.warc.gz"}
|
Maui Sandal Case Study
Essay by savesecond • January 6, 2016 • Case Study • 1,488 Words (6 Pages) • 1,081 Views
It is my recommendation that Shuzworld change its facilities layout to create a better workflow. This can be accomplished by breaking up the current structure and reorganizing the workstations to complete tasks in a more time-efficient manner, maximizing an employee's time spent on the product. From the case study we can surmise that each eight-hour shift must produce forty-eight work boots: 8 hours times 60 minutes equals 480 minutes, and 480 minutes (the work shift) divided by 48 (the number of work boots required) equals 10, so one work boot must be completed every 10 minutes to meet production goals. Let's take a look at the steps involved in making a work boot to analyze where we can cut production times.
Shuzworld's Rugged Wear
| Task | Completion (Minutes) | Predecessors |
| A | 10 | None |
| B | 6 | A |
| C | 3 | A |
| D | 8 | B, C |
| E | 3 | D |
| F | 4 | D |
| G | 3 | E, F |
| H | 9 | G |
| Total time | 46 | |
Both Table 1 and Figure 1 provide information about the tasks associated with producing the work boot. The workflow diagram in Figure 1 shows that some tasks are parallel and others are sequential; in other words, some tasks can start together, while others cannot start until the previous task is completed. Table 1 shows that task A takes 10 minutes to complete; it is the longest task in this schedule. The production line consumes 46 minutes to complete all tasks. It produces 6 boots an hour and maintains a 40-hour-per-week work schedule.
The plant's operations director, Alistair Wu, wants to find out how to optimize tasks A through H, described in Table 1 and Figure 1. The concept implies that the tasks within the assembly line must be assigned to meet the production rate while minimizing idle time. This method is called balancing the assembly line. It uses the following strategies: shortest operation time, least number of following tasks, longest operation time, most following tasks, and ranked positional weight. The number of tasks that require balancing dictates the assembly line balancing framework. In Shuzworld's case, the framework uses the following constraints:
• The shortest cycle time cannot be lower than 10 minutes;
• The entire cycle time must not be longer than 46 minutes.
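These constraints can be sanity-checked with a short calculation: the required cycle time is the available production time divided by demand, and the theoretical minimum number of workstations is the total task time divided by the cycle time, rounded up. The Python sketch below is a generic line-balancing check, not output from the POM software:

```python
import math

shift_minutes = 8 * 60   # 480 minutes per shift
demand = 48              # work boots required per shift
task_times = {"A": 10, "B": 6, "C": 3, "D": 8,
              "E": 3, "F": 4, "G": 3, "H": 9}  # from Table 1

cycle_time = shift_minutes / demand        # minutes allowed per boot
total_time = sum(task_times.values())      # total work content
min_stations = math.ceil(total_time / cycle_time)

print(cycle_time)    # 10.0 minutes per boot
print(total_time)    # 46 minutes
print(min_stations)  # 5 stations (theoretical lower bound)
```

The 10-minute cycle time matches the shortest allowed cycle noted above (task A, the longest task, also takes 10 minutes), and no balance can use fewer than the theoretical minimum number of stations.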
Answer to question A 1a: The solution is achieved using POM software. Tables 2 and 3 show the corresponding input and output of assembly line balancing for the rugged work boot.
Table 2.
Input Data
| Station | Task |
| 1 | 5 units |
| 2 | 5 + 10 = 15 units |
| 3 | 15 + 15 = 30 units |
| 4 | 30 + 20 = 50 units |
B. New Sandal Line
I have gathered the relevant information provided and estimated the cost of producing the new line of sandals. There is a learning curve of 80% according to the information from Shuzworld, and the production expectations are aggressive. The sandals are prepared in groupings of 10,000. The first month will have 5 groupings produced, utilizing 1,000 hours of labor billed at $1.08 an hour. The goal is to have the groupings increase by 5 every month for three months. To help calculate the cost of production to meet these goals, I've used the coefficient approach. The initial 5 groupings required in month 1 will cost $4,035.96, using 3,737 labor hours to complete. Applying the learning curve, it will take around 4,775 labor hours to manufacture ten groupings of the footwear in the second month, at a cost of $5,154.62. The third month will take around 5,511 labor hours for fifteen groupings, with a labor cost of around $5,950.10. The last month requires around 6,102 hours of labor for twenty groupings, for a cost of $6,590.16. These projections are made using the estimates provided in the Shuzworld report.
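The coefficient approach referenced above follows the standard log-linear learning-curve model: the time for the n-th unit is T1 · n^b, where b = log2(learning rate). The sketch below applies that model with a 1,000-hour first-unit time and the 80% rate from the figures above; summed over five groupings it lands near the 3,737 labor hours quoted, but treat it as an illustrative check, not a reconstruction of the Shuzworld spreadsheet.

```python
import math

def unit_time(n, first_unit_time, learning_rate=0.80):
    """Hours for the n-th unit under a log-linear learning curve."""
    b = math.log(learning_rate, 2)  # 80% curve -> b is about -0.322
    return first_unit_time * n ** b

def cumulative_hours(units, first_unit_time, learning_rate=0.80):
    """Total labor hours for the first `units` units."""
    return sum(unit_time(n, first_unit_time, learning_rate)
               for n in range(1, units + 1))

t1 = 1000  # hours for the first grouping (from the figures above)
# Each doubling of cumulative output cuts unit time to 80%:
print(round(unit_time(2, t1), 6))         # 800.0
print(round(unit_time(4, t1), 6))         # 640.0
print(round(cumulative_hours(5, t1), 1))  # 3737.7
```

Multiplying cumulative hours by the $1.08 labor rate gives the labor cost; the report's $4,035.96 is exactly 3,737 hours times $1.08.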
Continuing the Maui Sandal line will lower the costs of production per month based on the information above. The first month, the sandals will have a labor cost of $0.08 per unit, it is further
reduced to $0.05 per unit in month
|
{"url":"https://www.allbestessays.com/essay/Maui-Sandal-Case-Study/60716.html","timestamp":"2024-11-07T07:28:30Z","content_type":"application/xhtml+xml","content_length":"85931","record_id":"<urn:uuid:7852153d-f588-45a5-befe-ad7229cd03b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00419.warc.gz"}
|
Order in substituting values for the variables - Predecessors and Successors
+ General Questions (11)
Dear Prof. and Team,
Is there any order to be maintained while substituting 0 or 1 for the variables here? In this case the variables are a, q1, and q2. Can I take any random order?
In this example, a is first substituted with 0 and 1. Instead of that, can I substitute q1 with 0 and 1 at the beginning, and so on?
P.S.: I understand the complexity of the formula increases if I don't substitute a first. But my question is general: should we choose the variable to substitute thoughtfully, so that the formula gets reduced to its simplest form?
You can perform the universal quantification of variables in any order since it is commutative. Recall that the definition of universal quantification is ∀x.φ := φ[x:=0] ⋀ φ[x:=1]. Thus, we have
• ∀x.∀y.φ = (∀y.φ)[x:=0] ⋀ (∀y.φ)[x:=1] = φ[x:=0][y:=0] ⋀ φ[x:=0][y:=1] ⋀ φ[x:=1][y:=0] ⋀ φ[x:=1][y:=1]
• ∀y.∀x.φ = (∀x.φ)[y:=0] ⋀ (∀x.φ)[y:=1] = φ[x:=0][y:=0] ⋀ φ[x:=1][y:=0] ⋀ φ[x:=0][y:=1] ⋀ φ[x:=1][y:=1]
which are the same (note that φ[x:=0][y:=1] and φ[y:=1][x:=0] are the same: what we do is replace occurrences of x by 0 and occurrences of y by 1, and since occurrences of x are not occurrences of y, the two substitutions do not interfere with each other).
Hence, we have ∀x.∀y.φ = ∀y.∀x.φ, and by negation, we also see that ∃x.∃y.φ = ∃y.∃x.φ holds. However, you cannot change the order of alternating quantifiers like ∀x.∃y.φ or ∃x.∀y.φ.
And you are right with your observation that you can often save significant work by choosing first the variable that will reduce the formula as much as possible, since one part φ[x:=0] or φ[x:=1] may be just true or a small formula.
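The commutativity argument can also be checked mechanically. The small Python sketch below (variable names a, q1, q2 chosen to match the question; the formula itself is an arbitrary example) expands ∀x.∀y.φ and ∀y.∀x.φ by the definition above and confirms they agree on every assignment of the remaining free variable:

```python
def forall(var, phi):
    """Universal quantification: returns phi[var:=0] AND phi[var:=1]."""
    return lambda env: phi({**env, var: 0}) and phi({**env, var: 1})

# An arbitrary example formula over a, q1, q2: a OR (q1 AND NOT q2)
phi = lambda e: bool(e["a"] or (e["q1"] and not e["q2"]))

f1 = forall("q1", forall("q2", phi))  # substitute q1 first, then q2
f2 = forall("q2", forall("q1", phi))  # the opposite order

# Both orders agree for every value of the remaining free variable a:
for a in (0, 1):
    print(f1({"a": a}) == f2({"a": a}))  # True, True
```

Replacing one `forall` with an existential version (an OR over the two substitutions) would model the alternating case ∀x.∃y.φ, where the order does matter.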
|
{"url":"https://q2a.cs.uni-kl.de/202/order-substituting-values-variables-predecessors-successors","timestamp":"2024-11-02T15:53:42Z","content_type":"text/html","content_length":"59794","record_id":"<urn:uuid:cc1fa6a0-010e-4504-865e-92c41821c096>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00217.warc.gz"}
|
Multiplication On A Number Line Worksheets
Mathematics, particularly multiplication, forms the cornerstone of countless academic disciplines and real-world applications. Yet, for many learners, grasping multiplication can pose a challenge. To address this obstacle, educators and parents have embraced an effective tool: Multiplication On A Number Line Worksheets.
Introduction to Multiplication On A Number Line Worksheets
Multiplication On A Number Line Worksheets
Multiplication using a number line: This set of worksheets includes topics like writing multiplication sentences, matching numbers with multiplication sentences, and much more. Division using a number line: Access these 35 division worksheets, which make understanding and practicing division much easier on number lines. Elapsed time on a number line.
Multiply using a number line worksheets: These worksheets employ another strategy for solving multiplication questions, skip counting on a number line. Numbers up to 10 x 10. Free printable from K5.
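The skip-counting strategy these worksheets teach has a precise reading: a × b means starting at 0 and making a jumps of size b along the number line. A minimal sketch of that model (illustrative only, not tied to any particular worksheet):

```python
def number_line_jumps(a, b):
    """Model a x b as `a` jumps of size `b` starting at 0; return landing points."""
    points = [0]
    for _ in range(a):
        points.append(points[-1] + b)
    return points

# 3 x 4: three jumps of four land on 4, 8, 12
print(number_line_jumps(3, 4))      # [0, 4, 8, 12]
print(number_line_jumps(3, 4)[-1])  # 12, the product
```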
The Value of Multiplication Practice: Understanding multiplication is crucial, laying a solid foundation for advanced mathematical concepts. Multiplication On A Number Line Worksheets offer structured and targeted practice, fostering a deeper understanding of this fundamental arithmetic operation.
Evolution of Multiplication On A Number Line Worksheets
Grade 2 Multiplication With A Number Line Worksheets
Grade 2 Multiplication With A Number Line Worksheets
Stick around our printable multiplication models worksheets for practice that helps the budding mathematicians in the 2nd grade 3rd grade and 4th grade get their heads around multiplying numbers and
multiplication sentences
Number Lines Get ample practice on number line multiplication drawing on our free printable multiplication using number lines worksheets This array of fun and engaging worksheets includes exercises
like drawing hops on number lines writing and completing multiplication sentences and much more making practice a cinch
From traditional pen-and-paper exercises to digitized interactive formats, Multiplication On A Number Line Worksheets have evolved, accommodating diverse learning styles and preferences.
Types of Multiplication On A Number Line Worksheets
Standard Multiplication Sheets: Basic exercises focusing on multiplication tables, helping learners build a solid math foundation.
Word Problem Worksheets
Real-life scenarios integrated into problems, improving critical thinking and application skills.
Timed Multiplication Drills: Tests designed to improve speed and accuracy, supporting quick mental math.
Advantages of Using Multiplication On A Number Line Worksheets
Math Worksheets Number Line Decimals Worksheet Resume Examples
Math Worksheets Number Line Decimals Worksheet Resume Examples
A great addition to your maths lessons, this worksheet is perfect for tracking KS1 pupils' knowledge of multiplication using a number line. Designed by teachers, this brilliant resource is here to support Year 1 and 2 children in meeting the National Curriculum aims, which include solving multiplication problems using materials, repeated
Number Line Multiplication Worksheet Primary Resources Downloads Multiplication on a Number Line Worksheet Pack 5 0 5 reviews Calculation Multiplication and Division Ages 5 7 Free Account Includes
Thousands of FREE teaching resources to download Pick your own FREE resource every week with our newsletter Suggest a Resource You want it
Improved Mathematical Abilities
Regular practice sharpens multiplication proficiency, enhancing overall math skills.
Enhanced Problem-Solving Talents
Word problems in worksheets develop logical reasoning and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, fostering a comfortable and flexible learning environment.
How to Develop Engaging Multiplication On A Number Line Worksheets
Incorporating Visuals and Colors: Vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Different Ability Levels: Customizing worksheets based on varying proficiency levels ensures inclusive understanding.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games: Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps: Online platforms provide varied and accessible multiplication practice, supplementing standard worksheets.
Customizing Worksheets for Different Learning Styles
Visual Learners: Visual aids and diagrams support comprehension for learners inclined toward visual learning.
Auditory Learners: Verbal multiplication problems or mnemonics accommodate learners who grasp concepts through auditory means.
Kinesthetic Learners: Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Use in Learning
Consistency in Practice: Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety: A mix of repeated exercises and varied problem formats maintains interest and comprehension.
Offering Constructive Feedback: Feedback helps identify areas for improvement, encouraging continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Difficulties: Monotonous drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Mathematics: Negative perceptions of math can hinder progress; creating a positive learning atmosphere is essential.
Impact of Multiplication On A Number Line Worksheets on Academic Performance
Studies and Research Findings: Research suggests a positive relationship between consistent worksheet use and improved math performance.
Multiplication On A Number Line Worksheets emerge as versatile tools, fostering mathematical proficiency in learners while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only boost multiplication skills but also promote critical reasoning and problem-solving capabilities.
Number Line Worksheets Addition Subtraction 1 20 Etsy
Fraction Number Line Sheets
Check more of Multiplication On A Number Line Worksheets below
Number Line Worksheets Up To 1000
Multiplication Using A Number Line Worksheet Times Tables Worksheets
Number Line Multiplication Worksheets NumbersWorksheet
Multiplication Worksheets And Printouts
Multiplication Using Number Line Worksheets NumbersWorksheet
Worksheet Number Line Addition Grass Fedjp Worksheet Study Site
Multiply using a number line worksheets K5 Learning
Multiply using a number line worksheets These worksheets employ another strategy for solving multiplication questions skip counting on a number line Numbers up to 10 x 10 Free Printable From K5
Multiply on Numbers Line Worksheets Easy Teacher Worksheets
This worksheet explains how to create a multiplication statement based on illustrated jumps along a number line A sample problem is solved Lesson and Practice Students will recreate multiplication
sentences along a number line to describe a math sentence
Multiplication Using Number Line Worksheets
Multiplication Using Number Line Worksheets For Grade 2 Times Tables Worksheets
Match Each number line To Its Equivalent multiplication Sentence Multiplication Facts Worksheets
FAQs (Frequently Asked Questions).
Are Multiplication On A Number Line Worksheets suitable for all age groups?
Yes, worksheets can be customized to different age and skill levels, making them adaptable for various learners.
How often should students practice using Multiplication On A Number Line Worksheets?
Consistent practice is essential. Regular sessions, ideally a few times a week, can yield significant improvement.
Can worksheets alone improve math abilities?
Worksheets are a valuable tool but should be supplemented with diverse learning approaches for comprehensive skill development.
Are there online platforms offering free Multiplication On A Number Line Worksheets?
Yes, many educational websites offer free access to a wide range of Multiplication On A Number Line Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging regular practice, offering help, and creating a positive learning environment are helpful steps.
Non-malleable encryption: Simpler, shorter, stronger
In a seminal paper, Dolev et al. [15] introduced the notion of non-malleable encryption (NM-CPA). This notion is intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA) and yet can be generically built from semantically secure (IND-CPA) encryption, as shown by Pass et al. [29] and by Choi et al. [9], the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security: 1. Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be
improved? 2. Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA? 3. Is there a notion stronger than NM-CPA that has natural
applications and can be achieved from IND-CPA security? We answer all three questions in the positive. First, we improve the rate in the scheme of Choi et al. by a factor O(λ), where λ is the
security parameter. Still, encrypting a message of size O(λ) would require ciphertext and keys of size O(λ^2) times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more
efficient domain extension technique for building a λ-bit NM-CPA scheme from a single-bit NM-CPA scheme with keys and ciphertext of size O(λ) times that of the NM-CPA one-bit scheme. To achieve our
goal, we define and construct a novel type of continuous non-malleable code (NMC), called secret-state NMC, as we show that standard continuous NMCs are not enough for the natural
“encode-then-encrypt-bit-by-bit” approach to work. Finally, we introduce a new security notion for public-key encryption that we dub non-malleability under (chosen-ciphertext) self-destruct attacks
(NM-SDA). After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results—(faster) construction from IND-CPA and domain
extension from one-bit scheme—also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly?) below
IND-CCA security.
Original language English (US)
Title of host publication Theory of Cryptography - 13th International Conference, TCC 2016-A, Proceedings
Editors Eyal Kushilevitz, Tal Malkin
Publisher Springer Verlag
Pages 306-335
Number of pages 30
ISBN (Print) 9783662490952
State Published - 2016
Event 13th International Conference on Theory of Cryptography, TCC 2016 - Tel Aviv, Israel
Duration: Jan 10 2016 → Jan 13 2016
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 9562
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Other 13th International Conference on Theory of Cryptography, TCC 2016
Country/Territory Israel
City Tel Aviv
Period 1/10/16 → 1/13/16
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
How to Count the Months Between 2 dates?
What is the proper formula to use to count the months between the two dates?
Best Answer
• Hi, @ECD Auto Design .
The following formula will get you the exact months.
=(NETDAYS([Contract Start Date (Elliot)]@row, [End Date Based On Waterfall (Elliot)]@row)/365)*12
□ = ROUND(xxx,0) for whole months
□ = ROUND(xxx,1) for months to 1 decimal place
...substituting the formula above for "xxx". (Remove the "=" from the original formula.)
Why it works...
□ NETDAYS() returns the number of days between the start and end dates.
□ Dividing that number (days) by 365 converts your unit of measurement from days to years.
□ Multiplying the number of years by 12 converts the unit of measurement to months.
• @ECD Auto Design try:
=MONTH(Date@row) - MONTH([Date 2]@row) where Date and Date 2 are the two columns with dates.
But you probably need to account for changing years? Perhaps:
=(YEAR(Date@row) - YEAR([Date 2]@row)) * 12 + (MONTH(Date@row) - MONTH([Date 2]@row))
Check a few use cases to make sure that hangs together.
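For readers outside Smartsheet, the year-aware logic in that formula can be sketched in Python. This is a minimal illustration of the arithmetic only, not a Smartsheet feature; the function name `months_between` is hypothetical:

```python
from datetime import date

def months_between(start, end):
    """Whole-month difference: (year delta) * 12 + (month delta)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# The year-boundary case that trips up a MONTH()-only formula:
print(months_between(date(2022, 12, 1), date(2023, 1, 1)))  # 1
```

Note that without the `* 12` year term, December 2022 to January 2023 would come out as -11 instead of 1.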
• =(YEAR(Contract Start Date (Elliot)) - YEAR((End Date Based On Waterfall (Elliot))) * 12 + (MONTH(Contract Start Date (Elliot)) - MONTH(End Date Based On Waterfall (Elliot))
That is what the formula looks like when i put my data into it. Now the only issue with it is it made it a negative number. I think its pretty close but not quite there yet
• I know this post is old but I just needed such a formula.
@Toufong Vang Thank you for that formula However, I found that it was providing inconsistent data. I opted for using this formula instead.
(MONTH([End Date]@row) - MONTH([Start Date]@row) + 1)
My use-case:
A task is recurring every 01 of every month. In the examples below you can see that I can't get 3 months unless I pass a certain number of days in the ending month.
Round wouldn't give me the 3 months I needed until the decimal was over .5.
RoundUp would not give me the 3 months I needed until it was over 2.0
RoundDown obviously won't work.
I understand that you can get exact decimals, but in an example where I need to get actual months, only my formula worked. Just wanted to provide another formula in case anyone needed it.
• Hi, @Emilio Wright , in that case, you'll want to use...
(YEAR([End Date]@row) - YEAR([Start Date]@row))*12 + MONTH([End date]@row) - MONTH([Start Date]@row) + 1
...to account for Start-to-End dates that span across different years (e.g., "12/01/2022 - 01/01/2023").
As it is, your formula will return "-10" when the start date is "12/01/2022" and the end date is "01/01/2023".
• @Toufong Vang Thank you for the insight on the Year. As our form currently stands, we only allow users to specify one year. For this reason, we won't have values that cross years since the tasks
they are entering will be evaluated on a yearly basis.
• @Toufong Vang I am trying to implement your formula in one of my Smartsheets but it seems to be throwing an extra month in for dates that are in the same month but one year apart. I would be
looking for this example below to return 12 months as the result. Any insight on this? Thank you!
• Hi, @Jags0829, to get the approximate number of months between two dates, use the approach below.
1. Use NETDAYS() to find the number of days between the two dates.
2. Divide that by 30.417 days/month (365 days divided by 12 months).
3. ROUND() it to zero decimal places to return the number of months.
ROUND( (NETDAYS( xxx , xxx)/30.417) , 0 )
ROUND((NETDAYS([Start of Contract]@row, [End of Contract]@row)/30.417), 0)
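The same days-divided-by-average-month approach can be checked in Python. This sketch assumes NETDAYS counts both endpoint dates inclusively; the helper name `approx_months` is ours:

```python
from datetime import date

AVG_DAYS_PER_MONTH = 365 / 12  # ~30.417, as in the formula above

def approx_months(start, end):
    """Approximate month count, mirroring ROUND(NETDAYS(...)/30.417, 0)."""
    days = (end - start).days + 1  # assuming NETDAYS includes both endpoints
    return round(days / AVG_DAYS_PER_MONTH)

print(approx_months(date(2023, 1, 1), date(2023, 12, 31)))  # 12
```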
• Thank you so much, Toufong Vang, for a very helpful formula to count the months across years!
Multiplication For Third Graders Worksheets
Mathematics, especially multiplication, forms the foundation of many academic disciplines and real-world applications. Yet, for many learners, mastering multiplication can pose a challenge. To address this hurdle, teachers and parents have embraced an effective tool: Multiplication For Third Graders Worksheets.
Introduction to Multiplication For Third Graders Worksheets
Welcome to the Math Salamanders 3rd Grade Multiplication Table Worksheets Here you will find a wide range of free printable Math Worksheets 3rd Grade which will help your child achieve their
benchmark for Grade 3 Multiplication Table Worksheets 2 3 4 5 and 10 Times Tables
3rd grade Multiplication Show interactive only Sort by 1 Minute Multiplication Interactive Worksheet More Mixed Minute Math Interactive Worksheet Budgeting for a Holiday Meal Worksheet 2 Digit
Multiplication Interactive Worksheet Christmas Multiplication 2 Worksheet Multiplication Division Word Problems Practice Interactive Worksheet
Value of Multiplication Practice
Understanding multiplication is critical, laying a solid foundation for advanced mathematical concepts. Multiplication For Third Graders Worksheets offer structured and targeted practice, fostering a deeper comprehension of this essential arithmetic operation.
Development of Multiplication For Third Graders Worksheets
3rd Grade Multiplication Worksheets Best Coloring Pages For Kids
A self teaching worktext for 3rd grade that covers multiplication concept from various angles word problems a guide for structural drilling and a complete study of all 12 multiplication tables
Download 5 20 Also available as a printed copy Learn more and see the free samples See more topical Math Mammoth books
Multiplication facts are the basic facts of multiplying two single digit numbers such as 2 x 3 6 or 9 x 9 81 These facts are also called the times tables because they show how many times a number is
added to itself For example 4 x 5 20 means that 4 is added to itself 5 times 4 4 4 4 4 20
From traditional pen-and-paper exercises to digitized interactive formats, Multiplication For Third Graders Worksheets have evolved, catering to diverse learning styles and preferences.
Types of Multiplication For Third Graders Worksheets
Basic Multiplication Sheets
Simple exercises focusing on multiplication tables, helping students build a strong arithmetic base.
Word Problem Worksheets
Real-life scenarios incorporated into problems, enhancing critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding rapid mental math.
Benefits of Using Multiplication For Third Graders Worksheets
Multiplication Worksheet 3rd Grade multiplication worksheets For Grademultiplication And On
This page contains all our printable worksheets in section Multiplication of Third Grade Math As you scroll down you will see many worksheets for understand multiplication facts and strategies
multiplication properties and facts multiply by 1 digit and more A brief description of the worksheets is on each of the worksheet widgets
Browse Printable 3rd Grade Multiplication Fact Worksheets Award winning educational materials designed to help kids succeed Kids completing this third grade math worksheet multiply by 5 to solve each
equation and also fill in a multiplication chart for the number 5 3rd grade Math Interactive Worksheet Christmas Multiplication 3
Improved Mathematical Abilities
Consistent practice hones multiplication proficiency, improving overall math skills.
Enhanced Problem-Solving Abilities
Word problems in worksheets develop logical reasoning and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning paces, fostering a comfortable and flexible learning environment.
How to Develop Engaging Multiplication For Third Graders Worksheets
Incorporating Visuals and Colors
Vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Situations
Connecting multiplication to everyday scenarios adds relevance and practicality to exercises.
Tailoring Worksheets to Various Skill Levels
Customizing worksheets based on differing proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps
Online platforms offer varied and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners
Visual aids and diagrams help comprehension for students inclined toward visual learning.
Auditory Learners
Spoken multiplication problems or mnemonics serve students who grasp concepts through auditory methods.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Routine practice strengthens multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repeated exercises and varied problem formats maintains interest and comprehension.
Giving Constructive Feedback
Feedback helps identify areas for improvement, encouraging ongoing progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Hurdles
Dull drills can lead to disengagement; innovative methods can reignite motivation.
Overcoming Fear of Math
Negative perceptions around mathematics can hinder progress; creating a positive learning environment is essential.
Impact of Multiplication For Third Graders Worksheets on Academic Performance
Studies and Research Findings
Research suggests a positive relationship between consistent worksheet use and improved math performance.
Multiplication For Third Graders Worksheets emerge as versatile tools, cultivating mathematical proficiency in learners while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only enhance multiplication skills but also promote critical thinking and problem-solving abilities.
Teach Child How To Read Arrays And Multiplication T 3rd Grade Free Printable Worksheets
3rd Grade Multiplication Worksheets Best Coloring Pages For Kids
Check more of Multiplication For Third Graders Worksheets below
Grade 5 Math Worksheets Arrays Search Results Calendar 2015
5th Grade Math Multiplication Worksheets Printable Math Worksheets Printable
Cool Free Printable 3Rd Grade Worksheets Photos Rugby Rumilly
3rd Grade Multiplication Table Worksheet Times Tables Worksheets
Printable Multiplication Games For 3Rd Grade PrintableMultiplication
Multiplication Practice Worksheets Grade 3 Worksheet Template Tips And Reviews
Search Printable 3rd Grade Multiplication Worksheets
3rd grade Multiplication Show interactive only Sort by 1 Minute Multiplication Interactive Worksheet More Mixed Minute Math Interactive Worksheet Budgeting for a Holiday Meal Worksheet 2 Digit
Multiplication Interactive Worksheet Christmas Multiplication 2 Worksheet Multiplication Division Word Problems Practice Interactive Worksheet
Practicing multiplication tables 3rd grade Math Worksheet GreatSchools
This math worksheet gives your third grader multiplication practice with equations and word problems involving shapes volume money and logic MATH GRADE 3rd Print full size Skills Calculating
measurements Money math Multiplication drills Solving word problems Common Core Standards Grade 3 Operations Algebraic Thinking
Third Grade Multiplication Worksheets
Multiplication Worksheet For Grade School Learning Printable
3rd Grade Multiplication worksheets For Extra Practice More
FAQs (Frequently Asked Questions).
Are Multiplication For Third Graders Worksheets suitable for all age groups?
Yes, worksheets can be tailored to different age and ability levels, making them adaptable for various learners.
How often should students practice using Multiplication For Third Graders Worksheets?
Regular practice is vital. Consistent sessions, ideally a couple of times a week, can yield considerable improvement.
Can worksheets alone enhance math skills?
Worksheets are an important tool but should be supplemented with varied learning approaches for comprehensive skill development.
Are there online platforms offering free Multiplication For Third Graders Worksheets?
Yes, many educational websites provide free access to a variety of Multiplication For Third Graders Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging regular practice, offering help, and creating a positive learning environment are valuable steps.
Numerical data: Normalization | Machine Learning | Google for Developers
After examining your data through statistical and visualization techniques, you should transform your data in ways that will help your model train more effectively. The goal of normalization is to
transform features to be on a similar scale. For example, consider the following two features:
• Feature X spans the range 154 to 24,917,482.
• Feature Y spans the range 5 to 22.
These two features span very different ranges. Normalization might manipulate X and Y so that they span a similar range, perhaps 0 to 1.
Normalization provides the following benefits:
• Helps models converge more quickly during training. When different features have different ranges, gradient descent can "bounce" and slow convergence. That said, more advanced optimizers like
Adagrad and Adam protect against this problem by changing the effective learning rate over time.
• Helps models infer better predictions. When different features have different ranges, the resulting model might make somewhat less useful predictions.
• Helps avoid the "NaN trap" when feature values are very high. NaN is an abbreviation for not a number. When a value in a model exceeds the floating-point precision limit, the system sets the
value to NaN instead of a number. When one number in the model becomes a NaN, other numbers in the model also eventually become a NaN.
• Helps the model learn appropriate weights for each feature. Without feature scaling, the model pays too much attention to features with wide ranges and not enough attention to features with
narrow ranges.
We recommend normalizing numeric features covering distinctly different ranges (for example, age and income). We also recommend normalizing a single numeric feature that covers a wide range, such as
city population.
Consider the following two features:
• Feature A's lowest value is -0.5 and highest is +0.5.
• Feature B's lowest value is -5.0 and highest is +5.0.
Feature A and Feature B have relatively narrow spans. However, Feature B's span is 10 times wider than Feature A's span. Therefore:
• At the start of training, the model assumes that Feature B is ten times more "important" than Feature A.
• Training will take longer than it should.
• The resulting model may be suboptimal.
The overall damage due to not normalizing will be relatively small; however, we still recommend normalizing Feature A and Feature B to the same scale, perhaps -1.0 to +1.0.
Now consider two features with a greater disparity of ranges:
• Feature C's lowest value is -1 and highest is +1.
• Feature D's lowest value is +5000 and highest is +1,000,000,000.
If you don't normalize Feature C and Feature D, your model will likely be suboptimal. Furthermore, training will take much longer to converge or even fail to converge entirely!
This section covers three popular normalization methods:
• linear scaling
• Z-score scaling
• log scaling
This section additionally covers clipping. Although not a true normalization technique, clipping does tame unruly numerical features into ranges that produce better models.
Linear scaling
Linear scaling (more commonly shortened to just scaling) means converting floating-point values from their natural range into a standard range—usually 0 to 1 or -1 to +1.
Use the following formula to scale to the standard range 0 to 1, inclusive:
$$ x' = (x - x_{min}) / (x_{max} - x_{min}) $$
• $x'$ is the scaled value.
• $x$ is the original value.
• $x_{min}$ is the lowest value in the dataset of this feature.
• $x_{max}$ is the highest value in the dataset of this feature.
For example, consider a feature named quantity whose natural range spans 100 to 900. Suppose the natural value of quantity in a particular example is 300. Therefore, you can calculate the normalized
value of 300 as follows:
• $x$ = 300
• $x_{min}$ = 100
• $x_{max}$ = 900
x' = (300 - 100) / (900 - 100)
x' = 200 / 800
x' = 0.25
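The worked example above translates directly to code. This is a minimal sketch; the helper name `linear_scale` is ours, not part of any library:

```python
def linear_scale(x, x_min, x_max):
    """Scale x from its natural range [x_min, x_max] into [0, 1]."""
    return (x - x_min) / (x_max - x_min)

# quantity = 300 with a natural range of 100 to 900, as in the example above.
print(linear_scale(300, 100, 900))  # 0.25
```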
Linear scaling is a good choice when all of the following conditions are met:
• The lower and upper bounds of your data don't change much over time.
• The feature contains few or no outliers, and those outliers aren't extreme.
• The feature is approximately uniformly distributed across its range. That is, a histogram would show roughly even bars for most values.
Suppose human age is a feature. Linear scaling is a good normalization technique for age because:
• The approximate lower and upper bounds are 0 to 100.
• age contains a relatively small percentage of outliers. Only about 0.3% of the population is over 100.
• Although certain ages are somewhat better represented than others, a large dataset should contain sufficient examples of all ages.
Exercise: Check your understanding
Suppose your model has a feature named net_worth that holds the net worth of different people. Would linear scaling be a good normalization technique for net_worth? Why or why not?
Answer: Linear scaling would be a poor choice for normalizing net_worth. This feature contains many outliers, and the values are not uniformly distributed across its primary range. Most people would
be squeezed within a very narrow band of the overall range.
Z-score scaling
A Z-score is the number of standard deviations a value is from the mean. For example, a value that is 2 standard deviations greater than the mean has a Z-score of +2.0. A value that is 1.5 standard
deviations less than the mean has a Z-score of -1.5.
Representing a feature with Z-score scaling means storing that feature's Z-score in the feature vector. For example, the following figure shows two histograms:
• On the left, a classic normal distribution.
• On the right, the same distribution normalized by Z-score scaling.
Figure 4. Raw data (left) versus Z-score (right) for a normal distribution.
Z-score scaling is also a good choice for data like that shown in the following figure, which has only a vaguely normal distribution.
Figure 5. Raw data (left) versus Z-score scaling (right) for a non-classic normal distribution.
Use the following formula to normalize a value, $x$, to its Z-score:
$$ x' = (x - μ) / σ $$
• $x'$ is the Z-score.
• $x$ is the raw value; that is, $x$ is the value you are normalizing.
• $μ$ is the mean.
• $σ$ is the standard deviation.
For example, suppose:
• mean = 100
• standard deviation = 20
• original value = 130
Z-score = (130 - 100) / 20
Z-score = 30 / 20
Z-score = +1.5
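The same calculation as a small helper; the function name `z_score` is illustrative:

```python
def z_score(x, mean, std_dev):
    """Return the number of standard deviations x lies from the mean."""
    return (x - mean) / std_dev

# mean = 100, standard deviation = 20, original value = 130, as above.
print(z_score(130, 100, 20))  # 1.5
```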
In a classic normal distribution:
• At least 68.27% of data has a Z-score between -1.0 and +1.0.
• At least 95.45% of data has a Z-score between -2.0 and +2.0.
• At least 99.73% of data has a Z-score between -3.0 and +3.0.
• At least 99.994% of data has a Z-score between -4.0 and +4.0.
So, data points with a Z-score less than -4.0 or more than +4.0 are rare, but are they truly outliers? Since outlier is a concept without a strict definition, no one can say for sure. Note that a dataset with a sufficiently large number of examples will almost certainly contain at least a few of these "rare"
examples. For example, a feature with one billion examples conforming to a classic normal distribution could have as many as 60,000 examples with a score outside the range -4.0 to +4.0.
Z-score is a good choice when the data follows a normal distribution or a distribution somewhat like a normal distribution.
Note that some distributions might be normal within the bulk of their range, but still contain extreme outliers. For example, nearly all of the points in a net_worth feature might fit neatly into 3
standard deviations, but a few examples of this feature could be hundreds of standard deviations away from the mean. In these situations, you can combine Z-score scaling with another form of
normalization (usually clipping) to handle this situation.
Exercise: Check your understanding
Suppose your model trains on a feature named height that holds the adult heights of ten million women. Would Z-score scaling be a good normalization technique for height? Why or why not?
Answer: Z-score scaling would be a good normalization technique for height because this feature conforms to a normal distribution. Ten million examples implies a lot of outliers—probably enough
outliers for the model to learn patterns on very high or very low Z-scores.
Log scaling
Log scaling computes the logarithm of the raw value. In theory, the logarithm could be any base; in practice, log scaling usually calculates the natural logarithm (ln).
Use the following formula to normalize a value, $x$, to its log:
$$ x' = ln(x) $$
• $x'$ is the natural logarithm of $x$.
For example, suppose:
• original value = 54.598
Therefore, the log of the original value is about 4.0:
4.0 = ln(54.598)
Log scaling is helpful when the data conforms to a power law distribution. Casually speaking, a power law distribution looks as follows:
• Low values of X have very high values of Y.
• As the values of X increase, the values of Y quickly decrease. Consequently, high values of X have very low values of Y.
Movie ratings are a good example of a power law distribution. In the following figure, notice:
• A few movies have lots of user ratings. (Low values of X have high values of Y.)
• Most movies have very few user ratings. (High values of X have low values of Y.)
Log scaling changes the distribution, which helps train a model that will make better predictions.
Figure 6. Comparing a raw distribution to its log.
As a second example, book sales conform to a power law distribution because:
• Most published books sell a tiny number of copies, maybe one or two hundred.
• Some books sell a moderate number of copies, in the thousands.
• Only a few bestsellers will sell more than a million copies.
Suppose you are training a linear model to find the relationship of, say, book covers to book sales. A linear model training on raw values would have to find something about book covers on books that sell a million copies that is 10,000 times more powerful than book covers on books that sell only 100 copies. However, log scaling all the sales figures makes the task far more feasible. For example, the log of 100 is:
~4.6 = ln(100)
while the log of 1,000,000 is:
~13.8 = ln(1,000,000)
So, the log of 1,000,000 is only about three times larger than the log of 100. You probably could imagine a bestseller book cover being about three times more powerful (in some way) than a
tiny-selling book cover.
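The factor-of-three claim is easy to check with the standard library:

```python
import math

# Log scaling compresses a power-law range: raw values that differ by a
# factor of 10,000 differ by only about 3x after the natural log.
print(round(math.log(100), 1))        # 4.6
print(round(math.log(1_000_000), 1))  # 13.8
```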
Clipping is a technique to minimize the influence of extreme outliers. In brief, clipping usually caps (reduces) the value of outliers to a specific maximum value. Clipping is a strange idea, and
yet, it can be very effective.
For example, imagine a dataset containing a feature named roomsPerPerson, which represents the number of rooms (total rooms divided by number of occupants) for various houses. The following plot
shows that over 99% of the feature values conform to a normal distribution (roughly, a mean of 1.8 and a standard deviation of 0.7). However, the feature contains a few outliers, some of them extreme.
Figure 7. Mainly normal, but not completely normal.
How can you minimize the influence of those extreme outliers? Well, the histogram is not an even distribution, a normal distribution, or a power law distribution. What if you simply cap or clip the
maximum value of roomsPerPerson at an arbitrary value, say 4.0?
Figure 8. Clipping feature values at 4.0.
Clipping the feature value at 4.0 doesn't mean that your model ignores all values greater than 4.0. Rather, it means that all values that were greater than 4.0 now become 4.0. This explains the
peculiar hill at 4.0. Despite that hill, the scaled feature set is now more useful than the original data.
Wait a second! Can you really reduce every outlier value to some arbitrary upper threshold? When training a model, yes.
You can also clip values after applying other forms of normalization. For example, suppose you use Z-score scaling, but a few outliers have absolute values far greater than 3. In this case, you could:
• Clip Z-scores greater than +3 to become exactly +3.
• Clip Z-scores less than -3 to become exactly -3.
Clipping prevents your model from overindexing on unimportant data. However, some outliers are actually important, so clip values carefully.
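Both kinds of clipping described above can be sketched in a few lines (the feature values and thresholds below are hypothetical):

```python
import numpy as np

# Hypothetical roomsPerPerson values with two extreme outliers.
rooms = np.array([1.2, 1.8, 2.5, 3.1, 48.0, 4000.0])
clipped_rooms = np.clip(rooms, None, 4.0)  # cap everything above 4.0 at 4.0

# Clipping after Z-score scaling works the same way:
z_scores = np.array([-4.2, -1.0, 0.3, 2.9, 5.7])
clipped_z = np.clip(z_scores, -3.0, 3.0)   # force into [-3, 3]

print(clipped_rooms)  # the outliers 48.0 and 4000.0 both become 4.0
print(clipped_z)      # -4.2 becomes -3.0 and 5.7 becomes 3.0
```

Note that the in-range values pass through unchanged; only the outliers are pulled to the caps.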
Summary of normalization techniques
Normalization technique | Formula | When to use
Linear scaling | $$ x' = \frac{x - x_{min}}{x_{max} - x_{min}} $$ | The feature is uniformly distributed across a fixed range.
Z-score scaling | $$ x' = \frac{x - \mu}{\sigma} $$ | The feature distribution does not contain extreme outliers.
Log scaling | $$ x' = \ln(x) $$ | The feature conforms to a power law distribution.
Clipping | If $x > max$, set $x' = max$; if $x < min$, set $x' = min$ | The feature contains extreme outliers.
Exercise: Test your knowledge
Which technique would be most suitable for normalizing a feature with the following distribution?
Z-score scaling
The data points generally conform to a normal distribution, so Z-score scaling will force them into the range –3 to +3.
Linear scaling
Review the discussions of the normalization techniques on this page, and try again.
Log scaling
Review the discussions of the normalization techniques on this page, and try again.
Clipping
Review the discussions of the normalization techniques on this page, and try again.
Suppose you are developing a model that predicts a data center's productivity based on the temperature measured inside the data center. Almost all of the temperature values in your dataset fall
between 15 and 30 (Celsius), with the following exceptions:
• Once or twice per year, on extremely hot days, a few values between 31 and 45 are recorded in temperature.
• Every 1,000th point in temperature is set to 1,000 rather than the actual temperature.
Which would be a reasonable normalization technique for temperature?
Clip the outlier values between 31 and 45, but delete the outliers with a value of 1,000
The values of 1,000 are mistakes, and should be deleted rather than clipped.
The values between 31 and 45 are legitimate data points. Clipping would probably be a good idea for these values, assuming the dataset doesn't contain enough examples in this temperature range to train the model to make good predictions. Note, however, that during inference the clipped model would make the same prediction for a temperature of 45 as for a temperature of 35.
Clip all the outliers
Review the discussions of the normalization techniques on this page, and try again.
Delete all the outliers
Review the discussions of the normalization techniques on this page, and try again.
Delete the outlier values between 31 and 45, but clip the outliers with a value of 1,000.
Review the discussions of the normalization techniques on this page, and try again.
|
A Primer on Mapping Class Groups
by Benson Farb, Dan Margalit
Publisher: Princeton University Press 2011
ISBN/ASIN: 0691147949
ISBN-13: 9780691147949
Number of pages: 509
Our goal in this book is to explain as many important theorems, examples, and techniques as possible, as quickly and directly as possible, while at the same time giving (nearly) full details and keeping the text (nearly) self-contained. This book contains some simplifications of known approaches and proofs, the exposition of some results that are not readily available, and some new material as well.
Download or read it online for free here:
Download link
(3.4MB, PDF)
Similar books
Math That Makes You Go Wow
M. Boittin, E. Callahan, D. Goldberg, J. Remes
Ohio State University
This is an innovative project by a group of Yale undergraduates: A Multi-Disciplinary Exploration of Non-Orientable Surfaces. The course is designed to be included as a short segment in a late middle school or early high school math course.
Notes on Basic 3-Manifold Topology
Allen Hatcher
These pages are really just an early draft of the initial chapters of a real book on 3-manifolds. The text does contain a few things that aren't readily available elsewhere, like the Jaco-Shalen/Johannson torus decomposition theorem.
The Geometry and Topology of Three-Manifolds
William P Thurston
Mathematical Sciences Research Institute
The text describes the connection between geometry and low-dimensional topology. It is useful to graduate students and mathematicians working in related fields, particularly 3-manifolds and Kleinian groups. Much of the material or technique is new.
Algebraic L-theory and Topological Manifolds
A. A. Ranicki
Cambridge University Press
Assuming no previous acquaintance with surgery theory and justifying all the algebraic concepts used by their relevance to topology, Dr Ranicki explains the applications of quadratic forms to the classification of topological manifolds.
|
Figure 2: Properties of the metafunction over 1000 realisations:...
... One of the main SA methods is Sobol' analysis, which studies how the dispersion of individual components of the input data (and their combinations) affects the dispersion of output data [16][17]
[18][19]. Improvements to this method can be found in recent works [20,21]. ...
|
Understanding the types and operations of Linked Lists
Photo by J. Kelly Brito on Unsplash
Originally published at https://www.niit.com/india/
A linked list is a linear collection of nodes whose order is not determined by their placement in memory; instead, each node points to the next one in the collection. Together, these nodes form a sequence.
Each node consists of data and a link to the subsequent node in the sequence.
A linked list is the most common and simplest data structure. This linear data structure offers flexibility during iteration, allowing insertion and removal of elements at any position in the list. Its only drawback is that fast random access is not feasible. However, more complex variants that add extra links allow you to insert or remove nodes at any position. To understand linked lists in depth, you can take up the course Professional Program in Full Stack Software Engineering offered by NIIT.
Types of linked lists
Types of linked lists are as follows:
● Singly-linked list
● Doubly linked list
● Circular linked list
Singly linked list
In singly-linked lists, the nodes are connected in a sequence where each of the nodes contains a value and address field with the reference of the next subsequent node in the link. It is
unidirectional, which simply means that data navigation flows in one single direction.
class Node{
public:
    int data;    // to store the data
    Node *next;  // a pointer pointing to the next node in the list
    Node(int val) : data(val), next(NULL) {}
};
In this form of the linked list, each node contains a value and the address of the next node, and the next pointer of the last node is NULL. The NULL signifies the end of the linked list. It is the simplest form of a linked list and requires much less space compared to the other types.
Doubly linked list
In the doubly linked list, each node has an extra memory field to store the address of the previous node, alongside the address of the next node. The basic structure is as follows: address of the previous node - value - address of the next node. It is bidirectional: data navigation flows in both the forward and backward directions.
class Double_LL{
public:
    Double_LL *prev; // a pointer pointing to the previous node in the list
    int data;        // to store the data of the node
    Double_LL *next; // a pointer pointing to the next node in the list
};
Removal and insertion are more efficient than in a singly linked list, but more pointer modifications are needed while performing various operations. Because of its structure, a doubly linked list requires more space, as it contains an extra memory field for the address of the previous node.
Circular linked list
The circular linked list does not have null values. It can either be a singly linked list or a doubly-linked list. In the case of doubly-linked lists, the first node contains the reference of the
last element as previous and the last node contains the address of the first element as the next. On the other hand, in singly-linked lists, the last node contains the link of the first element.
Its flow of data navigation can start from any point as it has no NULL value, but in this type of structure you can get stuck in an infinite loop if the list is not traversed properly, as there is no NULL value to stop the traversal. Performing operations such as reversing is more complex in a circular linked list than in the other types.
It is an important data structure if we want to access the data in a loop where a previous node can be easily accessed which is not possible in singly-linked lists.
Basic operations of linked lists.
Insertion: For the addition of nodes at any selected position.
Traversal: To access all nodes one by one.
Deletion: For removal of nodes at any selected position.
Searching: To search any data by value.
Updating: To update a value.
Sorting: To configure nodes in a link according to a specific format.
Merging: To merge any two linked lists.
The explanation of these operations is for singly-linked lists.
Linked lists traversal
Traversal means accessing all the different nodes in a list one after another, starting the access from the top or the head of the list. Then, the next node data is accessed in the sequence until the
last node is accessed.
The algorithm to access data in a list is as follows:
void print(Node *head)
{
    for(Node *temp=head; temp!=NULL; temp=temp->next)
        cout<<temp->data<<" ";
}
Linked list node insertion
It means to add or insert a node at any point in the link. Insertion can be at the beginning, at the end, or any selected position in the link.
When inserting at the beginning of the list, there is no need to traverse it. If the list is empty, the new node is inserted as the head of the list; when the new node is added to an existing list, it becomes the new head.
Algorithm to insert at the beginning:
void insertBG(Node *&head,int val)
{
    Node *temp=new Node(val);
    temp->next=head;  // new node points at the old head
    head=temp;        // new node becomes the head
}
When inserting at the end of the list, the user has to access all the nodes present to find the endpoint. In case the list is empty, the inserted node acts as both the first and the last node of the list.
Algorithm to insert at the end:
void insertEND(Node *&head,int val)
{
    Node *temp1=new Node(val);
    if(head==NULL){ head=temp1; return; }  // empty list: new node is the head
    Node *temp=head;
    while(temp->next!=NULL) temp=temp->next;  // walk to the last node
    temp->next=temp1;
}
When inserting at any given position, the list is traversed to find the point where the node is to be added. The new node is inserted after the given position. If the address of the previous node is not given, you can traverse the list to find the desired point.
Algorithm to insert at any selected position:
void insert_POS(Node *head,int pos,int val)
{
    Node *temp=head;
    for(int i=0;i<pos-2;i++)
        temp=temp->next;           // stop at the node before the position
    Node *temp1=new Node(val);     // insert value of the node
    temp1->next=temp->next;
    temp->next=temp1;
}
Deletion in a linked list
To remove a node from any list there are three steps involved.
First, find the node previous to the one that is to be removed. Then change the next pointer of that previous node to skip the removed node. The last step is to free the memory of the removed node. In the case where the first node of the list is to be deleted, the head of the list is updated.
Algorithm to delete a node:
void delete_LL(Node *head,int node)
{
    Node *temp=head;
    for(int i=0;i<node-2;i++)
        temp=temp->next;          // stop at the node before the one to delete
    Node *curr=temp->next;        // to point to the node to be deleted
    temp->next=curr->next;        // unlink the node
    delete curr;                  // free its memory
}
Searching for a node in the linked list.
To search for any node given in a list, we need to access the list and find the value given in the node.
Algorithm to search:
bool search(Node *head,int key)
{
    for(Node *temp=head; temp!=NULL; temp=temp->next)
        if(temp->data==key)
            return true;
    return false;
}
Updating a node in a linked list
To update a node in a list, we access the node to be updated and set its new value; the simplest case updates the data of the first node.
Algorithm to update a node value:
void update_data(Node *head,int val)
{
    if(head!=NULL)
        head->data=val;  // set the new value
}
We hoped you liked this presentation on understanding the various types of operations in Linked lists. If so, do explore NIIT’s Knowledge Centre for similar resources.
|
Find the values of $m$ for which both roots of the equation $x^{2}-mx+1=0$ are less than unity.
Both roots of $x^{2}-mx+1=0$ are real and less than unity when:
(i) $D=m^{2}-4\ge 0$, i.e. $m\le -2$ or $m\ge 2$;
(ii) $f(1)=1-m+1=2-m>0$, i.e. $m<2$;
(iii) the vertex satisfies $\dfrac{m}{2}<1$, i.e. $m<2$.
From (i), (ii) and (iii), we get $m\le -2$, i.e. $m\in(-\infty,-2]$.
|
How do you find the related acute angle? + Example
How do you find the related acute angle?
1 Answer
The related acute angle is an acute angle ($<90^{\circ}$) found between the terminal arm and the x-axis when the terminal arm is in quadrant 2, 3, or 4. There is no related acute angle if the terminal arm lies in quadrant 1.
Here are some examples:
If the terminal arm is in quadrant 2, do ${180}^{\circ}$ minus the principal angle to find the related acute angle.
If the terminal arm is in quadrant 3, do the principal angle minus ${180}^{\circ}$ to find the related acute angle.
If the terminal arm is in quadrant 4, do ${360}^{\circ}$ minus the principal angle to find the related acute angle.
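The three quadrant rules can be collapsed into one small function (a sketch; the function name and degree-based interface are my own choices):

```python
def related_acute(principal_deg):
    """Related acute angle (degrees) for a principal angle in [0, 360)."""
    principal_deg %= 360
    if principal_deg <= 90:
        return None                   # quadrant 1: no related acute angle
    if principal_deg < 180:
        return 180 - principal_deg    # quadrant 2
    if principal_deg < 270:
        return principal_deg - 180    # quadrant 3
    return 360 - principal_deg        # quadrant 4

print(related_acute(150), related_acute(200), related_acute(300))  # 30 20 60
```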
|
How to Calculate a Time Difference in Google Sheets
To calculate a time difference in Google Sheets, you can use the formula =EndTime - StartTime. You should format the cells containing time data in the appropriate time format before performing the
calculation. Here's a step-by-step guide:
1. Open your Google Sheet.
2. Enter your start time in cell A1 and end time in cell B1. For example, if the start time is 8:00 AM and the end time is 5:00 PM, enter 8:00 AM in A1 and 5:00 PM in B1.
3. Click on cell A1 and then click on the Format menu, select Number, and choose the time format you want (e.g., 1:30 PM).
4. Do the same for cell B1.
5. In cell C1, enter the formula =B1 - A1 to calculate the time difference.
6. Press Enter.
7. Format cell C1 as a duration by clicking on the cell, then going to the Format menu, selecting Number, and choosing Duration.
Here's an example of how to calculate a time difference in Google Sheets:
1. Start Time (A1): 8:00 AM
2. End Time (B1): 5:00 PM
3. Format cells A1 and B1 with the time format.
4. In cell C1, enter the formula: =B1 - A1
5. Press Enter.
6. Format cell C1 as a duration.
The result in cell C1 should display the time difference as 9:00:00, indicating a difference of 9 hours between the start and end times.
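The same subtraction can be sanity-checked outside of Sheets, e.g. in Python:

```python
from datetime import datetime

# Parse the same start and end times used in the walkthrough.
fmt = "%I:%M %p"
start = datetime.strptime("8:00 AM", fmt)
end = datetime.strptime("5:00 PM", fmt)

diff = end - start  # a timedelta, analogous to Sheets' duration format
print(diff)         # 9:00:00
```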
|
Application of ensemble empirical mode decomposition based on machine learning methodologies in forecasting monthly pan evaporation
Accurate prediction of pan evaporation (P[E]) is one of the crucial factors in water resources management and planning in agriculture. In this research, two hybrid models, self-adaptive
time-frequency methodology, ensemble empirical mode decomposition (EEMD) coupled with support vector machine (EEMD-SVM) and EEMD model tree (EEMD-MT), were employed to forecast monthly P[E]. The
EEMD-SVM and EEMD-MT were compared with single SVM and MT models in forecasting monthly P[E], measured between 1975 and 2008, at Siirt and Diyarbakir stations in Turkey. The results were evaluated
using four assessment criteria, Nash–Sutcliffe Efficiency (NSE), root mean square error (RMSE), performance index (PI), Willmott's index (WI), and Legates–McCabe's index (LMI). The EEMD-MT model
respectively improved the accuracy of MT by 36 and 44.7% with respect to NSE and WI in the testing stage for the Siirt station. For the Diyarbakir station, the improvements in results were less
spectacular, with improvements in NSE (1.7%) and WI (2.2%), respectively, in the testing stage. The overall results indicate that the proposed pre-processing technique is very promising for complex
time series forecasting and further studies incorporating this technique are recommended.
Evaporation is probably the most complicated and difficult parameter to forecast among all the elements of the hydrological cycle due to the complex interactions among the hydrologic components of
water surface, land, atmosphere process, and vegetation. Hence, proper forecasting of evaporation is a significant issue in water resources management and agriculture, particularly in semi-arid and
arid areas. Because of temperature differences, this hydrological phenomenon is a nonlinear process which occurs in nature (Irmak et al. 2002; Trajkovic & Kolakovic 2010; Tezel & Buyukyildiz 2016;
Malik et al. 2017). There are many parameters influencing pan evaporation (P[E]) rates, including air temperature (T[A]), solar radiation (S[R]), sunshine hours (H[S]), relative humidity (R[H]) and
wind speed (W[S]). Although accurate estimation of P[E] is of high importance, especially in studies related to integrated water resources management, the effects of various weather parameters on pan
evaporation in different regions are still not well understood.
In general, there are two main approaches, namely direct and indirect procedures, for calculating and forecasting evaporation. Direct measurements of P[E] are widely applied for predicting
evaporation. Indirect methods such as energy-budget, water budgets, and Penman–Monteith techniques are based on various climate variables (e.g. TA, W[S] and S[R]), which are not easily accessible. It
is not possible nowadays to extract a mathematical relationship that includes all of the physical processes due to the nonlinearity and complexity of the evaporation process. Several researchers
therefore recommend the use of models (Tezel & Buyukyildiz 2016; Wang et al. 2017; Tan et al. 2018) that do not require a priori knowledge of the underlying physical processes or excessive data. Such
models are considered to be more adequate (adaptable) in real-decision support systems compared to the physical-based methods (Kisi 2007; Ch et al. 2014; Deo & Şahin 2015). Considering that
evaporation is highly nonlinear, these techniques are able to map the nonlinearities associated with evaporation and its input parameters to a high degree of accuracy.
In the last decade, machine learning (ML) methods such as Adaptive Neuro Fuzzy Inference System (ANFIS), artificial neural network (ANN), model tree (MT), gene expression programming (GEP), extreme
learning machine (ELM) and support vector machine (SVM) have been successfully employed for solving a wide range of environmental and water engineering problems (Adamowski & Karapataki 2010; Chua et
al. 2011; Sattar 2013; Ch et al. 2014; He et al. 2014; Ebtehaj et al. 2015; Deo et al. 2016; Najafzadeh et al. 2016; Li et al. 2017; Mouatadid & Adamowski 2017; Rezaie-Balf & Kisi 2017; Rezaie-Balf
et al. 2017a, 2017b; Yu et al. 2017). Many studies have also been conducted using ML methods for predicting pan evaporation (Tabari et al. 2012; Lin et al. 2013; Kim et al. 2015; Tezel & Buyukyildiz
2016; Kisi et al. 2016; Malik et al. 2017; Wang et al. 2017), although the development of powerful approaches and methods with high levels of reliability to achieve accurate forecasts remains a major
challenge. This is because time series P[E] data are generally highly nonlinear and seasonal. Under such situations, applying the raw data to the model directly may not produce accurate results if
the data are highly nonlinear. In this context, using a preprocessing technique may enhance the model's performance (Wu et al. 2010). In recent years, researchers using data pre-processing tools such
as principal component analysis (PCA), continuous wavelet transform (CWT), moving average (MA), wavelet multi-resolution analysis (WMRA), maximum entropy spectral analysis (MESA), and singular
spectrum analysis (SSA) have been successful in improving forecasting accuracy (Hu et al. 2007; Kişi 2009; Sang et al. 2009; Wu et al. 2009; Wu & Chau 2011; Nourani et al. 2013; Sang et al. 2013;
Rezaie-Balf et al. 2017a, 2017b; Tiwari & Adamowski 2017; Wen et al. 2017).
More recently, new noise assisted data analysis techniques, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD), were pioneered by Huang et al. (1998) and Wu & Huang
(2009), respectively. The EEMD approach is an advanced form of traditional decomposition techniques (e.g. wavelet decomposition and Fourier decomposition), and is an intuitive, empirical and
self-adaptive data processing tool which was developed for non-linear and non-stationary time series. Recently, several successful applications have been presented which utilized EEMD to forecast
time series phenomena such as rainfall and runoff (Karthikeyan & Kumar 2013; Latifoglu et al. 2015; Wang et al. 2015; Zhao & Chen 2015; Barge & Sharif 2016), water quality parameters (Fijani et al.
2019), soil moisture (Prasad et al. 2018), and electricity demand (Al-Musaylh et al. 2018). For instance, Napolitano et al. (2011) carried out a study to explore several aspects of ANN in forecasting
daily stream flow time series using EMD. They showed that using EMD analysis, the ANN can have a higher prediction accuracy and reliability. Likewise, Sang et al. (2012) presented a hybrid EMD and
maximum entropy spectral analysis method. They proved that this method, in addition to improving the period identification, also can improve overall period identification by being able to
discriminate period, trend, and noise. Wang et al. (2013) applied EEMD as an adaptive data analysis tool for decomposing annual rainfall time series in modeling rainfall-runoff process using a SVM
model. They employed particle swarm optimization (PSO) to calculate optimal SVM control parameters. Similar to previous investigations, they showed their proposed model's efficiency and capability in
rainfall-runoff forecasting. Moreover, a four-stage model, EEMD-based radial basis function neural network (RBFN) and linear neural network (LNN), was developed by Di et al. (2014) for hydrological
time series forecasting. They considered six datasets and showed that their model outperformed the RBFNN, EMD-RBFNN, and EMD-WA-RBFNN-LNN methods.
The overall goal of the current study is to propose a new hybrid model to improve the monthly P[E] forecasting. The SVM and MT techniques were investigated and improvements of the models' accuracy
were tested by coupling them with the EEMD procedure. The data used are the monthly P[E] data recorded at Siirt and Diyarbakir stations, located in Turkey. The rest of the present research is
organized as follows. The next section provides a case study and data analysis, followed by a section describing the methodology of EEMD and each forecasting method (i.e. MT and SVM). Next, the
statistical indicators applied in this study are provided. The penultimate section represents the forecasting results of the hybrid models and the comparison results, and the final section consists
of a summary and conclusion of outline results.
To validate the performance of the proposed approach, the monthly climatic data and P[E] from Siirt station (latitude 37° 55′ N, longitude 41° 56′ E, and altitude 896 m) and Diyarbakir station
(latitude 37° 54′ N, longitude 40° 13′ E, and altitude 649 m), operated by the Turkish Meteorological Organization (TMO) were utilized. All the climatic data were obtained from TMO. It is the only
legal organization that provides meteorological information in Turkey. TMO measures pan evaporation, which are not publicly available, using Standard A pans (World Meteorological Organization
standards). The 187 datasets of climatic variables comprising wind speed (W[S]), temperature (T[A]), relative humidity (R[H]) and solar radiation (S[R]) of the aforementioned stations consisting of
33 years (1975–2008) are described in Table 1. Furthermore, data were not available for January, February, March and December and about 5% of the whole data were missing for May and April. Data were
directly used in this study without any pre-processing (e.g. filling in missing ones). Seventy-five per cent of the whole data was used for training of the applied models and the obtained models were
tested by using the remaining 25% data.
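The 75/25 partition described above can be sketched as a chronological (unshuffled) split; the arrays here are placeholders for the real station records:

```python
import numpy as np

n = 187                      # monthly records available per station
X = np.random.rand(n, 4)     # placeholder inputs: T, R_S, R_h, U_2
y = np.random.rand(n)        # placeholder target: pan evaporation P_E

# Chronological split: the first 75% of the record trains the model,
# the remaining 25% tests it (no shuffling of a time series).
split = int(0.75 * n)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

print(len(X_train), len(X_test))  # 140 47
```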
Table 1
Statistical parameter | T (°C) | R[S] (MJ/m^2) | R[h] (%) | U[2] (m/s) | P[E] (mm/day)
Siirt station (training data set)
Minimum 10.1 216.16 17.9 0.7 1.4
Maximum 33.5 859.12 72.3 2.8 15.7
Mean 23.34 585.25 39.99 1.67 7.89
S[x] 6.058 152.61 12.36 0.45 3.34
C[sx] –0.37 –0.58 0.48 0.13 0.008
K[x] –0.94 –0.405 –0.52 –0.79 –0.83
Siirt station (testing data set)
Minimum 10.4 321.14 25.6 0.6 2
Maximum 32.3 688.38 69.2 1.4 16.6
Mean 24.72 547.22 40.04 1.11 9.55
S[x] 5.69 116.48 11.62 0.19 3.68
C[sx] –0.48 –0.57 0.77 –0.76 –0.06
K[x] –0.65 –0.91 –0.21 0.19 –0.81
Diyarbakir station (training data set)
Minimum 8 163.41 13.4 0.2 1.2
Maximum 33.4 917.65 81.2 4.9 20.2
Mean 23.21 643.05 42.27 2.51 8.38
S[x] 6.472 176.21 15.01 1.01 4.04
C[sx] –0.37 –0.66 0.48 –0.51 0.12
K[x] –0.96 –0.37 –0.88 –0.24 –0.81
Diyarbakir station (testing data set)
Minimum 10.3 415.35 10.7 1.1 3.6
Maximum 32.6 890.11 79.3 4 16.8
Mean 24.52 694.96 33.61 2.43 10.03
S[x] 6.33 144.52 17.93 0.69 3.76
C[sx] –0.32 –0.39 0.93 0.21 –0.14
K[x] –1.07 –0.97 0.33 –0.35 –1.21
X[min], X[max], X[mean], S[x], C[sx], and K[x] denote the minimum, maximum, mean, standard deviation, skewness, and kurtosis, respectively.
The location of the study area is illustrated in Figure 1. In this area, average annual precipitation is between 400 and 800 mm, falling mainly in the winter months of December to January. The mean
values of the pan evaporation during spring and summer (main crop growing seasons) at Siirt and Diyarbakir are 8.81 and 7.47 mm, respectively. Summers are hot and dry and the annual temperature
ranges from 16.0 to 46.8 °C (Keshtegar & Kisi 2017). According to the de Martonne (1926) aridity index, both Siirt and Diyarbakir stations have a dry, semiarid (semi-desert) climate.
Two ML algorithms, SVM and MT, were developed and coupled with EEMD as a data pre-processing procedure for forecasting monthly P[E] at Siirt and Diyarbakir. In this section, brief descriptions of EEMD, SVM, and MT are presented.
Ensemble empirical mode decomposition (EEMD)
The EEMD method, proposed by Wu & Huang (2009), is an empirical procedure used to represent a nonlinear and non-stationary signal from original data. This data pre-processing method is an improvement on EMD, a self-adaptive time-frequency transformation procedure that does not rely on process information. EMD is used mainly to decompose the original time series data into a finite and low number of oscillatory modes depending on the local characteristic time scale (Huang et al. 1998). The oscillatory modes are revealed by intrinsic mode function (IMF) components embedded in the data. These component signals are represented as a sum of zero-mean, well-behaved fast and slow oscillation modes (IMFs), which satisfy two requirements (Huang et al. 1998; Wu & Huang 2009): (i) throughout the whole length of the data, the number of zero-crossings and the number of extrema must either be equal or differ by at most one; and (ii) at any point, the mean value of the envelopes of the local maxima and minima is zero. According to these properties, meaningful IMFs can be well defined. Generally, an IMF indicates a simple oscillatory mode, analogous to a simple harmonic function. Based on this definition, the sifting process applied to an original time series can be briefly expressed as follows (Huang et al. 1998).
Step 1: Identify all extrema (local maxima and minima) of the given time series y(t);
Step 2: Connect the local maxima by spline interpolation to create an upper envelope e[max](t), and likewise connect all minima to create a lower envelope e[min](t);
Step 3: Compute the mean envelope m(t) = (e[max](t) + e[min](t))/2;
Step 4: Subtract the mean envelope from the series to obtain the candidate component h(t) = y(t) − m(t);
Step 5: If h(t) satisfies the two properties of an IMF, as indicated by a predefined stopping criterion, h(t) is designated the first IMF (written c[1](t), where 1 is its index); if h(t) is not an IMF, y(t) is replaced with h(t) and steps 1–4 are iterated until h(t) satisfies the two conditions of an IMF.
Step 6: The residue r[1](t) = y(t) − c[1](t) is then treated as new data and subjected to the same sifting procedure as defined above to extract the next IMF from r[1](t). The sifting process is stopped when the final residue r[n](t) has a monotonic trend or has only one local extremum, from which no more IMFs can be extracted (Huang et al. 2003). At the end of this sifting process, the original signal y(t) can be reconstructed as the sum of the IMFs and the residual:

y(t) = Σ_{i=1}^{n} c[i](t) + r[n](t)

where r[n](t) represents the final residue, n is the number of IMFs, and the c[i](t) are almost orthogonal to each other and all have zero means. Further information on the EMD technique and the stopping criteria can be obtained from Huang et al. (1998). It has been shown that EMD can be unstable due to mode mixing (Wu & Huang 2009). Mode mixing can occur when a single IMF comprises widely disparate scale components, or when a similar scale component resides in different IMFs (Lei et al. 2009). To overcome this scale-separation issue without introducing a subjective intermittence test, a noise-assisted data analysis technique was proposed: the ensemble EMD (EEMD), which characterizes the true IMF components as the average of an ensemble of trials, each comprising the signal plus white noise of finite amplitude. The effect of this on the decomposition is that the added white noise series cancel each other in the final mean of the corresponding IMFs; the mean IMFs stay within the natural dyadic filter windows, which significantly reduces the possibility of mode mixing and preserves the dyadic property.
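The sifting steps above can be sketched numerically. The following Python fragment is an illustration only (the paper provides no code, and the envelope boundary handling here is deliberately crude): it performs one sifting iteration on a synthetic two-tone signal.

```python
# One sifting iteration (Steps 1-4) on a synthetic signal, using SciPy
# cubic splines for the envelopes. Endpoint handling is simplified.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

t = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * t)  # fast + slow mode

# Step 1: locate all interior local maxima and minima
imax = argrelextrema(y, np.greater)[0]
imin = argrelextrema(y, np.less)[0]

# Step 2: spline envelopes through the extrema (endpoints appended crudely)
imax = np.concatenate(([0], imax, [len(y) - 1]))
imin = np.concatenate(([0], imin, [len(y) - 1]))
e_max = CubicSpline(t[imax], y[imax])(t)
e_min = CubicSpline(t[imin], y[imin])(t)

# Steps 3-4: subtract the mean envelope to get the candidate IMF h(t)
m = 0.5 * (e_max + e_min)
h = y - m
print("mean of candidate IMF:", round(float(np.mean(h)), 4))
```

In a full implementation this iteration would be repeated (Step 5) until h(t) meets the IMF conditions, and the whole procedure repeated on the residue (Step 6).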
Support vector machine (SVM)
Support vector machines, first presented by Cortes & Vapnik (1995), are an effective tool for solving nonlinear problems by relating input variables to a defined output. The least-squares SVM, developed by Suykens & Vandewalle (1999), is an advanced type of SVM used for chaotic time series prediction. The SVM, on the basis of the structural risk minimization (SRM) principle, minimizes the expected error in order to reduce over-fitting. In SVM, in order to separate the data patterns, the input data are mapped into a higher-dimensional feature space.

Consider a training data set (x[1], y[1]), (x[2], y[2]), … , (x[l], y[l]), where x[i] is the input vector comprising d features, y[i] is the output value corresponding to x[i], and l is the length of the data set. The SVM regression function is given as

f(x) = w · φ(x) + b

where w is the weight vector in the feature space, b is the bias term, φ is the nonlinear mapping into that space, and · indicates the inner product. The regression problem can be defined as minimization of the regularized risk function

minimize (1/2)‖w‖^2 + C Σ_{i=1}^{l} (ξ[i] + ξ*[i])

subject to the conditions

y[i] − w · φ(x[i]) − b ≤ ε + ξ[i],  w · φ(x[i]) + b − y[i] ≤ ε + ξ*[i],  ξ[i], ξ*[i] ≥ 0

where C is a positive constant that determines the degree of penalized loss for prediction error, and ξ[i] and ξ*[i] are slack variables that specify the distance from observed values to the corresponding boundary values. Most of the data are expected to fall within the range ±ε, and errors occur only when the data fall outside this range. This optimization is solved by a quadratic programming algorithm in the dual variables α[i] and α*[i] (the Lagrange multipliers), where K(x[i], x[j]) is the kernel function, defined as the inner product of the mapped points:

K(x[i], x[j]) = φ(x[i]) · φ(x[j])

The solution to this objective function is optimal and unique and can be expressed through the Lagrange multipliers as

f(x) = Σ_{i=1}^{l} (α[i] − α*[i]) K(x[i], x) + b

Various kernel functions have been used in SVM to determine the most efficient model for a given type of dataset (Gu et al. 2010; Kisi & Parmar 2016).
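As a concrete, purely illustrative sketch of ε-SVR, the snippet below fits scikit-learn's SVR to a noisy sine; the library, the RBF kernel, and the hyperparameter values are assumptions for demonstration, not choices made in this study.

```python
# Epsilon-SVR on synthetic data; scikit-learn and the RBF kernel are
# illustrative assumptions, not the configuration used in the paper.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, size=(200, 1)), axis=0)  # one input feature
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# C sets the penalty for errors outside the epsilon tube;
# epsilon sets the tube's half-width.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X, y)

rmse = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
print(f"training RMSE: {rmse:.3f}")
```

In practice C, ε, and the kernel parameters would be tuned, for example by cross-validation, separately for each IMF sub-series.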
Model tree (MT)
The MT is a state-of-the-art hierarchical algorithm, presented by Quinlan (1992), for providing the relation between input–output parameters. This algorithm is used to solve the problem by dividing
the data space into several sub-problems (sub-spaces) and builds piecewise linear regression models for each sub-domain. The data records, in classification trees, are classified by sorting the tree
from root node to some leaf nodes of the tree. The MT is based on the well-known CART algorithm (Breiman et al. 1984), which deals with continuous-class learning attributes. This algorithm is known
to be one of the most efficient methods to present physically meaningful insights for a given phenomenon (Kisi et al. 2016). Figure 2 illustrates the tree-building process, within four linear
regression models and the knowledge extracted from the structure for the corresponding sub-domains. Furthermore, a general tree structure of the MT approach is illustrated in Figure 2(b). In this figure, if X[1] and X[2] are less than 3 and 1.5, respectively, the fifth model, of the form Y = a[0] + a[1]X[1] + a[2]X[2], should be selected, where X[1] and X[2] are inputs, Y is the output, and a[0], a[1], and a[2] are regression coefficients. Within the MT algorithm, each leaf stores a linear model which predicts the class values of the portion of the data set reaching that leaf. According to certain attributes
of the data, the records are split into different portions. To determine the best attribute for splitting the data set at each node, standard deviation is utilized as a splitting criterion. Trees are
constructed using the standard deviation reduction (SDR), which maximizes the expected error reduction at each node, as follows (Quinlan 1992):

SDR = sd(T) − Σ_i (|T[i]| / |T|) × sd(T[i])

where T is the set of instances that reaches the node, T[i] represents a subset of the data resulting from splitting the parent node on the chosen attribute, and sd is the standard deviation. To overcome the overfitting problem and gain better generalization, pruning processes are performed to prune back the overgrown trees in the next stage. In the pruning
process, the sub-trees (inner nodes) are transformed into leaf nodes by replacing them with linear regression functions. After pruning, a smoothing procedure is applied to the disjointed linear
models of the neighboring leaves. In the smoothing procedure, all the leaf models are amalgamated along the path back to the root to compose the final model in order to provide final predictions.
Detailed information on MT can be found in Wang & Witten (1996) and Talebi et al. (2017).
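The SDR splitting criterion is easy to compute directly. The helper below is an illustrative sketch (the variable names are hypothetical, not from the paper) that scores one candidate split of the target values reaching a node.

```python
# Standard deviation reduction for a candidate split at a model-tree node.
import numpy as np

def sdr(y_parent, subsets):
    """SDR = sd(T) - sum(|T_i|/|T| * sd(T_i)) over the child subsets."""
    n = len(y_parent)
    weighted = sum(len(s) / n * np.std(s) for s in subsets)
    return float(np.std(y_parent) - weighted)

y = np.array([1.0, 1.2, 0.9, 5.0, 5.2, 4.8])  # targets reaching the node
left, right = y[:3], y[3:]                     # a candidate split
print(f"SDR of this split: {sdr(y, [left, right]):.3f}")
```

A good split separates the data into low-variance groups, so the tree-growing procedure picks the attribute and threshold that maximize this value.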
In the present research, to assess the EEMD-SVM, EEMD-MT, SVM and MT models, several error indicators were used:
Legates–McCabe's index (LMI): This criterion considers absolute values in its computation (Legates & McCabe 1999). Therefore, LMI is not inflated by squared values and is insensitive to outliers, making it simple and easy to interpret:

LMI = 1 − [ Σ_{i=1}^{N} |O[i] − F[i]| ] / [ Σ_{i=1}^{N} |O[i] − Ō| ]

where O[i] and F[i] are the observed and forecasted values, respectively; Ō is the mean of the observed values; and N is the number of data.
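The LMI can be computed in a few lines. The following function (the names are illustrative, not from the paper) implements the standard Legates–McCabe formula: 1 minus the ratio of total absolute error to total absolute deviation of the observations from their mean.

```python
# Legates-McCabe index: 1 minus the ratio of summed absolute errors to
# summed absolute deviations of observations from their mean.
import numpy as np

def lmi(observed, forecast):
    o = np.asarray(observed, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return float(1.0 - np.abs(o - f).sum() / np.abs(o - o.mean()).sum())

obs = [2.0, 4.0, 6.0, 8.0]
print(lmi(obs, obs))                   # perfect forecast -> 1.0
print(lmi(obs, [3.0, 5.0, 5.0, 7.0]))  # each value off by 1 -> 0.5
```

A value of 1 indicates a perfect forecast; values at or below 0 indicate a model no better than forecasting the observed mean.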
Models implementation based on EEMD for P[E] forecasting
The main goal of the EEMD-MT/SVM models is to forecast monthly P[E] at different stations in Turkey. According to Figure 3, the workflow of the proposed approach contains three main steps to enhance
the performance of proposed forecasting techniques that can be expressed as follows:
1. In Step 1, the EEMD procedure is first utilized to decompose all the original input/output time series y(t) into several IMF components C[i](t) (i = 1, 2, 3, … , n) and one residual component r(t).
2. Next, for each extracted IMF component and the residual component (for example IMF1), the SVM and MT are established as forecasting models to simulate the decomposed IMF and residual components,
and to predict each component by the same sub-series (IMF1) of four input variables, respectively.
3. In Step 3, the forecasted values of all extracted IMF and residue components by the SVM and MT models are aggregated to generate the modelled P[E] time series.
To summarize, the hybrid EEMD-SVM/MT forecasting tools establish the idea of ‘decomposition and ensemble’. The decomposition is used to simplify the forecasting process, and the ensemble is utilized
to formulate a consensus forecasting on the original time series.
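The three steps can be sketched schematically. In the fragment below, decompose() is a stand-in for EEMD and fit_forecast() is a stand-in for an SVM/MT component model; both are hypothetical placeholders, not the actual implementations used in the study.

```python
# Schematic 'decomposition and ensemble' forecast: model each component
# separately, then sum the component forecasts. Both helpers are stubs.
import numpy as np

def decompose(y, n_components=3):
    # Stand-in for EEMD: any additive decomposition works for the sketch.
    return [y / n_components for _ in range(n_components)]

def fit_forecast(component):
    # Stand-in for an SVM/MT model: naive one-step persistence forecast.
    return float(component[-1])

y = np.array([1.0, 2.0, 3.0, 4.0])                           # observed P_E series
components = decompose(y)                                    # Step 1: decompose
component_forecasts = [fit_forecast(c) for c in components]  # Step 2: model each
forecast = sum(component_forecasts)                          # Step 3: aggregate
print(forecast)
```

Because the decomposition is additive, summing the component forecasts yields a forecast on the original scale of the series.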
Forecasting of monthly P[E] using proposed models at Siirt station
The SVM, MT, EEMD-SVM, and EEMD-MT models were employed for forecasting monthly pan evaporation using the T[A], R[H], W[S], and S[R] variables as inputs at Siirt station. These are the most effective variables for P[E], and previous studies (Tabari et al. 2012; Kisi et al. 2016; Malik et al. 2017; Wang et al. 2017) also used these variables for P[E] forecasting. Although the standalone SVM and MT use the original four inputs, T[A], R[H], W[S], and S[R], the decomposed IMFs of the input variables at each frequency are used to develop the EEMD-SVM/MT models. The number of ensembles and the amplitude of the added white noise must be set before using EEMD. The relationship between the standard deviation of error (ε[n]), the ensemble number (N), and the amplitude of the added white noise (ε) is described by (Wu & Huang 2009):

ε[n] = ε / √N

According to previous studies (Wu & Huang 2009), in general, the amplitude of the added noise should be about 0.2 times the standard deviation of the dataset; the number of ensembles used here was 500. For more detailed information on the analysis of noise and ensemble-averaging effects on decomposition, the reader is referred to Wu & Huang (2009). As an example, the decomposed time series of the pan evaporation data are represented in Figure 4. As seen from the figure, the original monthly P[E] time series has been decomposed into eight independent IMF components, ordered from the highest frequency (IMF1) to the lowest frequency (IMF8), plus one residue component. Similarly, the T[A], R[H], W[S], and S[R] inputs were also decomposed into IMF components. The IMF1 of each input variable (T[A], R[H], W[S], and S[R]) was used for forecasting the IMF1 of the output (P[E]); simultaneously, the IMF2, IMF3, … of each input variable were used for forecasting the IMF2, IMF3, … of P[E]. Thereafter, all forecasted IMF values were summed to obtain the original P[E].
Next, the accuracy of the proposed models in forecasting of monthly P[E] was assessed at both training and testing stages (Table 2). Based on the comparison between SVM and MT approaches, the MT is
found to perform better than the SVM in terms of statistical metrics during training and testing stages at Siirt station. The quantitative evaluation results in Table 2 also show that EEMD-MT has a
lower value of RMSE (0.765 mm) and PI (0.079 mm) compared to the EEMD-SVM model in the training stage for the Siirt station. Moreover, the EEMD-MT provided more accurate results in terms of NSE
(0.89), WI (0.969), and LMI (0.699) than the SVM and MT.
Table 2
Models . Statistical error indices .
NSE . RMSE (mm/day) . PI (mm/day) . WI . LMI .
Total available data in training stage
SVM 0.842 1.304 0.126 0.942 0.672
MT 0.901 1.047 0.110 0.962 0.724
EEMD-SVM 0.929 0.887 0.093 0.978 0.774
EEMD-MT 0.947 0.765 0.079 0.986 0.792
Total available data in testing stage
SVM 0.463 2.663 0.137 0.814 0.341
MT 0.653 2.140 0.101 0.893 0.492
EEMD-SVM 0.796 1.641 0.069 0.934 0.591
EEMD-MT 0.893 1.183 0.051 0.969 0.699
The bold numbers represent the values of performance criteria for the best fitted models.
In the testing stage, with respect to the measured value of P[E], it can be seen from Table 2, in terms of NSE and WI, that the EEMD-MT and EEMD-SVM models are able to model P[E] at Siirt station
well. The SVM model has the highest error in estimating P[E] (RMSE = 2.663 mm; PI = 0.137 mm) compared to the other models. In addition, the EEMD-MT model improves the MT model with 44.71 and 49.50%
reductions in terms of RMSE and PI, respectively, and 36.8, 8.5, and 42.1% increases in NSE, WI, and LMI, respectively.
Comparison of the modelled P[E] is made against measured values in the scatter plots (Figure 5) for both training and testing stages. The figure illustrates the superiority of the EEMD-MT model
during the test stage for Siirt station. The superiority of the EEMD-MT model over the SVM, MT, and EEMD-SVM models is also clear from Figure 6. These results support our hypothesis that the EEMD
method is suitable for decomposing monthly P[E] time series, and the idea of ‘decomposition and ensemble’ is feasible and the proposed EEMD-SVM/MT models can overcome drawbacks of the individual SVM
and MT models by generating a synergetic effect in forecasting. Thus the monthly P[E] data decomposed by using the EEMD procedure can improve the forecast results at Siirt.
Forecasting of monthly P[E] using proposed models at Diyarbakir station
Similar to the methodology used for Siirt station, four proposed models (i.e. SVM, MT, EEMD-SVM, and EEMD-MT) were developed for the purpose of forecasting monthly P[E] at Diyarbakir station.
Specifically, SVM and MT were adopted as benchmark models and the results compared with the conjunction models which used EEMD to pre-process the original monthly P[E] data, adopting the phase-space
reconstruction method to design the input vectors. The training and testing data sets were identical across all models. Table 3 indicates the performance results for the four models in terms of the
four aforementioned statistical error metrics. The results show that the hybrid techniques have the lowest RMSE and PI values and the highest NSE and WI, outperforming the SVM and MT models, at both
training and testing stages. In the training stage, the comparison of the SVM and MT models shows that the MT model has better accuracy (NSE = 0.882, WI = 0.953, and LMI = 0.755) in forecasting of P
[E]; however, these models' results are significantly improved by coupling with the EEMD technique. The EEMD-MT approach outperformed all other models (Table 3). By integrating the EEMD with SVM,
RMSE and PI were reduced to 1.270 and 0.138 mm in the training stage, respectively. It can be seen from Table 3 that by coupling with EEMD, the RMSE and PI of single MT were reduced by 25.9–28.9 and
19.4–75.5% in the training and testing stages, respectively. Improvements in the forecast results were approximately 5.2, 6.1 and 4.5% respectively, for NSE, WI and LMI values at the training stage,
and 1.7, 2.2 and 11.6%, respectively, in the testing stage.
Table 3
Models . Statistical error indices .
NSE . RMSE (mm/day) . PI (mm/day) . WI . LMI .
Total available data in training stage
SVM 0.866 1.521 0.196 0.925 0.711
MT 0.882 1.451 0.176 0.953 0.755
EEMD-SVM 0.901 1.270 0.138 0.972 0.773
EEMD-MT 0.928 1.075 0.125 0.981 0.789
Total available data in testing stage
SVM 0.869 1.462 0.149 0.958 0.703
MT 0.907 1.362 0.139 0.963 0.712
EEMD-SVM 0.902 1.268 0.138 0.973 0.733
EEMD-MT 0.923 1.098 0.034 0.985 0.795
The bold numbers represent the values of performance criteria for the best fitted models.
The measured and forecasted values of P[E] during training and testing stages are compared in Figures 7 and 8 for the four models. It can be seen from Figure 8 that SVM and MT models are able to
follow the general trend of measured P[E] quite well, although these models are not able to match the extreme high and low values accurately, especially at the latter part of the testing period. In
fact, the SVM and MT models under-predict P[E] for the testing data from 2006 onwards, whereas the EEMD-SVM/MT models are able to provide accurate estimates during this period of time (with the
exception of 2008 where it is noticed that P[E] is under-predicted by all models). In general, the results of this analysis illustrate that the proposed EEMD-SVM/MT models are able to obtain better
results than the SVM/MT models, shown by the improvements in different evaluation measures.
Models assessment at spring and summer seasons
To further evaluate the performance of the proposed models, the results were assessed in the spring and summer seasons only, since these are the main crop growing periods. Hence, further analysis was
conducted for the period from May to October for 1975–2008 in both stations. Table 4 illustrates the results of the EEMD-SVM and EEMD-MT models in forecasting P[E] from May to October for Siirt
station. As can be seen in Table 4, the EEMD-MT has a lower error in terms of RMSE and PI for all months compared to the EEMD-SVM model. It is notable that the models' accuracy (NSE, WI and LMI) from
May to July reduces but then increases for the months of August, September, and October.
Table 4
Models . . Statistical error indices .
Month . NSE . RMSE (mm/day) . PI (mm/day) . WI . LMI .
May 0.639 0.849 0.079 0.888 0.527
June 0.546 1.319 0.0732 0.786 0.485
EEMD-SVM July 0.428 1.357 0.062 0.748 0.453
August 0.377 1.311 0.063 0.691 0.396
September 0.570 0.736 0.048 0.849 0.568
October 0.301 1.045 0.112 0.762 0.459
May 0.704 0.768 0.069 0.908 0.690
June 0.655 1.149 0.0621 0.858 0.621
EEMD-MT July 0.724 0.943 0.041 0.895 0.636
August 0.684 0.933 0.043 0.864 0.562
September 0.642 0.671 0.043 0.882 0.656
October 0.423 0.801 0.0653 0.827 0.596
At Diyarbakir station, the P[E] values forecasted with the EEMD-SVM/MT models in both the spring and summer seasons show a trend similar to that at Siirt station (Table 5). It can also be said that the EEMD-SVM has inadequate accuracy in most of the months, which may be a drawback of this model. Moving to a more detailed seasonal analysis of the proposed models, Figure 9 illustrates the RMSE and PI indices across May–October for the aforementioned period. As can be seen, at both stations the EEMD-MT model has lower error than the EEMD-SVM. This indicates that the EEMD-SVM model may not provide a complete solution in every case, and also shows that accurate prediction of monthly P[E] is a complex issue.
Table 5
Models . . Statistical error indices .
Month . NSE . RMSE (mm/day) . PI (mm/day) . WI . LMI .
May 0.677 0.802 0.068 0.924 0.723
June 0.664 1.159 0.092 0.927 0.715
EEMD-SVM July 0.183 1.498 0.085 0.812 0.562
August 0.380 0.973 0.055 0.825 0.593
September 0.026 1.039 0.082 0.760 0.463
October 0.535 0.820 0.106 0.849 0.617
May 0.805 0.623 0.053 0.953 0.753
June 0.523 1.381 0.073 0.832 0.622
EEMD-MT July 0.620 0.848 0.035 0.892 0.662
August 0.652 0.729 0.029 0.896 0.678
September 0.601 0.666 0.038 0.903 0.692
October 0.866 0.440 0.037 0.969 0.783
Predicting a hydro-meteorological time series is often complicated and frustrating: the series may appear completely random, a volatile sequence showing no signs of predictability. Yet time series are full of patterns and relationships, and decomposition aims to identify and separate these into distinct components, each with specific properties and behavior. In this study, the EEMD procedure was
applied to tackle the trends and random behavior of time series data which may impact on accuracy of P[E] forecasting. This research attempted to improve SVM and MT models in the prediction of pan
evaporation by coupling with the EEMD procedure. The P[E] time series from two climatic stations in Turkey, Siirt and Diyarbakir, were utilized for validation of the proposed coupled models. Firstly,
the SVM and MT models alone were applied to forecast monthly P[E]. In order to enhance the forecasting accuracy of the typical models, coupled EEMD-SVM and EEMD-MT models were then used. For the
coupled models, the original time series data were decomposed into eight IMFs and one residual for P[E] modeling process. Input variables included T[A], R[H], W[S], and S[R]. The performance of the
proposed models was assessed in terms of NSE, RMSE, PI, WI, and LMI in the training and testing phases. At Siirt station, the EEMD-MT model provided more accurate results in terms of NSE (0.89), WI
(0.97), and LMI (0.70) than the SVM (NSE = 0.46, WI = 0.81, and LMI = 0.34), MT (NSE = 0.65, WI = 0.89, and LMI = 0.49), and EEMD-SVM (NSE = 0.80, WI = 0.93, and LMI = 0.60) in the testing stage. For
the Diyarbakir station, the highest P[E] forecasting accuracy was achieved with the EEMD-MT model (NSE = 0.92, WI = 0.98, and LMI = 0.80) compared to others. The results showed that the SVM and MT
models coupling EEMD generally provide better accuracy and are superior to the SVM and MT models alone in forecasting monthly P[E] at both Siirt and Diyarbakir stations. The proposed models can be
applied to different climate conditions and EEMD may be compared with other pre-processing methods such as CEEMDAN and improved CEEMDAN in future studies.
Multiple Solutions
About Multiple Solutions
You obtain multiple solutions in an object by calling run with the syntax
[x,fval,exitflag,output,manymins] = run(...);
manymins is a vector of solution objects; see GlobalOptimSolution. The manymins vector is in order of objective function value, from lowest (best) to highest (worst). Each solution object contains
the following properties (fields):
• X — a local minimum
• Fval — the value of the objective function at X
• Exitflag — the exit flag for the local solver (described in the local solver function reference page: fmincon exitflag, fminunc exitflag, lsqcurvefit exitflag, or lsqnonlin exitflag)
• Output — an output structure for the local solver (described in the local solver function reference page: fmincon output, fminunc output, lsqcurvefit output, or lsqnonlin output)
• X0 — a cell array of start points that led to the solution point X
manymins contains only those solutions corresponding to positive local solver exit flags. If you want to collect all the local solutions, not only the ones corresponding to positive exit flags, use
the @savelocalsolutions output function. See Output Functions for GlobalSearch and MultiStart.
There are several ways to examine the vector of solution objects:
• In the MATLAB^® Workspace Browser. Double-click the solution object, and then double-click the resulting display in the Variables editor.
• Using dot notation. GlobalOptimSolution properties are capitalized. Use proper capitalization to access the properties.
For example, to find the vector of function values, enter:
fcnvals = [manymins.Fval]
fcnvals =
-1.0316 -0.2155 0
To get a cell array of all the start points that led to the lowest function value (the first element of manymins), enter:
smallX0 = manymins(1).X0
• Plot some field values. For example, to see the range of resulting Fval, enter:
histogram([manymins.Fval],10)
This results in a histogram of the computed function values. (The figure shows a histogram from a different example than the previous few figures.)
Change the Definition of Distinct Solutions
You might find out, after obtaining multiple local solutions, that your tolerances were not appropriate. You can have many more local solutions than you want, spaced too closely together. Or you can
have fewer solutions than you want, with GlobalSearch or MultiStart clumping together too many solutions.
To deal with this situation, run the solver again with different tolerances. The XTolerance and FunctionTolerance tolerances determine how the solvers group their outputs into the GlobalOptimSolution
vector. These tolerances are properties of the GlobalSearch or MultiStart object.
For example, suppose you want to use the active-set algorithm in fmincon to solve the problem in Example of Run with MultiStart. Further suppose that you want to have tolerances of 0.01 for both
XTolerance and FunctionTolerance. The run method groups local solutions whose objective function values are within FunctionTolerance of each other, and which are also less than XTolerance apart from
each other. To obtain the solution:
% Set the random stream to get exactly the same output
rng(14,'twister')
ms = MultiStart('FunctionTolerance',0.01,'XTolerance',0.01);
opts = optimoptions(@fmincon,'Algorithm','active-set');
sixmin = @(x)(4*x(1)^2 - 2.1*x(1)^4 + x(1)^6/3 ...
    + x(1)*x(2) - 4*x(2)^2 + 4*x(2)^4);
problem = createOptimProblem('fmincon','x0',[-1,2],...
    'objective',sixmin,'options',opts);
[xminm,fminm,flagm,outptm,someminsm] = run(ms,problem,50);
MultiStart completed the runs from all start points.
All 50 local solver runs converged with a
positive local solver exit flag.
someminsm =
1x5 GlobalOptimSolution
In this case, MultiStart generated five distinct solutions. Here “distinct” means that the solutions are more than 0.01 apart in either objective function value or location.
Related Topics
Math You Can Use
Refactoring If Statements with De Morgan's Laws
When I was in college, years ago, I discovered a little trick: using De Morgan's Laws, a basic principle from Discrete Math, to manipulate logical statements while Programming, and I employ it whenever it fits. I started doing this so long ago that I do not recall whether I came up with it on my own (I did take a lot of extra Logic classes in college) or picked it up from someone else. Naturally I was excited to write a blog entry about it, and did; however, as I was finishing it up, it occurred to me to Google it. Although I was not really surprised to find other mentions of the idea, I was dismayed to find that Steve McConnell mentions it in his book Code Complete, which has also been on my list of books to read. I guess I was disheartened because now it might seem like I am just taking an idea from a fairly prominent book. But looking through the book I see many good ideas, some that have been expressed by others as well, so re-expressing a good software idea seems both justifiable and consistent with the goals of my blog, and I am glad that I did not Google it earlier, which might have discouraged me from
writing this.
So let’s run through how this works. To apply this to Programming mechanics we can look at this from a symbol manipulation point of view, after all Programming in some senses is the manipulation of
lexical symbols. Strictly speaking, it involves symbolic variants, if you will, of the distributive property and of factoring, in regards to the negation symbol.
The two forms of De Morgan's Laws, described using standard Java/C++ logical operators, are:
!(a || b) == (!a && !b)
!(a && b) == (!a || !b)
Looking at either of the above logical forms of De Morgan's statements (identities, actually), you can see that going left to right, the negation symbol is "distributed" across the two terms, and going right to left, the negation symbol is "factored" out from the two terms and applied to the whole statement. Each operation causes the "and" symbol to be swapped with the "or" symbol, or vice versa. A simple mnemonic might be "either distribute and flip, or factor and flip." Please note that "flip" means switching between these two operators, and that these two operators are not inverses of each other.
Ok, so let's put this to use. A recommended refactoring, also mentioned in Code Complete, is the "Reverse Conditional." If you were to come across the following Java/C++ code, you would look at it and see that it is negative; if you have been using the De Morgan's Law refactoring trick, you will instantly know how to refactor it:
if((a != b) || (c != d)) {
    // handle the "not equal" case
} else {
    // handle the "equal" case
}
So we need to get the positive conditional which means we need to negate the current if statement’s logic, let’s walk through it:
First we’ll pull the negative’s out, using the following rule:
(x != a) == !(x == a); I don't know what this is called, but it should be pretty obvious that it is true.
So now we have:
if(!(a == b) || !(c == d)) {
Now we factor and flip:
if(!((a == b) && (c == d))) {
Negating gets rid of the "!" because !!x == x, which is called double negation, and the final refactored code with the if and else blocks reversed is:
if((a == b) && (c == d)) {
    // handle the "equal" case (formerly the else block)
} else {
    // handle the "not equal" case (formerly the if block)
}
So now you either already knew this or know it now. If this is new to you, I would recommend that you practice it. Perhaps write some test code and play with the various forms, translating each form back and forth and seeing how the logic works.
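One way to do that, sketched here in Python rather than Java/C++ so that every case fits in a few lines, is to exhaustively test both identities over all truth-value combinations:

```python
from itertools import product

# Exhaustively verify both forms of De Morgan's Law over all boolean inputs.
for a, b in product([False, True], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))
    assert (not (a and b)) == ((not a) or (not b))
```

Since each variable is boolean, four combinations cover every possible case, so this is a complete proof by exhaustion rather than a spot check.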
Now let’s look at this from a more formal perspective:
¬ (p ∨ q) ⇔ (¬ p) ∧ (¬ q)
¬ (p ∧ q) ⇔ (¬ p) ∨ (¬ q)
Just a couple quick notes on the notation here. You most likely already know these ideas, but in case you are not familiar with this notation: the symbol ¬ means logical negation, the symbol ∧ is logical conjunction, aka "and," the symbol ∨ is logical disjunction, aka "or," and the double arrow ⇔ means logical equivalence, which can be read as "if and only if." The parentheses serve the usual grouping purpose.
De Morgan's laws also apply to set theory, and the set-theoretic forms of De Morgan's Laws are:
(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
(A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
Where ∪ is set union, ∩ is set intersection, and ᶜ is set complementation.
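These can be spot-checked in the same spirit; the small universe U below is an arbitrary choice for illustration, and complements are taken relative to U:

```python
# Spot-check the set-theoretic De Morgan's laws on a small example universe.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

assert U - (A | B) == (U - A) & (U - B)   # complement of a union
assert U - (A & B) == (U - A) | (U - B)   # complement of an intersection
```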
If you received a CS degree, then most likely you took some form of Discrete Math, which probably included Set Theory and possibly some basic Propositional Logic. If for whatever reason you didn't learn them, I highly recommend learning them, and if you've forgotten them, I highly recommend reviewing them.
Set Theory and Propositional Logic are intertwined, with a number of concepts that run parallel between the two disciplines, and De Morgan's Law is not the only one. Additionally, De Morgan's Law shows up in a few other places as well.
Logic is a central construct in Programming and Math; in many respects it is one of the foundations of all of Math. Of course, like all things in Math, it can become very involved and complex once you dig just slightly below the surface, but this is really a good thing, because it will never get boring. Also, the interrelationship between Logic and Set Theory seems, to me, to be intricate and profound.
If you are a non-math-oriented programmer but want to be, my recommendation is that you take this refactoring trick and its associated logic-related math, study it, and make it part of your repertoire of skills. To put it in mantra form: Learn it. Know it. Live it.
3 comments:
1. You converted an equation with a negative attitude, "if((a != b) || (c != d))", into an equation with a positive attitude, "if((a == b) && (c == d))" :) . I did the same a year back, without knowing any maths. Let me share my experience with you.
I am very poor at maths, and there was this old code at work where 6 or 8 variables, spread over 3 lines, were combined in various odd ways with && and || along with weird negations (!). The whole conditional statement was totally unclear, clumsy, and a complete mess when it came to one question: what is this conditional check doing? (How it was doing it was an even bigger mess.) I was supposed to add some feature to that software, and that conditional statement was in the way; not to mention previous maintainers faced the same issue, but that code still lay there for years. And then I decided to clean it up out of frustration.
Guess what, I read your article only today, but I did the same thing at that time; plus I grouped several statements into one, learned the "why" behind the conditional by reading the code, and even removed some statements. I did this without any knowledge of logic, set theory, or De Morgan's laws. In fact, I learned all of this technique from comp.lang.c: "positivity is easier to understand than negation." I have been on comp.lang.c, have written several thousand lines of code at work, and I still hang out there to learn.
My question: you just used maths as the language to do that, while I read the same thing in English on comp.lang.c. Does that mean maths gives the same benefits as experience in programming? That it is actually not maths but a certain kind of thinking that is required to solve problems in programming?
2. I think this specific example isn't a strong case for the importance of these mathematical theories in programming. Your refactored code merely makes sense; it doesn't require anyone to
understand the theory to be able to simply understand, by pure logic, that the refactored code has the same effect as the original.
In the original code we basically have a Pass/Fail scenario where Condition 1 is Fail and Condition 2 is Pass: if A != B OR C != D, Fail; otherwise, Pass (meaning A == B AND C == D, because either statement returning false would result in Failure).
It seems only logical to reform the statement so your pass conditions are strictly met and your failure conditions are met via "else," aka "any other result." So if A == B and C == D, Pass; otherwise, Fail.
1. Note: The issue here may be that I simply understand what you're putting out but am simply unfamiliar with the formal representation of it. I would be remiss to assume it makes sense to
We are so glad you are here.
Ready to make some beautiful diagrams? Penrose is accessible to people coming from a variety of backgrounds including mathematical domain experts and individuals with no programming experience.
Table of Contents
In this section, we will introduce Penrose's general approach and system, talk about how to approach diagramming, and explain what makes up a Penrose diagram.
The real fun begins when you dive into the series of tutorials we have prepared for you. You can navigate through tutorials using the navigation bar on the left, and jump between sections of a single
page using the navigation bar on the right.
Note that each tutorial builds on top of the last, so we highly recommend that you work through the tutorials in order. Each one contains a detailed walk-through of a particular example and several
exercises for you to consolidate your knowledge.
This section provides both concrete and conceptual descriptions of how to work within the Penrose environment. Feel free to dive into the tutorials if you are ready.
How do we create diagrams by hand?
Recall how you would normally create a diagram of a concept using a pen or pencil. It will most likely involve the following steps:
1. Decide what you are diagramming
• Let's say we are making a diagram of things in your house. Then the domain of objects that we are working with includes everything that is in your house. Subsequently, any items that can be found
in your house (furniture, plants, utensils, etc.) can be thought of as specific types of objects in your household domain.
2. Make a list of the objects you want to include in the diagram
• We either write down or mentally construct a list of all the objects that will be included in our diagram. In Penrose terms, these objects are considered substances of our diagram.
• For example, your chair 🪑 is a particular instance of an object in the house domain 🏠. If the chair is in the diagram, then it is a substance of the diagram.
3. Figure out the relationships between the objects
• If we only put the list of items on paper one by one, that would not be a particularly interesting or useful diagram. Diagrams are more interesting when they visualize relationships.
• For example, we could group the plants in your house based on the number of times they need to be watered on a weekly basis. Then we would have visual clusters of elements.
4. Explore different visual styles
• Drawings commonly require explorations and various attempts with colors, sizes, and compositions. The same concept can be visualized in a number of different styles.
The process of creating a Penrose diagram is similar to our intuitive process of analog diagramming. 🎉
Let's circle back to what Penrose is meant to do: create beautiful diagrams from mathematical statements. Just as a chair is an object within a house, a vector is an object within Linear Algebra. With Penrose, you can build any mathematical domain with concepts that you wish to visualize. 🖌️
What makes up a Penrose program?
As discussed above, it is important to keep track of any objects that we want to include in our Penrose diagram. The way we do that is by writing code in three specific files.
• First, we need to define our domain of objects because Penrose does not know what is in your house or what a chair is. In addition to defining the types of objects in your domain, you will need
to describe the possible operations in your domain. For example, you can push a chair, or sit on a chair, which are operations related to a chair.
• Second, we need to store the specific substances we want to include in our diagrams, so Penrose knows exactly what to draw for you.
• Lastly, we need to define the styles that we want to visualize our substances with.
Each of these corresponds to a specific file with an intuitive file extension designed for accessibility:
• A .domain file that defines the language specific to the domain.
• A .substance file that creates substances of mathematical content.
• A .style file that specifies the style of the visual representation.
In general, for each diagram, you will have a unique .substance file that contains the specific instances for the diagram, while the .domain and .style files can be applied to a number of different diagrams. For example, we could make several diagrams in the domain of Linear Algebra that each visualize different concepts with different .substance files, but we would preserve a main linearAlgebra.domain file that describes the types and operations that are possible in Linear Algebra, and select from any of several possible linearAlgebra.style files to affect each diagram's appearance.
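To make the three-file split concrete, here is a purely illustrative sketch in the spirit of the set-theory example these tutorials build toward; treat the exact keywords and shape properties as assumptions that may vary between Penrose versions:

```
-- sets.domain: the types and predicates of the domain
type Set
predicate Subset(Set a, Set b)

-- example.substance: the specific objects to draw
Set A
Set B
AutoLabel All

-- sets.style: how each Set is rendered
canvas {
  width = 400
  height = 400
}
forall Set x {
  x.icon = Circle { }
}
```

The point is the division of labor: the domain file names the concepts, the substance file lists the instances, and the style file says how any instance of a given type should look.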
Now, you are equipped to embark on your Penrose journey and make your first Penrose diagram! Here's a quick sneak peek of what we will be building in the following tutorials:
Sneak peek of the tutorials
• Tutorial 1: Diagram containing 2 sets
• Tutorial 2: Diagram illustrating the concept of subset
• Tutorial 3: Diagram showing vector addition
4. (3 points) Let X and Y be two random variables with their covariance matrix denoted by Σ. In other words,

$$\Sigma = \begin{pmatrix} \sigma_X^2 & \sigma_{XY} \\ \sigma_{YX} & \sigma_Y^2 \end{pmatrix}$$

(a) Show that this is equivalent to

$$\Sigma = E\left[ \begin{pmatrix} X - \mu_X \\ Y - \mu_Y \end{pmatrix} \begin{pmatrix} X - \mu_X & Y - \mu_Y \end{pmatrix} \right]$$

(b) Is Σ symmetric?

(c) Let A be an arbitrary 2-by-2 matrix and b be an arbitrary two-dimensional vector. Let Z denote a two-dimensional random vector defined as $Z = A \begin{pmatrix} X \\ Y \end{pmatrix} + b$. What is the covariance matrix of Z? Provide a detailed derivation.

(Hint: Note that the mean vector of Z is given by $\mu_Z = A \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix} + b$. According to part (a) of this question, the covariance matrix of Z can be obtained as $E[(Z - \mu_Z)(Z - \mu_Z)^\top]$.)

[Oppenheim & Verghese] A. Oppenheim and G. Verghese, Signals, Systems, and Inference, 1st Ed.
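The result asked for in part (c), Cov(Z) = A Σ Aᵀ, can be sanity-checked numerically. The sketch below uses an arbitrary small discrete joint distribution and assumed values of A and b, computing both sides exactly:

```python
# Exact check of Cov(A·V + b) = A Σ Aᵀ for a small discrete joint distribution.
# The distribution, A, and b below are arbitrary choices for illustration.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

# Joint distribution of (X, Y): list of ((x, y), probability).
dist = [((0.0, 1.0), 0.2), ((2.0, 3.0), 0.5), ((4.0, 0.0), 0.3)]

mu = [sum(p * v[i] for v, p in dist) for i in range(2)]

# Covariance matrix Σ computed as E[(V - μ)(V - μ)ᵀ].
sigma = [[sum(p * (v[i] - mu[i]) * (v[j] - mu[j]) for v, p in dist)
          for j in range(2)] for i in range(2)]

A = [[1.0, 2.0], [0.0, 1.0]]
b = [3.0, -1.0]

# Transform each outcome, z = A v + b, then compute Cov(Z) directly.
z_dist = [((A[0][0]*x + A[0][1]*y + b[0], A[1][0]*x + A[1][1]*y + b[1]), p)
          for (x, y), p in dist]
mu_z = [sum(p * z[i] for z, p in z_dist) for i in range(2)]
cov_z = [[sum(p * (z[i] - mu_z[i]) * (z[j] - mu_z[j]) for z, p in z_dist)
          for j in range(2)] for i in range(2)]

# Theory: Cov(Z) = A Σ Aᵀ (the shift b drops out).
theory = mat_mul(mat_mul(A, sigma), transpose(A))
```

The shift b cancels because covariance only sees deviations from the mean, which is the heart of the derivation the problem asks for.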
How to Calculate Margin Percentage in Excel?
How to Calculate Margin Percentage in Excel?
Are you looking to find out how to calculate margin percentage in Excel? You’ve come to the right place! In this article, we will provide a step-by-step guide on how to calculate the margin
percentage in Excel. We will also discuss the advantages of using Excel to calculate margin percentages, as well as some tips and tricks to make the process easier. By the end of this article, you
will have all the knowledge you need to calculate margin percentage in Excel quickly and accurately. So let’s get started!
To calculate margin percentage in Excel, follow these steps:
• Open the Excel worksheet containing the data for which the margin percentage is to be calculated.
• Input the cost and revenue data into the worksheet.
• Calculate gross margin by subtracting the cost from the revenue.
• Divide the gross margin by the revenue.
• Multiply the result by 100 to calculate the margin percentage.
Calculating Margin Percentage in Excel
Margin percentage is used to measure profit and loss. It helps to analyze the financial performance of a company or organization. This can be calculated in Excel using various formulas. With this
guide, you will learn how to calculate margin percentage in Excel.
Understand the Margin Percentage Formula
The margin percentage formula is as follows:
Margin Percentage = (Gross Profit / Total Revenue) * 100
Gross profit is the difference between the cost of goods sold and the total revenue. Total revenue is the total income generated from the sale of goods and services.
Set Up the Excel Sheet
To calculate the margin percentage in Excel, you will need to set up an Excel sheet. Start by setting up two columns. In the first column, list the cost of goods sold and in the second column, list
the total revenue. Once this is done, add a third column and label it “Gross Profit”.
Calculate Gross Profit
To calculate gross profit, subtract the cost of goods sold from the total revenue. This can be done by entering a formula such as =B2-A2 in the "Gross Profit" column (assuming, for illustration, that the cost is in column A and the revenue in column B).
Calculate Margin Percentage
Once the gross profit is calculated, you can now calculate the margin percentage. To do this, enter a formula such as =C2/B2*100 in the "Margin Percentage" column (assuming, for illustration, that the gross profit is in column C and the revenue in column B).
Interpret Margin Percentage
The margin percentage can be interpreted in a few different ways. A low margin percentage indicates that the company is not making much profit on the sale of goods and services. A high margin
percentage indicates that the company is making a large profit. The higher the margin percentage, the more profitable the company is.
Calculating margin percentage in Excel can be a useful tool for analyzing the financial performance of a company or organization. With this guide, you should now have a better understanding of how to
calculate margin percentage in Excel.
Frequently Asked Questions
What is Margin Percentage?
Margin percentage is a financial metric that measures the amount of profit a company earns on the sale of a product or service relative to the cost of the product or service. It’s a useful metric for
business owners and investors to assess the profitability of a company’s operations and compare it to competitors. Margin percentage can also be used to calculate the markup on a product or service,
which is the difference between the product or service’s cost and its retail price.
How to Calculate Margin Percentage in Excel?
Calculating margin percentage in Excel is quite simple. First, you'll need to enter the cost of the product or service and the retail price into two separate cells. Then, you'll need to subtract the cost from the retail price, and divide the result by the retail price. Finally, multiply that result by 100 to get the margin percentage. For example, if the cost of the product or service is $10, and the retail price is $15, the margin percentage would be about 33.3% ($15 - $10 = $5; $5/$15 ≈ 0.333; 0.333*100 ≈ 33.3%). Dividing by the cost instead would give the markup (50% in this example), not the margin.
What is the Formula for Calculating Margin Percentage in Excel?
The formula for calculating margin percentage in Excel is (Retail Price - Cost) / Retail Price * 100. This formula can be used to calculate the margin percentage for any product or service. By contrast, (Retail Price - Cost) / Cost * 100 gives the markup percentage.
What are Some Examples of Margin Percentage?
Some examples of margin percentage include: a margin of about 9% on a product that costs $10 and is sold for $11; a 20% margin on a service that costs $200 and is sold for $250; a margin of about 33% on a product that costs $100 and is sold for $150.
What is Markup?
Markup is the difference between the cost of a product or service and its retail price. It is used together with the margin percentage, and is usually expressed in terms of a percentage. For example, if the cost of a product is $10 and the retail price is $15, the markup would be 50% ($15 - $10 = $5; $5/$10 = 0.5; 0.5*100 = 50%).
What is the Difference Between Margin and Markup?
The difference between margin and markup is that margin is a financial metric that measures the amount of profit a company earns on the sale of a product or service relative to the cost of the
product or service, while markup is the difference between the cost of a product or service and its retail price. Margin is usually expressed in terms of a percentage, while markup is usually
expressed in terms of a dollar amount.
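The distinction can be captured in two one-line functions; this is a Python sketch mirroring the definitions above, rather than an Excel formula:

```python
# Margin is profit relative to the selling price; markup is profit relative
# to the cost. A $10 item sold for $15 has a 50% markup but only a ~33% margin.

def margin_pct(cost, price):
    """Gross margin as a percentage of the selling price."""
    return (price - cost) / price * 100

def markup_pct(cost, price):
    """Markup as a percentage of the cost."""
    return (price - cost) / cost * 100
```

Keeping the two denominators straight (price for margin, cost for markup) is the whole difference between the two metrics.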
How to Calculate Profit Margin Percentage in Excel (Fastest Method)
If you’ve been looking for a way to quickly and easily calculate margin percentage in Excel, then this article has provided you with the necessary steps. From setting up your spreadsheet to
calculating the margin percentage, you now have the tools to take your knowledge of Excel to the next level. With the help of this article, you can now be sure that your margin percentage
calculations are accurate and up-to-date.
Fraction Strips Up To 20 - Free Hands-On and Fun Way To Learn
Fraction Strips Up to 20
Need to print fraction strips up to 20? You'll find them here. You may already know that fraction strips are a great tool to help kids as they learn about fractions, but if not, check out why to use fraction strips.
On this page you'll find fraction strips up to 20 that you can print off and use in color or in black-and-white.
You'll also find links to other sizes of fraction strips, including fraction strips up to 12, fraction strips up to 30, fraction strips up to 50, etc.
"Greater Than" or "Less than"
Can your students or child tell if one fraction is larger or smaller when they have unlike denominators?
Which fraction is larger or smaller -
"Thirteen-Twentieths" or "Eleven-Fifteenths"?
From the fraction strips in the picture above, it's pretty easy for kids to visualize and understand that thirteen-twentieths is less than eleven-fifteenths!
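For anyone who wants to double-check the comparison above programmatically, here is a small Python sketch using exact fractions:

```python
# Checking the comparison from the picture with Python's exact Fraction type:
# over a common denominator of 60, 13/20 = 39/60 and 11/15 = 44/60.
from fractions import Fraction

thirteen_twentieths = Fraction(13, 20)
eleven_fifteenths = Fraction(11, 15)
assert thirteen_twentieths < eleven_fifteenths
```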
The worksheets below give students practice comparing fractions. Further down page, are worksheets for equivalent fractions.
Fraction Strips Up to 20
"Comparing Fractions" Worksheets
Below is a comparing fractions worksheet for kids. Using fraction strips, students will gain a better understanding of how the sizes of fractions compare to each other.
Fraction Strips Up to 20
"Equivalent Fractions" Worksheets
Equivalent fractions help us understand how fractions can be related to each other. For example, if you have a fraction like 1/2, you can easily find the equivalent fraction 2/4 by multiplying both the numerator and denominator by 2.
Check out the equivalent fractions worksheet below. Be sure to have students use the fraction strips to help them, especially if they're new to equivalent fractions or need to build up their fraction skills!
Adding Fractions with Fraction Strips
How about adding fractions with unlike denominators using fraction strips?
Using the Fraction Strip method
• Lay out the fraction strips for both fractions side by side as the picture shows below.
• Now go to the fraction strips and locate the fractions with the denominators that will exactly cover your side-by-side fraction addition from the previous step. See picture below.
In the picture above, the "15" denominator exactly covers the addition of our two fractions. It may take some trial and error, but you will soon find it by trying the different denominators.
"Adding Fractions" with Fraction Strips - Worksheets
For some practice adding fractions with unlike denominators, here's a worksheet below.
The fraction strips found above on this page can be a helpful guide to help kids when they are just starting to learn to add fractions with unlike denominators.
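The trial-and-error search for the covering strip corresponds to finding a common denominator. Here is a small Python sketch of that idea; the fractions 1/3 and 2/5 are an assumed example, since the page's picture isn't reproduced here:

```python
# The smallest strip that lines up with both fractions is their lowest
# common denominator: denominators 3 and 5 are both covered by fifteenths.
from fractions import Fraction
from math import gcd

def lowest_common_denominator(a, b):
    return a.denominator * b.denominator // gcd(a.denominator, b.denominator)

a, b = Fraction(1, 3), Fraction(2, 5)
d = lowest_common_denominator(a, b)   # 15
total = a + b                         # 5/15 + 6/15 = 11/15
```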
"Subtracting Fractions" with Fraction Strips - Worksheets
So now let's look at subtracting fractions with unlike denominators.
Using the Fraction Strip method
You can have your students or child experiment with their fraction strips to subtract fractions without using the method shown below, but I wanted to write out the steps for this method in case they needed some guidance.
For the Method below:
Materials Needed:
• Two printed copies of the set of fraction strips
• Pair of scissors
• A ruler
• With one set of fraction strips, cut out the individual fraction pieces along the dashed lines.
• (Optional) If you have access to a laminator, it can be helpful to laminate the individual pieces after cutting them out, since they'll last longer for future use.
• Lay the fraction strip page on the table in front of you.
• Take note of the larger of the fractions you are subtracting, in this case, it's 4/12. Place your ruler vertically from the right edge of the 4/12, down the fraction strip page.
• Take note of which fraction or fractions the ruler lines up exactly with. In the picture below, I've put a brown rectangle around those fractions that line up and a red rectangle around the 4/12.
From the picture above, we see the fraction strips of 1/3, 2/6, 3/9, 5/15, and 6/18 line up exactly with our 4/12 fraction.
• Next, we take the ruler and do the same thing with the fraction we are taking away, which in this case, is 3/15. See picture below.
From the picture above, we see the fraction strip of 1/5 lines up exactly with our fraction of 3/15.
• Now we look at all of the fractions that I put rectangles around and try to find a denominator that appears both times we applied the ruler. The denominator that is the same is 15.
This 15 is called the common denominator of the two fractions we are subtracting. This lets us know that we will use the fraction strip with the denominator of 15 to do the subtraction!
We see from the picture above that 4/12 is equivalent to 5/15.
• Now we use the fraction strips with the denominator of 15 and take 3 of them away from the 5 that are there, and we have two fifteenths remaining.
If you need it, here's a short video on subtracting fractions with the fraction strip method used above.
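The walkthrough above (4/12 minus 3/15) can also be checked with exact arithmetic in a short Python sketch:

```python
# Verifying the fraction-strip subtraction: 4/12 lines up with 5/15,
# and 5/15 - 3/15 leaves 2/15.
from fractions import Fraction

assert Fraction(4, 12) == Fraction(5, 15)
assert Fraction(4, 12) - Fraction(3, 15) == Fraction(2, 15)
```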
Return from Fraction-Strips-Up-to-20 to Learn With Math Games Home
Factoring - Combined
The content focuses on advanced factoring techniques for tackling the most challenging Quant problems on the GMAT exam, emphasizing the importance of combining basic factoring methods to solve
complex expressions.
• Introduction to combining various factoring techniques to address difficult algebraic expressions.
• Examples illustrating the process of factoring out the greatest common factor, difference of squares, and ordinary quadratics.
• Demonstration of factoring complex expressions involving both numbers and variables, highlighting the necessity of recognizing patterns such as even coefficients and squares.
• Insight into the practical application of these advanced factoring skills in solving higher-level Quant problems on the GMAT, not as standalone questions but as steps within larger
problem-solving contexts.
• A preview of utilizing these factoring techniques in the context of solving algebraic equations, as part of a broader discussion on algebraic expressions.
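As a concrete illustration of combining techniques (an example of ours, not one taken from the lesson): 2x³ − 8x factors by first pulling out the greatest common factor 2x, then applying difference of squares to x² − 4, giving 2x(x − 2)(x + 2). A quick numeric check:

```python
# Combined factoring: GCF first, then difference of squares.
# 2x^3 - 8x = 2x(x^2 - 4) = 2x(x - 2)(x + 2)

def original(x):
    return 2 * x**3 - 8 * x

def factored(x):
    return 2 * x * (x - 2) * (x + 2)

# The two forms agree at every integer sample point.
assert all(original(x) == factored(x) for x in range(-10, 11))
```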
Advanced Factoring Techniques Overview
Factoring Complex Expressions
Integrating Factoring into Larger Problem Solving
Mathematical Economics
While statistics has now become an irreplaceable component of economic study, mathematical analysis in academic content is growing exponentially. Mathematical modelling is particularly helpful in analysing a number of aspects of economic theory.
Students need to learn mathematical applications in economics, and MRU is the only platform capable of teaching them in an engaging manner. Please consider this suggestion; I would share the academic resources available to me as well.
Proposed syllabus:
Techniques of constrained optimisation.
This is a rigorous treatment of the mathematical techniques used for solving constrained optimisation problems, which are basic tools of economic modelling. Topics include: Definitions of a feasible
set and of a solution, sufficient conditions for the existence of a solution, maximum value function, shadow prices, Lagrangian and Kuhn Tucker necessity and sufficiency theorems with applications in
economics, for example General Equilibrium theory, Arrow-Debreu securities and arbitrage.
Intertemporal optimisation.
Bellman approach. Euler equations. Stationary infinite horizon problems. Continuous time dynamic optimisation (optimal control). Applications, such as habit formation, the Ramsey-Cass-Koopmans model, Tobin's q, and capital taxation in an open economy, are considered.
Tools for optimal control: ordinary differential equations.
These are studied in detail and include linear 2nd order equations, phase portraits, solving linear systems, steady states and their stability.
Demystifying a key self-supervised learning technique: Non-contrastive learning
What the research is:
Self-supervised learning (where machines learn directly from whatever text, images, or other data they’re given — without relying on carefully curated and labeled data sets) is one of the most
promising areas of AI research today. But many important open questions remain about how best to teach machines without annotated data.
We’re sharing a theory that attempts to explain one of these mysteries: why so-called non-contrastive self-supervised learning often works well. With this approach, an AI system learns only from a
set of positive sample pairs. For example, the training data might contain two versions of the same photo of a cat, with the original in color and one in black and white. The model is not given any
negative examples (such as an unrelated photo of a mountain).
This is different from contrastive self-supervised learning, which includes both negative and positive examples and is one of the most effective methods to learn good representations. The loss
function of contrastive learning is intuitively simple: minimize the distance in representation space between positive sample pairs while maximizing the distance between negative sample pairs.
Non-contrastive self-supervised learning is counterintuitive, however. When trained with only positive sample pairs (and only minimizing the distance between them), it might seem like the
representation will collapse into a constant solution, where all inputs map to the same output. With a collapsed representation, the loss function would reach zero, the minimal possible value.
In fact, these models can still learn good representations. We've found that the training of a non-contrastive self-supervised learning framework converges to a useful local minimum rather than the trivial global one. Our work attempts to show why this is.
We’re also sharing a new method called DirectPred, which directly sets the predictor weight instead of training it with gradient update. Using a linear predictor, DirectPred performs on par with
existing non-contrastive self-supervised approaches like Bootstrap Your Own Latent (BYOL).
How it works:
We focused on analyzing the model’s dynamics during training: how the weights change over time and why it doesn’t collapse to trivial solutions.
To learn this, we started with a highly simplified version of a non-contrastive self-supervised learning model — in this case, one that contains a linear trunk neural network W (and its
moving-averaged version, Wa) plus an extra linear predictor, Wp. Despite this setting’s simplicity, our analysis is surprisingly consistent with real-world circumstances, where the trunk network is
highly nonlinear.
We first showed that two things are essential for non-contrastive self-supervised learning: There needs to be an extra predictor on the online side, and the gradient cannot be back-propagated on the
target side. We were able to demonstrate that if either of these conditions is not met, the model will not work. (The weight of the trunk network simply shrinks to zero, and no learning would
happen.) This phenomenon was previously verified empirically in two non-contrastive self-supervised methods, BYOL and SimSiam, but our work now shows it theoretically.
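To make those two conditions concrete, here is a deliberately tiny illustration, a scalar linear "network" of our own rather than the architecture analyzed in the paper: gradients update only the online trunk and its predictor, while the target weight changes only through the exponential moving average.

```python
# Toy sketch of one non-contrastive update step (all "networks" are scalars):
# online trunk w, predictor wp, EMA target wa; x1, x2 are two views of a sample.
# Loss = (wp*w*x1 - wa*x2)^2, with stop-gradient on the target branch.
w, wp, wa = 0.5, 1.0, 0.5
x1, x2 = 1.0, 1.1          # a positive pair (two augmented views)
lr, tau = 0.05, 0.99       # learning rate; EMA rate

def loss(w, wp, wa):
    return (wp * w * x1 - wa * x2) ** 2

before = loss(w, wp, wa)

# Gradients w.r.t. the online parameters only (wa is behind stop-gradient).
err = wp * w * x1 - wa * x2
grad_w, grad_wp = 2 * err * wp * x1, 2 * err * w * x1
w -= lr * grad_w
wp -= lr * grad_wp

# The target is updated by exponential moving average, never by the gradient.
wa = tau * wa + (1 - tau) * w

after = loss(w, wp, wa)
```

The asymmetry is the point: remove the predictor wp or let gradients flow into wa, and, per the analysis above, the weights would shrink toward the trivial solution.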
Despite our example model’s simplicity, it is still difficult to analyze and no close form of the dynamics can be derived. Although it has a linear predictor and linear trunk weight, its dynamics are
still highly nonlinear. Fortunately, we still managed to land on an interesting finding: a phenomenon called eigenspace alignment between the predictor Wp and the correlation matrix F = WXW^T, if we
assume that Wp is a symmetric matrix.
Roughly speaking, the eigenspace of a symmetric matrix characterizes how it behaves along different directions in the high-dimensional space. Our analysis shows that during the gradient update in
training, under certain conditions, the eigenspace of the predictor will gradually align with that of the correlation matrix of its input. This phenomenon is shown not only in our simple theoretical
model, but also with ResNet18 as the trunk network in real experiments with the CIFAR10 data set, where the eigenspace starts to align almost perfectly after ~50 epochs.
While the training procedure itself may be very complex and hard to interpret, the significance of this alignment is clear and easy to understand: The two matrices, the predictor and the correlation
matrix of the input, finally reach an “agreement” after the training procedure.
Two natural questions follow. First, what is the most important part of the training? Is it the process of reaching an agreement or the final agreement between the two matrices? Second, if the
predictor and correlation matrix need many epochs of training in order to reach an agreement, why not make it happen immediately?
The first question leads to our detailed analysis on why non-contrastive SSL doesn’t converge into a trivial solution: It turns out that if we choose to use gradient descent to optimize, then the
procedure itself is important to keep the weights from arriving at trivial solutions and instead to converge on a meaningful result. From this analysis, we also gain insights on the role played by
the three hyperparameters: the relative learning rate of the predictor (compared to that of the trunk), the weight decay, and the rate of exponential moving average. (More details are available in
this paper.)
With this understanding of the importance of the training procedure itself, it is reasonable to wonder whether the basic building block in deep model training, that is, the gradient update, is the
culprit. Will non-contrastive learning still work if we don’t use gradient descent and instead reach an agreement faster? It turns out we can circumvent the gradient update of the predictor and
directly set it according to the correlation matrix at each training stage. This ensures that there is always an agreement throughout the training.
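In code, setting the predictor directly from the correlation matrix can be sketched roughly as follows. The square-root spectral rescaling and the eps regularizer follow the paper's description only at a high level; the exact normalization used in DirectPred may differ, and the EMA coefficient here is a placeholder.

```python
import numpy as np

def update_correlation(F, h, rho=0.99):
    """EMA estimate of the correlation matrix from a batch of features h."""
    return rho * F + (1.0 - rho) * (h.T @ h) / len(h)

def direct_predictor(F, eps=0.1):
    """Build the predictor directly from F, sharing F's eigenspace."""
    lam, U = np.linalg.eigh(F)                       # F is symmetric PSD
    lam = np.clip(lam, 0.0, None)
    p = np.sqrt(lam / max(lam.max(), 1e-12)) + eps   # rescaled spectrum
    return (U * p) @ U.T                             # U @ diag(p) @ U.T
```

At each step one would update F from the current batch and rebuild the predictor, so the predictor and the correlation matrix agree by construction throughout training.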
Following this idea, we’ve developed our new DirectPred. Surprisingly, on ImageNet, the downstream performance obtained by pretraining with DirectPred is better than that obtained by gradient updates
on a linear predictor, and is comparable with SoTA non-contrastive SSL methods like BYOL, which uses a 2-layer nonlinear predictor with BatchNorm and ReLU nonlinearity. For 300-epoch training, the Top-5
metric is even better, by 0.2 percent, than vanilla BYOL’s. On the CIFAR-10 and STL-10 data sets, DirectPred also achieves downstream performance comparable to that of other non-contrastive SSL methods.
Why it matters:
Because it doesn’t rely on annotated data, self-supervised learning enables AI researchers to teach machines in new, more powerful ways. Machines can be trained with billions of examples, for
instance, since there is no need to hand-curate the data set. They can also learn even when annotated data simply isn’t available.
The AI research community is in the early stages of applying self-supervised learning, so it’s important to develop new techniques such as the non-contrastive methods discussed here. Compared to
contrastive learning, non-contrastive approaches are conceptually simple, and do not need a large batch size or a large memory bank to store negative samples, thereby saving both memory and
computation cost during pretraining. Furthermore, with a better theoretical insight into why non-contrastive self-supervised learning can work well, the AI research community will be able to design
new approaches to further improve the methods, and focus on the model components that matter most.
The finding that our DirectPred algorithm rivals existing non-contrastive self-supervised learning methods is also noteworthy. It shows that by improving our theoretical understanding of
non-contrastive self-supervised learning, we can achieve strong performance in practice and use our discoveries to design novel approaches. Novel self-supervised representation learning techniques
have progressed astonishingly quickly in recent years. But we hope that our work, following a long stream of scientific efforts to develop a theoretical understanding of neural networks (e.g.,
understanding “lottery tickets,” or the phenomenon of student specialization), will show other researchers that with a deep theoretical understanding of existing methods, it is possible to come up
with valuable fundamentally different new approaches.
Read the full paper
The Grand Janitor
I wasn’t very productive in blogging for the last two months. Here are a couple of worthy blog posts and news items you might find interesting.
GJB also reached the milestone of 100 posts. Thanks for your support!
Google Buys Neural Net Startup, Boosting Its Speech Recognition, Computer Vision Chops
Future Windows Phone speech recognition revealed in leaked video
Google Keep
Feel free to connect with me on Plus, LinkedIn and Twitter.
Good ASR Training System
The term “speech recognition” is a misnomer.
Why do I say that? I explained this point in an old article, “Do We Have True Open Source Dictation?”, which I wrote back in 2005. To recap: a speech recognition system consists of a Viterbi decoder, an acoustic model and a language model. You could have a great recognizer but bad accuracy if the models are bad.
So how does that relate to you, a developer/researcher of ASR? The answer is that ASR training tools and processes usually become a core asset of your inventory. In fact, I can tell you that when I need to work on acoustic model training, I need to work on it full time, and it’s one of the most absorbing things I have done.
Why is that? When you look at the development cycles of all the tasks in making an ASR system, training is the longest. With the wrong tool, it is also the most error prone. As an example, just take a look at the Sphinx forum: you will find that the majority of non-Sphinx4 questions are related to training. Like, “I can’t find the path of a certain file,” or “the whole thing just got stuck in the middle.”
Many first-time users complain with frustration (and occasionally disgust) about why it is so difficult to train a model. The frustration probably stems from the perception that “shouldn’t it be well-defined?” The answer is again no. In fact, how a model should be built (or even which model should be built) is always subject to change. It’s also one of the two subfields in ASR, at least IMO, which are still creative and exciting in research. (The other: noisy speech recognition.) What an open source software suite like Sphinx provides is a standard recipe for everyone.
Having said that, is there something we can do better in an ASR training system? There is a lot, I would say. Here are some suggestions:
1. A training experiment should be created, moved and copied with ease,
2. A training experiment should be exactly repeatable given the input is exactly the same,
3. The experimenter should be able to verify the correctness of an experiment before an experiment starts.
Ease of Creation of an Experiment
You can think of a training experiment as a recipe …… not exactly. When we read a recipe and implement it again, we humans make mistakes.
But hey! We are working with computers. Why do we need to fix small things in the recipe at all? So in a computer experiment, what we are shooting for is an experiment which can be easily created and moved around.
What does that mean? It basically means there should be no executables hardwired to one particular environment. There should also be no hardware/architecture assumptions in the training implementation. If there are, they should be hidden.
Repeatability of an Experiment
Similar to the previous point, should we allow differences when running a training experiment? The answer should be no. So one trick you hear from experienced experimenters is that you should keep the seeds of your random generators. This avoids minute differences between different runs of an experiment.
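A minimal sketch of that trick in Python; in a real training setup you would also seed numpy, your ML toolkit, and any other randomness sources, and record the seed with the experiment.

```python
import random

SEED = 20130101   # an arbitrary example seed; store it in the experiment config

def seed_everything(seed=SEED):
    """Pin down every randomness source so reruns match bit for bit."""
    random.seed(seed)
    # If numpy or an ML toolkit is in play, seed those generators too, e.g.:
    #   np.random.seed(seed); torch.manual_seed(seed)
```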
Here someone would ask: shouldn’t we allow a small difference between experiments? We are essentially running a physical experiment.
I think that’s a valid approach. But to be conscientious, you might want to run a certain experiment many times to calculate an average. In a way, that is my problem with this thinking: it is slower to repeat an experiment. E.g., what if you see a 1% absolute drop in your experiment? Do you let it go? Or do you just chalk it up as noise? Once you allow yourself to not repeat an experiment exactly, there are tons of questions you should ask.
Verifiability of an Experiment
Running an experiment sometimes takes days, so how do you make sure a run is correct? I would say you should first make sure trivial issues such as missing paths, missing models, or incorrect settings are screened out and corrected.
One of my bosses used to make a strong point of asking me to verify input paths every single time. This is a good habit and it pays dividends. Can we do similar things in our training systems?
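A pre-flight check along those lines can be tiny; the point is to fail fast before a multi-day run. The file names below are hypothetical examples, not SphinxTrain's actual layout.

```python
import os

def verify_experiment(inputs):
    """inputs maps a human-readable name to a required path; fail fast."""
    missing = sorted(name for name, path in inputs.items()
                     if not os.path.exists(path))
    if missing:
        raise FileNotFoundError("missing inputs: " + ", ".join(missing))

# Example (hypothetical file names, not SphinxTrain's actual layout):
# verify_experiment({"dictionary": "etc/my.dic", "transcripts": "etc/train.trans"})
```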
Apply it on Open Source
What I mentioned above is highly influenced by my experience in the field. I personally found that sites which have great infrastructure for transferring experiments between developers are the strongest and fastest growing.
To put all these ideas into open source would mean a very different development paradigm. For example, do we want a centralized experiment database which everyone shares? Do we want to put common resources, such as existing parametrized inputs (such as MFCCs), somewhere common for everyone? Should we integrate the retrieval of these inputs into our experiment recipes?
Those are important questions. In a way, I think they are the most important type of questions we should ask in open source, because, regardless of all the volunteer effort, the performance of open source models is still lagging behind that of commercial models. I believe it is an issue of methodology.
sphinxbase 0.8 and SphinxTrain 1.08
I have done some analysis on sphinxbase 0.8 and SphinxTrain 1.0.8 and tried to understand whether they are very different from sphinxbase 0.7 and SphinxTrain 1.0.7. I don’t see big differences, but it is still a good idea to upgrade.
• (sphinxbase) The bug in cmd_ln.c is a must-fix. Basically, the freeing was wrong for all ARG_STRING_LIST arguments. So chances are you will get a crash when someone specifies a wrong argument name and cmd_ln.c forces an exit, which eventually leads to a cmd_ln_val_free.
• (sphinxbase) There were also a couple of changes in the fsg tools. Mostly I feel those are rewrites.
• (SphinxTrain) SphinxTrain, on the other hand, has new tools such as the g2p framework. Those are mostly openfst-based tools, and it’s worthwhile to put them into SphinxTrain.
One final note here: there is a tendency for CMUSphinx, in general, to turn to C++. C++ is something I love and hate. It can sometimes be nasty, especially when dealing with compilation. At the same time, using C to emulate OOP features is quite painful. So my hope is that we use a subset of C++ which is robust across different compiler versions.
January 2013 Write-up
Miraculously, I still have some momentum for this blog, and I have kept to the daily posting schedule.
Here is a write-up for this month. Feel free to look at this post on how I plan to write this blog:
Some Vision of the Grand Janitor’s Blog
Sphinx’ Tutorials and Commentaries
SphinxTrain1.07’s bw:
Commentary on SphinxTrain1.07’s bw (Part I)
Commentary on SphinxTrain1.07’s bw (Part II)
Part I describes the high-level layout; Part II describes how the state network was built.
Acoustic Score and Its Sign
Subword Units and their Occasionally Non-Trivial Meanings
Sphinx 4 from a C background : Material for Learning
Goldman Sachs not Liable
Aaron Swartz……
Other writings:
On Kurzweil : a perspective of an ASR practitioner
I was once asked by a fellow who didn’t work in ASR how the estimation algorithms in speech recognition work. That’s a tough question to answer. At a high level, you can explain how the properties of the Q-function allow an increase in likelihood after each re-estimation. You can also explain how the Baum-Welch algorithm is derived from the Q-function, how the estimation algorithm can eventually be expressed in Greek letters, and naturally link it to the alpha and beta passes. Finally, you can also just write down the re-estimation formulae and let people puzzle over them.
All are options, but this is not what I wanted, nor what the fellow wanted. We hoped that somehow there was a single point of entry to understanding the Baum-Welch algorithm. Once we got there, we would grok. Unfortunately, that’s impossible for Baum-Welch. It is really a rather deep algorithm, which takes several types of understanding.
In this post, I narrow the discussion down to just Baum-Welch in SphinxTrain 1.07. I will focus on the coding aspect of the program. Two stresses here:
1. How is Baum-Welch in practice in speech recognition different from the theory?
2. How are different parts of the theory mapped to the actual code?
In fact, in Part I, I will just describe the high-level organization of the Baum-Welch algorithm in bw. I assume the readers know what the Baum-Welch algorithm is. In Part II, I will focus on the low-level functions such as next_utt_state, forward, backward_update, accum_global.
(At a certain point, I might write another post just to describe Baum-Welch. That will help my math as well……)
Unlike the post on setting up Sphinx4, this is not a post for the faint of heart. So skip it if you feel dizzy.
Some Fun Reading Before You Move On
Before you move on, here are three references which I found highly useful to understand Baum-Welch in speech recognition. They are
1. L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Chapter 6, “Theory and Implementation of Hidden Markov Model,” p. 343 and p. 369. Comments: in general, the whole of Chapter 6 is essential to understanding HMM-based speech recognition. There is also a full derivation of the re-estimation formulae. Unfortunately, it only gives the formula without proof for the most important case, in which the observation probability is expressed as a Gaussian Mixture Model (GMM).
2. X. D. Huang, A. Acero and H. W. Hon, Spoken Language Processing. Chapter 8, “Hidden Markov Models.” Comments: written by one of the authors of Sphinx 2, Xuedong Huang, the book is a very good review of spoken language systems. Chapter 8 in particular has detailed proofs of all the re-estimation algorithms. If you want to choose one book to buy in speech recognition, this is the one. The only thing I would say is that the typeface of the Greek letters is kind of ugly.
3. X. D. Huang, Y. Ariki, M. A. Jack, Hidden Markov Models for Speech Recognition. Chapters 5, 6, 7. Comments: again by Xuedong Huang, I think this has the most detailed derivations I have ever seen for continuous HMMs in books. (There might be good papers I don’t know of.) Related to Sphinx, it has a chapter on semi-continuous HMMs (SCHMM) as well.
bw also features rather nice code commentary. My understanding is that it was mostly written by Eric Thayer, who put great effort into pulling multiple fragmented codebases together to form the embryo of today’s SphinxTrain.
Baum-Welch algorithm in Theory
Now that you have read the references: at a very high level, what does a program for Baum-Welch estimation do? To summarize, we can think of it this way:
* For each training utterance
1. Build an HMM-network to represent it.
2. Run Forward Algorithm
3. Run Backward Algorithm
4. From the forward/backward passes, calculate the statistics (or counts, or posterior scores, depending on what you call them).
* After we run through all utterances, estimate the parameters (means, variances, transition probabilities, etc.) from the statistics.
Sounds simple? I actually skipped a lot of details here but this is the big picture.
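To make the loop above concrete, here is a toy, runnable rendering of one utterance's statistics for a discrete-output HMM (NumPy). Real trainers like bw add beams, scaling, and GMM output densities; none of that is here, just the textbook skeleton.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Per-frame state posteriors (the 'counts') for one utterance."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                    # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):           # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                     # combine, then normalize per frame
    return gamma / gamma.sum(axis=1, keepdims=True)

# Two emitting states, two output symbols, one three-frame "utterance".
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition probabilities
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # output probabilities
pi = np.array([0.5, 0.5])                # initial distribution
gamma = forward_backward(A, B, pi, [0, 1, 0])
```

Accumulating gamma (and its pairwise counterpart, xi) over all utterances and renormalizing yields the re-estimated parameters.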
Baum-Welch algorithm in Practice
There are several practical concerns on doing Baum-Welch in practice. These are particularly important when it is implemented for speech recognition.
1. Scaling of alpha/beta scores: this is explained in detail in Rabiner’s book (pp. 365-368). The gist is that when you calculate the alpha or beta scores, they can easily exceed the precision range of any machine. It turns out there is a beautiful way to avoid this problem.
2. Multiple observation sequences, or streams: this is a little bit archaic, but there is still some research on having multiple streams of features for speech recognition (e.g., combining the lip signal and the speech signal).
3. Speed: most implementations you see are not based on a full run of the forward or backward algorithm. To improve speed, most implementations use a beam to constrain the search.
4. Different types of states: you can have HMM states which are emitting or non-emitting. How you handle them complicates the implementation.
You will see that bw has taken care of a lot of these practical issues. In my opinion, that is the reason why the whole program is a little bloated (5,000 lines total).
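For point 1, the standard remedy (Rabiner, pp. 365-368) is to renormalize alpha at every frame and accumulate the logs of the scale factors; the total log-likelihood then comes out exactly, without underflow. A minimal sketch:

```python
import numpy as np

def scaled_forward(A, B, pi, obs):
    """Log-likelihood of obs via the forward pass with per-frame scaling."""
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()            # scale factor for this frame
        log_like += np.log(c)      # the c's multiply out to P(obs)
        alpha = alpha / c          # alpha now sums to 1: no underflow
    return log_like
```

The same scale factors are reused in the backward pass so that the posteriors come out correctly normalized.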
Tracing of bw: High Level
Now we get to the code level. I will follow the version of bw from SphinxTrain 1.07. I don’t see many changes in 1.08 yet, so this tracing is likely to remain applicable for a while.
I will organize the tracing this way: first I will go through the high-level flow, then I will describe some interesting places in the code by line number.
main() – src/programs/bw/main.c
This is the high level of main.c (lines 1903 to 1914):
main ->
if it is not mmie training
We will first go forward with main_initialize()
-> initialize the model inventory, essentially means 4 things, means (mean) variances (var), transition matrices (tmat), mixture weights (mixw).
-> a lexicon (or .... a dictionary)
-> model definition
-> feature vector type
-> lda (lda matrix)
-> cmn and agc
-> svspec
-> codebook definition (ts2cb)
-> mllr for SAT type of training.
Interesting codes:
• Line 359: extract a diagonal matrix if we specified a full one.
• Line 380: precompute the Gaussian distributions. That usually means precomputing the constant term, and it almost always makes the code faster.
• Line 390: specify the type of re-estimation.
• Line 481: checkpointing. I never use this one, but it seems to be something that allows the training to restart if the network fails.
• Lines 546 to 577: do the MLLR transformation of the models, for SAT-type training.
(Note to myself: got to understand why svspec was included in the code.)
Now let’s go to main_reestimate. In a nutshell, this is where the looping occurs.
-> for every utterance.
-> corpus_get_generic_featurevec (get feature vector (mfc))
-> feat_s2mfc2feat_live (get the feature vector)
-> corpus_get_sent (get the transcription)
-> corpus_get_phseg (get the phoneme segmentation.)
-> pdumpfn (open a dump file, this is more related Dave's constrained Baum-Welch research)
-> next_utt_states() /*create the state sequence network. One key function in bw. I will trace it more in detail. */
-> if it is not in Viterbi mode.
-> baum_welch_update() /*i.e. Baum-Welch update */
-> viterbi() /* i.e. Viterbi update */
Interesting code:
• Line 702: several parameters for the algorithm are initialized, including abeam, bbeam, spthres, maxuttlen.
□ abeam and bbeam are essentially the beam sizes which control the forward and backward algorithms.
□ maxuttlen: this controls how long an utterance can be when read in. These days, I seldom see this parameter set to anything other than 0 (i.e., no limit).
□ spthres: “State posterior probability floor for reestimation. States below this are not counted.” Another parameter I seldom use……
-> for each utterance
forward() (forward.c) (<This is where the forward algorithm is -Very complicated. 700 lines)
if -outphsegdir is specified , dump a phoneme segmentation.
backward_update() (backward.c Do backward algorithm and also update the accumulator)
(<- This is even more complicated 1400 lines)
-> accum_global() (Global accumulation.)
(<- Sort of long, but it's more trivial than forward and backwrd.)
Now this is the last function for today. If you look back at the section “Baum-Welch in Theory,” you will notice how the procedure maps onto Sphinx. Several thoughts:
1. One thing to notice is that forward, backward_update and accum_global need to work together. But you have to realize that all of these are long, complicated functions. So, like next_utt_state, I will separate the discussion into another post.
2. Another comment here: backward_update not only carries out the backward pass, it also updates the statistics.
Conclusion of this post
In this post, I went through a high-level description of the Baum-Welch algorithm as well as how the theory is mapped onto the C codebase. In my next post (will there be one?), I will focus on the low-level functions such as next_utt_state, forward, backward_update and accum_global.
Feel free to comment.
Me and CMU Sphinx
As I update this blog more frequently, I have noticed more and more people being directed here. Naturally, there are many questions about some of my past work, for example, “Are you still answering questions in the CMUSphinx forum?” and general requests for certain tutorials. So I guess it is time to clarify my current position and what I plan to do in the future.
Yes, I am planning to work on Sphinx again, but no, I probably won’t be a maintainer-at-large any more. Nick has proven himself to be the most awesome maintainer in our history. Through his stewardship, Sphinx has prospered in the last couple of years. That’s what I hope for and that’s what we all hope for.
So for that reason, you probably won’t see me much in the forum, answering questions. Rather I will spend most of my time to implement, to experiment and to get some work done.
There are many things that ought to be done in Sphinx. Here is my top-5 list:
1. Sphinx 4 maintenance and refactoring
2. PocketSphinx’s maintenance
3. An HTKbook-like documentation : i.e. Hieroglyphs.
4. Regression tests on all tools in SphinxTrain.
5. In general, modernization of Sphinx software, such as using WFST-based approach.
This is not a small undertaking, so I am planning to spend a lot of time relearning the software. Yes, you heard that right: learning the software. In general, I found myself very ignorant of a lot of the software details of Sphinx in 2012. There have been many changes. The parts I have really caught up on are probably sphinxbase, sphinx3 and SphinxTrain. On PocketSphinx and Sphinx4, I need to learn a lot.
That is why, in this blog, you will see a lot of posts about my status in learning a certain piece of speech recognition software. Some could be minute details. I share them because people can figure out a lot by going through my status updates. From time to time, I will also pull these posts together to form a tutorial post.
Before I leave, let me digress and talk about this blog a little: other than posts on speech recognition, I will also post a lot about programming, languages and other technology-related stuff. Part of it is that I am interested in many things. The other part is that I feel working on speech recognition actually requires one to understand a lot about programming and languages. This might also attract a wider audience in the future.
In any case, I hope I can keep on. And hope you enjoy my articles!
The Grand Janitor’s Blog
For the last year or so, I have been intermittently playing with several components of CMU Sphinx. It has been an intermittent effort because I wear several hats at Voci.
I find myself going back to Sphinx more and more often. Being more experienced, I am approaching the project again carefully: tracing code, taking notes and understanding what has been going on. It has been a humbling experience – speech recognition has changed, and Sphinx has improved more than I could have imagined.
Maintaining sphinx3 (and occasionally dipping into SphinxTrain) was one of the greatest experiences of my life. Unfortunately, not many of my friends know that. So Sphinx and I were pretty much disconnected for several years.
So what I plan to do is to reconnect. One thing I have done throughout the last 5 years is blogging, so my first goal is to revamp this page.
Let’s start small: I just restarted RSS feeds. You may also see some cross links to my other two blogs, Cumulomaniac, a site on my take of life, Hong Kong affairs as well as other semi-brainy topics,
and 333 weeks, a chronicle of my thoughts on technical management as well as startup business.
Both sites are in Chinese and I have been actively working on them and tried to update weekly.
So why do I keep this blog, then? Obviously, the reason is speech recognition. Though I have started to realize that doing speech recognition involves much more than just writing a speech recognizer. So from now on, I will also post on other topics such as natural language processing, video processing, and low-level programming.
This means it is a very niche blog. Can I keep it up at all? I don’t know. As with my other blogs, I will try to write around 50 posts first and see if there is any momentum.
The CBO Multipliers Project: A Methodology for Analyzing the Effects of Alternative Economic Policies
A CBO Technical Analysis Paper
August 1977
U.S. CONGRESS
WASHINGTON, D.C.
The Congress of the United States
Congressional Budget Office
For sale by the Superintendent of Documents, U.S. Government Printing Office
Washington, D.C. 20402
Stock No. 052-070-04180-3
This study presents some of the technical material used by the Congressional Budget Office in analyzing the impact of alternative economic policies. It was written by Mary Kay Plantes and Frank de Leeuw. Nancy Morawetz and Michael Owen performed many of the model simulations. The paper was typed by Dorothy J. Kornegay and edited by Patricia H. Johnston.
Alice M. Rivlin
Chapter I. Measuring Policy Impacts
Chapter II. Factoring Fiscal Policy Multipliers
    Basic Multipliers Model
    Key Components of Fiscal Policy
    The Fiscal Multiplier Formula
Chapter III. Some Extensions of the Multipliers
    Monetary Policy
    Corporate Tax Changes
Chapter IV. Price Changes Versus Real Changes
Chapter V. Other Relationships Used in Measuring Policy Impacts
Tables:
    Quarterly Values of a1
    Quarterly Values of b1
    Estimated Values of b1, b2, c1, and c2
    Quarterly Values of a2 Used in the Basic Multipliers Model
    Quarterly Values of a3 Used in the Basic Multipliers Model
    Change in GNP Resulting From a Permanent Increment in Policy Instruments
    Changes in 3-month Treasury Bill Rate and GNP Resulting from a Permanent Increment in Federal Government Purchases and a Permanent Increment in Unborrowed Reserves
    Quarterly Values of d1, d2, and d3 for Unborrowed Reserve Changes
    Changes in GNP Resulting From a Permanent Increment in Unborrowed Reserves
    Corporate Tax Cut Parameters Used in the Basic Multipliers Model
    Changes in GNP Resulting From a Permanent Increment in Corporate Tax Payments
    Division of Government Purchases GNP$ Multiplier, a1, and a2 Into Real and Nominal Effects for Four Econometric Models
Measuring Policy Impacts
At the peak of the budget season, CBO is called upon several times a week to estimate the impact on the economy of a change in the federal budget. These estimates are quite sensitive to which macroeconomic model or other procedure is used to prepare them. The multipliers project is an attempt to understand and deal with the diversity of results that various models may produce.
The project consists of (a) systematic comparison of econometric model estimates of the impact of changes in fiscal policies, and (b) selection of a uniform set of procedures for calculating policy impacts. The systematic comparison involves the factoring of an overall GNP multiplier—the amount of GNP generated per dollar of a spending increase or a tax cut—into a number of key components, including:
• the ratio of the change in consumption to the change in disposable income;
• the ratio of the change in investment in housing, plant, and equipment to the change in GNP;
• the ratio of the change in "other GNP" (inventories, state and local purchases, net exports) to the change in GNP;
• the fraction of a change in GNP going into wages and salaries and other labor income and nonwage income;
• the fraction of a change in GNP serving to reduce transfer payments; and
• the fraction of a change in wages and salaries and other labor income and nonwage income going into personal tax payments.
With the aid of this factoring, differences in policy multipliers among models can be traced back to differences in one or more of these key ratios.
The selection of a uniform set of procedures begins with choosing a value (or, more precisely, a set of quarterly values) for each of these key ratios on the basis of reasonableness, other empirical studies, and, when necessary, simply averaging across models. The values of the key ratios determine CBO's overall GNP multipliers.
To go beyond GNP to real and price effects, employment impacts, etc., CBO makes use of a number of simple relationships, including Okun's law and a two-equation wage-price model. This paper focuses largely on the first step in the multipliers project, the factoring of GNP multipliers into their major components. The final section discusses briefly the other relationships used by CBO.
At the peak of the budget season, CBO is called on several times a week to evaluate the impact on the economy of a change in the federal budget. Occasionally clients ask for a specific econometric model, but usually the choice of methodology is left to CBO. Most clients are not aware of how sensitive the results can be to the model chosen. For example, one well-known econometric model says a $10 billion annual sustained increase in government purchases causes a decrease in prices lasting for one and one-half years under current conditions; another says that the same policy action causes a 0.3 percentage point increase in the rate of inflation during the same interval. While continually reminding users of the uncertainty of policy impact estimates, CBO also strives to make sense out of the diverse estimates.
This diversity of results is the basic reason for the multipliers project. Not only does CBO need a uniform set of procedures for measuring policy impacts, but it ought to understand why models now widely used on Capitol Hill differ so much in their policy implications. Furthermore, CBO is frequently called on to deal with quite specialized fiscal instruments, such as public service employment and countercyclical revenue sharing, that are not incorporated in most models. It is highly useful to have a procedure that enables CBO to standardize the treatment of the rate of fiscal displacement in public service employment, the average salary paid to public employees, and other matters relevant to these instruments.
The multipliers project consists of (a) systematic comparison of econometric model results for step changes in fiscal policies, and (b) selection of a uniform set of procedures for calculating policy impacts. The systematic comparison involves factoring an overall Gross National Product (GNP) multiplier into a number of key components—for example, the ratio of investment change to GNP change or the ratio of consumption change to a change in disposable income. (The relation of overall multipliers to these key ratios is shown in equation (8).) With the aid of this factoring, differences in policy multipliers among models can be traced back to differences in one or more of these key ratios.
The selection of a uniform set of procedures begins with choosing a value (or, more precisely, a set of quarterly values) of each of these key ratios on the basis of reasonableness, other empirical studies, and, when necessary, simple averaging across models. The values of the key ratios determine the size of the overall GNP multipliers. To go beyond current-dollar GNP to output price effects, employment impacts, etc., CBO uses a number of simple relationships, including Okun's law and a two-equation wage-price model.
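For illustration, Okun's law in its textbook "gap" form can be written as a one-line rule. The coefficient of 3 is Okun's original rule of thumb and the potential-growth figure below is invented, so neither should be read as CBO's actual parameters.

```python
def unemployment_change(real_gnp_growth, potential_growth=3.5, okun_coef=3.0):
    """Change in the unemployment rate (percentage points) over a year.

    real_gnp_growth and potential_growth are annual percent growth rates.
    Both default values are illustrative placeholders, not CBO estimates.
    """
    return -(real_gnp_growth - potential_growth) / okun_coef
```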
Most of the remainder of this paper describes the first step in the multipliers project: the factoring of GNP multipliers into their major components and the comparison of these components across models. The final section discusses briefly the other relationships that are used in addition to the GNP multipliers in measuring policy impacts.
The model simulation results that have been used as the starting point of this analysis depend on initial conditions. They depend, for example, on whether interest rate levels are high or low and whether there is a lot of excess capacity in the economy. The multipliers presented in this paper refer to the conditions of the U.S. economy as of early 1977. The analysis needs to be redone whenever there is a substantial change in initial conditions. CBO's tentative plan is to redo it once a year.
A simple income-expenditure model is the starting point
for factoring GNP multipliers into key components. This chapter
presents the simple model; the following chapter presents a
number of extensions.
The basic model, which is shown on the following page,
consists of an identity expressing GNP as the sum of
five components and six additional equations relating changes in
the components of GNP to their determinants. The parameter
estimates for the income-expenditure model are derived from
simulating step changes in fiscal policy in full-scale econometric models. While the basic multiplier model itself is simple
in structure, each of its coefficients summarizes a wide range
of price and wealth responses as well as income-expenditure
relationships incorporated in the full-scale models.
For example, one of the coefficients in the simple model
(a2) is the ratio of a change in fixed investment to a change
in GNP.
The value of this ratio in a particular model is not
simply a naive accelerator coefficient but rather reflects the
net outcome of all the investment determinants in that model,
including accelerator-type forces, cost of capital components,
and a range of other influences (all as of early 1977).
The ratio could be less than zero in a model with very strong
"crowding-out" forces, or it could be greater than zero in a
model with strong accelerator-type forces.
The same is true
of the other coefficients in the simple income-expenditure
model; they too summarize net outcomes of complex influences
represented in the actual models CBO has used.
The coefficients of the basic model are thus reduced-form
rather than structural relationships.
While they are reduced-form relationships, they are nonetheless much closer to observable
economic magnitudes—for example, the share of personal income in
GNP—than are the fiscal policy multipliers of which they are
components. It is scarcely possible to develop any a priori
judgments to which to compare policy multipliers, but it is
possible to form judgments about some of the ratios investigated
in this study.
(1) ΔGNP$(t) = ΔC$(t) + ΔFI$(t) + ΔGG$(t) + ΔGE$(t) + ΔX$(t)
(2) ΔC$(t) = a1t (ΔINC$(t) + ΔTR$(t) - ΔTP$(t))
(3) ΔINC$(t) = b1t ΔGNP$(t) + c1t ΔGE$(t)
(4) ΔTR$(t) = -b2t ΔGNP$(t) - c2t ΔGE$(t) + ΔTRO$(t)
(5) ΔTP$(t) = b3t ΔINC$(t) - c3t ΔGE$(t) + ΔTPO$(t)
(6) ΔFI$(t) = a2t [ΔGNP$(t) - ΔGE$(t)]
(7) ΔX$(t) = a3t ΔGNP$(t) + ΔXO$(t)

(all variables are in current dollars)

GNP$ = Gross National Product
C$ = Personal consumption
FI$ = Fixed investment (business and residential)
GG$ = Federal government purchases except public employment
GE$ = Public employment spending net of displacement, federal and state and local (displaced funds used for tax reduction or general state and local spending enter as TP$ or GG$)
X$ = Rest of GNP$: inventory investment, net exports, state and local spending other than public service employment
INC$ = Wages and salaries and other labor income and nonwage income
TR$ = Federal transfer payments
TP$ = Federal personal tax revenues (including employee payroll taxes)
TRO$ = Intercept, transfer payments
TPO$ = Intercept, personal tax revenues
XO$ = Intercept, other spending
t = Time, in quarters
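To make the system concrete, here is a minimal sketch — not CBO's actual code, and with purely illustrative coefficient values rather than CBO estimates — that solves the seven-equation system for the one-quarter change in GNP, both in closed form and by fixed-point iteration (subscript t is dropped, since one quarter's coefficients are passed in at a time):

```python
# Sketch of the basic multipliers model, equations (1)-(7).
# All coefficient values supplied by the caller are placeholders.

def gnp_change(a1, a2, a3, b1, b2, b3, c1, c2, c3,
               dGG=0.0, dGE=0.0, dTRO=0.0, dTPO=0.0, dXO=0.0):
    """Closed-form GNP change implied by solving the seven equations."""
    numerator = (dGG + dXO + a1 * (dTRO - dTPO)
                 + (1 + a1 * (c1 * (1 - b3) - c2 + c3) - a2) * dGE)
    denominator = 1 - (a1 * (b1 * (1 - b3) - b2) + a2 + a3)
    return numerator / denominator

def gnp_change_iterative(a1, a2, a3, b1, b2, b3, c1, c2, c3,
                         dGG=0.0, dGE=0.0, dTRO=0.0, dTPO=0.0,
                         dXO=0.0, n_iter=200):
    """Fixed-point iteration on the same system, as a cross-check."""
    dGNP = 0.0
    for _ in range(n_iter):
        dINC = b1 * dGNP + c1 * dGE                # equation (3)
        dTR = -b2 * dGNP - c2 * dGE + dTRO         # equation (4)
        dTP = b3 * dINC - c3 * dGE + dTPO          # equation (5)
        dC = a1 * (dINC + dTR - dTP)               # equation (2)
        dFI = a2 * (dGNP - dGE)                    # equation (6)
        dX = a3 * dGNP + dXO                       # equation (7)
        dGNP = dC + dFI + dGG + dGE + dX           # equation (1)
    return dGNP
```

With illustrative values a1 = 0.5, b1 = 0.7, b2 = 0.1, b3 = 0.2, a2 = 0.1, a3 = 0.05, and the c's set to zero, a $1 change in government purchases yields a multiplier of 1/0.62, or about 1.61.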
Simulations of fiscal policy in full-scale models are used
to derive coefficients for the simple model.
Each econometric
model simulation yields a specific set of values for the key
components that together capture the total change in GNP implied
by that model.
The GNP multiplier is an algebraic function of
the coefficients of the basic model, called "key components" of
the multiplier.
As the tables below show, wide disparities sometimes
exist between different estimates of the key components.
Frequently, an unusually high or low estimate can be traced
to an unreasonable structural specification in the underlying
model. Values of key components chosen for this policy simulation work were based on what CBO felt were the more reliable
model estimates. In cases where CBO had little insight into the
reliability of values derived from the econometric models,
simple averaging across models was necessary.
The key components depend on the period of time over
which relationships between components of GNP and their determinants are measured.
The (approximate) marginal propensity to
consume, a1, for example, can refer to consumption changes
divided by income changes during the first quarter of a sustained change in fiscal policy, during the second quarter, or
during a later quarter. CBO's procedure has been to derive
quarterly values for each of the coefficients for the first
through the tenth quarter. In effect, the model is 10 different
models, which together measure the dynamic adjustment path
accompanying a policy change.
Equation 1, the GNP identity, expresses changes in GNP
as the sum of changes in consumption, fixed investment, government spending on goods and services other than public employment
programs, government spending on public employment programs, and
"other GNP" (namely, inventory investment, net exports, and state
and local spending other than federally financed public service employment).
The next block of four equations relates to consumption
and its determinants.
The first, (2), expresses changes in
consumption as a fraction of changes in disposable income.
Changes in disposable income include changes in wages and salaries plus other labor income plus nonwage income, plus changes in
personal transfer receipts, minus changes in personal tax payments. Values of the parameter of this equation, a1, are shown
quarter by quarter for five econometric models—Data Resources,
Inc. (DRI), Wharton, Chase, MIT-Penn (MPS), and Fair—and for
the multipliers model in Table 1. 1/ The latter values can be
adjusted if the policy change is targeted on population groups
whose (approximate) marginal propensities to spend are significantly different from the average values reported here.
TABLE 1. QUARTERLY VALUES OF a1 [values for the five econometric models and the basic multipliers model not reproduced]
1/ Parameter values from econometric models reported in Tables
1, 2, 4, and 5 were derived by simulating a change in
federal government purchases, holding the path of unborrowed
reserves constant.
For a number of models, results vary
significantly with the monetary variable selected as exogenous.
Selecting unborrowed reserves implies that both
interest rates and the money supply rise moderately in
response to an expansionary fiscal move.
The quarterly values of a1 are considerably lower than the
average ratio of total consumption to total disposable personal
income. This difference arises from the existence of wealth-induced consumption flows that are relatively insensitive to
changes in disposable income and, therefore, not measured by a1.
The ratio of total consumption, which includes wealth-induced consumption, to disposable income averaged over 0.9 for
both the past five- and ten-year periods.
The next equation in the consumption block, (3), relates
changes in wages and salaries and other labor income and nonwage
income to changes in GNP and changes in public service employment
spending. The first parameter of this equation, b1, is estimated on the basis of econometric model results that are shown in
Table 2.
TABLE 2. QUARTERLY VALUES OF b1 [table values not reproduced]
The share of wages and salaries, other labor income, and
nonwage income in GNP has averaged about .75 over the past five- and ten-year periods. Quarterly values of b1 used in the multipliers model are initially lower than the average share due
to the disproportionate rise in profits immediately following a
policy-induced income change. In later quarters, nonprofit
income shares rise above their average value.
This occurs
because depreciation, which is subtracted from GNP to obtain
national income levels, is very slow to change in response to
changes in GNP.
Nonprofit income shares will return to their
average value after the depreciation adjustment is complete.
The second parameter in the income equation, c1, reflects
the difference between the fraction of public employment programs
going into wage income (more precisely, wages and salaries plus
other labor income plus nonwage income), and the fraction of
other components of GNP that goes into wage income. Its value is
not estimated from econometric models but rather is estimated on
the basis of experience under public employment programs and
legislative provisions of such programs.
The next equation, (4), in the consumption block relates
changes in transfer payments to changes in GNP and outlays on
public employment programs. Estimates of the first parameter in
this equation, b2, were derived from econometric model results
and empirical studies of transfer payments. The second parameter
in this equation, c2, represents the difference between the
transfer reduction rate of public employment programs and that of
other changes in GNP. Like c1, it is estimated on the basis of
experience under public service employment and program design.
A public employment program targeted at youth,
for example, would have a lower value of c2 than one targeted
at adults. The final term in the equation, ΔTRO$, measures
policy-induced changes in transfer payments.
The final equation in the consumption block, (5), relates
changes in personal tax payments to changes in GNP and outlays on
public employment programs.
Its specification is analogous to
that discussed above for transfer payments and its parameters
were estimated in an identical fashion. Table 3 lists the values
of equation (4) and (5) parameters used in the basic multipliers model.

TABLE 3. ESTIMATED VALUES OF b2, b3, c2, a/ AND c3 a/ USED IN THE BASIC MULTIPLIERS MODEL [table values not reproduced]
a/ c2 and c3 vary according to program design. Values reported in Table 3 are used for public employment programs
that are directed at long-term unemployed adults.
Equation (6) expresses changes in fixed investment (business
and residential) as a fraction of changes in GNP other than
government employment spending. Quarterly values of this ratio,
a2, are shown for five econometric models and the basic multipliers model in Table 4. Historically, the share of fixed
investment in GNP has averaged 0.14 in the past five- and ten-year periods.
Quarterly values of a2 may be expected to approach the average share after accelerator influences, which
raise the value of a2 above the average share, cease operating.
TABLE 4. QUARTERLY VALUES OF a2 [values for the five econometric models and the basic multipliers model not reproduced]
The final equation, (7), in the basic multipliers model
relates changes in the remaining GNP components—inventory
investment, net exports, and state and local spending other than
federally financed public service employment—to changes in GNP.
The intercept-change term reflects exogenous changes in net
exports and state and local spending.
Table 5 lists quarterly
values of the parameter of this equation, a3, for five econometric models and the basic multipliers model.
The models yield different values for key parameters
when the policy simulated is a change in federal government
purchases rather than when the policy is a change in personal tax
rates. The differences are significant, however, only
with respect to a3. This is due to the more rapid change
in import spending that occurs following an exogenous change in
personal taxes. The estimated values of a3 used in the multipliers model differ, therefore, depending on whether the policy
being considered is similar to a tax change or similar to a
purchase change. Table 5 reports only values of a3 based on a
purchase change for the five models, but presents both sets of
values for the multipliers model.
The share of the remaining GNP components in GNP has
averaged 0.144 over the past five years and 0.138 over the
past ten years. These ratios are significantly higher than
quarterly values of a3 reported in Table 5, principally because
of the relative insensitivity of state and local spending to
changes in GNP.
The marginal response of these components to
changes in GNP, in other words, has been much smaller than the
average response.
TABLE 5. QUARTERLY VALUES OF a3 a/ [table values not reproduced]
Values reported for models 1 through 5 are based on simulations of a change in federal purchases.
The seven equations listed in the basic multipliers model on
page 4 can be combined through simple algebra to yield the
following multiplier expression for standard changes in fiscal
policy:

(8) ΔGNP$(t) = {ΔGG$(t) + ΔXO$(t) + a1t[ΔTRO$(t) - ΔTPO$(t)] + [1 + a1t(c1t(1 - b3t) - c2t + c3t) - a2t] ΔGE$(t)} / {1 - [a1t(b1t(1 - b3t) - b2t) + a2t + a3t]}
The first expression on the right-hand side of the equation
is the multiplier for changes in government purchases other than
public employment programs. It depends on six of the parameters
of the model, namely:
a1t: the ratio of a change in consumption to a change in disposable income
a2t: the ratio of a change in investment to a change in GNP
a3t: the ratio of a change in "other GNP" to a change in GNP
b1t: the fraction of a change in GNP going into wages and salaries and other labor income and nonwage income
b2t: the fraction of a change in GNP serving to reduce transfer payments
b3t: the fraction of a change in wages and salaries and other labor income and nonwage income going into personal tax payments
The government spending multiplier also applies to ΔXO$,
changes in the intercept term of "other GNP"; that is, to
exogenous changes in exports, inventory investment, or state and
local spending.
The multiplier for shifts in personal taxes and transfers is
equal to a1 times the GNP multiplier, a common result in income-expenditure models.
The multiplier for nondisplaced changes in government
employment program spending is a bit more complex. It is equal
to the basic government spending multiplier times

1 + a1t(c1t(1 - b3t) - c2t + c3t) - a2t

If c1 = c2 = c3 = 0, then the expression for the government
employment multiplier is slightly less than that for the government purchases multiplier due to the absence of any direct
inducement to fixed investment from public employment spending.
The two multipliers differ further to the extent that (a) a
higher fraction of spending on government employment (higher by
c1) goes into compensation than is the case for changes in the
rest of GNP; (b) a higher fraction of spending on government
employment (higher by c2) is offset by a reduction in transfer
payments than is the case for other components of GNP; and (c) a
lower marginal personal tax rate (lower by c3) is applicable to
government employment income than is the case for the rest of
GNP. These deviations can have offsetting effects on the public
employment multiplier. A high fraction of spending devoted to
compensation, for example, could increase it while targeting at
the long-term unemployed could increase the transfer-reduction
rate and thereby reduce the size of the multiplier.
Table 6 presents the GNP multipliers of the
basic multipliers model.

TABLE 6. GNP MULTIPLIERS OF THE BASIC MULTIPLIERS MODEL [rows: Federal Government Purchases of Goods and Services; Public Service Employment; Federal Taxes or Transfers (With Opposite Sign); values not reproduced]
The preceding chapter covered standard fiscal policies.
The basic model presented there is in fact the one CBO has
used for nearly all its policy simulation work.
This chapter
adds two policy instruments to the basic model—monetary policy
and corporate tax rates.
Parameter estimates used in the basic model are based
on policy simulations in which the Federal Reserve Board holds
the path of unborrowed reserves constant. The monetary response
to an expansionary fiscal move, therefore, cannot include any
change in unborrowed reserves, but it can include an increase in
the money supply and some (at least temporary) increase in
interest rates.
Econometric models differ considerably in their specification of the monetary sector. To highlight these differences,
Table 7 presents changes in three-month Treasury bill rates and
in GNP occurring in five econometric models as a result of a step
increase in unborrowed reserves.
An increase in unborrowed reserves also expands the money
supply but (at least temporarily) reduces interest rates. Its
effects are, therefore, not a simple multiple of fiscal policy
effects, and additional parameters are needed to capture them.
This is accomplished by adding an unborrowed reserve term to
equations (2), (6), and (7), changing them to:
(2)' ΔC$(t) = a1t (ΔINC$(t) + ΔTR$(t) - ΔTP$(t)) + d1t ΔRU$(t)
(6)' ΔFI$(t) = a2t ΔGNP$(t) + d2t ΔRU$(t)
(7)' ΔX$(t) = a3t ΔGNP$(t) + ΔXO$(t) + d3t ΔRU$(t)

RU$ = unborrowed reserves.
TABLE 7. TREASURY BILL RATE AND GNP RESPONSES TO A STEP INCREASE IN UNBORROWED RESERVES a/ [quarterly values for Models 1-5 not legibly reproduced]

a/ All models were simulated for a $1 billion step increase in
unborrowed reserves. Bill rate differences from baseline
are reported in percentage points. GNP differences from
baseline are reported in billions of dollars.
The parameters d1, d2, and d3 measure the direct spending changes that result from a change in unborrowed reserves,
holding government spending and transfer and tax rates constant.
In equation (2)', d1 represents the ratio of additional consumption spending (above the fiscal policy-derived response to disposable income) to changes in unborrowed reserves. This additional
consumption arises from wealth and interest rate effects. Similarly, d2 and d3—the ratios of changes in fixed investment and
"other GNP," respectively, to changes in unborrowed reserves—
reflect the effects on capital spending, inventory investment, and
state and local spending of wealth and interest rate changes. 1/
A simple model may clarify the relation of equations
(2)', (6)', and (7)' to business and household behavior.
Suppose that investment (I) depends on income (Y) and an
interest rate (R),

I = a1 + a2 Y - a3 R,    a2, a3 > 0

that money (M) demanded also depends on income and interest
rates,

M = b1 Y - b2 R,    b1, b2 > 0

and that money supplied depends on unborrowed reserves (RU)
and interest rates,

M = c1 RU + c2 R.

Eliminating M and solving for R by combining the second and
third equations gives

R = (b1 Y - c1 RU) / (b2 + c2).

Substituting this expression for R into the first equation
gives

I = a1 + [a2 - a3 b1/(b2 + c2)] Y + [a3 c1/(b2 + c2)] RU.
Equation (6)' above resembles this equation.
The coefficient of income reflects both accelerator effects and
crowding-out or interest rate effects, while the coefficient
of unborrowed reserves reflects interest rate effects and
the money-supply and money-demand linkages between unborrowed reserves and interest rates.
To estimate d1, d2, and d3, the values of the a's from
fiscal policy simulations are used to deduct from total changes
in C, FI, or X the amounts due to income or GNP changes.
Substituting (2)', (6)', and (7)' into the basic multipliers
model yields the following unborrowed reserve GNP multiplier:

(9) ΔGNP$(t) = [(d1t + d2t + d3t) ΔRU$(t)] / [1 - (a1t(b1t(1 - b3t) - b2t) + a2t + a3t)]

The denominator on the right-hand side of equation (9) is the
same as the denominator of a simple government purchases multiplier. The numerator represents the total direct GNP increment
originating from a change in unborrowed reserves.
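Assuming the numerator of equation (9) is the simple sum of the three direct effects, the reserve multiplier can be sketched as follows (parameter values are the caller's, not CBO estimates):

```python
def reserve_multiplier(a1, a2, a3, b1, b2, b3, d1, d2, d3):
    """Equation (9) as read here: GNP change per dollar change in
    unborrowed reserves. A sketch, not CBO's code."""
    direct = d1 + d2 + d3                            # direct GNP increment
    leakage = a1 * (b1 * (1 - b3) - b2) + a2 + a3    # induced respending
    return direct / (1 - leakage)
```

Note that the denominator is exactly the one used for the government purchases multiplier, so the reserve multiplier scales the same respending process by a different first-round impulse.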
Estimates of d1, d2, and d3 vary significantly across
econometric models. Differences exist not only in the level of
each parameter (d2 in the tenth quarter is 13.4 in one model
and 0.32 in another) but also in the relative size of d1, d2,
and d3 (in one model d2 is greatest and d1 is smallest, whereas in another the reverse occurs). Not having any prior information on the size of these parameters, CBO used a simple averaging
procedure to estimate d1, d2, and d3 for the multipliers model.
Table 8 presents these estimates as well as the range of each
parameter provided by the five econometric models.
Estimates of b1 also differed between an unborrowed reserve simulation and a fiscal policy simulation. This difference arises from the contrasting interest rate paths in the two
simulations, a contrast which affects corporate profits and
personal interest income. The parameter values for b1 used in
simulating monetary policy changes are presented in Table 8, and
Table 9 presents the resulting unborrowed reserve multiplier values.
Unborrowed reserve multiplier values grow from 1.0 in the
initial quarter to over 25 by the end of three years, as shown in
Table 9. Even 25, however, is only about half of the average
ratio of GNP to unborrowed reserves. GNP in recent quarters has
been 5 to 6 times as large as the narrowly defined money supply,
which in turn has been about 9 times as large as unborrowed
reserves. The multiplier estimates in Table 9 imply that a
step increase in unborrowed reserves above a baseline path lowers
the velocity of money relative to its baseline path.
TABLE 8. ESTIMATED VALUES OF d1, d2, d3, AND b1 [multipliers-model estimates and the low-high range provided by the five econometric models; values not reproduced]

TABLE 9. UNBORROWED RESERVE MULTIPLIER VALUES [values not reproduced]
The extensions needed to measure corporate tax change
effects on GNP are procedurally similar to those discussed
above for monetary policy.
The basic multipliers model does
not include dividends, corporate cash flow, or the corporate
tax rate as separate determinants of consumption and investment spending and, therefore, cannot account for the effect of
changes in corporate taxes on spending. The effect of corporate
taxes can be incorporated into the basic model by changing
equations (2), (3), (5), and (6) to:

(2)" ΔC$(t) = a1t (ΔINC$(t) + ΔTR$(t) - ΔTP$(t)) + d1t ΔRU$(t) + g1t ΔBUSTAX$(t)
(3)" ΔINC$(t) = b1t ΔGNP$(t) + c1t ΔGE$(t) - g2t ΔBUSTAX$(t)
(5)" ΔTP$(t) = b3t ΔINC$(t) - c3t ΔGE$(t) + ΔTPO$(t) - c4t ΔBUSTAX$(t)
(6)" ΔFI$(t) = a2t ΔGNP$(t) + d2t ΔRU$(t) - g3t ΔBUSTAX$(t)
In equation (2)", g1t is the proportion of the business
tax change going into consumption. Equations (3)", (5)", and
(6)" allow for departures from the standard fiscal-policy-induced relationships explaining personal income, personal
taxes, and fixed investment.
Parameter values from econometric models were reasonably similar for simulations in which the corporate tax equation
intercept was changed. The models varied dramatically, however,
in their estimates for a change in corporate tax rate. (A rate
change that provides a $10 billion change in corporate taxes
leads to a $54 billion addition to GNP in one model and a $5.8
billion reduction in GNP in another.) The econometric models and
empirical tax studies were, therefore, used to derive g1, g2,
g3, and c4 only for a lump-sum change in corporate taxes.
Results are reported in Table 10. Estimating parameters for
changes in corporate tax rates will be undertaken at a later date
after CBO studies the differences in the econometric models'
simulations more carefully.

TABLE 10. ESTIMATED VALUES OF g1, g2, g3, AND c4 [parameter values not reproduced]
Substituting these equations into the basic multipliers
model generates the following multiplier expression for changes
in corporate taxes:

(10) ΔGNP$(t) = -[g1t + a1t(g2t(1 - b3t) - c4t) + g3t] ΔBUSTAX$(t) / [1 - (a1t(b1t(1 - b3t) - b2t) + a2t + a3t)]
The denominator of this equation is the denominator of the
simple government purchases multiplier. The numerator represents
the direct spending induced by a change in corporate taxes. This
multiplier is useful in analyzing fiscal policies that directly
change corporate cash flow (for example, employment tax credits
and training programs implemented in the private sector). Table
11 presents the multiplier values.
TABLE 11. MULTIPLIER VALUES FOR CHANGES IN CORPORATE TAXES [values not reproduced]
Although CBO uses a two-equation, wage-price model to divide
changes in nominal GNP between prices and real output, it is easy
and interesting to compare a number of models with respect to
their output and price effects.
The differences among models
arise in large part from differences in labor market specifications and productivity behavior.
Table 12 shows the division of two key components, a1 (the
ratio of changes in consumption to changes in disposable income) and a2 (the
ratio of changes in fixed investment to changes in GNP) into
quantity and price effects implied by each econometric model.
The division is based on the formula

Δ(pq) = p Δq + q Δp

where q is a quantity (or constant-dollar value) and p is a price
index (or deflator). The first term on the right-hand side shows
the contribution of quantity change to the total dollar change,
while the second shows the contribution of price change. To
factor a ratio such as a1 into quantity and price effects, each
term in the formula is divided by the dollar change in the
denominator of the ratio; thus, for factoring a1, q in the
formula refers to constant-dollar consumption, p to the consumption deflator, and each term is divided by the current-dollar
change in disposable income.
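As a worked example of the decomposition just described (with made-up numbers, not the paper's data): if the beginning-of-period price is used to weight the quantity change and the end-of-period quantity to weight the price change, the split is exact rather than approximate.

```python
def split_dollar_change(q0, q1, p0, p1):
    """Split a current-dollar change into quantity and price parts,
    using p0 as the quantity weight and q1 as the price weight.
    (Weighting choice is an assumption, not stated in the text.)"""
    quantity_part = p0 * (q1 - q0)   # real (constant-dollar) contribution
    price_part = q1 * (p1 - p0)      # deflator contribution
    return quantity_part, price_part

# Hypothetical consumption: 100 -> 110 in real terms, deflator 1.00 -> 1.05
qp, pp = split_dollar_change(100.0, 110.0, 1.00, 1.05)
total = 110.0 * 1.05 - 100.0 * 1.00   # current-dollar change
```

Here the real contribution is 10.0 and the price contribution 5.5, and the two sum exactly to the 15.5 current-dollar change.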
TABLE 12. DIVISION OF a1 AND a2 INTO REAL AND PRICE COMPONENTS a/ [values for Models 1-4 not reproduced; panel header: ΔC$(t)/ΔDisposable Income]

a/ Real and nominal components do not necessarily sum to the actual parameter value due to rounding
errors. Both a1 and a2 are estimated from a government purchases policy simulation.
The multipliers model described above provides nominal
GNP changes resulting from changes in fiscal and monetary
policies. Additional relationships are used to measure the
impact of policy changes on real GNP, prices, employment, and
other economic variables of interest.
While this paper is
concerned primarily with nominal GNP multipliers, a brief description of the other relationships used by CBO may be helpful.
Figure 1 presents a flow diagram of the relationships
used to translate a change in economic policy into a change
in nominal GNP and other economic variables.
The first step
is the conversion of policy changes into nominal GNP changes,
using the multipliers described in this paper.
Next, employment and unemployment changes resulting from a
policy change are derived from an Okun's law type of relationship
between unemployment and the real GNP gap, lagged one quarter.
If a policy has a direct impact on employment beyond that implied
by Okun's law, a procedure is used (described in the next paragraph) that accounts for this differential.
A two-equation
wage-price 1/ model and a CPI-GNP deflator relationship are then
used to derive a GNP deflator consistent with the unemployment
rate. The deflator, together with the level of nominal GNP,
determines real GNP. The new real GNP gap determines the next
period's unemployment.
Direct employment programs can (but do not necessarily)
create more jobs per dollar spent than conventional fiscal
policy changes.
The reasons are twofold: (1) direct employment programs may entail less "slippage" in restoring productivity and profits and increasing the average hours of existing
jobholders; and (2) direct employment programs may pay a relatively low average wage. The first step in CBO's procedure
1/ See "A Simplified Wage-Price Model," available from the
Fiscal Analysis Division, Congressional Budget Office
(September 1975).
Figure 1. [Flow diagram: fiscal and monetary policy changes feed the multipliers model to yield nominal GNP; employment and the labor force follow via Okun's law; the wage-price model, together with food and fuel prices and pay increases, yields the wage rate, consumer price index, and GNP deflator; the deflator and nominal GNP determine real GNP, which feeds back into the next period]
for estimating the job impact of a direct employment program
reflecting these special features is to estimate the nondisplaced
outlays quarter by quarter, and divide these outlays by an
estimate of average cost per job. This provides an estimate of
the direct jobs created by a public employment program.
The next step is to calculate an estimate of the indirect
unemployment and employment impact of the policy using GNP
multipliers and Okun's law. The indirect impact arises from two
sources: additional spending induced by the nondisplaced
outlays, and the use of displaced funds for tax reduction and/or
increased purchases. The "indirect" GNP multiplier for the first
source, nondisplaced outlays, is equivalent to the overall public
employment multiplier, dealt with explicitly in this paper, less
one.
The GNP multiplier for displaced outlays is a weighted
combination of the government purchases multiplier and the
personal tax multiplier. The weights CBO used are based on the
views of experts about how state and local governments use
general revenue sharing funds. The overall multiplier of a
direct employment program is thus a combination of the multipliers described in this paper.
The final step in calculating overall job impacts is to
add the direct effects from step one to the indirect effects
from step two.
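The three steps just described amount to simple arithmetic. A sketch with placeholder values (the cost per job, GNP change, and jobs-per-dollar coefficient are illustrative, not CBO's estimates):

```python
def total_job_impact(nondisplaced_outlays, cost_per_job,
                     indirect_gnp_change, jobs_per_gnp_dollar):
    """Direct jobs from program outlays plus indirect jobs inferred
    from the GNP multipliers and an Okun's-law-style coefficient."""
    direct_jobs = nondisplaced_outlays / cost_per_job          # step one
    indirect_jobs = indirect_gnp_change * jobs_per_gnp_dollar  # step two
    return direct_jobs + indirect_jobs                         # step three

# $1 billion of nondisplaced outlays at $10,000 per job, plus an
# indirect GNP increment of $2 billion at 30 jobs per $1 million of GNP
jobs = total_job_impact(1.0e9, 1.0e4, 2.0e9, 30 / 1.0e6)
```

With these placeholder numbers the program yields 100,000 direct jobs and 60,000 indirect jobs, for 160,000 in total.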
As the example of direct employment programs makes clear, it
is possible to use the multipliers framework to introduce explicitly a wide range of special assumptions about fiscal policy
changes, including assumptions about the timing of outlays,
fiscal substitution, average cost per job, proportions of grants-in-aid going to tax relief, and others.
Typically it is much
more difficult to incorporate many of these assumptions in
large-scale econometric models.
The multipliers model thus has
the advantage of introducing clear links between program details
and economic effects.
U.S. GOVERNMENT PRINTING OFFICE : 1977 O - 94-329
Extension:Math/Announcement - MediaWiki
Introducing Math rendering 2.0
Dear Wikipedians,
We'd like to announce a major update of the Math (rendering) extension.
For registered Wikipedia users, we have introduced a new math rendering mode using MathML, a markup language for mathematical formulae. Since MathML is not supported in all browsers [1], we have also
added a fall-back mode using scalable vector graphics (SVG).
Both modes offer crisp rendering at any resolution, which is a major advantage over the current image-based default. We'll also be able to make our math more accessible by improving screenreader and
magnification support.
We encourage you to enable the MathML mode in your Appearance preferences. As an example, the URL for this section on the English Wikipedia is: https://en.wikipedia.org/wiki/Special:Preferences#
For editors, there are also two new optional features:
1) You can set the "id" attribute to create math tags that can be referenced. For example, the following math tag
can be referenced by the wikitext
This is true regardless of the rendering mode used.
2) In addition, there is the attribute "display" with the possible values "block" or "inline". This attribute can be used to control the layout of the math tag with regard to centering and size of
the operators. See https://www.mediawiki.org/wiki/Extension:Math/Displaystyle for a full description of this feature.
Your feedback is very welcome. Please report bugs in Bugzilla against the Math extension, or post on the talk page here: https://www.mediawiki.org/wiki/Extension_talk:Math
All this is brought to you by Moritz Schubotz and Frédéric Wang (both volunteers) in collaboration with Gabriel Wicke, C. Scott Ananian, Alexandros Kosiaris and Roan Kattouw from the Wikimedia
Foundation. We also owe a big thanks to Peter Krautzberger and Davide P. Cervone of MathJax for the server-side math rendering backend.
Best Gabriel Wicke (GWicke) and Moritz Schubotz (Physikerwelt)
[1]: Currently MathML is supported by Firefox & other Gecko-based browsers, and accessibility tools like Apple's VoiceOver. There is also partial support in WebKit.
Data Analysis in Origin with R Console
R Console and Rserve Console tools have been added to Origin 2016. By using them you can easily transfer data to Origin from R and make use of Origin's advanced graphing features. In addition, you can
access a wide range of statistical functions and packages from R to help you analyze the data in your Origin OPJ file from the R Console dialog. Here we provide some visual
examples to show the flexible applications of the R Console for statistical data analysis and simulation.
Install packages
To run these examples, you need to have R installed on your computer and the boot and GenSA R packages downloaded. Please open your R main window (not the R Console in Origin) and run the script to install and load them:
Calculating the Correlation Coefficient by Using Bootstrap
The R package boot allows a user to easily generate bootstrap samples and perform statistical analysis. In this example we show how to calculate the correlation coefficient using the bootstrap with this package in the R Console.
1. Create a new Workbook, click Import Single ASCII to import the LogRegData.dat from \OriginLab\Origin2016\Samples\Statistics
2. Select Connectivity: R Console in Origin menu to open the R Console dialog.
3. Click the button and select the first two columns in the worksheet, select Data as Data Frame, enter the object name data in the R Object box, and click to pass the data from Origin to the R workspace.
4. Then run the script below in the R Console script input box: paste it in and click Enter to run:
f <- function(d, i){
d2 <- d[i,]
return(cor(d2$Age, d2$Salary))
}
bootcorr <- boot(data, f, R=500)
5. The result is saved in the R object bootcorr, and the correlation for each bootstrap replicate is saved in bootcorr$t. Now create a new workbook and click to get the R object bootcorr$t into column A as a vector; you will see that column A holds the 500 resulting values.
6. Select column A, click Plot: Statistic: Histogram to make a histogram plot; you can then plot the distribution curve and customize the color to obtain a result similar to the graph below:
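Outside Origin, the same resampling idea can be sketched in plain Python. This is an illustrative stand-in for the R workflow above, and the (age, salary) pairs below are made-up placeholders for LogRegData.dat, not the real sample data:

```python
import random

# Made-up (age, salary) pairs standing in for LogRegData.dat.
data = [(25, 30000), (35, 45000), (45, 60000), (55, 72000),
        (30, 38000), (40, 52000), (50, 65000), (60, 80000)]

def corr(pairs):
    # Pearson correlation coefficient of a list of (x, y) pairs.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    denom = (sxx * syy) ** 0.5
    return sxy / denom if denom else 0.0  # guard degenerate resamples

random.seed(1)
# 500 bootstrap replicates: resample rows with replacement and recompute
# the correlation each time, mirroring boot(data, f, R=500) in R.
boot_corrs = [corr([random.choice(data) for _ in data]) for _ in range(500)]
print(len(boot_corrs))  # 500
```

Plotting a histogram of `boot_corrs` gives the same kind of bootstrap distribution as the Origin graph described above.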
Simulate Random Walk in 2D Lattice
R provides efficient functions for random sampling and matrix manipulation. Here we demonstrate an example of simulating a random walk on a 2D lattice: the walk data is generated in R and the route is shown with a colored line plot in Origin.
1. Create a new workbook with 3 columns
2. Click Connectivity: R Console in Origin menu to open the dialog, run the Script below to generate the random walk data:
step <- 2000
walk <- matrix(0, ncol = 3, nrow=step)
index <- cbind(seq(step), sample(c(1, 2), step, TRUE))
walk[index] <- sample(c(-1, 1), step, TRUE)
walk[, 1] <- cumsum(walk[, 1])
walk[, 2] <- cumsum(walk[, 2])
walk[, 3] <- seq(step)
Click enter to run.
3. Send the R object walk to [Book1]Sheet1!A:C as Matrix
4. Select Column B in Book1 Sheet1 to make a line plot, use the data in Column C to control the line color, and customize the color scale and hue.
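For comparison, the same lattice walk can be sketched in plain Python; this is an illustration of the algorithm only, not the Origin/R workflow itself:

```python
import random

random.seed(0)
steps = 2000
x = y = 0
path = [(0, 0)]
for _ in range(steps):
    # Move one unit along a randomly chosen axis in a random direction,
    # mirroring the sample()/cumsum() construction in the R script.
    if random.choice((True, False)):
        x += random.choice((-1, 1))
    else:
        y += random.choice((-1, 1))
    path.append((x, y))

print(len(path))  # 2001 points: the origin plus 2000 steps
```

Each consecutive pair of points differs by exactly one lattice unit, which is what makes the colored line plot trace a valid 2D lattice route.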
Simulated Annealing to find the global minimum
Please install the GenSA package before running this example.
Simulated annealing is a method for finding global minimum solutions of nonlinear problems. Here we use the GenSA package in R to perform the simulated annealing process on our test function, to find the global minimum of z in the range $X,Y\in [-10,10]$.
After calculating in R, we send the results to Origin, make a contour plot to visualize them, and make a line plot to depict the minimum value obtained at each iteration step.
1. Select Connectivity: R Console in Origin menu, paste the script below into the script input box, click Enter to run.
fr <- function(vx){
x <- vx[1]
y <- vx[2]
# (the expression defining z is not reproduced in this copy)
}
dimension <- 2
global.min <- 0
tol <- 1e-13
lower <- c(-10,-10)
upper <- c(10,10)
out <- GenSA(lower = lower, upper = upper, fn = fr)
#output the results
sprintf("Global minima for the function is: %.3f at (%.3f, %.3f)", out$value, out$par[1], out$par[2])
The results output will be:
[1] "Global minima for the function is: 0.000 at (0.000, 0.000)"
To create a contour plot displayed together with the minimum point:
2. Click the New 3D Plot button in Standard Tool bar, set the function as the graph shown below:
3. Click OK to get the 3D surface plot, you can specify the colormap in Plot Details: Colormap dialog tab.
4. Create a new workbook with 3 columns designated as XYZ, enter the data (0,0,0) in the first row, drag the dataset into the 3D graph, then a scatter will be added on the graph, you can further edit
the label text for the scatter in Plot Details: Label dialog tab.
The finished graph will be similar to graph below:
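As a rough illustration of what GenSA does internally, here is a minimal Metropolis-style annealing loop in plain Python. The objective f below is a hypothetical stand-in (the tutorial's own test function is not reproduced above); it merely shares the reported global minimum of 0 at (0, 0):

```python
import math
import random

def f(x, y):
    # Hypothetical stand-in objective; the tutorial's actual test
    # function is not shown. Global minimum is 0 at (0, 0).
    return x * x + y * y

random.seed(42)
x, y = random.uniform(-10, 10), random.uniform(-10, 10)
best = (f(x, y), x, y)
temp = 10.0
for _ in range(20000):
    # Propose a nearby point, clipped to the [-10, 10] search box.
    nx = min(10.0, max(-10.0, x + random.gauss(0, 0.5)))
    ny = min(10.0, max(-10.0, y + random.gauss(0, 0.5)))
    delta = f(nx, ny) - f(x, y)
    # Metropolis rule: always accept downhill moves, accept uphill
    # moves with probability exp(-delta / temp).
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x, y = nx, ny
        if f(x, y) < best[0]:
            best = (f(x, y), x, y)
    temp *= 0.9995  # geometric cooling schedule

print("Global minima for the function is: %.3f at (%.3f, %.3f)" % best)
```

GenSA wraps a far more sophisticated (generalized) version of this loop; the sketch only shows the accept/reject-with-cooling idea behind it.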
Manova analysis
This example performs a multivariate analysis of the cost of labor in a group of nursing homes. There are 2 predictors, Ownership and Certification, and 3 response variables: Cost of Nursing Labor, Cost of Housekeeping Labor, and Cost of Maintenance Labor. To learn whether the cost of labor differs with ownership and certification, we perform a MANOVA on the data with the R Console.
1. Download the data from LaborCost.zip, and unzip the package to get the LaborCost.dat, then create a new workbook in Origin, and import the data by drag and drop.
2. Click the button and select all 5 columns in the worksheet, select Data as Data Frame, enter the object name CostData in the R Object box, and click to pass the data from Origin to the R workspace.
3. Run the R script in the input box:
fit <- manova(Cost ~ Ownership*Certification)
summary(fit, test="Pillai")
We can convert the results into a data frame and then pass it to an Origin worksheet as a Data Frame:
sum<-summary(fit, test="Pillai")
Because the P value for Ownership is less than 0.01, there are significant ownership effects on the average cost of labor. The P value for Certification is 0.089, which means that the result is only near-significant; there could be a difference in average cost of labor among the 3 types of certification.
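For reference, the statistic requested via test="Pillai" is Pillai's trace, tr(H(H+E)^-1), where H and E are the hypothesis and error SSCP matrices. A minimal sketch with made-up 2x2 matrices (illustrative numbers only, not the LaborCost data):

```python
# Pillai's trace for tiny hand-made hypothesis (H) and error (E)
# SSCP matrices; the numbers are illustrative only.
H = [[4.0, 2.0], [2.0, 3.0]]
E = [[10.0, 1.0], [1.0, 8.0]]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = mul(H, inv2(add(H, E)))  # H (H + E)^-1
pillai = T[0][0] + T[1][1]   # trace of T = Pillai's statistic
print(round(pillai, 4))
```

In the real analysis, R's summary(fit, test="Pillai") builds H and E from the model fit and converts the trace to an approximate F statistic and P value.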
seminars - Dynamical analysis of skew-product models and applications
We give a detailed exposition of transfer operator techniques for skew-product dynamical systems. We plan to discuss two examples associated to Euclidean transformations and applications to number theory: the distribution of modular symbols and Diophantine approximation. (The first talk, at 3-4pm, will be on the background related to the subject. The second talk will be the main talk.)
Year Representing Round Opposition AheadBehind OnfieldPosition Played Goals Points Comment
1923 Northcote 9 Geelong A W W Y
1923 Northcote 11 North Melbourne L W Y
1923 Northcote 12 Brighton L W Y
1923 Northcote 13 Prahran L W Y
1923 Northcote 14 Hawthorn L W Y
1923 Northcote 15 Port Melbourne L W Y
1923 Northcote 16 Brunswick W W Y
1923 Northcote 17 Williamstown L W Y
1923 Northcote 18 Geelong A L Unk Y
Total 9 0
Year Representing Round Opposition AheadBehind OnfieldPosition Played Goals Points Comment
1924 Northcote 2 Hawthorn W W Y
1924 Northcote 3 Williamstown L W Y
1924 Northcote 18 Prahran W Unk Y
1924 Northcote 2SF Footscray L Unk Y
Total 4 0
Year Representing Round Opposition AheadBehind OnfieldPosition Played Goals Points Comment
1925 Northcote 1 Brighton W Unk Y
1925 Northcote 2 Prahran W Unk Y
1925 Northcote 8 Brighton W Unk Y 2
Total 3 2
Many games are reported in the year-by-year lists even if the player did not play in the game. Such an entry can be a questionable record that is still to be determined, or may represent a newspaper report that the player did not play that game. Sometimes it is useful to know that the player was actually absent.
Reviewing Timing Slack - 2020.2 English
Several factors can impact the setup and hold slacks. You can easily identify each factor by reviewing the setup and hold slack equations when written in the following simplified form:
Slack (setup/recovery) = setup path requirement
- datapath delay (max)
+ clock skew
- clock uncertainty
- setup/recovery time
Slack (hold/removal) = hold path requirement
+ datapath delay (min)
- clock skew
- clock uncertainty
- hold/removal time
For timing analysis, clock skew is always calculated as follows:
• Clock Skew = destination clock delay - source clock delay (after the common node if any)
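The two simplified equations can be written directly as code. This is a small sketch with hypothetical nanosecond values, not figures from any specific device:

```python
def setup_slack(requirement, datapath_max, skew, uncertainty, setup_time):
    # Slack (setup/recovery) = setup path requirement - max datapath delay
    #                          + clock skew - clock uncertainty - setup/recovery time
    return requirement - datapath_max + skew - uncertainty - setup_time

def hold_slack(requirement, datapath_min, skew, uncertainty, hold_time):
    # Slack (hold/removal) = hold path requirement + min datapath delay
    #                        - clock skew - clock uncertainty - hold/removal time
    return requirement + datapath_min - skew - uncertainty - hold_time

# Hypothetical values in ns for a path with a 4 ns setup requirement.
print(setup_slack(4.0, 3.2, 0.1, 0.2, 0.05))  # ~0.65: positive, setup met
print(hold_slack(0.0, 0.4, 0.1, 0.05, 0.1))   # ~0.15: positive, hold met
```

Negative results from either function correspond to the violating paths discussed below, and each argument maps onto one of the contributors listed in the analysis order.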
During the analysis of the violating timing paths, you must review the relative impact of each variable to determine which variable contributes the most to the violation. Then you can start analyzing
the main contributor to understand what characteristic of the path influences its value the most and try to identify a design or constraint change to reduce its impact. If a design or constraint
change is not practical, you must do the same analysis with all other contributors starting with the worst one. The following list shows the typical contributor order from worst to least.
For setup/recovery:
Datapath delay
Subtract the timing path requirement from the datapath delay. If the difference is comparable to the (negative) slack value, then either the path requirement is too tight or the datapath delay is
too large.
Datapath delay + setup/recovery time
Subtract the timing path requirement from the datapath delay plus the setup/recovery time. If the difference is comparable to the (negative) slack value, then either the path requirement is too
tight or the setup/recovery time is larger than usual and noticeably contributes to the violation.
Clock skew
If the clock skew and the slack have similar negative values and the skew absolute value is over a few 100 ps, then the skew is a major contributor and you must review the clock topology.
Clock uncertainty
If the clock uncertainty is over a few 100 ps, then you must review the clock topology and jitter numbers to understand why the uncertainty is so high.
For hold/removal:
Clock skew
If the clock skew is over 300 ps, you must review the clock topology.
Clock uncertainty
If the clock uncertainty is over 200 ps, then you must review the clock topology and jitter numbers to understand why the uncertainty is so high.
Hold/removal time
If the hold/removal time is over a few 100 ps, you can review the primitive data sheet to validate that this is expected.
Hold path requirement
The requirement is usually zero. If not, you must verify that your timing constraints are correct.
Assuming all timing constraints are accurate and reasonable, the most common contributors to timing violations are usually the datapath delay for setup/recovery timing paths, and skew for hold/
removal timing paths. At the early stage of a design cycle, you can fix most timing problems by analyzing these two contributors. However, after improving and refining design and constraints, the
remaining violations are caused by a combination of factors, and you must review all factors in parallel to identify which to improve.
See Performing Timing Analysis for more information on timing analysis concepts, and see Timing Analysis Features for more information on timing reports (report_timing_summary/report_timing) in the
Vivado Design Suite User Guide: Design Analysis and Closure Techniques (UG906).
Zi-Xia Song (宋梓霞), Ramsey numbers of cycles under Gallai colorings - Discrete Mathematics Group
Zi-Xia Song (宋梓霞), Ramsey numbers of cycles under Gallai colorings
Tuesday, October 15, 2019 @ 4:30 PM - 5:30 PM KST
Room B232, IBS (기초과학연구원)
For a graph $H$ and an integer $k\ge1$, the $k$-color Ramsey number $R_k(H)$ is the least integer $N$ such that every $k$-coloring of the edges of the complete graph $K_N$ contains a monochromatic
copy of $H$. Let $C_m$ denote the cycle on $m\ge4 $ vertices. For odd cycles, Bondy and Erd\H{o}s in 1973 conjectured that for all $k\ge1$ and $n\ge2$, $R_k(C_{2n+1})=n\cdot 2^k+1$. Recently, this
conjecture has been verified to be true for all fixed $k$ and all $n$ sufficiently large by Jenssen and Skokan; and false for all fixed $n$ and all $k$ sufficiently large by Day and Johnson. Even
cycles behave rather differently in this context. Little is known about the behavior of $R_k(C_{2n})$ in general. In this talk we will present our recent results on Ramsey numbers of cycles under
Gallai colorings, where a Gallai coloring is a coloring of the edges of a complete graph without rainbow triangles. We prove that the aforementioned conjecture holds for all $k$ and all $n$ under
Gallai colorings. We also completely determine the Ramsey number of even cycles under Gallai colorings.
Joint work with Dylan Bruce, Christian Bosse, Yaojun Chen and Fangfang Zhang.
The term has different actual meanings, depending on the context (hierarchy level) being considered. Usually it refers to the classical flat background space, but we believe that space is not fundamental; it is instead derived from the unique phase-dependent solutions that fermions find.
• Vacuum: how space is not fundamental.
• Physicality: how fermions find solutions at unique points.
Fundamentally, our waves are simple: they oscillate on one axis, with constant angular momentum when considered as a pair in a boson. In other words, the rate of phase change for all waves (bosons) is universal, and operates in one dimension.
In our physicality article, we describe how fermions form only at unique positional solutions, but in saying this we do not specify anything about the background space; we do not specify the number
of dimensions, nor do we say whether it is flat nor curved.
What we do say is that many one-dimensional solutions (simply distance from a source point, and all unique conditions at that distance) in a complicated system will generalize to three-dimensional
Euclidean flat space at larger scales. In very general terms, we can think of space as flat and Euclidean (or Minkowski with time), and therefore treat de-constituted matter as a propagating radial
(spherical) shell, but fundamentally the shell is just the 'phase-localized' point on a one-dimensional line, and we project that into space.
This general picture would be inaccurate at the smallest scales where few waves interact, because simple unique phase solutions may exist for the network. This detail is only of interest in very
specialized circumstances, leaving an option for exploration of the highest energies or abstract systems, but the generalization is sufficient for most purposes.
Curved space? - No
In this model, generalized space is flat. We appreciate that some readers might be here to understand how this relates to general relativity, but we can only offer analogues rather than plug-in
equivalents to GR.
Every boson has a mass-energy attribute, which is transmitted with its propagation, and changes the positional solution of other overlapping bosons by phase modulation. This phase modulation either
advances or retards the solution for the other waves, depending on the sign of the mass-energy, which looks like a positional change transmitted by a force, or space curvature with a time error.
We may construct statistics of vacuum, energy flux, classical momentum, and mass-energy, or other aether-like qualities, but none of these are fundamental in this model. However, we think that a
bridge to GR is possible, using this model as a basis, on the understanding that it operates in a generalized flat space (like SR), not curved space (like GR).
Lorentz-violating? - No
Phase modulation does not result in Lorentz violations: all solutions are on the propagating shell, and the modulation results only in advancement or retardation of the condition that allows a fermion to form. [More...]
Action at a distance? - No
On a larger scale, a fermion may have de-constituted and propagated for a long time without collapsing (due to its mass-energy, or the vacuum energy). Its availability for interaction will resemble an expanding spherical shell in 3D space. Constitutionally, fermions contain four waves, as two bosons, two of which will be available for interaction. This presents a system that approximates the EPR paradox thought experiment, where action at a distance is alleged for the collapse of entangled wavefunctions. Our interpretation of this scenario is as follows: the two wavefunctions may be collapsed independently, and the collapse of one wavefunction makes the remaining waves available for interaction. This does not transmit any useful information, but it does change the wavefunction, remotely enabling different opportunities for remaining wavefunctions from the same source event. Interestingly, this enables selection of spin states and geometric phase, without violating
Elements of Geometry
It is a remarkable fact in the history of science, that the oldest book of Elementary Geometry is still considered as the best, and that the writings of EUCLID, at the distance of two thousand years,
continue to form the most approved introduction to the mathematical sciences. This remarkable distinction the Greek Geometer owes not only to the elegance and correctness of his demonstrations, but
to an arrangement most happily contrived for the purpose of instruction,-advantages which, when they reach a certain eminence, secure the works of an author against the injuries of time more
effectually than even originality of invention. The Elements of EUCLID, however, in passing through the hands of the ancient editors during the decline of science, had suffered some diminution of
their excellence, and much skill and learning have been employed by the modern mathematicians to deliver them from blemishes which certainly did not enter into their original composition. Of these
mathematicians, Dr. SIMSON, as he may be accounted the last, has also been the most successful, and has left very little room for the ingenuity of future editors to be exercised in, either by
amending the text of EUCLID, or by improving the translations from it.
Such being the merits of Dr. SIMSON's edition, and the reception it has met with having been every way suitable, the work now offered to the public will perhaps appear unnecessary. And indeed, if the
geometer just named had written with a view of accommodating the Elements of EUCLID to the present state of the mathematical sciences, it is not likely that any thing new in Elementary Geometry would
have been soon attempted. But his design was different; it was his object to restore the writings of EUCLID to their original perfection, and to give them to Modern Europe as nearly as possible in
the state wherein they made their first appearance in Ancient Greece. For this undertaking, nobody could be better qualified than Dr. SIMSON; who, to an accurate knowledge of the learned languages,
and an indefatigable spirit of research, added a profound skill in the ancient Geometry, and an admiration of it almost enthusiastic. Accordingly, he not only restored the text of EUCLID wherever it
had been corrupted, but in some cases removed imperfections that probably belonged to the original work : though his extreme partiality for his author never permitted him to suppose that such honour
could fall to the share either of himself, or of any other of the moderns.
But, after all this was accomplished, something still remained to be done, since, notwithstanding the acknowledged excellence of EUCLID'S Elements, it could not be doubted that some alterations might
be made that would accommodate them better to a state of the mathematical sciences, so much more improved and extended than at the period when they were written. Accordingly, the object of the
edition now offered to the public, is not so much to give the writings of EUCLID the form which they originally had, as that which may at present render them most useful.
One of the alterations made with this view, respects the Doctrine of Proportion, the method of treating which, as it is laid down in the fifth of EUCLID, has great advantages accompanied with
considerable defects; of which, however, it must be observed, that the advantages are essential, and the defects only accidental. To explain the nature of the former requires a more minute
examination than is suited to this place, and must therefore be reserved for the Notes; but, in the mean time, it may be remarked, that no definition, except that of EUCLID, has ever been given, from
which the properties of proportionals can be deduced by reasonings, which, at the same time that they are perfectly rigorous, are also simple and direct. As to the defects, the prolixness and
obscurity that have so often been complained of in the fifth Book, they seem to arise chiefly from the nature of the language employed, which being no other than that of ordinary discourse, cannot
express, without much tediousness and circumlocution, the relations of mathematical quantities, when taken in their utmost generality, and when no assistance can be received from diagrams. As it is
plain that the concise language of Algebra is directly calculated to remedy this inconvenience, I have endeavoured to introduce it here, in a very simple form however, and without changing the nature
of the reasoning, or departing in any thing from the rigour of geometrical demonstration. By this means, the steps of the reasoning which were before far separated, are brought near to one another,
and the force of the whole is so clearly and directly perceived, that I am persuaded no more difficulty will be found in understanding the propositions of the fifth Book than those of any other of
the Elements.
In the second Book, also, some algebraic signs have been introduced, for the sake of representing more readily the addition and subtraction of the rectangles on which the demonstrations depend. The
use of such symbolical writing, in translating from an original, where no symbols are used, cannot, I think, be regarded as an unwarrantable liberty for, if by that means the translation is not made
into English, it is made into that universal language so much sought after in all the sciences, but destined, it would seem, to be enjoyed only by the mathematical.
The alterations above mentioned are the most material that have been attempted on the books of EUCLID. There are, however, a few others, which, though less considerable, it is hoped may in some
degree facilitate the study of the Elements. Such are those made on the definitions in the first Book, and particularly on that of a straight line. A new axiom is also introduced in the room of the
12th, for the purpose of demonstrating more easily some of the properties of parallel lines. In the third Book, the remarks concerning the angles made by a straight line, and the circumference of a
circle, are left out, as tending to perplex one who has advanced no farther than the elements of the science. Some propositions also have been added; but for a fuller detail concerning these changes,
I must refer to the Notes, in which several of the more difficult, or more interesting subjects of Elementary Geometry are treated at considerable length.
COLLEGE OF Edinburgh,
Dec. 1, 1813.
BOOK I.
THE PRINCIPLES.
EXPLANATION OF TERMS AND SIGNS.
1. Geometry is a science which has for its object the measurement of magnitudes.
Magnitudes may be considered under three dimensions,-length, breadth, height or thickness.
2. In Geometry there are several general terms or principles; such as, Definitions, Propositions, Axioms, Theorems, Problems, Lemmas, Scholiums, Corollaries, &c.
3. A Definition is the explication of any term or word in a science, showing the sense and meaning in which the term is employed.
Every definition ought to be clear, and expressed in words that are common and perfectly well understood.
4. An Axiom, or Maxim, is a self-evident proposition, requiring no formal demonstration to prove the truth of it; but is received and assented to as soon as mentioned.
Such as, the whole of any thing is greater than a part of it; or, the whole is equal to all its parts taken together; or, two quantities that are each of them equal to a third quantity, are equal to each other.
5. A Theorem is a demonstrative proposition, in which some property is asserted, and the truth of it required to be proved.
Thus, when it is said that the sum of the three angles of any plane triangle is equal to two right angles, this is called a Theorem; and the method of collecting the several arguments and proofs, and
laying them together in proper order, by means of which the truth of the proposition becomes evident, is called a Demonstration.
6. A Direct Demonstration is that which concludes with the direct and certain proof of the proposition in hand.
It is also called Positive or Affirmative, and sometimes an Ostensive Demonstration, because it is most satisfactory to the mind.
7. An Indirect or Negative Demonstration is that which shows a proposition to be true, by proving that some absurdity would necessarily follow if the proposition advanced were false.
This is sometimes called Reductio ad Absurdum; because it shows the absurdity and falsehood of all suppositions contrary to that contained in the proposition.
8. A Problem is a proposition or a question proposed, which requires a solution.
As, to draw one line perpendicular to another; or to divide a line into two equal parts.
9. Solution of a problem is the resolution or answer given to it. A Numerical or Numeral solution, is the answer given in numbers. Geometrical solution, is the answer given by the principles of
Geometry. And a Mechanical solution, is one obtained by trials.
10. A Lemma is a preparatory proposition, laid down in order to shorten the demonstration of the main proposition which follows it.
11. A Corollary, or Consectary, is a consequence drawn immediately from some proposition or other premises.
12. A Scholium is a remark or observation made on some foregoing proposition or premises.
13. An Hypothesis is a supposition assumed to be true, in order to argue from, or to found upon it the reasoning and demonstration of some proposition.
14. A Postulate, or Petition, is something required to be done, which is so easy and evident that no person will hesitate to allow it.
15. Method is the art of disposing a train of arguments in a proper order, to investigate the truth or falsity of a proposition, or to demonstrate it to others when it has been found out. This is
either Analytical or Synthetical.
16. Analysis, or the Analytic method, is the art or mode of finding out the truth of a proposition, by first supposing the thing to be done, and then reasoning step by step, till we arrive at some
known truth. This is also called the Method of Invention, or Resolution; and is that which is commonly used in Algebra.
17. Synthesis, or the Synthetic Method, is the searching out truth, by first laying down simple principles, and pursuing the consequences flowing from them till we arrive at the conclusion. This is
also called the Method of Composition; and is that which is commonly used in Geometry.
18. The sign = (or two parallel lines) is the sign of equality; thus, A=B implies that the quantity denoted by A is equal to the quantity denoted by B, and is read A equal to B.
19. To signify that A is greater than B, the expression A > B is used. And to signify that A is less than B, the expression A < B is used.
20. The sign of Addition is an erect cross; thus A+B implies the sum of A and B, and is called A plus B.
21. Subtraction is denoted by a single line; as A−B, which is read A minus B; A−B represents their difference, or the part of A remaining when a part equal to B has been taken away from it.
In like manner, A−B+C, or A+C−B, signifies that A and C are to be added together, and that B is to be subtracted from their sum.
22. Multiplication is expressed by an oblique cross, by a point, or by simple apposition: thus, A×B, A.B, or AB, signifies that the quantity denoted by A is to be multiplied by the quantity denoted by B. The expression AB should not be employed when there is any danger of confounding it with that of the line AB, the distance between the points A and B. The multiplication of numbers cannot be expressed by simple apposition.
23. When any quantities are enclosed in a parenthesis, or have a line drawn over them, they are considered as one quantity with respect to other symbols: thus, the expression A×(B+C−D) represents the product of A by the quantity B+C−D. In like manner, (A+B)×(A−B+C) indicates the product of A+B by the quantity A−B+C.
24. The Co-efficient of a quantity is the number prefixed to it: thus, 2AB signifies that the line AB is to be taken 2 times; ½AB signifies the half of the line AB.
25. Division, or the ratio of one quantity to another, is usually denoted by placing one of the two quantities over the other, in the form of a fraction: thus, A/B signifies the ratio or quotient arising from the division of the quantity A by B. In fact, this is division indicated.
26. The Square, Cube, &c. of a quantity, are expressed by placing a small figure at the right hand of the quantity: thus, the square of the line AB is denoted by AB2, the cube of the line AB is
designated by AB3; and so on.
27. The Roots of quantities are expressed by means of the radical sign √, with the proper index annexed: thus, the square root of 5 is indicated √5; √(A×B) means the square root of the product of A and B, or the mean proportional between them. The roots of quantities are sometimes expressed by means of fractional indices: thus, the cube root of A×B×C may be expressed by ∛(A×B×C), or (A×B×C)^(1/3), and so on.
28. Numbers in a parenthesis, such as (15. 1.), refers back to the number of the proposition and the Book in which it has been announced or demonstrated. The expression (15. 1.) denotes the fifteenth
proposition, first book, and so on. In like manner, (3. Ax.) designates the third axiom; (2. Post.) the second postulate; (Def. 3.) the third definition, and so on.
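For the modern reader, the signs laid down in Definitions 18 through 27 may be collected thus; this is a summary in present-day LaTeX notation, not part of the original text:

```latex
% Definitions 18--27 restated in modern notation (summary only):
\begin{align*}
&A = B, \quad A > B, \quad A < B              && \text{(18, 19) equality, inequality}\\
&A + B, \quad A - B                           && \text{(20, 21) addition, subtraction}\\
&A \times B = A \cdot B = AB                  && \text{(22) multiplication}\\
&A(B + C - D)                                 && \text{(23) grouping}\\
&2AB, \quad \tfrac{1}{2}AB                    && \text{(24) coefficients}\\
&\tfrac{A}{B}                                 && \text{(25) division, or ratio}\\
&AB^2, \quad AB^3                             && \text{(26) square, cube}\\
&\sqrt{5}, \quad \sqrt{AB}, \quad (ABC)^{1/3} && \text{(27) roots}
\end{align*}
```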
5 15 In Simplest Form
In the busy digital age, where screens dominate our lives, there is a lasting charm in the simplicity of printed puzzles. Among the wide selection of timeless word games, the printable word search stands out as a beloved classic, offering both entertainment and cognitive benefits. Whether you are a seasoned puzzle enthusiast or a newcomer to the world of word searches, the appeal of these printed grids full of hidden words is universal.
Step 1: Enter the fraction you want to simplify. The Fraction Calculator will reduce a fraction to its simplest form. You can also add, subtract, multiply and divide fractions as well.
Printable word searches offer a welcome escape from the constant buzz of modern technology, allowing people to immerse themselves in a world of letters and words. With a pencil in hand and an empty grid before you, the challenge begins: a journey through a labyrinth of letters to discover the words cleverly concealed within the puzzle.
What is 5/15 simplified to simplest form? Answer: the fraction 5/15 simplified to lowest terms is 1/3.
What sets printable word searches apart is their accessibility and versatility. Unlike their digital equivalents, these puzzles do not require an internet connection or a device; all that's needed is a printer and a desire for mental stimulation. From the comfort of one's home to classrooms, waiting rooms, and even leisurely outdoor picnics, printable word searches offer a portable and engaging way to develop cognitive skills.
What is the simplified form of 5/15? A simplified fraction is a fraction that has been reduced to its lowest terms; in other words, it is a fraction where the numerator (the top number) and the denominator have no common factor greater than 1.
The appeal of printable word searches extends across ages and backgrounds. Children, adults, and seniors alike find joy in the hunt for words, cultivating a sense of accomplishment with each discovery. For educators, these puzzles act as valuable tools to reinforce vocabulary, spelling, and cognitive abilities in an enjoyable and interactive manner.
2 September 2022: In this video we will simplify (reduce) the fraction 5/15 into its simplest form. The key to simplifying fractions is to find a number that goes into both the numerator and the denominator.
In this era of constant digital bombardment, the simplicity of a printed word search is a breath of fresh air. It permits a mindful break from screens, encouraging a moment of relaxation and focus on the tactile experience of solving a puzzle. The rustle of paper, the scratch of a pencil, and the satisfaction of circling the last hidden word create a sensory-rich activity that transcends the limits of technology.
Andrei Rinea's technical blog
In both Java and C# it's quite easy to express integer numerical literals. You can use both decimal and hexadecimal bases to represent the value; for the hexadecimal base you prefix the value with 0x. For decimal values that exceed 2^31−1 you need to provide a suffix (typically L) so the compiler will treat the literal as a long integer value. C# also provides unsigned long values (U suffix). In both languages the casing of the suffix does not matter.
Java : (notice, there are no unsigned primitives in Java)
int i1 = 23; // integer, decimal
int h1 = 0x17; // integer, hexadecimal
long i2 = 12345678900L; // long integer (64 bit signed integer)
C# :
int i1 = 23;
int h1 = 0x17;
ulong u1 = 12345678900U;
long i2 = 12345678900L;
As you might have read in Beginning Java for .NET developers on slide 14, beginning with Java 7 you can also use two more features that are not present in C# (at least at the time of this writing):
Binary base :
int b1 = 0b11001010;
Underscores in literals (no matter which base) :
int b1 = 0b1100_1010;
long myCardNumber = 2315_2432_2111_1110L;
int thousandsSeparated = 123_456_000;
The restrictions on the underscore placing is that you may not place it at the beginning of the value (prefix) or at the end (suffix). Also, for non-integer literals, you may not place it adjacent to
the decimal separator.
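The placement rules above can be checked in one small program (the class and variable names are made up for illustration; the invalid forms stay in comments because they would not compile):

```java
public class LiteralUnderscores {
    public static void main(String[] args) {
        long card = 2315_2432_2111_1110L;  // groups of four digits; note the L suffix
        int thousands = 123_456_000;       // thousands separators
        int bits = 0b1100_1010;            // underscores work in any base (Java 7+)

        // These would all be compile errors:
        // int bad1 = _123;       // underscore as a prefix
        // int bad2 = 123_;       // underscore as a suffix
        // double bad3 = 3._14;   // underscore adjacent to the decimal separator

        System.out.println(card);      // 2315243221111110
        System.out.println(thousands); // 123456000
        System.out.println(bits);      // 202
    }
}
```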
For floating-point literals you must use the dot as the decimal separator (only if you need to specify a fraction; if not, you're not required to). You must use the F suffix for float (single precision, 32 bit); the D suffix for double (64 bit) is optional, since double is the default type of a floating-point literal. Moreover, in C# you also have the M suffix corresponding to the decimal (128 bit) value type.
C# :
float x1 = 0.001F;
double x2 = 12.33D;
decimal x3 = 111.2M;
float x4 = 33F;
Java :
float f1 = 0.001F;
double f2 = 12.31D;
float f3 = 123F;
Enum – comparison of Java and .NET
Posted by Andrei Rinea on 26 November 2013 4 comments
A useful feature added in Java 1.5 (also known as J2SE 5.0, 2004) is the enum. In .NET enums have been present since the very first version (2002, and as a beta since 2000) but the engineers at Sun
managed to learn something from the shortcomings of the enums in .NET and provided more flexibility.
Let’s start with the simplest, the .NET implementation. In .NET all data types, including value types (the equivalent of the primitive types), are part of the type hierarchy, being indirectly inherited from System.Object (the equivalent of java.lang.Object). Enums are just a specialization on top of exact numeric types, by default int (System.Int32). A typical declaration:
public enum Month
{
    January,
    February,
    March,
    April,
    May,
    June,
    July,
    August,
    September,
    October,
    November,
    December,
}
Notice that the compiler is forgiving and doesn’t complain about the trailing comma after the last element. It will also work, of course, if we don’t place a comma after the last element. Behind the scenes the compiler will generate a value type inheriting from System.Enum that has 12 constants. By default these constants will be of type Int32 and their values, again by default, will start from 0 and increase by 1 for each member: January will be 0 and December will be 11. Casts between the backing type (Int32 in this case) and the Month type will be allowed both at design time and at runtime.
You can also force individual values for each member
public enum Month
{
    January = 3,
    February = 33,
    March = 222,
    April = 14,
    May = 20,
    June = 23,
    July,
    // ...
}
In this case January will be equal to 3, February 33, …, June 23, July 24 (not specified, but after a value-specified member, the next member will be the last value + 1 if no specific value is present). You can even force things into a bad situation like so:
public enum Months
{
    January = 1,
    February,
    March,
    April,
    May,
    June,
    July = 1,
    August,
    September,
    October,
    November,
    December,
}
Guess what: not only is this completely valid, but there won’t be just two duplicate values (January and July being equal to each other, and equal to 1); February will also be 2, just like August, and so on. Of course, this is not recommended. The compiler and the runtime will happily apply your stupid scheme but the humans will be confused. This excess of freedom is not to my liking but I can’t do much about it except counter-recommend it. Ideally you should not have to specify values for typical enums. Except for…
Read more »
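For contrast, here is a sketch (not from the original post) of how the same idea looks on the Java side, where an enum is a full class and a backing value is just an ordinary field you declare yourself:

```java
public enum Month {
    JANUARY(1), FEBRUARY(2), MARCH(3), APRIL(4),
    MAY(5), JUNE(6), JULY(7), AUGUST(8),
    SEPTEMBER(9), OCTOBER(10), NOVEMBER(11), DECEMBER(12);

    // the "backing value" is an explicit field, not a cast target
    private final int number;

    Month(int number) {
        this.number = number;
    }

    public int getNumber() {
        return number;
    }
}
```

Because the value is a field, there is no implicit cast between Month and int in either direction; you go through getNumber() explicitly, which is exactly the extra flexibility (and ceremony) the Java design trades for.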
Beware of primitive wrappers in Java
Posted by Andrei Rinea on 20 November 2013 2 comments
A .NET developer can be tricked into thinking that, for example, Integer is the same as int in Java. This is dangerous, particularly for a C# developer, because in C# System.Int32 is absolutely equivalent to int; “int” is just an alias.
In Java there are 8 primitive data types :
• byte (this is equivalent to sbyte in C#)
• short (just like short / Int16 in C#)
• int (just like int / Int32 in C#)
• long (equivalent to long / Int64)
• float (similar to float / Single)
• double (similar to double / Double)
• boolean (equivalent to bool / Boolean)
• char (equivalent to char / Char)
Now, these primitive types are not part of the Java Type System, as you might have seen in Beginning Java for .NET developers in the slides, at page 21. These primitives (“value types”) have
reference-type peers that are typically spelled the same (except int/Integer, char/Character) and just have the first letter capitalized.
Just like you should avoid comparing strings with == in Java, you should avoid declaring variables and fields of the reference-type peers, unless for a good reason.
The main danger lies in the fact that being reference types and Java not having operator overloading (see Beginning Java for .NET developers, slide 15) comparing two instances with the == operator
will compare the instances and not the values.
“Oh, but you’re wrong!”, some of you might say, “I’ve written code like this and it worked!”. Code like this :
public class Main {
    public static void main(String[] args) {
        Integer i1 = 23;
        Integer i2 = 23;
        System.out.println("i1 == i2 -> " + (i1 == i2));
    }
}
Yes, it does print
i1 == i2 -> true
It works for values up to 127 inclusive. Just replace 23 with 128 (or higher) and see how things go. I’ll wait here.
Surprised? You shouldn’t be. This thing works because of a reason called integer caching (and there are ways to extend the interval on which it works – by default -128 up to 127 inclusive) but you
shouldn’t rely on it.
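The boundary is easy to see side by side; note that the behavior just outside the cache depends on default JVM settings (the cache's upper bound can be raised with a flag), so only the in-cache case is guaranteed:

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer small1 = 127, small2 = 127;  // inside the default cache range (-128..127)
        Integer big1 = 128, big2 = 128;      // just outside the default cache range

        System.out.println(small1 == small2);  // true: both refer to the cached instance
        System.out.println(big1 == big2);      // false under default JVM settings
        System.out.println(big1.equals(big2)); // true: compares the values
        System.out.println(big1.intValue() == big2.intValue()); // true
    }
}
```

The last two lines show the safe comparisons: equals() or unwrapping with intValue() always compares values, regardless of caching.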
Just use int where available or at least use the .intValue() method.
You might wonder what the Integer (and the rest of the reference-type wrappers) are there for? For a few things where they are needed. First, because generics in Java are limited and you can’t instantiate a generic type with primitive type arguments. That’s right, you can’t have List&lt;int&gt;. Scary? Yes, especially when coming from .NET, where generics are not implemented with type erasure. So you need to say List&lt;Integer&gt; and then watch out for reference comparison instead of value comparison, autoboxing performance loss and so on.
The other reason why you need these wrappers is because there is no nullable-types support in Java. So if you need to have a variable or a field that can store a primitive type but might also have to
store a null then Integer will be better for you than int.
Just make sure you understand these implications and … be (type :P) safe!
Avoid comparing strings with == in Java
Posted by Andrei Rinea on 19 November 2013 5 comments
While beginning development in Java, especially if coming from a .NET background (but not necessarily) you might do string comparison with == in Java. Don’t do it. It will compare the string
instances and not their effective value.
You might even try it first to check if == really works, testing it in a wrong manner like so :
public static void main(String[] args) {
    String s1 = "Abc";
    String s2 = "Abc";
    System.out.println("s1 == s2 -> " + (s1 == s2));
}
This will output
s1 == s2 -> true
.. which might lead you to believe this works. This does return the correct value because of a feature present in Java and .NET called string interning (not specific to Java or .NET).
Try to obtain a string instance dynamically like concatenating two existing instances and see how things don’t work anymore :
public static void main(String[] args) {
    String s1 = "Abc";
    String s2 = "Abc";
    // new lines :
    String capitalA = "A";
    String bc = "bc";
    String s3 = capitalA + bc;
    System.out.println("s1 == s2 -> " + (s1 == s2));
    // new line :
    System.out.println("s1 == s3 -> " + (s1 == s3));
}
s1 == s2 -> true
s1 == s3 -> false
Weird, huh? That’s because there are four distinct string instances involved: at compile time, “Abc” (once, even if referred to twice), “A” and “bc” go into the constant pool; the “Abc” instance obtained by joining “A” and “bc” is created at runtime and, of course, it is a different instance than the pooled “Abc”. That’s why the result of the == comparison is false.
Read more »
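A compact demo of the safe alternatives, and of intern() pulling the runtime-built string back to the pooled instance (a sketch; the class name is made up):

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        String s1 = "Abc";           // compile-time constant, goes into the string pool
        String capitalA = "A";       // non-final local: no constant folding below
        String s3 = capitalA + "bc"; // concatenated at runtime: a distinct instance

        System.out.println(s1 == s3);          // false: different instances
        System.out.println(s1.equals(s3));     // true: same character sequence
        System.out.println(s1 == s3.intern()); // true: intern() returns the pooled "Abc"
    }
}
```

One subtlety worth knowing: had capitalA been declared final with a constant initializer, the concatenation would have been folded at compile time and s1 == s3 would print true — exactly why == on strings is so treacherous.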
Simulating C# ref parameter in Java
Posted by Andrei Rinea on 17 November 2013 1 comment
As I was saying a few posts ago (Beginning Java for .NET Developers), on the 8th slide, there are no ref and no out method parameter modifiers.
These are rarely used in .NET anyway so you can’t really complain of their lack in Java. Furthermore, some of their legitimate uses, such as the static TryParse methods on value types are not
applicable in Java. Why? Because in Java the primitives (int, long, byte etc. – the equivalent of the basic value types in .NET) are not part of the type hierarchy and they have reference-type
wrappers (Integer etc.) which would solve the issue of returning the result in case of ‘TryParse’ style of parsing. How’s that? It’s like :
public static Integer tryParseInt(String intString) {
    try {
        return Integer.parseInt(intString);
    } catch (NumberFormatException e) {
        return null;
    }
}
No need for an ‘out’ parameter or ‘ref’. But! Let’s try to simulate ‘ref’ using a generic class written this way:
public class Ref<T> {
    private T value;

    public Ref() {
    }

    public Ref(T value) {
        this.value = value;
    }

    public T getValue() {
        return value;
    }

    public void setValue(T value) {
        this.value = value;
    }

    @Override
    public String toString() {
        return value == null ? null : value.toString();
    }
}
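A hypothetical TryParse-style method written against this wrapper then reads much like the C# pattern (tryParse and RefDemo are names made up for this sketch; Ref is repeated in condensed form so the example compiles on its own):

```java
// Ref<T> repeated here (condensed) so the sketch is self-contained
class Ref<T> {
    private T value;
    public Ref() { }
    public Ref(T value) { this.value = value; }
    public T getValue() { return value; }
    public void setValue(T value) { this.value = value; }
}

public class RefDemo {
    // Ref<Integer> plays the role of C#'s 'out int' parameter
    public static boolean tryParse(String s, Ref<Integer> result) {
        try {
            result.setValue(Integer.parseInt(s));
            return true;
        } catch (NumberFormatException e) {
            result.setValue(null);
            return false;
        }
    }

    public static void main(String[] args) {
        Ref<Integer> out = new Ref<>();
        System.out.println(tryParse("42", out));   // true
        System.out.println(out.getValue());        // 42
        System.out.println(tryParse("oops", out)); // false, out now holds null
    }
}
```

The caller-side shape is the familiar if (tryParse(s, out)) { use out } — the cost being an extra allocation per call that C#’s ref/out avoids.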
Preparing the development environment for Java – Windows and Ubuntu
Posted by Andrei Rinea on 16 November 2013 No comments
Unlike .NET development where everything is streamlined and well-aligned, starting from the OS, the framework, the tools, the IDE and all others being written by one company, in Java development
you’ll experience “freedom”(1) of choice. I’ll start with a gentle introduction which the experienced may very well skip to avoid getting bored.
In order to get started developing in Java we’ll need the following :
1. An OS. I’ll showcase Windows and Ubuntu (Linux).
2. A Java JRE. This is the most basic component required to run Java programs.
3. A Java JDK. The JDK or Java SDK (IBM calls it that way) typically includes the JRE plus a compiler, tools for running various types of Java programs, packaging tools, extra class libraries and
many more.
4. An IDE – Integrated Development Environment. This is typically an MDI (Multi-Document Interface) application which provides certain convenience features for the developer :
□ Syntax highlighting – keywords are displayed in a certain color, local variables in another etc.
□ Code completion – instead of having to type the whole keyword, or class identifier, a member and so on, an autocompletion prompt will appear (usually triggered by the user typing a dot or
other notable event) easing your typing and avoiding typos.
□ Interactive debugging – Allowing the user to control the execution of the program by inserting breakpoints, stepping over, into or out of code, watching expressions (variables, fields etc.),
modifying internal data or even (very few IDEs allow) stepping back.
□ Tracing – in case you need to inspect internal data but breaking into the debugger cancels the bug or triggers other unwanted condition or the data changes too fast, you can watch expressions
in a specially designed tool window without interrupting the flow of the debugged program
□ Source control integration – allows the user to send/push/checkin/etc changes to file(s) into a repository, obtaining the latest version, comparing versions, merging, branching and many more
□ Visual designers – For UI modules or elements most IDEs offer some kind of preview of the developed interface, showing the developer pretty much how things will look and behave without
needing to recompile, run and browsing to that particular interface
□ Packaging and deployment – Features for creating a package of the application, be it JAR, WAR, DLL, ZIP, APK etc. Furthermore many IDEs will help you push a site to a webhosting provider,
cloud service and so on.
□ .. and a whole lot more but let’s try to keep things shorter
Beginning Java for .NET Developers
Posted by Andrei Rinea on 15 November 2013 7 comments
I’ve always wanted to learn another language and platform, and being a long-time .NET developer, Java seemed the closest to my knowledge and one which would be easy to learn based on what I already knew.
I’ve put this off for various reasons over the last 3-4 years, chief among which was laziness.
Recently some colleagues moved from our project to another project that involves Java modules and since .NET is not a first-class citizen in my employer’s eyes I thought maybe it could serve me as a
kind of an ‘insurance’ – to learn Java.
I’ve obtained (..) some ebooks (Effective Java and Thinking in Java), downloaded JDK, a few IDEs (IntelliJ IDEA and NetBeans, no Eclipse for me, thanks) and started doing HelloWorld’s and stuff like
that. I noticed JavaFX (which is quite similar to WPF on which I currently work)
I’ve come across two nice comparisons of Java and .NET, written in a constructive manner (i.e. not “mine is better, na nanana”):
• Java for C# Programmers by Cristoph Nahr
Using these two articles I compiled (yes, that’s the original meaning of the word :P) a PowerPoint slideshow.
Then I thought there might be other (.NET developer) colleagues that might be interested in my research and gave an internal presentation based on the slideshow and expanding each item by talk.
I thought I should share it with everyone so here it is (download here) :
I’ve written about Java / C# differences before, and I might continue that series in the near future, with practical examples and counter-examples.
Are these three formulas correct?
If Force = $F$ ,
Acceleration = $a$ ,
and Mass = $m$
does that mean
Force = $m \times a$
Acceleration = $\frac{F}{m}$
Mass = $\frac{F}{a}$
or did I include a mistake in my work?
Answer 2
The correct mathematical representation of the last formula is
$$m = \frac{|\vec{F}|}{|\vec{a}|}$$
The last formula in particular obtains the value of the mass $m$, a scalar quantity, by dividing two vector quantities, the force $\vec{F}$ by the acceleration $\vec{a}$; the division is actually performed between two scalars, i.e., the magnitudes of the two vectors involved. It is a convention and is understood.
Needless to say, the acceleration is produced by the force, and therefore these two vector quantities have the same direction. Strictly speaking, the correct formula should look like $m = \frac{|\vec{F}|}{|\vec{a}|}$.
Similarly, to be precise one may also write the remaining two expressions as
Force $\vec{F} = m\,\vec{a}$ and Acceleration $\vec{a} = \frac{1}{m}\vec{F}$
If the function $y = x^2 + 2x - 4$ contains the point $(m, 2m)$ and $m > 0$, what is the value of $m$?
It's important not to get intimidated by all the variables. The question gives us a point on the graph, so let's plug it in: substituting $x = m$ and $y = 2m$ gives $2m = m^2 + 2m - 4$, which simplifies to $m^2 = 4$.
From here we can see that $m = \pm 2$. The question states that $m > 0$, so $m = 2$.
Effect of crystallographic texture on the forming limit in microforming of brass
Issue Manufacturing Rev.
Volume 6, 2019
Article Number 27
Number of page(s) 11
DOI https://doi.org/10.1051/mfreview/2019024
Published online 27 November 2019
© A. Mashalkar and V. Nandedkar, Published by EDP Sciences 2019
1 Introduction
Advances in electronics, mobile, energy and medical applications are driven by the development of new materials and their processing technologies. Due to miniaturization in many sectors, microforming has emerged as one of the most preferred processes for sheet metal components. The development of rational, science-based technology in microforming processes is primarily concerned with the need for a detailed study of material properties. One of the specific characteristics inherent in the majority of materials is anisotropy, which is based on crystallographic structure and texture
formation under high plastic strain [1]. However, the assumption of the material isotropy is still being used in the finite element analysis, though it does not actually meet the real deformation
condition. In the plasticity theory of isotropic materials, the transition from the elastic state to the plastic state is usually determined on the basis of the maximum shear stress criterion developed by Tresca or the maximum distortion strain energy criterion by von Mises [2]. Neither criterion considers the crystallographic texture of materials and consequently the anisotropy of their physical, mechanical and plastic properties.
It should be noted that the high accuracy of the recently proposed criteria is achieved by a large number of anisotropy coefficients (up to 18), the determination of which involves numerous
mechanical tests at different stress states [3]. Though the applied anisotropy coefficients characterize the anisotropy of plastic deformations, they do not take into account the reason for
anisotropy, i.e., the crystallographic texture [4]. Thus, the mentioned yield functions, on one hand, allow describing the plastic flow of anisotropic materials. On the other hand, they do not allow
carrying out technical analysis of micro thin foils considering the crystallographic texture [5]. As a result, it is difficult to determine the arrangement of crystallographic texture in terms of the
necessities of specific microforming processes [6,7]. However, there is a shortage of systematic research on the micro limiting dome height (LDH) test and the failure mechanism of ultra-thin foils
considering the crystal orientation. The effect of texture and grain structure on strain localisation and formability is investigated experimentally and numerically for two AlZnMg alloys [8] by
Lademo et al. The strongly textured materials exhibit inferior formability to the materials with weak or nearly random texture. The reason for this is attributed to the reduced work-hardening
capacity of the former materials and to a lesser degree to the plastic anisotropy. Grechnikov proposed a calculation procedure which considers the crystal lattice constants and the parameters of
crystallographic orientation of material. The main practical significance of this procedure is possibility to predict the effect of crystallographic texture of rolled sheets on limiting strains and
formability of material in different metal forming process [9]. Masoud Hajian studied 1010 steel sheet formability. The initial texture of sheet material was measured through X-ray diffraction
technique. Also, the stress-strain behaviour and FLD of the material were determined by performing simple tension and hemi-spherical punch tests, respectively. In order to predict the forming limits
of the material by simulation, a UMAT subroutine was developed and linked to the non-linear finite element software ABAQUS. In this subroutine, a rate sensitive crystal plasticity model along with
the power law hardening was implemented. Second-order derivative of sheet thickness variations with respect to time was used for necking criterion. The obtained FLD was compared with the experimental
measurements and good agreement was found between simulation and experiment, with acceptable errors of approximately 5–15% [10]. María A. Bertinetti studied the effect of the cube texture on forming-limit strains using a rate-dependent viscoplastic law in conjunction with the Marciniak-Kuczynski approach; the forming limit diagram and yield locus are determined for several spreads of grain orientations around the ideal {100} ⟨001⟩ component [11]. Fulop et al. carried out simulations of the mechanical response of ultra-thin ductile metal sheets. Rate-dependent single
crystal plasticity theory was used to implement the algorithms into a Finite Element code. A uniaxial tensile test and a three-point bending test are computationally evaluated. The effect of the
number of surface grains over the total number of grains is investigated numerically [12].
In this paper, a VUMAT subroutine is applied with the plasticity criteria and crystallographic orientation. Crystal lattice constants and crystallographic orientation parameters are included explicitly and implemented in the numerical analysis of the micro-LDH test, considering the orientation of the blanks. The study also investigates the influence of crystallographic orientation on formability in the microforming process, and the outcomes of the numerical approach are validated against the experimental results.
2 Limiting dome height test
Micro-formability of metal foils can be measured using the LDH test. Specimens for the tensile test were designed as per the ASTM E 345 standard, and the properties obtained from various tests are presented in Table 1. These specimens are designed such that strain paths can be achieved in both the drawing and the stretching zones using uniaxial strain, plane strain and biaxial strain. Figure 1 shows the FSA M100 universal testing machine of capacity 100 kN with the micro limiting dome height attachment. Experiments were performed on ultra-thin foil of 40 μm thickness of alpha brass material, and Figure 2 shows the micro-formed sample.
3 True strain measurement
Circle grids with a diameter of 1 mm each and a center-to-center distance of 2 mm were printed on the specimens to measure deformations. An optical microscope was used to record the minor and major axes of the ellipse. The equations for the percentage true major and minor strains are given in equations (1) and (2):

$$\text{major strain} = \frac{\text{Instantaneous Major Axis Length}}{\text{Original Circle Diameter}} \times 100 \tag{1}$$

$$\text{minor strain} = \frac{\text{Instantaneous Minor Axis Length}}{\text{Original Circle Diameter}} \times 100 \tag{2}$$

Uniaxial, plane and biaxial strains for the 40 μm brass foil were measured from experimentation, as presented in Table 2. Forming limit curves have been plotted using these values.
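As a quick worked example of these formulas (the measurement values here are hypothetical, chosen only to illustrate the arithmetic): suppose a 1 mm grid circle deforms into an ellipse with a measured major axis of 1.25 mm and a minor axis of 0.90 mm. Then

$$\text{major strain} = \frac{1.25}{1.0} \times 100 = 125\%, \qquad \text{minor strain} = \frac{0.90}{1.0} \times 100 = 90\%,$$

giving one measured point on the forming limit diagram for that grid circle.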
4 Finite element model of the micro LDH test using the VUMAT subroutine
Figure 3 shows the numerical model of the micro LDH test. The dimensions and geometry of the model correspond to the standard test for stretch forming. In all cases, the blank thickness is 40 μm.
The finite element model has been discretized using S3R and S4R shell element with reduced integration point over the thickness, in order to reduce the number of elements. S4R is the linear,
finite-membrane-strain, quadrilateral shell element and is robust in nature. S3R is the linear, finite-membrane-strain, triangular shell element applied to capture bending deformations or high strain
gradients because of the constant strain approximation in the elements. The combination of these two will ensure complete discretization of the model and capture all deformations with minimal error.
Contact pairs were prescribed between the tool and the blank, and the friction obeys Coulomb's law. The tool was assumed to be absolutely rigid. The model of anisotropic elasto-plastic material considering crystallographic texture is used to describe the blank material behavior of alpha brass. In order to assess the influence of crystallographic structure on formability, an anisotropic material was modeled, the texture of which is represented by crystallographic orientations. The characteristic deformation orientations of rolled brass are {110} ⟨112⟩, rotated cube {100} ⟨011⟩, cube {100} ⟨001⟩ and isotropy. The orientation parameters are given in Table 3 [13].
5 Continuum damage model
Damage is addressed as one of the output measures, and its evolution law is given as a general function of other state variables such as stress, plastic strain and temperature. From a general point of view, the damage variable should be described using a tensor formulation [14]. From the physical point of view, the damage variable indicates the progressive material deterioration due to non-reversible deformation processes, and can be expressed by the reduction of the nominal section area of a given reference volume element (RVE) as a result of micro-void formation and growth. Consider the damage due to the growth of micro-cavities, atomic bond breaking and discontinuous surfaces:

$D = \frac{A_D}{A}$, (3)

where $A$ is the overall area of the damaged body and $A_D$ the damaged area. $D$ is therefore a scalar taking values between 0 and 1. With $F$ the applied force, the effective stress is the stress carried by the intact area:

$\bar{\sigma} = \frac{F}{A - A_D}$, (4)

$\bar{\sigma} = \frac{F}{A\left(1 - \frac{A_D}{A}\right)}$, (5)

$\bar{\sigma} = \frac{\sigma}{1 - D}$, (6)

where $D$ is the overall damage variable and $\bar{\sigma}$ is the effective (or undamaged) stress computed in the current increment; $\bar{\sigma}$ is the stress that would exist in the material in the absence of damage. The material has lost its load-carrying capacity when $D = 1$. Replacing the true stress by the effective stress in the elastic law gives [15]

$\sigma = E_0 (1 - D)\,\varepsilon$. (7)
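As a numerical illustration of the damage relations above, the following Python sketch evaluates the damage variable and the effective stress of equations (3), (6) and (7); the function names and sample values are ours, not the paper's.

```python
# Sketch of the scalar damage relations (3), (6) and (7).
# Variable names and numbers are illustrative, not the paper's data.

def damage(area_damaged, area_total):
    """Eq. (3): D = A_D / A, a scalar in [0, 1]."""
    return area_damaged / area_total

def effective_stress(nominal_stress, D):
    """Eq. (6): the stress carried by the undamaged ligament."""
    if D >= 1.0:
        raise ValueError("D = 1 means total loss of load-carrying capacity")
    return nominal_stress / (1.0 - D)

def damaged_elastic_stress(E0, D, strain):
    """Eq. (7): nominal stress with a degraded modulus E0 * (1 - D)."""
    return E0 * (1.0 - D) * strain

D = damage(area_damaged=0.2, area_total=1.0)          # D = 0.2
sigma_eff = effective_stress(100.0, D)                # 125.0 on the intact area
sigma_nom = damaged_elastic_stress(110e3, D, 100.0 / 110e3)
```

The nominal stress drops by the factor (1 − D) while the effective stress on the remaining ligament rises by 1/(1 − D), which is exactly the asymmetry the model exploits.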
6 Damage evolution
Based on Swift's law [16], the hardening and damage evolution equations are

$\sigma_y = K(\varepsilon_0 + p)^n$, (8)

$\dot{D} = \frac{D_C}{\varepsilon_{PR} - \varepsilon_{PD}} \left[ \frac{2}{3}(1 + \nu) + 3(1 - 2\nu)\left(\frac{\sigma_H}{\sigma_{eq}}\right)^2 \right] (\varepsilon_0 + p)^{2n}\,\dot{p}$, (9)

where $\varepsilon_{PD}$ is the plastic strain below which damage evolution is negligible; $\varepsilon_{PR}$ the plastic strain at rupture; $D_C$ the damage parameter at rupture, called the critical value of the damage; $\sigma_H$ the hydrostatic stress; $\sigma_{eq}$ the equivalent stress; $\sigma_y$ the yield stress; $\varepsilon_0$ the strain value at yield; $K$ and $n$ the isotropic hardening coefficients; and $p$ the equivalent plastic strain. The anisotropic equivalent stress is

$\sigma_{eq} = \left[ \frac{\eta_{12} + \eta_{31}}{2}\,\sigma_{11}^2 + \frac{\eta_{12} + \eta_{23}}{2}\,\sigma_{22}^2 - \eta_{12}\,\sigma_{11}\sigma_{22} + (5 - 2\eta_{12})\,\sigma_{12}^2 \right]^{1/2}$. (10)
Here $\sigma_{ij}$ is the stress tensor [4]. The generalized anisotropy parameters $\eta_{ij}$ are defined by

$\eta_{ij} = 1 - \frac{15(A^l - 1)}{3 + 2A^l}\left(\Delta_i + \Delta_j - \Delta_k - \frac{1}{5}\right)$, (11)

where $A^l$ is the anisotropy parameter of the crystal lattice and the $\Delta_i$ are the orientation factors of the crystallographic orientation:

$\Delta_i = \frac{h_i^2 k_i^2 + k_i^2 l_i^2 + l_i^2 h_i^2}{(h_i^2 + k_i^2 + l_i^2)^2}$, (12)

where $h_i$, $k_i$, $l_i$ are the Miller indices defining the $i$-th direction in the crystal with respect to a coordinate system associated with the blank. The rolling direction is set along the x-axis.
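Equations (11) and (12) are simple enough to evaluate directly. The sketch below is our own illustration; the assignment of the directions i, j, k to the blank axes follows the paper's Table 3, which is not reproduced here.

```python
# Eq. (12): orientation factor of a crystal direction given by its
# Miller indices, and Eq. (11): generalized anisotropy parameter.
# Illustrative helper functions, not the paper's implementation.

def orientation_factor(h, k, l):
    """Delta for a direction (h k l); 0 for <100>, 1/3 for <111>."""
    h2, k2, l2 = h * h, k * k, l * l
    return (h2 * k2 + k2 * l2 + l2 * h2) / (h2 + k2 + l2) ** 2

def eta(A_l, d_i, d_j, d_k):
    """Eta_ij from the lattice anisotropy A^l and three Delta factors."""
    return 1.0 - 15.0 * (A_l - 1.0) / (3.0 + 2.0 * A_l) * (
        d_i + d_j - d_k - 1.0 / 5.0
    )

# Example: the <112> direction appearing in the brass texture {110}<112>
delta_112 = orientation_factor(1, 1, 2)   # = 9/36 = 0.25
```

Note that for an elastically isotropic lattice (A^l = 1) every η_ij reduces to 1, and equation (10) collapses toward a von Mises-type expression.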
The equivalent plastic strain for the 3D strain tensor is written as

$p = \sqrt{\frac{2}{3}\,\boldsymbol{\varepsilon}^p : \boldsymbol{\varepsilon}^p}$. (13)

By the strain equivalence principle, yielding occurs when

$\tilde{\sigma} = \frac{\sigma}{1 - D} = \sigma_y = K(\varepsilon_0 + p)^n$. (14)
After yielding, plastic deformation occurs due to the deviatoric stress, which consists of unequal principal stresses; the deviatoric stress is the difference between the principal stress and the hydrostatic stress. The linear elastic stress–strain relation can alternatively be written as

$\boldsymbol{\sigma} = \lambda\,\mathrm{trace}(\boldsymbol{\varepsilon})\,\mathbf{I} + 2\mu\,\boldsymbol{\varepsilon}$. (15)

Including damage, the stress equation becomes

$\boldsymbol{\sigma} = (1 - D)\left(\lambda\,\mathrm{trace}(\boldsymbol{\varepsilon})\,\mathbf{I} + 2\mu\,\boldsymbol{\varepsilon}\right)$. (16)
Initially the time step is zero, and at the first increment a trial stress tensor is evaluated from the linear stress–strain relation. If the calculated trial stress is below the yield stress, it is stored as the new stress. When the trial stress exceeds the yield point and enters the plastic region, the damage criterion is applied, and the trial stress and plastic strain are corrected for damage. The plastic strain and yield stress values are then updated and tested for damage initiation. Crack initiation is assumed when $D$ exceeds 0.9. $\Delta D$ is calculated when $p > \varepsilon_{PD}$, and the plastic multiplier $\Delta\gamma$ is calculated from the equation below; $D$ is held constant during one step, which is justified in the explicit calculation because the increments are very small:

$\Delta\gamma = \frac{\sqrt{\boldsymbol{\sigma}^{D,\mathrm{trial}}_{\mathrm{new}} : \boldsymbol{\sigma}^{D,\mathrm{trial}}_{\mathrm{new}}} - \sqrt{\frac{2}{3}}\,(1 - D_{\mathrm{old}})\,\sigma_{y,\mathrm{old}}}{2\mu\left(1 + \frac{h}{3\mu}\right)(1 - D_{\mathrm{old}})}$, (17)

where $h$ is the slope of the isotropic hardening law.
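The increment logic described above can be sketched in one dimension. This is our own simplified illustration, not the paper's VUMAT: it uses a uniaxial return mapping with illustrative material constants and omits the triaxiality factor of equation (9).

```python
# Minimal 1-D sketch of the explicit update: elastic predictor ->
# yield check -> damage-coupled plastic corrector.  All constants are
# illustrative, not the paper's calibrated data for alpha brass.

K, n, eps0 = 500.0, 0.3, 0.002      # Swift law: sigma_y = K*(eps0 + p)^n
E = 100e3                            # elastic modulus
eps_PD, eps_PR, D_C = 0.05, 0.6, 0.9 # damage thresholds / critical damage

def swift(p):
    return K * (eps0 + p) ** n

def update(stress_old, p_old, D_old, d_eps):
    """One explicit increment: returns (stress, p, D, cracked)."""
    trial = stress_old + (1.0 - D_old) * E * d_eps           # elastic predictor
    if abs(trial) <= (1.0 - D_old) * swift(p_old):           # still elastic
        return trial, p_old, D_old, False
    # plastic corrector: 1-D analogue of eq. (17); h = hardening slope
    h = K * n * (eps0 + p_old) ** (n - 1.0)
    d_gamma = (abs(trial) - (1.0 - D_old) * swift(p_old)) / (
        (1.0 - D_old) * (E + h)
    )
    p_new = p_old + d_gamma
    D_new = D_old
    if p_new > eps_PD:                                       # damage grows
        D_new = min(D_old + D_C / (eps_PR - eps_PD) * d_gamma, 1.0)
    sign = 1.0 if trial > 0 else -1.0
    stress = (1.0 - D_new) * swift(p_new) * sign             # back on the surface
    return stress, p_new, D_new, D_new > 0.9                 # crack when D > 0.9
```

A strain increment that keeps the trial stress below the damaged yield surface passes through unchanged; one that exceeds it returns a stress on the (damage-reduced) yield surface and, once p crosses ε_PD, an accumulated D.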
7 Results and discussion
The limiting dome height test for the 40 μm alpha brass foil was carried out numerically in ABAQUS using the VUMAT subroutine, and the results of maximum in-plane principal strain and minimum in-plane principal strain in the three cases (uniaxial strain, plane strain and biaxial strain) are plotted below for the different orientations: brass {110}⟨112⟩, rotated cube {100}⟨011⟩, cube {100}⟨001⟩ and isotropy.
7.1 LDH test for {110} ⟨112⟩
The results for LDH test for {110} ⟨112⟩ are plotted below in Figures 4–6.
From the graphs it is evident that fracture occurs at a maximum principal strain of 0.16 in uniaxial stretching, 0.24 in plane strain and 0.32 in biaxial stretching; the maximum in-plane strain is given by the average value of the elements nearest the red zone in Figures 4a, 5a and 6a. For the minor principal strain, necking occurs at −0.1 in uniaxial stretching, 0.015 in plane-strain stretching and 0.17 in biaxial stretching, given by the average of the elements nearest the brown zone in Figures 4b, 5b and 6b.
7.2 LDH test for {100}⟨011⟩
Results of the limiting dome height test for {100}⟨011⟩ on 40 μm alpha brass foil are plotted below in Figures 7–9. From the numerical outcomes it is evident that fracture occurs at a maximum principal strain of 0.12 in uniaxial stretching, 0.14 in plane strain and 0.35 in biaxial stretching; the maximum in-plane strain is given by the average value of the elements nearest the red zone in Figures 7a, 8a and 9a. For the minor principal strain, necking occurs at −0.06 in uniaxial stretching, −0.001 in plane-strain stretching and 0.1465 in biaxial stretching, given by the average of the elements nearest the brown zone in Figures 7b, 8b and 9b.
7.3 LDH test for {100}⟨001⟩
Results pertaining to the limiting dome height test for {100}⟨001⟩ are presented in Figures 10–12. It is evident that fracture occurs at a maximum principal strain of 0.23 in uniaxial stretching, 0.17 in plane strain and 0.32 in biaxial stretching; the maximum in-plane strain is given by the average value of the elements nearest the red zone in Figures 10a, 11a and 12a. For the minor principal strain, necking occurs at −0.13 in uniaxial stretching, 0.015 in plane-strain stretching and 0.1368 in biaxial stretching, given by the average of the elements nearest the brown zone in Figures 10b, 11b and 12b.
7.4 LDH test for isotropy
The limiting dome height test for isotropy on 40 μm alpha brass foil was carried out numerically in ABAQUS, and the results of maximum in-plane strain and minimum in-plane strain for the three strain-path cases (uniaxial strain, plane strain and biaxial strain) are plotted below in Figures 13–15. From the graphs it is evident that fracture occurs at a maximum principal strain of 0.1238 in uniaxial stretching, 0.1466 in plane strain and 0.3511 in biaxial stretching; the maximum in-plane strain is given by the average value of the elements nearest the red zone in Figures 13a, 14a and 15a. For the minor principal strain, necking occurs at −0.081 in uniaxial stretching, −0.01 in plane-strain stretching and 0.17 in biaxial stretching, given by the average of the elements nearest the brown zone in Figures 13b, 14b and 15b.
7.5 Forming limit curves (FLC)
For plotting the FLC, the maximum values of the major and minor strains are determined by measuring the principal strains at the failure state. Numerical simulation of the failure limit curves is carried out on the finite element analysis platform ABAQUS.

The maximum and minimum in-plane principal strains at the different strain paths for the thin foil (uniaxial, plane and biaxial) are presented in Table 4. For the construction of the forming limit diagram (FLD), the maximum in-plane principal strain is plotted on the y-axis and the minimum in-plane principal strain on the x-axis. The forming limit curves were plotted by joining the limit-strain coordinates, and this procedure was repeated for all four crystallographic orientations. The numerically plotted FLDs are shown in Figures 16–19. It is apparent from Figures 16–19 that the forming limit curve changes as the orientation of the foil changes. The area below the curve represents the safe zone for forming: the higher the curve on the major-strain axis, the higher the formability. Among the four cases investigated, the brass orientation has the highest formability, with rotated cube in second place; the cube and isotropy orientations have almost the same formability, their curves nearly coinciding and giving the same values of the fracture strain.
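As an illustration of how such a curve is used once the three limit points are known, the sketch below builds a piecewise-linear FLC from the {110}⟨112⟩ limit strains quoted in Section 7.1 and classifies a strain state as safe or unsafe. The interpolation scheme is our own choice, not the paper's.

```python
# FLC points (minor strain, major strain) for {110}<112>, taken from
# the values quoted in Section 7.1: uniaxial, plane strain, biaxial.
flc = [(-0.10, 0.16), (0.015, 0.24), (0.17, 0.32)]

def limit_major_strain(minor):
    """Piecewise-linear interpolation of the FLC at a given minor strain."""
    pts = sorted(flc)
    if minor <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if minor <= x1:
            return y0 + (y1 - y0) * (minor - x0) / (x1 - x0)
    return pts[-1][1]

def is_safe(minor, major):
    """A strain state below the curve lies in the safe forming zone."""
    return major < limit_major_strain(minor)
```

A process designer would evaluate `is_safe` for the strain path predicted by the forming simulation and reject tool geometries whose strain states cross the curve.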
7.6 Validation of numerical results with experimental test data
For the {110}⟨112⟩ orientation, the experimental and numerical failure limit curves are presented below in Figure 20. The maximum uniaxial strains from the two approaches agree within 20% error, the maximum plane strains are almost identical, and the maximum biaxial strains agree within 18% error. When both failure limit curves are plotted, as shown below, they are found to be of the same nature; the numerical results agree well with experiment. For the {100}⟨001⟩ orientation, the experimental and numerical failure limit curves are presented below in Figure 21. The maximum uniaxial strains agree within 10% error, the maximum plane strains within 20% error, and the maximum biaxial strains within 20% error. Again, the two failure limit curves are of the same nature and the numerical results agree well with experiment. The other orientations gave more than 20% error.
8 Conclusions
Based on the numerical simulations and experiments on the study of the forming limit of microforming of brass, the following conclusions can be drawn:
• The brass orientation and rotated cube orientation results match the experimental results within 20% error, indicating that the alpha brass C26000 material carries brass and rotated cube crystallographic texture.
• The simulation results relating to the isotropy do not match experimental results well, which means that the material obeys the anisotropic material behavior.
• The results show limiting values in the FLD diagram for the different orientations, which will help process and tool design taking into account the crystallographic orientation effects.
Is there a relativity-compatible thermodynamics?
I have been wondering about the fact that the laws of thermodynamics are not Lorentz invariant; they involve only the $T^{00}$ component. Tolman gave a formalism in his book: for example, the first law is replaced by the conservation of the energy-momentum tensor. But what is the physical meaning of entropy, heat and temperature in the setting of relativity? What shape should an invariant Stefan–Boltzmann law for radiation take? And what should the distribution function be?

I am not seeking "mathematical" answers. Wick rotation, if it is just a trick, cannot satisfy me. I hope that there is some deep reason for the relation between statistical mechanics and field theory. In curved spacetime, effects like particle production seem very strange to me, since they originate from the ambiguity of the vacuum state, which reflects defects of the formalism. An understanding of relativistic thermodynamics should help us understand high-energy astrophysical phenomena like GRBs and cosmic rays.
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user Xinyu Li
Basically I think the answer to the question is no. For instance, there are fundamental difficulties in defining temperature. See physicsforums.com/showthread.php?t=644884
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user Ben Crowell
There do exist books with both "relativity" and "thermodynamics" in the title, for example amzn.com/0486653838 (Relativity, Thermodynamics and Cosmology, Richard C Tolman, Dover)
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user DarenW
I believe that the trick to get a relativistic version of entropy, temperature, etc. is to define things in the proper reference frame, since you can extend them later to other frames while keeping Lorentz covariance. At least this is one way to arrive at relativistic hydrodynamics. About QFT and GR: I don't have any idea.
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user user23873
I know you don't want to hear it, but I think Wick rotation is really the right thing. The connection between the classical world and the quantum one is the path integral, through the action. This connects the partition function to the generating function and you're done, barring the Wick rotation. Now in curved space I don't have an answer, because even the definition of the Hilbert space (à la Hawking) is tricky. But then, QFT doesn't play well with GR, so we wouldn't expect the connection to extend that far anyway.
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user levitopher
The book by Tolman is very interesting. I am surprised to see that it was originally published in 1934.
Thermodynamics is compatible with special relativity. Tolman's book was concerned about General Relativity, and he discovered interesting effects you can do with heat engines in gravitational fields.
For example, he gave a theoretical heat engine which operates at the Carnot limit without being infinitesimally slow, by lifting and lowering things in a gravitational field to change temperature (if
I remember correctly). This was considered impossible before. I don't remember the exact setup, so I can't vouch for what he did, I read the book a long time ago.
I don't know a definitive answer to your (really good) question, but here is a quote from an old textbook I have by Christian Moller ("The Theory of Relativity"):
    Shortly after the advent of the relativity theory, Planck, Hasenöhrl, Einstein and others advanced separately a formulation of the thermodynamical laws in accordance with the special principle of relativity. This treatment was adopted unchanged in the first edition of this monograph. However, it was shown by Ott and independently by Arzeliès that the old formulation was not quite satisfactory, in particular because generalized forces were used instead of the true mechanical forces in the description of thermodynamical processes.

    The papers of Ott and Arzeliès gave rise to many controversial discussions in the literature, and at present there is no generally accepted description of relativistic thermodynamics.
So at least at the time that was written it was unresolved. I'd be interested if there are any more recent updates.
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user twistor59
There is the Jüttner Distribution, which would be the relativistic generalization of the Maxwell-Boltzmann Distribution: en.wikipedia.org/wiki/…
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user user23873
@user23873 Thanks, I'd never heard of that.
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user twistor59
@cduston Just for the record, I'm not anti-Wick rotation, that was the OP!
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user twistor59
Comment-bombed! Sorry I'll move the comment where it was supposed to go and work on deleting these...
This post imported from StackExchange Physics at 2014-03-22 17:10 (UCT), posted by SE-user levitopher
QUANTUM SUPREMACY in FINANCIAL MARKETS (FOREX)∗
All content in this area was uploaded by Ignacio Ozcariz on Apr 30, 2020.
Ignacio Ozcariz†
RQuantech - Geneva (Switzerland)
Criptosasun - Madrid (Spain)
(Dated: November 7, 2019)
The paper presents how to achieve market dominance by implementing the oldest plain-vanilla price arbitrage strategy using the newest quantum entanglement phenomena.
I. Introduction
II. Dynamics of FOREX (FX) Market
   A. High Frequency Trading (HFT)
   B. Direct Market Access (DMA)
   C. Strategies
      1. Latency Arbitrage
      2. Regulation (USA)
   D. Summary
III. Quantum Information and Quantum Computation
   A. Introduction (Ref: [1])
   B. Quantum State – Vector Representation (Ref: [2])
   C. Measurements (Ref: [2])
   D. Entanglement (Ref: [2])
IV. Methodology
   A. Set-up
   B. Price Communication
   C. Price Reception and Comparison with Current Prices
   D. Quantum Processes
   E. Summary
V. Implementation
   A. Structure and Technologies
      1. Tick-to-Trade
      2. Communications
      3. Quantum Supremacy
   B. State of the Art
   C. Disclaimer
VI. Conclusions
Acknowledgments
References
∗Applicable also to any financial market
†Also at Universidad Politecnica Madrid. Doctoral student.
I. INTRODUCTION

“It from bit” symbolizes the idea that every item
of the physical world has at bottom—a very deep
bottom, in most instances — an immaterial source and
explanation; that which we call reality arises in the last
analysis from the posing of yes-or-no questions and the
registering of equipment-evoked responses; in short, that
all things physical are information-theoretic in origin
and that this is a participatory universe.”
– John A. Wheeler, [3]
Thirty years on from this statement, it is time to update it and propose that “It from qubit” could be the last truth.
This paper will present a novel approach to the use of quantum techniques, which we nickname quantum supremacy, applied to the financial markets and particularly to the FOREX market.

We will demonstrate that, having two entangled devices (more on the definition of entanglement in Section III of the paper), it is possible to achieve full dominance in the FOREX market using a very simple plain-vanilla strategy: price arbitrage between two far-away markets.
II. DYNAMICS OF FOREX (FX) MARKET
Each day, billions of monetary units are exchanged on
the foreign exchange currency market. FX trading is used
to determine currency exchange rates across the world.
While people have been trading currencies for thousands
of years, modern technology has changed the way that
many banks and individual investors do business.
One of the key technologies in the evolution of the markets has been high frequency trading (HFT), and its impact on FX trading has been extraordinary.
A. High Frequency Trading (HFT)
According to the dictionary of financial terms of NASDAQ [4]:

High Frequency Trading (HFT)

Refers to computerized trading using proprietary algorithms. There are two types of high frequency trading.

Execution trading is when an order (often a large order) is executed via a computerized algorithm. The program is designed to get the best possible price. It may split the order into smaller pieces and execute at different times.

The second type of high frequency trading is not executing a set order but looking for small trading opportunities in the market.

It is estimated that 50 percent of stock trading volume in the U.S. is currently being driven by computer-backed high frequency trading. Also known as algo or algorithmic trading.
B. Direct Market Access (DMA)
Access to the markets was historically mediated by banks or direct brokers at the markets. Since the last decades of the past century this intermediation has given way to direct access to the markets, named DMA (Direct Market Access).

According to David Polen [5]:

There are two distinct market segments that use DMA: the human trader and the blackbox. I like to call this “Human DMA” and “High-frequency Trading (HFT) DMA”.

With Human DMA, the extreme is a buy-side that has traders manually executing trades and looking at market data over the Internet.

With HFT DMA, the extreme is a blackbox co-located at the exchange. One market segment is sub-millisecond and the other is more than tens of milliseconds, sometimes hundreds of milliseconds.
C. Strategies
Regarding the strategies used by the black boxes, in 2014 the Securities and Exchange Commission (SEC) appeared to say emphatically that two strategies, order anticipation and momentum ignition, are manipulative and illegal. The SEC writes [6]:
“Directional strategies generally involve establishing a long or short position in anticipation of a price move up or down. The Concept Release requested comment on two types of directional strategies – order anticipation and momentum ignition – that ‘may pose particular problems for long-term investors’ and ‘may present serious problems in today’s market structure.’

An order anticipation strategy seeks to ascertain the existence of large buyers or sellers in the marketplace and then trade ahead of those buyers or sellers in anticipation that their large orders will move market prices (up for large buyers and down for large sellers).

A momentum ignition strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move up or down.

As noted in the Concept Release [7], any market participant that manipulates the market has engaged in conduct that already is illegal. The Concept Release focused on the issue of whether additional regulatory tools were needed to address illegal practices, as well as any other practices associated with momentum ignition strategies.”
1. Latency-Arbitrage
Going down to the technicalities, Latency Arbitrage is
an important concept when discussing High Frequency
Trading, and refers to the fact that different people and
firms receive market data at different times.
These time differences, known as latencies, may be as
small as a billionth of second (a nanosecond), but in the
world of high speed trading, such differences can be cru-
cial. So crucial, in fact, that trading firms pay lots of
money to be located closer to exchanges’ servers– each
foot closer saves one nanosecond.
Latency arbitrage occurs when high frequency trading
algorithms make trades a split second before a competing
trader, and then resell the stock seconds later for a small
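The quoted rule of thumb is easy to verify. The following back-of-envelope Python check assumes a straight-line distance and, for fibre, a group velocity of about 2/3 c (both assumptions are ours, not figures from the paper).

```python
# Light-travel-time sanity checks for the latencies discussed above.

c = 299_792_458.0                  # speed of light in vacuum, m/s

# "Each foot closer saves one nanosecond": light covers one foot in ~1.02 ns.
foot_ns = 0.3048 / c * 1e9

# London - New York is roughly 7,000 km (the figure used later in the
# paper).  In vacuum that is ~23 ms one way; in fibre, where light
# travels at roughly 2/3 c, ~35 ms.  Real cable routes are longer still,
# which is consistent with the ~43 ms delay the paper assumes.
distance_m = 7_000_000.0
vacuum_ms = distance_m / c * 1e3
fibre_ms = distance_m / (2.0 / 3.0 * c) * 1e3
```

These numbers make clear why a classical round-trip confirmation, as discussed later in Section IV, erases any arbitrage window.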
2. Regulation (USA)
The previous Congress dealt with potential regulation of HFT (US Congress, HFT Overview of Recent Developments, 2016) [8]. We refer here very briefly to the Testimony on Regulatory Reforms to Improve Equity Market Structure by Stephen Luparello, Director, Division of Trading and Markets, U.S. Securities and Exchange Commission, before the Senate Committee on Banking, Housing and Urban Affairs, Subcommittee on Securities, Insurance and Investment. This declaration was made on March 3, 2016.

Luparello said SEC staff was developing a recommendation for the SEC to consider addressing the use of aggressive, destabilizing trading strategies that could exacerbate price volatility.

To this day, regulation remains a work in progress.
D. Summary
It can be extremely challenging to earn a profit when trading currency; in many cases, investors can lose significant sums of money. Exchange rates can be impacted by a variety of factors, including economic conditions, politics, weather, shipping conditions, piracy, technology advances and more.
Many HFT programs are installed in specialized data
centres located near an exchange (Co-location). Since the
speed of execution is limited by the speed of light, many
programmers and investors try to minimize the amount
of time it takes for an order to be executed. This is
possible by minimizing the amount of time it takes data
to travel between the operator premises and an exchange.
Most HFT programs are designed to profit from very
small price differences in a currency. In many cases, a
program will make a profit of only a few cents per trade.
However, millions of these types of trades every day can
yield a significant profit.
Our approach in this paper will be to use the oldest strategy, price arbitrage between two locations, profiting from quantum supremacy (the “spooky action at a distance”, in Einstein’s words).
III. QUANTUM INFORMATION AND QUANTUM COMPUTATION

A. Introduction (Ref: [1])
a. Quantum computing fundamentals. All computing systems rely on a fundamental ability to store and manipulate information. Current computers manipulate individual bits, which store information as binary 0 and 1 states. Quantum computers leverage quantum mechanical phenomena to manipulate information. To do this, they rely on quantum bits, or qubits.

b. Quantum properties. Three quantum mechanical properties — superposition, entanglement, and interference — are used in quantum computing to manipulate the state of a qubit.
1. Superposition
Superposition refers to a combination of states we
would ordinarily describe independently. To make
a classical analogy, if you play two musical notes at
once, what you will hear is a superposition of the
two notes.
2. Entanglement
Entanglement is a famously counter-intuitive quantum phenomenon describing behavior we never see in the classical world. Entangled particles behave together as a system in ways that cannot be explained using classical logic.
3. Interference
Finally, quantum states can undergo interference
due to a phenomenon known as phase. Quantum
interference can be understood similarly to wave
interference; when two waves are in phase, their
amplitudes add, and when they are out of phase,
their amplitudes cancel.
B. Quantum State – Vector representation
What then is a qubit?
Just as a classical bit has a state – either 0 or 1 – a qubit also has a state. Two possible states for a qubit are the states |0⟩ and |1⟩, which as you might guess correspond to the states 0 and 1 for a classical bit.

Notation like |·⟩ is called the Dirac notation, and we’ll be seeing it often, as it’s the standard notation for states in quantum mechanics. The difference between bits and qubits is that a qubit can be in a state other than |0⟩ or |1⟩. It is also possible to form linear combinations of states, often called superpositions:

|ψ⟩ = α|0⟩ + β|1⟩. (1)
FIG. 1. Qubit Bloch Sphere representation
The numbers α and β are complex numbers, although for many purposes not much is lost by thinking of them as real numbers. Put another way, the state of a qubit is a vector in a two-dimensional complex vector space. The special states |0⟩ and |1⟩ are known as computational basis states and form an orthonormal basis for this vector space.
We can examine a bit to determine whether it is in the state 0 or 1. For example, computers do this all the time when they retrieve the contents of their memory. Rather remarkably, we cannot examine a qubit to determine its quantum state, that is, the values of α and β. Instead, quantum mechanics tells us that we can only acquire much more restricted information about the quantum state. When we measure a qubit we get either the result 0, with probability |α|², or the result 1, with probability |β|². Naturally, |α|² + |β|² = 1, since the probabilities must sum to one. Geometrically, we can interpret this as the condition that the qubit’s state be normalized to length 1. Thus, in general a qubit’s state is a unit vector in a two-dimensional complex vector space.
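The measurement rule just stated can be illustrated with a short simulation (our own NumPy sketch, not part of the paper): repeated measurements of the same prepared state reproduce the |α|², |β|² statistics.

```python
# Simulating projective measurement of a single qubit alpha|0> + beta|1>.
import numpy as np

rng = np.random.default_rng(0)

# An equal superposition; any normalized (alpha, beta) would do.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)

def measure(alpha, shots, rng):
    """Return how many of `shots` measurements yield outcome 1."""
    p0 = abs(alpha) ** 2                       # P(outcome 0) = |alpha|^2
    return int(np.sum(rng.random(shots) >= p0))

ones = measure(alpha, 100_000, rng)
frequency = ones / 100_000                     # close to |beta|^2 = 0.5 here
```

Note that a single shot reveals almost nothing about α and β, exactly as the text says; only the statistics over many identically prepared qubits do.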
C. Measurements (Ref:[2])
Suppose we have two qubits. If these were two classical bits, then there would be four possible states: 00, 01, 10, and 11. Correspondingly, a two-qubit system has four computational basis states, denoted |00⟩, |01⟩, |10⟩ and |11⟩.

A pair of qubits can also exist in superpositions of these four states, so the quantum state of two qubits involves associating a complex coefficient – sometimes called an amplitude – with each computational basis state, such that the state vector describing the two qubits is

|ψ⟩ = α₀₀|00⟩ + α₀₁|01⟩ + α₁₀|10⟩ + α₁₁|11⟩. (2)

Similar to the case for a single qubit, the measurement result x (= 00, 01, 10 or 11) occurs with probability |αₓ|², with the state of the qubits after the measurement being |x⟩. The condition that probabilities sum to one is therefore expressed by the normalization condition Σ_{x∈{0,1}²} |αₓ|² = 1, where the notation x ∈ {0,1}² means ‘the set of strings of length two with each letter being either zero or one’. For a two-qubit system, we could measure just a subset of the qubits, say the first qubit, and you can probably guess how this works: measuring the first qubit alone gives 0 with probability |α₀₀|² + |α₀₁|², leaving the post-measurement state

|ψ′⟩ = (α₀₀|00⟩ + α₀₁|01⟩) / √(|α₀₀|² + |α₀₁|²). (3)

Note how the post-measurement state is re-normalized by the factor √(|α₀₀|² + |α₀₁|²) so that it still satisfies the normalization condition, just as we expect for a legitimate quantum state.
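A minimal numerical check of this partial-measurement rule, with an arbitrarily chosen normalized two-qubit state (our illustration, not the paper's):

```python
# Measuring only the first qubit and renormalizing, as in eq. (3).
import numpy as np

# Amplitudes for |00>, |01>, |10>, |11>; chosen so the state is normalized.
amps = np.array([0.5, 0.5, 0.5j, 0.5])

# Probability that the first qubit reads 0: |a00|^2 + |a01|^2.
p_first_0 = float(np.abs(amps[0]) ** 2 + np.abs(amps[1]) ** 2)

# Post-measurement state given outcome 0: keep |00>, |01>, renormalize.
post = np.array([amps[0], amps[1], 0.0, 0.0]) / np.sqrt(p_first_0)
```

The renormalization factor is exactly the square root of the outcome probability, so the conditioned state is again a unit vector.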
D. Entanglement (Ref:[2])
An important two-qubit state is the Bell state or EPR pair, β₀₀, which we’ll make use of in our methodology section:

β₀₀ = (|00⟩ + |11⟩) / √2.

This innocuous-looking state is responsible for many surprises in quantum computation and quantum information. The Bell state has the property that upon measuring the first qubit, one obtains two possible results: 0 with probability 1/2, leaving the post-measurement state |φ′⟩ = |00⟩, and 1 with probability 1/2, leaving |φ′⟩ = |11⟩.

As a result, a measurement of the second qubit always gives the same result as the measurement of the first qubit. That is, the measurement outcomes are correlated.

These correlations have been the subject of intense interest ever since a famous paper by Einstein, Podolsky and Rosen, in which they first pointed out the strange properties of states like the Bell state. EPR’s insights were taken up and greatly improved by John Bell, who proved an amazing result: the measurement correlations in the Bell state are stronger than could ever exist between classical systems. These results were the first intimation that quantum mechanics allows information processing beyond what is possible in the classical world.
This “amazing” correlation will be at the base of our
Quantum Supremacy over the FOREX market.
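The same-basis correlation described above is easy to simulate (our sketch, not the paper's). One caveat worth keeping in mind: classically sampling joint outcomes reproduces these correlations exactly, and the stronger-than-classical character Bell proved only appears for measurements in rotated bases, which this sketch does not model.

```python
# Sampling joint measurements of the Bell state (|00> + |11>)/sqrt(2).
import numpy as np

rng = np.random.default_rng(1)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |00>, |01>, |10>, |11>
probs = np.abs(bell) ** 2
probs = probs / probs.sum()                           # guard against rounding

outcomes = rng.choice(4, size=10_000, p=probs)        # joint outcome index
first, second = outcomes // 2, outcomes % 2           # split into the two bits
```

Each bit individually is a fair coin, yet the two bits never disagree: this is the correlation the paper's scheme leans on.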
IV. METHODOLOGY

A. Set-up
The market strategy will be implemented in the FOREX market over the London and New York markets, based on the pair EUR/USD. Let’s name the price of this pair P.

For the implementation of our strategy we’ll use the Alice and Bob characters invented by Ron Rivest, Adi Shamir, and Leonard Adleman in their 1978 paper “A method for obtaining digital signatures and public-key cryptosystems”. In our scenario we suppose that A and B (also known as Alice and Bob) are two market machines located (collocated) in London (Alice) and New York (Bob).

Initially, at the start of the market session, both machines share several entangled pairs of qubits (2N). Let’s name, for the first interchange, the qubits Aq1, Aq2 for Alice and Bq1, Bq2 for Bob. Aq1, Bq1 are a pair of Bell qubits β₀₀, and the same holds for the other pair Aq2, Bq2.

Let’s establish a time reference via the GPS clock shared by both parties, using a 20 Hz cycle (i.e., every 50 milliseconds), and name this time reference T₀.
B. Price Communication
At this time A and B each send the current price P of
their market to the other. Taking into account the 7,000-
kilometre distance, the time taken for the signal to arrive
at the other end is approximately 43 milliseconds (fibre
optics or radio transmission could make differences in the
final time). Let's name these prices Pl0 (London price)
and Pn0 (New York price) at T0.
The potential outcomes that A and B will have regarding
the prices at T0 are:
1. Pl0 = Pn0, state φ0 (London price equal to New York)
2. Pl0 > Pn0, state χ0 (London price greater than New York)
3. Pl0 < Pn0, state ψ0 (London price less than New York)
An arbitrage strategy can be established in the states
χ and ψ, but not in φ.
We’ll introduce a 5 milliseconds guard time to cover
potential delays of the signal due to any circumstances.
C. Price Reception and Comparison with Current Prices
At T0 + 48 msec, which we'll denote T1, A and B have
new prices of P at their respective markets. Let's name
these new prices Pl1 (London) and Pn1 (New York).
FIG. 2. Initial Price States
It's obvious that if the price in London is greater than
in NY (state χ0), a very simple strategy consists of buying
the pair in NY and selling it in London, which yields a
profit on the transaction. The same occurs if the London
price is lower than in NY (state ψ0). In this case the
strategy consists of selling the pair in NY and buying it
in London, which also yields a profit.
To be successful in this strategy we need:
1. The prices, by the time both parties have received
the information regarding the other partner's side,
have not changed to the opposite condition. So for
example, at time T1, Pl1 > Pn1.
2. Each party can communicate to the other side that
condition 1 is satisfied in its leg of the market.
The second condition allows both parties to go forward
in the buy/sell process, executing the transactions concurrently.
Logically, a further “classical” communication regard-
ing condition 2 would introduce a new delay of 43 millisec-
onds, allowing the markets to change their positions during
the delay and removing any potential benefit.
At this point we’ll introduce the Quantum Supremacy
mechanism based on the qubits shared previously be-
tween A and B.
D. Quantum Processes
At the moment T1 we'll have, depending on the potential
movements of the market, the following conditions:
Alice side:
If state χ0:
1. Condition GO: Pl1 ≥ Pl0. The market is in the
same or more favourable conditions than at T0.
2. Condition NO-GO: Pl1 < Pl0. The market is in
less favourable conditions than at T0.
If state ψ0:
1. Condition GO: Pl1 ≤ Pl0. The market is in the
same or more favourable conditions than at T0.
2. Condition NO-GO: Pl1 > Pl0. The market is in
less favourable conditions than at T0.
FIG. 3. Alice at T1
Bob side:
If state χ0:
1. Condition GO: Pn1 ≤ Pn0. The market is in the
same or more favourable conditions than at T0.
2. Condition NO-GO: Pn1 > Pn0. The market is in
less favourable conditions than at T0.
If state ψ0:
1. Condition GO: Pn1 ≥ Pn0. The market is in the
same or more favourable conditions than at T0.
2. Condition NO-GO: Pn1 < Pn0. The market is in
less favourable conditions than at T0.
FIG. 4. Bob at T1
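The paper gives no code, but the state classification of Section IV.B and the GO/NO-GO conditions above can be sketched in a few lines. Function and variable names below are my own, not the paper's:

```python
def classify(pl0, pn0):
    """Classify the initial price relation at T0: 'phi' (equal prices),
    'chi' (London greater), or 'psi' (London lesser)."""
    if pl0 == pn0:
        return "phi"
    return "chi" if pl0 > pn0 else "psi"

def alice_condition(state, pl0, pl1):
    """Alice's GO/NO-GO decision at T1, based on the London leg."""
    if state == "phi":
        return None  # no arbitrage opportunity, do nothing
    if state == "chi":                          # London more expensive
        return "GO" if pl1 >= pl0 else "NO-GO"
    return "GO" if pl1 <= pl0 else "NO-GO"      # state psi

def bob_condition(state, pn0, pn1):
    """Bob's GO/NO-GO decision at T1, based on the New York leg."""
    if state == "phi":
        return None
    if state == "chi":                          # New York cheaper
        return "GO" if pn1 <= pn0 else "NO-GO"
    return "GO" if pn1 >= pn0 else "NO-GO"      # state psi

# Example: London above New York at T0, and the spread persists at T1.
state = classify(1.1010, 1.1000)
a = alice_condition(state, 1.1010, 1.1012)
b = bob_condition(state, 1.1000, 1.0999)
trade = (a == "GO" and b == "GO")   # both legs confirm: settle the trade
```

Only when both legs report GO (the situation summarised in Section IV.E) is the transaction settled.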
The manipulations that Alice and Bob will make with
their qubits, depending on the previous conditions, are:
Alice makes a measurement on Aq1 as follows:
1. State φ0 – do nothing.
2. States χ0 and ψ0, with condition GO – measure
using M = |1⟩⟨1|.
3. States χ0 and ψ0, with condition NO-GO – measure
using M = |0⟩⟨0|.
FIG. 5. Alice Measurement
Bob makes a measurement on Bq2 as follows:
1. State φ0 – do nothing.
2. States χ0 and ψ0, with condition GO – measure
using M = |1⟩⟨1|.
3. States χ0 and ψ0, with condition NO-GO – measure
using M = |0⟩⟨0|.
FIG. 6. Bob Measurement
Taking into consideration the characteristics of the
measurement processes described in Section III, after Alice
and Bob measure their qubits Aq1 and Bq2 with the
operator |1⟩⟨1|, the qubits will finish in the state |1⟩, and
after measuring their qubits with the operator |0⟩⟨0|,
the qubits will finish in the state |0⟩.
Turning now to the entanglement properties also pre-
sented in Section III.D: instantaneously upon the above-
mentioned measurements, the entangled partner qubits of
Alice and Bob, Aq2 and Bq1, will pass to the states |1⟩
or |0⟩ if their pairs have finished in these states (remember
the properties of the Bell state β00).
A measurement by Alice or Bob on these qubits will
allow them to determine the GO or NO-GO condition of
their partner with probability 1, without the need for any
classical information exchange between them.
FIG. 7. Quantum Entanglement
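The perfect computational-basis correlation of the Bell state β00 that the scheme relies on can be checked with a minimal statevector simulation. This sketch uses plain NumPy (no quantum SDK assumed) and only illustrates the correlation itself, not the full signalling protocol:

```python
import numpy as np

# Bell state beta00 = (|00> + |11>) / sqrt(2), as a 4-component statevector
# in the basis ordering |00>, |01>, |10>, |11>.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

def measure_both(state, rng):
    """Measure both qubits of a two-qubit state in the computational basis;
    returns the pair of outcomes (first_qubit, second_qubit)."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)    # index into |00>,|01>,|10>,|11>
    return outcome >> 1, outcome & 1

rng = np.random.default_rng(0)
samples = [measure_both(bell, rng) for _ in range(10_000)]

# For beta00 the two outcomes always agree: measuring one qubit determines
# the other with probability 1, the correlation invoked in Section IV.D.
agree = all(a == b for a, b in samples)
# Each individual outcome is nevertheless random, close to 50/50.
ones = sum(a for a, _ in samples) / len(samples)
```

Note that while the outcomes always agree, each party locally sees a random bit, which is why standard quantum mechanics treats this correlation as distinct from a communication channel.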
E. Summary
Summarizing: at the end of the process, Alice and Bob
will each have the GO or NO-GO conditions correspond-
ing to both partners' positions.
So, two GO conditions will allow both parties to settle
the buy/sell transaction, making the corresponding
profit.
FIG. 8. Trade Summary
Taking into account that the first measurement with the
chosen operator is made exactly at T1 (∆T < 100 nanoseconds),
the second measurement would be made at T1 + 1 mi-
crosecond to ensure that the first measurement has
already been performed by the other party.
Under this condition the trade is settled by both parties at
T1 + 2 microseconds, using machines that perform tick-to-
trade within this 1-microsecond delay.
The trade is closed, and the next cycle will start at T0 + 50
milliseconds as the new T0.
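The timing constraints stated across Sections IV.B–IV.E can be collected into a single schedule. The offsets below are the ones quoted in the text; the dictionary layout is my own:

```python
MS = 1e-3   # milliseconds in seconds
US = 1e-6   # microseconds in seconds

# Protocol timeline within one 20 Hz cycle (offsets from T0, in seconds).
timeline = {
    "prices sent":           0.0,
    "prices received":       43 * MS,           # transatlantic propagation
    "T1 (incl. 5 ms guard)": 48 * MS,
    "first measurement":     48 * MS,           # made exactly at T1
    "second measurement":    48 * MS + 1 * US,
    "trade settled":         48 * MS + 2 * US,
    "next cycle (new T0)":   50 * MS,
}

# Slack left after settlement before the next 50 ms cycle begins.
slack = timeline["next cycle (new T0)"] - timeline["trade settled"]
```

Under these figures, the whole quantum decision and settlement fit with almost 2 ms to spare before the next cycle.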
To close this section, let us, using the dynamics of the
market as presented in Section II, make a quick and of
course not very serious calculation of the potential that
the above strategy could bring for this pair.
Considering that the GO conditions are established by
a minimum 1 pip (0.01%) price difference between the
positions, and that these GO conditions have a probability
of 10% (minimum), the 20 Hz cycle gives us 2 successful
trades each second. Over 10 hours of market trading,
the total number of successful trades is 72,000. With the
previous 1 pip profit per trade we obtain 7.2% profit per
day over the nominal amount of the transactions.
A. Structure and Technologies
The previously devised strategy could be implemented
under the structure of a Hedge Fund (HF), in the cat-
egory of extremely technological High Frequency Trading.
The basis, as presented, is the arbitrage of one pair
(EUR/USD) on the FOREX markets between New York
and London.
Technologies for the implementation will be at the fore-
front in these three domains:
1. Tick-to-Trade
Tick-to-Trade in the order of 1 microsecond (1 µsec).
The current (posted) fastest time is 98 nanoseconds. So,
we can be 10 times slower than the top performers of the
table regarding this technology.
2. Communications
Communications between London and New York us-
ing existing fibre-optic links or, in a further step, radio
waves over the Atlantic. Satellite connectivity with Low
Orbit Satellites could also be considered. In any case,
our 43 msec mark is easily achievable with any of these
technologies.
3. Quantum supremacy
Quantum supremacy in execution, through the entan-
glement mechanisms between the devices co-located with
the execution computers at the premises of the FOREX
markets in London and New York.
B. State of the Art
The markets of London and New York will be the
launching ones due to their maximum volume and relia-
bility. Also, initially, as described in the paper, we can op-
erate over one pair. There is no reason not to consider fur-
ther expansion of the operations to other venues and other
pairs in which the HF can operate.
It’s clear that the hurdle for the expansion, and for the
launch of course, will be the availability of the devices
that will support the quantum supremacy.
As of today, November 2019, the availability of qubits en-
tangled across different locations to perform quantum com-
munication is one of the hottest applications. The main
application of quantum communication is in the Quantum
Key Distribution (QKD) processes, which are currently a
matter of normal business. In July last year, Alberto
Boaron of the University of Geneva, Switzerland, and
colleagues reported distributing secret keys using QKD
over a record distance of more than 400 kilometres of op-
tical fiber, at 6.5 kilobits per second. In contrast, com-
mercially available systems, such as the one sold by the
Geneva-based company ID Quantique, provide QKD over
50 kilometres of fibre.
The critical issue for all the current developments is
the distance that the photons, as carriers of the quantum
information, can achieve.
C. Disclaimer
The companies of the Author, RQuanTech (Geneva)
and Criptosasun (Madrid), are working on a device that
would implement the entanglement of qubits at the dis-
tances required by the setup of the methodology dis-
cussed in Section IV and the implementation presented
in Section V.
Taking into account that this development is a work in
progress, we cannot at present assure that the proposed
milestones will be achieved.
It's clear that Quantum Supremacy will disrupt not only
financial markets but all the other markets in the world.
Regarding the current quarrel between IBM and
Google over the claim of the latter, Ethan Siegel has
written in Forbes magazine[9], “Progress in the world
of quantum computing is astounding, and despite the
claims of its detractors, systems with greater numbers of
qubits are undoubtedly on the horizon. When successful
quantum error-correction arrives (which will certainly re-
quire many more qubits and the necessity of addressing
and solving a number of other issues), we’ll be able to
extend the coherence timescale and perform even more
in-depth calculations.”
The Author wishes to thank the teams of RQuanTech
and Criptosasun for their full support with the corrections
and their marvelous ideas.
[1] IBM, (2019), https://www.ibm.com/quantum-
[2] M. A. Nielsen and I. L. Chuang, Quantum Computa-
tion and Quantum Information: 10th Anniversary Edition
(Cambridge University Press, 2010).
[3] J. A. Wheeler, Information, physics, quantum: the search
for links, in Complexity, entropy, and the physics of in-
formation (Westview Press, 1990).
[4] NASDAQ, (2019), https://www.nasdaq.com/glossary/h/high-
[5] D. Polen, Fix Global 2-12, 23 (2010).
[6] SEC, (2014), https://wallstreetonparade.com/2014/04/did-
[7] Staff of the SEC, Equity Market Structure Literature Review.
Part II: High Frequency Trading (U.S. Securities and Ex-
change Commission, 2014).
[8] R. S. Miller and G. Shorter, High Frequency Trading: Overview
of Recent Developments (Congressional Research Service).
[9] E. Siegel, Forbes Magazine 10-19, 27 (2019).
Basic Photometric Quantities
One of the central problems of optical measurements is the quantification of light sources and lighting conditions in numbers directly related to the perception of the human eye. This discipline is
called “photometry” and its significance leads to the use of separate physical quantities that differ from the respective radiometric quantities in only one respect: Whereas radiometric quantities
simply represent a total sum of radiation power at various wavelengths and do not account for the fact that the human eye’s sensitivity to optical radiation depends on wavelength, the photometric
quantities represent a weighted sum with the weighting factor being defined by either the photopic or scotopic spectral luminous efficiency function. Thus, the numerical value of photometric
quantities directly relates to the impression of “brightness”. Photometric quantities are distinguished from radiometric quantities by the index “v” for “visual”. Furthermore, photometric quantities
relating to scotopic vision are denoted by an additional prime, for example Φv’. The following explanations are given for the case of photopic vision, which describes the eye’s sensitivity under
daylight conditions and is therefore very significant for the vast majority of lighting situations (photopic vision takes place when the eye is adapted
to luminance levels of at least several candelas per square meter; scotopic vision takes place when the eye is adapted to luminance levels below
some hundredths of a candela per square meter. For mesopic vision, which is between the photopic and scotopic range, no spectral luminous efficiency function has been defined yet). However, the
respective relations for scotopic vision can be easily derived by replacing V(λ) with V'(λ) and K[m] (= 683 lm/W) with K'[m] (= 1700 lm/W).
Since the definition of photometric quantities closely follows the corresponding definitions of radiometric quantities, the corresponding equations hold true – the index “e” only has to be replaced
by the index “v”. Thus, not all relations are repeated. Instead, a more general formulation of all relevant relations is given in the Appendix.
Measuring instruments for these applications are often called photometers (or, in the case of illuminance measurement, luxmeters), as well as spectral photometers or spectral luxmeters, respectively.
The following sections give information on:
Luminous flux Φv
Luminous flux Φv is the basic photometric quantity and describes the total amount of electromagnetic radiation emitted by a source, spectrally weighted with the human eye’s spectral luminous
efficiency function V(λ). Luminous flux is the photometric counterpart to radiant power. The luminous flux is given in lumen (lm). At 555 nm where the human eye has its maximum sensitivity, a radiant
power of 1 W corresponds to a luminous flux of 683 lm. In other words, a monochromatic source emitting 1 W at 555 nm has a luminous flux of exactly 683 lm. The value of 683 lm/W is abbreviated as K[m]
(this value of K[m] = 683 lm/W applies to photopic vision; for scotopic vision, K[m]' = 1700 lm/W has to be used). However, a monochromatic light source emitting the same radiant power at 650 nm,
where the human eye is far less sensitive and V(λ) = 0.107, has a luminous flux of 0.107 × 683 lm = 73.1 lm. For a more detailed explanation of the conversion of radiometric to photometric
quantities, see paragraph Conversion between radiometric and photometric quantities.
Luminous intensity Iv
Luminous intensity Iv quantifies the luminous flux emitted by a source in a certain direction. It is therefore the photometric counterpart of the “radiant intensity (I[e])”, which is a radiometric
quantity. In detail, the source’s (differential) luminous flux dΦ[v] emitted in the direction of the (differential) solid angle element dΩ is given by
dΦ[v] = I[v] × dΩ
and thus
I[v] = dΦ[v] / dΩ
The luminous intensity is given in lumen per steradian (lm/sr). 1 lm/sr is referred to as “candela” (cd):
1 cd = 1 lm/sr
Luminance Lv
Luminance L[v] describes the measurable photometric brightness of a certain location on a reflecting or emitting surface when viewed from a certain direction. It describes the luminous flux emitted
or reflected from a certain location on an emitting or reflecting surface in a particular direction (the CIE definition of luminance is more general; this tutorial discusses the most relevant
application of luminance, describing the spatial emission characteristics of a source). In detail, the (differential) luminous flux dΦ[v] emitted by a (differential) surface element dA in
the direction of the (differential) solid angle element dΩ is given by
dΦ[v] = L[v] cos(Θ) × dA × dΩ
with Θ denoting the angle between the direction of the solid angle element dΩ and the normal of the emitting or reflecting surface element dA.
The unit of luminance is
1 lm m^-2 sr^-1 = 1 cd m^-2
Illuminance Ev
Illuminance E[v] describes the luminous flux per area impinging upon a certain location of an irradiated surface. In detail, the (differential) luminous flux dΦ[v] upon the (differential) surface
element dA is given by
dΦ[v] = E[v] × dA
Generally, the surface element can be oriented at any angle towards the direction of the beam. Similar to the respective relation for irradiance, the illuminance E[v] upon a surface with arbitrary
orientation is related to illuminance E[v, normal] upon a surface perpendicular to the beam by
E[v] = E[v, normal] × cos(ϑ)
with ϑ denoting the angle between the beam and the surface’s normal. The unit of illuminance is lux (lx).
1 lx = 1 lm m^-2
Luminous exitance Mv
Luminous exitance M[v] quantifies the luminous flux emitted or reflected from a certain location on a surface per area. In detail, the (differential) luminous flux dΦ[v] emitted or reflected by the
surface element dA is given by
dΦ[v] = M[v] × dA
The unit of luminous exitance is 1 lm m^-2, which is the same as the unit for illuminance. However, the abbreviation lux is not used for luminous exitance.
Conversion between radiometric and photometric quantities
Monochromatic radiation
In the case of monochromatic radiation at a certain wavelength λ, a radiometric quantity X[e] is simply transformed to its photometric counterpart X[v] by multiplication with the respective spectral
luminous efficiency V(λ) and by the factor K[m] = 683 lm/W. Thus,
X[v] = X[e] × V(λ) × 683 lm/W
with X denoting one of the quantities Φ, I, L, or E.
Example: An LED (light emitting diode) emits nearly monochromatic radiation at λ = 670 nm, where V(λ) = 0.032. Its radiant power amounts to 5 mW. Thus, its luminous flux equals
Φ[v] = Φ[e] × V(λ) × 683 lm/W = 0.109 lm = 109 mlm
Since V(λ) changes very rapidly in this spectral region (by a factor of 2 within a wavelength interval of 10 nm), LED light output should not be considered monochromatic in order to ensure accurate
results. However, using the relations for monochromatic sources still results in an approximate value for the LED’s luminous flux which might be sufficient in many cases.
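As a quick check of the arithmetic above, the monochromatic conversion can be written in a few lines of Python (the V(λ) values are the ones quoted in the text):

```python
KM = 683.0  # lm/W, maximum luminous efficacy for photopic vision

def luminous_flux_mono(radiant_power_w, v_lambda):
    """Luminous flux (lm) of a monochromatic source:
    Phi_v = Phi_e * V(lambda) * K_m."""
    return radiant_power_w * v_lambda * KM

# LED example from the text: 5 mW at 670 nm, where V(670 nm) = 0.032.
phi_led = luminous_flux_mono(5e-3, 0.032)   # about 0.109 lm = 109 mlm

# 1 W at 555 nm, where V = 1, gives exactly 683 lm.
phi_peak = luminous_flux_mono(1.0, 1.0)
```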
Polychromatic radiation
If a source emits polychromatic light described by the spectral radiant power Φ[λ](λ), its luminous flux can be calculated by spectral weighting of Φ[λ](λ) with the human eye’s spectral luminous
efficiency function V(λ), integration over wavelength and multiplication with K[m] = 683 lm/W, so
Φ[v] = K[m] × ∫ Φ[λ](λ) × V(λ) dλ
In general, a photometric quantity X[v] is calculated from its spectral radiometric counterpart X[λ](λ) through the relation
X[v] = K[m] × ∫ X[λ](λ) × V(λ) dλ
with X denoting one of the quantities Φ, I, L, or E.
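The polychromatic integral can be evaluated numerically. The sketch below uses a rough Gaussian approximation of V(λ) — an assumption for illustration only; for accurate work the tabulated CIE values must be used:

```python
import numpy as np

KM = 683.0  # lm/W, photopic maximum luminous efficacy

def v_lambda(lam_nm):
    """Rough Gaussian approximation of the photopic V(lambda) curve,
    peaking at 1.0 for 555 nm (an assumption, not the CIE table)."""
    return np.exp(-0.5 * ((lam_nm - 555.0) / 42.0) ** 2)

def trapezoid(y, x):
    """Trapezoidal-rule integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def luminous_flux(lam_nm, spectral_power):
    """Phi_v = K_m * integral of Phi_lambda(lambda) * V(lambda) d(lambda),
    with spectral_power in W/nm sampled on the grid lam_nm."""
    return KM * trapezoid(spectral_power * v_lambda(lam_nm), lam_nm)

# A narrow 1 W source centred at 555 nm approaches the 683 lm limit.
lam = np.linspace(500.0, 610.0, 2001)
narrow = np.exp(-0.5 * ((lam - 555.0) / 1.0) ** 2)
narrow /= trapezoid(narrow, lam)   # normalise total power to 1 W
flux = luminous_flux(lam, narrow)  # slightly below 683 lm
```

As the spectrum broadens, the weighted integral falls further below 683 lm/W, which is exactly the spectral-weighting effect described above.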
Archive of events from 2014
An archive of events from the year 2014.
• Research Lindström Lecture: Yiannis Moschovakis (Emeritus Professor of Mathematics at University of Southern California, Los Angeles)
The logical form and meaning of attitudinal sentences
The language L over a fixed vocabulary K is an (applied) typed λ-calculus with additional constructs for acyclic recursion and attitudinal application, an extension of Montague’s Language of
Intensional Logic LIL as formulated by Daniel Gallin. It is denotationally interpretable in the classical typed λ-calculus over K, but intensionally richer: in particular, it can define the
referential intension of each term A, an abstract algorithm which computes the denotation of A and provides a plausible explication of its meaning.
The key mathematical construction of L is an effective reduction calculus which compiles each term A into an (essentially) unique canonical form cf(A), a denotational term which explicates the
logical form of A and from which the referential intension of A can be read off. The central open problem about L (over a finite, interpreted vocabulary) is the decidability of global synonymy -
and it is a problem about the classical, interpreted typed λ-calculus.
• Research Lindström Lecture: Joan Rand Moschovakis (Emerita Professor of Mathematics at Occidental College, Los Angeles, California)
Now Under Construction: Intuitionistic Reverse Analysis
Each variety of reverse mathematics attempts to determine a minimal axiomatic basis for proving a particular mathematical theorem. Classical reverse analysis asks which set existence axioms are
needed to prove particular theorems of classical second-order number theory. Intuitionistic reverse analysis asks which intuitionistically accepted properties of numbers and functions suffice to
prove particular theorems of intuitionistic analysis using intuitionistic logic; it may also consider the relative strength of classical principles which do not contradict intuitionistic
S. Simpson showed that many theorems of classical analysis are equivalent, over a weak system PRA of primitive recursive arithmetic, to one of the following set existence principles: recursive
comprehension, arithmetical comprehension, weak König's Lemma, arithmetical transfinite recursion, Π^1_1 comprehension. Intermediate principles are also studied. Intuitionistic analysis depends on
function existence principles: countable and dependent choice, fan and bar theorems, continuous choice.
The fan and bar theorems have important mathematical equivalents. W. Veldman, building on a proof by T. Coquand, recently showed that over intuitionistic two-sorted recursive arithmetic BIM the
principle of open induction is strictly intermediate between the fan and bar theorems, and is equivalent to intuitionistic versions of a number of classical theorems. Intuitionistic logic
separates classically equivalent versions of countable choice, and it matters how decidability is interpreted. R. Solovay recently showed that Markov’s Principle is surprisingly strong in the
presence of the bar theorem. The picture gradually becomes clear.
• Public Lindström Lecture: Yiannis Moschovakis (Emeritus Professor of Mathematics at University of Southern California, Los Angeles)
Frege’s sense and denotation as algorithm and value
In his classical 1892 article On sense and denotation, Frege associates with each declarative sentence its denotation (truth value) and also its sense (meaning) “wherein the mode of presentation
[of the denotation] is contained”. For example, 1+1=2 and there are infinitely many prime numbers are both true, but they mean different things - they are not synonymous. Frege [1892] has an
extensive discussion of senses and their properties, including, for example, the claim that the same sense has different expressions in different languages or even in the same language; but he
does not say what senses are or give an axiomatization of their theory which might make it possible to prove these claims. This has led to a large literature by philosophers of language and
logicians on the subject, which is still today an active research topic. A plausible approach that has been discussed by many (including Michael Dummett) is suggested by the “wherein” quote
above: in slogan form, the sense of a sentence is an algorithm which computes its denotation. Coupled with a rigorous definition of (abstract, possibly infinitary) algorithms, this leads to a rich
theory of meaning and synonymy for suitably formalized fragments of natural language, including Richard Montague’s Language of Intensional Logic. My aim in this talk is to discuss with as few
technicalities as possible how this program can be carried out and what it contributes to our understanding of some classical puzzles in the philosophy of language.
• Public Lindström Lecture: Joan Rand Moschovakis (Emerita Professor of Mathematics at Occidental College, Los Angeles, California)
Intuitionistic Analysis, Forward and Backward
In the early 20th century the Dutch mathematician L. E. J. Brouwer questioned the universal applicability of the Aristotelian law of excluded middle and proposed basing mathematical analysis on
informal intuitionistic logic, with natural numbers and choice sequences (infinitely proceeding sequences of freely chosen natural numbers) as objects. For Brouwer, numbers and choice sequences
were mental constructions which by their nature satisfied mathematical induction, countable and dependent choice, bar induction, and a continuity principle contradicting classical logic. Half a
century later, S. C. Kleene and R. E. Vesley developed a significant part of Brouwer’s intuitionistic analysis in a formal system FIM. Kleene’s function-realizability interpretation proved FIM
consistent relative to its classically correct subsystem B, facilitating a detailed comparison of intuitionistic with classical analysis C. Continuing Brouwer’s work into the 21st century, Wim
Veldman and others are now developing an intuitionistic reverse mathematics parallel to, but diverging significantly from, both classical reverse mathematics as established by H. Friedman and S.
Simpson, and constructive reverse mathematics in the style of E. Bishop. This lecture provides the basics of intuitionistic analysis and a sketch of its reverse development.
• Journées sur les Arithmétiques Faibles 33
The 33rd meeting of JAF (Journées sur les Arithmétiques Faibles) was held in Gothenburg, Sweden, June 16-18 2014.
How does a steel ship float if it is heavier than water?
Archimedes’ principle states that a body immersed or partially immersed in water loses an amount of weight that is equal to the weight of the fluid it displaces.
Whether or not an object can float depends on the density (weight ÷ volume) of both the object and the water.
If the density of the object is less than that of water, the object will sink into the water only to the point where the weight of displaced water equals the weight of the object. A one foot wooden
cube, for example, might weigh 50 pounds.
In water, the submerged part of the cube will displace a volume of water weighing 50 pounds. Because the cube is less dense than water, it needs an equal weight but a smaller volume of water to
support it. The force of the displaced water pressing in on all sides is called buoyancy.
If this principle holds, how can a steel ship possibly float, when steel has a density approximately 8 times that of water? In fact, the hull of the ship is filled with air, and air’s density is 816
times less than that of water.
If the overall size and weight of the ship are considered, then, its density is actually less than that of water, and the ship will float.
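The floating condition can be checked numerically. The sketch below uses the 50-pound, one-cubic-foot cube from the text and assumes 62.4 lb/ft³ for fresh water and roughly 490 lb/ft³ for solid steel:

```python
WATER_DENSITY = 62.4  # lb per cubic foot, fresh water (assumed value)

def submerged_fraction(weight_lb, volume_ft3):
    """Fraction of an object's volume below the waterline, by Archimedes'
    principle; a value of 1 or more means the object sinks."""
    object_density = weight_lb / volume_ft3
    return object_density / WATER_DENSITY

# The 50 lb, 1 ft^3 wooden cube floats with ~80% of its volume submerged.
cube = submerged_fraction(50.0, 1.0)

# A solid steel block (~490 lb/ft^3) is denser than water and sinks...
steel = submerged_fraction(490.0, 1.0)

# ...but a hull that is, say, 10% steel and 90% enclosed air has a much
# lower overall density, so the ship floats.
hull = submerged_fraction(490.0 * 0.1, 1.0)
```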
Accounting for meteorological biases in simulated plumes using smarter metrics
Articles | Volume 16, issue 6
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.
In the next few years, numerous satellites with high-resolution instruments dedicated to the imaging of atmospheric gaseous compounds will be launched, to finely monitor emissions of greenhouse gases
and pollutants. Processing the resulting images of plumes from cities and industrial plants to infer the emissions of these sources can be challenging. In particular traditional atmospheric inversion
techniques, relying on objective comparisons to simulations with atmospheric chemistry transport models, may poorly fit the observed plume due to modelling errors rather than due to uncertainties in
the emissions.
The present article discusses how these images can be adequately compared to simulated concentrations to limit the weight of modelling errors due to the meteorology used to analyse the images. For
such comparisons, the usual pixel-wise ℒ[2] norm may not be suitable, since it does not linearly penalise a displacement between two identical plumes. By definition, such a metric considers a
displacement as an accumulation of significant local amplitude discrepancies. This is the so-called double penalty issue. To avoid this issue, we propose three solutions: (i) compensate for position
error, due to a displacement, before the local comparison; (ii) use non-local metrics of density distribution comparison; and (iii) use a combination of the first two solutions.
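The double penalty issue can be illustrated on one-dimensional Gaussian "plumes" (a sketch, not the metrics actually used in the paper): the pixel-wise L2 distance between two identical but shifted plumes saturates once they no longer overlap, whereas a non-local metric such as the 1-D Wasserstein-1 distance keeps growing linearly with the displacement:

```python
import numpy as np

x = np.linspace(0.0, 100.0, 2001)
dx = x[1] - x[0]

def plume(centre, width=3.0):
    """Normalised 1-D Gaussian 'plume' on the grid x."""
    p = np.exp(-0.5 * ((x - centre) / width) ** 2)
    return p / (p.sum() * dx)

def l2(p, q):
    """Pixel-wise L2 distance between two concentration fields."""
    return float(np.sqrt(np.sum((p - q) ** 2) * dx))

def wasserstein1(p, q):
    """1-D Wasserstein-1 distance: integral of |CDF_p - CDF_q|."""
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * dx * dx)

ref = plume(30.0)
# For shifts well beyond the plume width, the L2 distance is essentially
# constant (the plumes are disjoint), while W1 equals the shift itself.
for shift in (5.0, 20.0, 40.0):
    moved = plume(30.0 + shift)
    print(shift, l2(ref, moved), wasserstein1(ref, moved))
```

This is the sense in which the L2 norm treats a pure displacement as an accumulation of large local amplitude discrepancies, while a density-distribution metric penalises the displacement itself.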
All the metrics are evaluated using first a catalogue of analytical plumes and then more realistic plumes simulated with a mesoscale Eulerian atmospheric transport model, with an emphasis on the
sensitivity of the metrics to position error and the concentration values within the plumes. As expected, the metrics with the upstream correction are found to be less sensitive to position error in
both analytical and realistic conditions. Furthermore, in realistic cases, we evaluate the weight of changes in the norm and the direction of the four-dimensional wind fields in our metric values.
This comparison highlights the link between differences in the synoptic-scale wind direction and position error. Hence the contribution of the latter to our new metrics is reduced, thus limiting
misinterpretation. Furthermore, the new metrics also avoid the double penalty issue.
Received: 03 Aug 2022 – Discussion started: 04 Nov 2022 – Revised: 13 Feb 2023 – Accepted: 19 Feb 2023 – Published: 31 Mar 2023
Near-real-time monitoring of atmospheric gaseous compounds at the scale of power plants, cities, regions, and countries would allow decision-makers to track the effectiveness of emission reduction
policies in the context of climate change mitigation (Horowitz, 2016) or other voluntary emission reduction efforts. Inventories of the emitted atmospheric gaseous compounds are diverse in scale (
Janssens-Maenhout et al., 2019; Kuenen et al., 2014) and methodology. The elaboration of comprehensive inventories generally combines various approaches based on a complex mixture of measurement
techniques, database elaboration, and numerical modelling. Despite the use of quality assurance and control verifications (Calvo Buendia et al., 2019a, b), the emissions fluxes can bear large
uncertainties, depending on the species, on the countries or on the spatial scale (Cai et al., 2019; Hergoualc'h et al., 2021; Meinshausen et al., 2009; Pison et al., 2018; Solazzo et al., 2021).
Furthermore, the delay between the emissions and the release of the corresponding inventory could be important due to a large amount of data to gather and aggregate. Even when the inventories are
known to be accurate, they currently do not fulfil the need for real-time monitoring of emissions at a regional scale. By observing from space the plumes of gases downwind of large cities and
industrial plants, and atmospheric signals at a scale of a few to several hundreds of kilometres, the new generation of high-resolution satellite imagery may help address this need (Veefkind et al.,
2012; Broquet et al., 2018). For instance, the future Copernicus Anthropogenic Carbon Dioxide Monitoring (CO2M) mission will provide images of CO[2] concentrations at a resolution of almost 2 km^2,
which will enable the observation of urban-scale pollutant plumes (Brunner et al., 2019; Kuhlmann et al., 2019, 2020). These new images can be directly used through fast methods to quantify the mean
emissions of sources (Varon et al., 2018, 2020; Hakkarainen et al., 2021). These fast methods only require images to estimate the emissions. They do so either by using a simplified chemical transport
model (CTM) or by averaging the emissions of a given source.
Here we focus on the use of such images to update the emissions sources on a smaller timescale. This can be done using an inverse method relying on comparisons between the images and the predictions
of a CTM. A better match between the observed concentration fields and the simulated ones will result from a more accurate source. However, the CTM prediction is constrained by the meteorological
conditions used. It takes as inputs temperature, pressure, winds, cloud cover fields, etc. Usually, these atmospheric fields are provided by predictions previously obtained with mesoscale numerical
weather prediction models (Lian et al., 2018). The estimated atmospheric fields come with uncertainties, which in turn yield uncertainties in the simulated concentration fields, for instance, the
location or the main direction of the plume. Within the retrieval algorithm, the concentration fields derived from satellites and CTM models are usually compared pixel-wise. However, pixel-wise
comparison cannot easily remove the relative weight of the meteorological uncertainties within the comparison between observation and simulation. This results in estimated increments applied to the
emissions inventories that are biased by the approximate meteorology used in the simulations. This issue also arises in other observation-versus-simulation comparisons (Dumont Le Brazidec et al., 2021; Farchi et al., 2016; Keil and Craig, 2007). Assuming that the temporal variability (e.g. annual cycle, seasonal cycle, diurnal cycle) of the emissions is well-known and that the CTM is
perfect, the displacement between the observed and simulated plumes is driven by the meteorological conditions. Such a displacement yields a position error in the inversion. Our main goal is thus to define a metric for the comparison that levels down the position error, so as to reduce the weight of the meteorological uncertainties within the inversion.
A better account of position error for observation versus simulation comparison of coherent features is a subject of active research (e.g. Ebert and McBride, 2000; Ebert, 2008; Gilleland et al., 2009
; Gilleland, 2021). These authors developed several metrics and skill scores that are more sensitive to pathological situations where usual metrics provide less information, especially when there is
a position error between the feature they observed and the one forecasted. To do so, they build indicators by splitting the sources of discrepancies and by doing comparisons on deformed meshes (
Hoffman et al., 1995; Hoffman and Grassotti, 1996; Amodei et al., 2009; Marzban and Sandgathe, 2010; Gilleland et al., 2010). We will follow the same methodology, by splitting the different sources
of discrepancies, but the position errors will include errors due to a translation and a rotation. We will consider a specific class of deformations to free the comparison from position errors. We
will consider either (i) an isometric transformation or (ii) a transport map resulting from optimal transport, both differentiable with respect to the compared plumes. Optimal transport metrics were
already used for radionuclide plumes (Farchi et al., 2016), but there were computation limitations. To allow a more systematic comparison between the metrics, we use the Kantorovich standpoint on
optimal transport.
The objective of this paper is to develop a simple and efficient metric for urban-scale plume images which can level down the difference due to the meteorology while fitting into an inverse framework
(following Feyeux et al., 2018; Tamang et al., 2022). Even though the methods could be used for other gaseous compounds, reactive atmospheric gaseous compounds have a more complex transport due to
chemistry. For the sake of simplicity, we will consider CO[2] since it is a passive tracer. Several metric candidates are introduced and compared. Starting from the baseline local ℒ[2] norm, a new metric with
an upstream non-local correction of position errors is described in Sect. 2. In Sect. 3, going further away from the local comparison, we use the optimal transport theory to define the Wasserstein
distance between two plumes and then to build a new metric freed from position errors. The different metrics are then evaluated and compared on a database of analytical two-dimensional Gaussian puff
cases in Sect. 4. The metrics are then compared on a realistic database of CO[2] plumes from a German power plant in Sect. 5. For both databases, the images and the simulations are computed using the
same model, which allows us to monitor the discrepancies seen by the metrics. In Sect. 6 we describe the dependence of the four metrics on meteorology, before concluding in Sect. 7.
2 Local metrics and illustration of the double penalty issue using analytical plumes
In this section, we start by introducing the notation in Sect. 2.1 and then the Gaussian puff model used to simulate the plumes in the analytical experiments in Sect. 2.2. Furthermore, we assume that
the plumes are already detected and separated from the background noise and instrumental noise. These steps bring challenges that are outside the scope of this article. The ℒ[2] norm is then defined
in Sect. 2.3, with an emphasis on the double penalty issue. To deal with the double penalty issue associated with the family of pixel-wise metrics such as the ℒ[2] norm, a second metric is proposed
in Sect. 2.4.
2.1 Discrete and continuous representation of an image
In the present article, we focus on two-dimensional images of the enhancement of the total column of CO[2] concentration or of the ground-level concentration. These images are given with a
discretisation of N pixels. An image can hence be represented by a vector $\mathbf{x} = (x_1, \dots, x_N)^\top \in \mathbb{R}^N$.
It is also possible to obtain a continuous representation of the image using interpolation (e.g. bilinear). In this case, the image is represented by a two-dimensional field $X:\mathbb{E}\to\mathbb{R}$ defined on a finite domain $\mathbb{E}\subset\mathbb{R}^2$. Without loss of generality, we can assume that $\mathbb{E} = [0,1]^2$. Furthermore, the two-dimensional field X can be extended to $\mathbb{R}^2$ by using zero padding outside the original domain 𝔼. If needed, a smooth transition from X to 0 can be included to avoid a sharp gradient at the boundaries of the original domain 𝔼.
For each metric definition, we will use either the discrete or the continuous representation of the images, but this will be explicitly mentioned.
2.2 Analytical plumes
Our Gaussian puff model is a simplified model of a concentration field (e.g. concentration at a given altitude or total column concentration in specific conditions). It has the advantage of yielding
analytical expressions for the Wasserstein metrics (see Sect. 3). It is also a relevant case in transport modelling: the transport of a three-dimensional Gaussian puff is a simplified model to
estimate the transport of non-reactive pollutants (Korsakissok and Mallet, 2009; Seigneur, 2019). A set of Gaussian puffs is used extensively in the following sections to illustrate and evaluate the
metric candidates.
In the Gaussian puff model, we assume that X is proportional to the probability density function (pdf) of the normal distribution 𝒩(μ,Σ):
$$X(\mathbf{x}) \propto \frac{1}{\sqrt{(2\pi)^2\,|\boldsymbol{\Sigma}|}} \exp\left[-\frac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})\right], \tag{1}$$
where μ and Σ are the mean and the covariance matrix, respectively. The operator $|.|$ is the determinant for square matrices. Also, note that since the covariance matrix Σ is positive definite, it
can be factored as follows:
$$\boldsymbol{\Sigma} = \mathbf{R}(\theta)\,\boldsymbol{\Delta}\,\mathbf{R}(\theta)^\top, \tag{2}$$
where R(θ) is the rotation matrix of angle θ, the angle between the principal axis of the Gaussian and the x axis, and where Δ is a diagonal matrix containing the variances along the two principal axes of the Gaussian. Two examples of puffs based on the Gaussian puff model are provided in Fig. 1b and c.
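The Gaussian puff of Eqs. (1)-(2) is straightforward to discretise on the unit square. A minimal sketch (the function name, the pixel-centre convention, and the 32×32 default grid are our own choices, the latter matching the database resolution used later in Sect. 4.1):

```python
import numpy as np

def gaussian_puff(mu, sigma1, sigma2, theta, n=32):
    """Discretise the Gaussian puff of Eqs. (1)-(2) on the unit square E = [0, 1]^2."""
    # Covariance Sigma = R(theta) Delta R(theta)^T, Eq. (2)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Sigma = R @ np.diag([sigma1**2, sigma2**2]) @ R.T
    Sigma_inv = np.linalg.inv(Sigma)
    # Pixel-centre coordinates of an n x n grid
    c = (np.arange(n) + 0.5) / n
    xx, yy = np.meshgrid(c, c, indexing="ij")
    diff = np.stack([xx - mu[0], yy - mu[1]], axis=-1)
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu) of Eq. (1)
    quad = np.einsum("ijk,kl,ijl->ij", diff, Sigma_inv, diff)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** 2 * np.linalg.det(Sigma))

puff = gaussian_puff(mu=(0.5, 0.5), sigma1=0.1, sigma2=0.05, theta=np.pi / 6)
```

Since X is a discretised pdf, the pixel sum multiplied by the pixel area approximates 1 whenever the 6σ extent of the puff fits inside the domain.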
2.3 The ℒ[2] norm and the double penalty issue
To compare two concentration fields, one can see to what extent the fields overlap. This provides a pixel-wise (i.e. local) assessment of the discrepancies. The ℒ[2] norm is then defined as the sum
of the squared discrepancies. More specifically, the ℒ[2] norm d between two concentration fields X[A] and X[B] is defined as
$$d(X_A, X_B) \triangleq \sqrt{\frac{\int_{\mathbb{R}^2} \left[X_A(\mathbf{x}) - X_B(\mathbf{x})\right]^2 \mathrm{d}\mathbf{x}}{\int_{\mathbb{R}^2} \mathbf{1}_{\mathbb{E}}\,\mathrm{d}\mathbf{x}}} \tag{3}$$

in the continuous case, and as
$$d(\mathbf{x}_A, \mathbf{x}_B) \triangleq \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(x_{A,n} - x_{B,n}\right)^2} \tag{4}$$
in the discrete case, where x[A] and x[B] are the two concentration vectors corresponding to the concentration fields X[A] and X[B]. In the limit of a higher and higher resolution, the discrete
formulation should converge towards the continuous formulation.
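Eq. (4) is a root-mean-square over pixels and amounts to a few lines of code. The toy one-pixel "plumes" below (hypothetical vectors of our own, not from the databases of this paper) already exhibit the double penalty issue that gives this subsection its name:

```python
import numpy as np

def l2_distance(xa, xb):
    """Pixel-wise L2 norm of Eq. (4) between two image vectors of length N."""
    xa, xb = np.ravel(xa), np.ravel(xb)
    return np.sqrt(np.mean((xa - xb) ** 2))

a = np.zeros(16); a[4] = 1.0      # a one-pixel "plume"
b = np.zeros(16); b[5] = 1.0      # the same plume shifted by one pixel
empty = np.zeros(16)              # no plume at all
# Double penalty: the shift is penalised once for the missing mass and once
# for the spurious mass, so d(a, b) = sqrt(2) * d(a, empty).
```

A one-pixel shift therefore costs √2 times as much as removing the plume entirely, which is the pathology discussed below.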
To identify the origin of the discrepancies, Feyeux et al. (2018) propose to split the difference between two fields into two categories: the position error and the amplitude error. A position error
occurs when the two compared plumes are not located in the same place in the images. An amplitude error occurs when the two compared plumes are in the same place in the images, but locally their
pixels do not have the same values. With the ℒ[2] pixel-wise norm, all the discrepancies are seen as local amplitude errors. This property is illustrated in Fig. 1, where a uniform concentration
field 𝒰[𝔼] is compared to two Gaussian puffs shifted by ϵ=0.054 along the x axis with respect to each other.^1 The values of the distance are reported in Table 1. In this case, a small position error
is penalised by d as much as an absence of plume: this is the so-called double penalty issue. For this metric, the translation cost is composed of two equal contributions. The first is the cost of
setting to zero all pixels at the location of the first Gaussian puff. The second is the cost of enhancing the pixels at the translated location.
In the following sections, we further extend the classification of Feyeux et al. (2018) by splitting the amplitude error into two subcategories: the scale error and the shape error. The scale error
corresponds to the difference in total amplitude between two shape-matching fields, i.e. the difference between the sum of the compared image pixels. The shape error corresponds to the difference
between the isocontours after removal of the scale error (i.e. normalisation) and position error (i.e. when both centres of mass and principal axes are superimposed) fields. This splitting of errors
is illustrated in the flow chart in Fig. 2.
2.4 Local metric with non-local upstream position correction
We propose to address the double penalty issue while still relying on the ℒ[2] norm by applying to d an upstream correction of the position error. The position error can be seen as the combination of an orientation error and a translation error. The orientation error corresponds to the differences that could be reduced by a rotation, applied to two concentration fields sharing the same centre-of-mass location, that maximises their overlap. The translation error corresponds to the differences that could be reduced by a translation applied to the two concentration fields.
The new distance is defined in a way that involves finding the rotation and translation that minimise d. The idea is that the rotation should cancel the orientation error, and the translation should
cancel the translation error. Let us consider the plane transformation F defined as follows:
$$\mathbf{F}(\mathbf{x}) = \mathbf{x}_0 + \mathbf{x}_t + \mathbf{R}(\theta)\left[\mathbf{x} - \mathbf{x}_0\right], \tag{5}$$
which corresponds to a translation of vector $\mathbf{x}_t = (x_t, y_t)^\top$, followed by a rotation of angle θ and of centre $\mathbf{x}_0 + \mathbf{x}_t$, where $\mathbf{x}_0 = (x_0, y_0)^\top$ is the position of the centre of mass of the plume before the transformation. The transformation F depends on three parameters: $(x_t, y_t, \theta)$. Note that this is an isometry of the plane. The optimal transformation should minimise
$$\mathcal{J}(x_t, y_t, \theta) \triangleq d^2\left(X_A, X_B \circ \mathbf{F}\right) \tag{6a}$$
$$= \int_{\mathbb{R}^2} \left[X_A(\mathbf{x}) - X_B(\mathbf{F}(\mathbf{x}))\right]^2 \mathrm{d}\mathbf{x} \bigg/ \int_{\mathbb{R}^2} \mathbf{1}_{\mathbb{E}}\,\mathrm{d}\mathbf{x}. \tag{6b}$$
However, this cost function is constant for any transformation that moves all the mass of the B plume outside the domain $\mathbb{E} = [0,1]^2$, since $X_B \circ \mathbf{F}$ is then null over 𝔼. This would make the minimisation very difficult with gradient-based optimisation methods. For this reason, we add a regularisation term $\rho(x_t, y_t)$ to the cost function, which penalises any transformation that moves the B plume outside the domain 𝔼.
This regularisation does not affect the location of the minima of d[F]. The final cost function is
$$\mathcal{J}(\theta, x_t, y_t) \triangleq \alpha\, d^2\left(X_A, X_B \circ \mathbf{F}\right) + \beta\, \rho(x_t, y_t), \tag{8}$$
where α is set to the average mass of the A and B plumes, and β is set by trial and error to 10^4. In practice, the cost function 𝒥 can be minimised with the limited-memory
Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm (Nocedal and Wright, 2006) that is based on the gradient of 𝒥 with respect to all three parameters θ, x[t], and y[t], whose expressions are given
in Appendix C. To compute the gradient, the spatial partial derivatives of the concentration field X[B] are needed. Hence, to ensure the continuity of the partial derivatives, we use second-order
bivariate spline interpolation to define the continuous concentration field X[B] from its original image x[B]. In order to avoid any issue due to the local non-convexity of the problem, we also
provide a specific initialisation to the minimisation algorithm. The initial translation is computed from the two centres of mass. Then we perform orthogonal regressions to compute the principal
axes of both X[A] and X[B]. The initial θ is the angle between these axes.
Finally, with the optimal transformation F^∗, i.e. the one that minimises 𝒥 defined by Eq. (8), the new distance, called d[F], is defined by
$$d_F(X_A, X_B) \triangleq d\left(X_A, X_B \circ \mathbf{F}^{\ast}\right). \tag{9}$$
For the example of Fig. 1, the values of d[F] are reported in Table 1 and can be compared to the values of d. In the second case (distance between the two Gaussian puffs), d[F] is close to zero. The
residual value is due to the finite resolution of the images. In the first case (distance between the Gaussian puff and the uniform concentration), d[F] stays similar to d because any transformation
F that keeps the plume in the domain is optimal.
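The minimisation of Eq. (8) can be sketched with SciPy. The sketch below omits the regularisation term ρ and the α, β weights, uses finite-difference gradients instead of the analytical ones of Appendix C, and starts from the identity instead of the centre-of-mass initialisation described above; the function name is ours:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

def df_distance(XA, XB):
    """Sketch of the d_F distance of Eq. (9): minimise d(X_A, X_B o F) over the
    translation (x_t, y_t) and rotation angle theta of the isometry F, Eq. (5)."""
    n = XA.shape[0]
    c = (np.arange(n) + 0.5) / n
    # Second-order bivariate spline, as in the text, to evaluate X_B off-grid
    spline = RectBivariateSpline(c, c, XB, kx=2, ky=2)
    xx, yy = np.meshgrid(c, c, indexing="ij")
    x0 = np.array([np.sum(xx * XB), np.sum(yy * XB)]) / np.sum(XB)  # centre of mass

    def cost(p):                                   # d^2(X_A, X_B o F), Eq. (6)
        xt, yt, th = p
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        pts = np.stack([xx - x0[0], yy - x0[1]], axis=-1) @ R.T + x0 + np.array([xt, yt])
        XBF = spline.ev(pts[..., 0], pts[..., 1])
        XBF[((pts < 0) | (pts > 1)).any(axis=-1)] = 0.0   # zero padding outside E
        return np.mean((XA - XBF) ** 2)

    res = minimize(cost, x0=[0.0, 0.0, 0.0], method="L-BFGS-B")
    return np.sqrt(res.fun)
```

By construction d[F] ≤ d, since the identity transformation is a feasible starting point of the minimisation.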
3 Metrics based on optimal transport theory
In this section, we introduce the Wasserstein distance, the distance of the optimal transport, as a non-local alternative to the pixel-wise ℒ[2] norm.
3.1 Optimal transport and the Wasserstein distance
The optimal transport theory was first introduced in the 18th century by Monge in his famous memoir (Monge, 1781). It is based on the idea that there exists a transport plan to move masses that
minimises a given cost of transport. A wider view of the problem was proposed by Kantorovich (1942) using a probabilistic approach. The field has finally regained popularity in the last few decades,
in particular with the generalisation by Villani (2009).
In this section, we follow the Kantorovich approach, which means that we will use the discrete representation (see Sect. 2.1). Moreover, the theory is defined only for vectors whose coefficients are non-negative and sum up to one. While the first condition is satisfied in our case (because we work with images of pollutant concentration), the second is not. Therefore, in the following, instead of working with the concentration vectors x[A] and x[B], we will work with their normalised counterparts $\hat{\mathbf{x}}_A$ and $\hat{\mathbf{x}}_B$:

$$\hat{\mathbf{x}} \triangleq \frac{\mathbf{x}}{\mathbf{x}^\top \mathbf{1}}, \tag{10}$$

where $\mathbf{1}\in\mathbb{R}^N$ is the vector full of ones and $\mathbf{x}\in\mathbb{R}^N$ is either x[A] or x[B].
The set of couplings P between $\hat{\mathbf{x}}_A$ and $\hat{\mathbf{x}}_B$ is defined by

$$\mathcal{U}\left(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B\right) \triangleq \left\{\mathbf{P}\in\mathbb{R}_{+}^{N\times N} : \mathbf{P}\mathbf{1} = \hat{\mathbf{x}}_A \ \text{and}\ \mathbf{P}^\top\mathbf{1} = \hat{\mathbf{x}}_B\right\}. \tag{11}$$

Note that $\mathcal{U}(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$ is not empty because $\mathbf{P} = \hat{\mathbf{x}}_A \hat{\mathbf{x}}_B^\top$ is a coupling between $\hat{\mathbf{x}}_A$ and $\hat{\mathbf{x}}_B$. The cost of a coupling $\mathbf{P}\in\mathcal{U}(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$ is defined by
$$\mathcal{J}(\mathbf{P}) = \sum_{i,j=1}^{N} C_{i,j} P_{i,j}, \tag{12}$$
where $C_{i,j}\ge 0$ is the (i,j) element of the cost matrix C penalising the transport between $\hat{\mathbf{x}}_A$ and $\hat{\mathbf{x}}_B$. Here, it is chosen to be the square of the Euclidean distance between the ith and jth pixels of the original image. For this specific choice, the cost function 𝒥 defined by Eq. (12) has a minimum, which is obtained for a unique coupling P^∗. The Wasserstein distance, the distance of the optimal transport, is then defined by
$$w\left(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B\right) = \sqrt{\sum_{i,j=1}^{N} C_{i,j} P_{i,j}^{\ast}}, \tag{13}$$
and it is actually a distance according to the mathematical definition. The proofs of these statements can be found in optimal transport textbooks (e.g. Villani, 2009).
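For small images, the linear programme of Eqs. (11)-(13) can be solved exactly with a generic LP solver, which gives a useful reference before the Sinkhorn approximation of the next subsections. A sketch (function name ours; only practical for small N, since the coupling carries N² unknowns):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_exact(xa_hat, xb_hat, coords):
    """Exact Wasserstein distance of Eq. (13): minimise Eq. (12) over the
    couplings U(xa_hat, xb_hat) of Eq. (11), with C the squared Euclidean
    distance between pixel coordinates."""
    N = len(xa_hat)
    C = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    # Marginal constraints P 1 = xa_hat (rows) and P^T 1 = xb_hat (columns)
    A_eq = np.zeros((2 * N, N * N))
    for i in range(N):
        A_eq[i, i * N:(i + 1) * N] = 1.0   # sum_j P_ij = xa_hat_i
        A_eq[N + i, i::N] = 1.0            # sum_i P_ij = xb_hat_i (column i)
    b_eq = np.concatenate([xa_hat, xb_hat])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return np.sqrt(res.fun)

# Moving a unit mass between two pixels 0.5 apart costs exactly 0.5:
w = wasserstein_exact(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                      np.array([[0.0, 0.0], [0.3, 0.4]]))
```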
Two interesting properties of the Wasserstein distance can be highlighted. First, this metric is defined for normalised vectors only. This means in our case that the difference in total mass between
two images is entirely ignored. Alternative solutions have been proposed to take into account this difference, e.g. the one proposed by Farchi et al. (2016) or the use of unbalanced optimal transport
(Chizat et al., 2018), but this is beyond the scope of the present study.
Second, following Benamou and Brenier (2000), it is possible to define an optimal transport interpolation between $\hat{\mathbf{x}}_A$ and $\hat{\mathbf{x}}_B$. This optimal transport interpolation can help us visualise the idea of vicinity according to w. An example is shown in Fig. 3 for two Gaussian puffs. In the case of the optimal transport
interpolation, the w distance between the first puff and the interpolated puff is linearly growing (by the construction of the interpolation), while the increase of the d distance is at first very
steep. In some sense, this behaviour was expected since the first puff and the interpolated puff are quickly separated from each other. In the case of the linear interpolation, the same phenomenon
happens: the d distance grows linearly (by the construction of the interpolation), while the increase of w is steeper, but not as steep as the increase of d in the first case. This shows
that the Wasserstein distance w is a metric that better handles the position error than d, since it accounts linearly for the mismatch in the plume positions.
3.2 Sinkhorn's algorithm
To compute the Wasserstein distance, we have to determine the optimal coupling matrix P^∗ by minimising 𝒥 defined by Eq. (12). Since 𝒥 is linear in P, it is convex but not strictly convex, and solving the exact problem is computationally expensive; thus it is usual (see, e.g. Peyré and Cuturi, 2019, and references therein) to add the following entropic regularisation:
$$\mathcal{H}(\mathbf{P}) \triangleq -\sum_{i,j=1}^{N} P_{i,j}\left(\ln P_{i,j} - 1\right). \tag{14}$$
The objective function to minimise becomes
$$\mathcal{J}^{\epsilon}(\mathbf{P}) = \sum_{i,j=1}^{N} P_{i,j} C_{i,j} + \epsilon \sum_{i,j=1}^{N} P_{i,j}\left(\ln P_{i,j} - 1\right), \tag{15}$$

under the same constraint $\mathbf{P}\in\mathcal{U}(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$. The solution of the regularised problem is an approximation of the Wasserstein distance. When ϵ→0, it converges toward the exact value of the Wasserstein distance $w(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$; when ϵ→∞, the optimal coupling matrix converges toward $\mathbf{P} = \hat{\mathbf{x}}_A \hat{\mathbf{x}}_B^\top$.
It is possible to show that minimising Eq. (15) is equivalent to minimising the Kullback–Leibler divergence between $\mathbf{P}\in\mathcal{U}(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$ and the Gibbs kernel $\mathbf{K} = \exp(-\mathbf{C}/\epsilon)$, where the exponential is applied entry-wise. This divergence is given by
$$\mathrm{KL}\left(\mathbf{P}\,|\,\mathbf{K}\right) = \sum_{i,j=1}^{N} P_{i,j}\ln\left(\frac{P_{i,j}}{K_{i,j}}\right) - P_{i,j} + K_{i,j}. \tag{16}$$
The advantage of this formulation is that this problem is known to admit a unique solution, which is the projection of the Gibbs kernel K onto $\mathcal{U}(\hat{\mathbf{x}}_A, \hat{\mathbf{x}}_B)$. This unique solution can be written
$$\mathbf{P} = \mathrm{diag}(\mathbf{u})\,\mathbf{K}\,\mathrm{diag}(\mathbf{v}), \tag{17}$$
where u and v are vectors with non-negative entries satisfying
$$\mathbf{u}\circ\left(\mathbf{K}\mathbf{v}\right) = \hat{\mathbf{x}}_A, \tag{18a}$$
$$\mathbf{v}\circ\left(\mathbf{K}^\top\mathbf{u}\right) = \hat{\mathbf{x}}_B. \tag{18b}$$
In these equations, ∘ is the Schur–Hadamard (i.e. entry-wise) product in ℝ^N.
The (u,v) factorisation is unique and can be easily found using the iterative update scheme proposed by Sinkhorn, where the lth update is given by
$$\mathbf{u}^{(l+1)} = \hat{\mathbf{x}}_A \div \left(\mathbf{K}\,\mathbf{v}^{(l)}\right), \tag{19a}$$
$$\mathbf{v}^{(l+1)} = \hat{\mathbf{x}}_B \div \left(\mathbf{K}^\top\mathbf{u}^{(l+1)}\right), \tag{19b}$$
where ÷ is the entry-wise division in ℝ^N.
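The updates of Eq. (19) translate directly into code. A minimal sketch with a fixed number of iterations (no convergence test; the function name and default values are ours):

```python
import numpy as np

def sinkhorn(xa_hat, xb_hat, C, eps=0.01, n_iter=100):
    """Sinkhorn iterations of Eq. (19) for the regularised problem of Eq. (15).
    Returns the coupling P of Eq. (17) and the associated estimate of Eq. (13)."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    v = np.ones_like(xb_hat)
    for _ in range(n_iter):
        u = xa_hat / (K @ v)              # Eq. (19a)
        v = xb_hat / (K.T @ u)            # Eq. (19b)
    P = u[:, None] * K * v[None, :]       # P = diag(u) K diag(v), Eq. (17)
    return P, np.sqrt(np.sum(C * P))
```

On a two-pixel example with all the mass transported across a squared cost of 0.25, the marginal constraints force the coupling regardless of ϵ and the estimate matches the exact value of 0.5.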
3.3 Log-formulation of Sinkhorn's algorithm
Sinkhorn's algorithm provides a simple and quick solution to the optimal transport problem. However, this formulation raises technical issues. The first is that for small values of ϵ – which is what
we are aiming for to be as close as possible to the true optimal transport solution – the algorithm converges slowly.^2 To accelerate the convergence, we use a high value of ϵ and progressively
decrease it whenever (u,v) has converged. We will use this technique in our experiments.
Another numerical issue appears when ϵ is small compared to the entries of C. In this case, u, v, and K explode and cannot be computed with finite numerical precision. To address this issue, we adopt
the log-Sinkhorn algorithm proposed by Peyré and Cuturi (2019), which is presented in the following lines.
Let us introduce f and g, which are related to u and v by
$$u_i = \exp\left(f_i/\epsilon\right), \tag{20a}$$
$$v_j = \exp\left(g_j/\epsilon\right). \tag{20b}$$
Instead of updating (u,v) with Sinkhorn iteration Eq. (19), we update (f,g) using
$$f_i^{(l+1)} = -\epsilon\,\ln\left[\sum_{j=1}^{N}\exp\left\{\frac{f_i^{(l)} + g_j^{(l)} - C_{i,j}}{\epsilon}\right\}\right] + f_i^{(l)} + \epsilon\,\ln\hat{x}_{A,i}, \tag{21a}$$
$$g_j^{(l+1)} = -\epsilon\,\ln\left[\sum_{i=1}^{N}\exp\left\{\frac{f_i^{(l+1)} + g_j^{(l)} - C_{i,j}}{\epsilon}\right\}\right] + g_j^{(l)} + \epsilon\,\ln\hat{x}_{B,j}. \tag{21b}$$
Combining the log-Sinkhorn algorithm with a decreasing ϵ is not straightforward, because many numerical choices must be made: intermediate and final values of ϵ, convergence criteria, etc. After several tests, we settled on the procedure described in Appendix A, which we found to be a good trade-off between speed and accuracy. The value of ϵ is progressively decreased from 1 to 10^−5: each time the
convergence criterion is met, ϵ is reduced by a factor of 10. In our case, the convergence criterion is met when the relative difference between the former and the current value of the Wasserstein
distance falls below $\zeta = 5\times 10^{-4}$. In addition, we set a maximum number of Sinkhorn iterations of 200 per value of ϵ to keep the computational cost under
control. Finally, note that, for a given ϵ, one can try to accelerate the convergence by using the averaging step proposed in Chizat et al. (2018), but this is beyond the scope of the present study.
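A compact sketch of the log-domain updates of Eq. (21) combined with the ϵ-decreasing schedule just described (the schedule values and stopping test are simplified with respect to Appendix A, the function name is ours, and the marginals are assumed strictly positive so that their logarithm exists):

```python
import numpy as np
from scipy.special import logsumexp

def log_sinkhorn(xa_hat, xb_hat, C, eps_list=(1.0, 0.1, 0.01, 1e-3),
                 zeta=5e-4, max_iter=200):
    """Log-domain Sinkhorn, Eq. (21), with progressive decrease of eps: eps is
    reduced once the relative change of the Wasserstein estimate falls below
    zeta, with at most max_iter iterations per value of eps."""
    la, lb = np.log(xa_hat), np.log(xb_hat)   # requires strictly positive marginals
    f = np.zeros_like(xa_hat)
    g = np.zeros_like(xb_hat)
    w_old = np.inf
    for eps in eps_list:
        for _ in range(max_iter):
            # Eq. (21a), then Eq. (21b), via a numerically stable log-sum-exp
            f = -eps * logsumexp((f[:, None] + g[None, :] - C) / eps, axis=1) + f + eps * la
            g = -eps * logsumexp((f[:, None] + g[None, :] - C) / eps, axis=0) + g + eps * lb
            P = np.exp((f[:, None] + g[None, :] - C) / eps)
            w = np.sqrt(np.sum(C * P))
            converged = np.isfinite(w_old) and abs(w - w_old) <= zeta * w_old
            w_old = w
            if converged:
                break
    return w_old
```

The dual potentials (f, g) are kept across the ϵ values, which is what makes the warm start of the schedule effective.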
3.4 Gaussian approximation and upstream correction
Following the derivation of Sect. 2.4, we want to apply the same upstream correction of the position error to the Wasserstein distance w. However, this would require the gradient of the Wasserstein
distance w with respect to each one of its inputs. The computation is not straightforward, even taking into account the log-Sinkhorn formulation developed in Sect. 3.3. For this reason, we will use
the Gaussian approximation, for which the Wasserstein distance has an analytical formula.
More specifically, we assume that we have two continuous concentration fields X[A] and X[B] that follow the Gaussian puff model:
$$X_A(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^2\,|\boldsymbol{\Sigma}_A|}}\exp\left[-\frac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_A)^\top\boldsymbol{\Sigma}_A^{-1}(\mathbf{x}-\boldsymbol{\mu}_A)\right], \tag{22a}$$
$$X_B(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^2\,|\boldsymbol{\Sigma}_B|}}\exp\left[-\frac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_B)^\top\boldsymbol{\Sigma}_B^{-1}(\mathbf{x}-\boldsymbol{\mu}_B)\right], \tag{22b}$$
with Σ[A] and Σ[B] given by
$$\boldsymbol{\Sigma}_A = \mathbf{R}(\theta_A)\,\boldsymbol{\Delta}_A\,\mathbf{R}(\theta_A)^\top, \tag{23a}$$
$$\boldsymbol{\Sigma}_B = \mathbf{R}(\theta_B)\,\boldsymbol{\Delta}_B\,\mathbf{R}(\theta_B)^\top. \tag{23b}$$
In this case, the squared Wasserstein distance between X[A] and X[B] is given by^3
$$w^2(X_A, X_B) = \left\|\boldsymbol{\mu}_A - \boldsymbol{\mu}_B\right\|^2 + \mathrm{Tr}\left(\boldsymbol{\Sigma}_A + \boldsymbol{\Sigma}_B\right) - 2\,\mathrm{Tr}\left(\left[\boldsymbol{\Sigma}_A^{1/2}\boldsymbol{\Sigma}_B\boldsymbol{\Sigma}_A^{1/2}\right]^{1/2}\right). \tag{24}$$
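Eq. (24) is cheap to evaluate numerically; for symmetric positive-definite matrices the matrix square root can be taken through the eigendecomposition. A sketch (helper names ours):

```python
import numpy as np

def sqrtm_spd(M):
    """Square root of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def w2_gaussian(mu_a, Sig_a, mu_b, Sig_b):
    """Closed-form squared Wasserstein distance of Eq. (24) between two Gaussians."""
    Sa_half = sqrtm_spd(Sig_a)
    cross = sqrtm_spd(Sa_half @ Sig_b @ Sa_half)
    return (np.sum((np.asarray(mu_a) - np.asarray(mu_b)) ** 2)
            + np.trace(Sig_a + Sig_b) - 2.0 * np.trace(cross))
```

For equal covariances the trace terms cancel and the formula reduces to the squared distance between the means.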
Following the approach of Sect. 2.4, let us now apply the plane transformation F given by Eq. (5) to X[B]. The squared Wasserstein distance becomes $w^2(X_A, X_B\circ\mathbf{F})$, which depends on x[t], y[t], and θ, the three parameters of F. It can be shown (see Appendix D) that $w^2(X_A, X_B\circ\mathbf{F})$ reaches its minimum when $\mathbf{x}_t = \boldsymbol{\mu}_A - \boldsymbol{\mu}_B$ and $\theta = \theta_A - \theta_B$ (modulo π), in which case the distance is given by
$$w\left(X_A, X_B\circ\mathbf{F}\right) = \sqrt{\mathrm{Tr}\left(\boldsymbol{\Delta}_A + \boldsymbol{\Delta}_B\right) - 2\,\mathrm{Tr}\left[\left(\boldsymbol{\Delta}_A\boldsymbol{\Delta}_B\right)^{1/2}\right]} \tag{26a}$$
$$= \sqrt{\mathrm{Tr}\left[\left(\boldsymbol{\Delta}_A^{1/2} - \boldsymbol{\Delta}_B^{1/2}\right)^2\right]}, \tag{26b}$$
which is known as the Hellinger distance between X[A] and X[B] (Peyré and Cuturi, 2019). By construction, this distance estimates the shape error between X[A] and X[B], since the translation, the rotation, and the scale differences have been removed. In the following, it will be written w[F] to point out the similarity between the relationship $d/d_F$ on the one hand and $w/w_F$ on the other hand. In the case where X[A] and X[B] are not exactly Gaussian, we can still use the Gaussian puff model as an approximation; w[F] then provides an approximation of the shape error.
Finally, an issue with both w and w[F] is that they are computed on normalised fields, and thus they ignore the scale error, i.e. the difference of total mass between the images. As a consequence, these metrics cannot be used as such in an inversion framework. One way to address this issue is to add to w and w[F] a term representing the scale error. Using the discrete formalism, this term could be
$$d_{\mathrm{mass}}^2(\mathbf{x}_A, \mathbf{x}_B) \propto \left[1 - 2\,\frac{\sum_{n=1}^{N} x_{A,n}\,\sum_{n=1}^{N} x_{B,n}}{\left(\sum_{n=1}^{N} x_{A,n}\right)^2 + \left(\sum_{n=1}^{N} x_{B,n}\right)^2}\right]^2, \tag{27}$$
which is convex. The remaining question would then be the relative contribution of w (or w[F]) and d[mass] in the final distance, which is related to the following question: which kind of error
(position, mass, etc.) should be penalised more? This kind of question is beyond the scope of the present article, which is why we only use w and w[F] as is in our numerical experiments.
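The scale-error term of Eq. (27) is inexpensive to evaluate; a sketch with the unspecified proportionality constant simply set to 1 (function name ours):

```python
import numpy as np

def d_mass_sq(xa, xb):
    """Scale-error term of Eq. (27), with the proportionality constant set to 1.
    It vanishes exactly when the two images have the same total mass."""
    ma, mb = np.sum(xa), np.sum(xb)
    return (1.0 - 2.0 * ma * mb / (ma ** 2 + mb ** 2)) ** 2
```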
4 Comparison of the metrics on analytical test cases
In this section, we evaluate and compare the metrics with a database of images built using a set of Gaussian puffs. The database is introduced in Sect. 4.1, the computation of the non-local metrics
is validated in Sect. 4.2, and the behaviour of the metrics on this Gaussian puff database are compared in Sect. 4.3.
4.1Gaussian puff database and experimental setup
The database consists of 10^4 pairs of images constructed using Gaussian puffs and then discretised on the domain $\mathbb{E}=\left[0,1\right]^{2}$ using a finite resolution of 32×32 pixels. Each puff is parameterised by its mean μ (two scalars) and its covariance matrix $\mathbf{\Sigma}=\mathbf{R}\left(\theta\right)\mathbf{\Delta}\,\mathbf{R}\left(\theta\right)^{\top}$ (three scalars: θ and both diagonal entries of Δ), which are randomly drawn as follows:
1. Both components of μ are uniformly drawn in [0.15,0.85].
2. θ is uniformly drawn in $\left[-\mathit{\pi },\mathit{\pi }\right]$.
3. σ[1] and then σ[2], the two non-zero components of Δ, are drawn from a half-normal distribution with a standard deviation of 0.33. If needed, σ[1] and σ[2] are then swapped to ensure σ[1]≥σ[2].
Ideally, the domain 𝔼 should cover a large majority of the mass of each puff. In practice, more than 99 % of the mass of a Gaussian puff is covered by the 6σ[1]×6σ[2] rectangle centred on μ and oriented along the principal axes. For this reason, we repeat step 3 of the random draw until this 6σ[1]×6σ[2] rectangle is included in the domain 𝔼. In addition, the puffs should not be too small, which is why, when 6σ[1] and 6σ[2] are both smaller than 9 pixels, the puff is rejected and entirely re-drawn.
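The sampling procedure above can be sketched as follows; the helper name is ours, and for brevity the two rejection rules are merged into a single re-draw loop over the widths, which is a simplification of the procedure described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX = 32  # 32x32 discretisation of the domain [0, 1]^2

def draw_puff_parameters():
    """Draw (mu, theta, sigma1, sigma2) following Sect. 4.1. The two
    rejection rules (rectangle inside the domain, puff not too small)
    are merged here into one re-draw loop over the widths."""
    mu = rng.uniform(0.15, 0.85, size=2)
    theta = rng.uniform(-np.pi, np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    while True:
        s1, s2 = np.abs(rng.normal(0.0, 0.33, size=2))  # half-normal draws
        if s1 < s2:
            s1, s2 = s2, s1  # enforce sigma1 >= sigma2
        # Corners of the 6*sigma1 x 6*sigma2 rectangle centred on mu,
        # oriented along the principal axes of the puff.
        half = np.array([[3 * s1, 3 * s2], [3 * s1, -3 * s2],
                         [-3 * s1, 3 * s2], [-3 * s1, -3 * s2]])
        corners = mu + half @ rot.T
        inside = np.all((corners >= 0.0) & (corners <= 1.0))
        big_enough = (6 * s1 >= 9 / N_PIX) or (6 * s2 >= 9 / N_PIX)
        if inside and big_enough:
            return mu, theta, s1, s2
```

Repeating this 10^4 times (twice per pair) yields a database with the distributions described below.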
The characteristics of the database are shown in Fig. 4 and cover the analytical pathological situations described in Davis et al. (2009). As expected, the distribution of $‖\mathbit{\mu }‖$ is close to Gaussian, the distribution of θ is close to uniform, and the distributions of σ[1] and σ[2] are close to log-normal.
4.2Validation of the implemented Sinkhorn algorithm
For our Gaussian puff database, there are four different ways to compute the Wasserstein distance: the analytical distance between the two Gaussian puffs using their drawn parameters (w[th]), the distance between the two Gaussians fitted to the discretised images (w[num]), the exact earth mover distance (w[emd]), and our log-Sinkhorn approximation (w[ϵ]).
We have applied all four methods, and the differences are shown in Fig. 5. Note that w[emd] has been computed using the Python Optimal Transport (POT) library (Flamary et al., 2021).
The fractional bias over all pairs is no more than 5 % when we compare w[th] to the other three methods of computing the Wasserstein distance. By contrast, w[emd] and w[num] are very close to each other. We have checked that the gap to w[th] is reduced when the resolution is increased. Therefore, we conclude that the gap between w[th] on the one hand and w[num], w[emd], and w[ϵ] on the other hand is not due to the estimation of the Wasserstein distance but results from the discretisation of the problem at the 32×32 resolution (sampling errors). Figure 5 also shows that w[ϵ] matches w[emd] well, which validates our log-Sinkhorn implementation.
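The analytical distance between two Gaussians used here follows the closed-form 2-Wasserstein (Gelbrich) formula; a small sketch of ours using SciPy's matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(mu_a, cov_a, mu_b, cov_b):
    """Analytical 2-Wasserstein distance between two Gaussian distributions:
    ||mu_a - mu_b||^2 + Tr[S_a + S_b - 2 (S_a^1/2 S_b S_a^1/2)^1/2]."""
    root_a = sqrtm(cov_a)
    cross = sqrtm(root_a @ cov_b @ root_a)
    bures = np.trace(cov_a + cov_b - 2.0 * np.real(cross))
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return float(np.sqrt(np.sum(diff**2) + max(bures, 0.0)))

# Two puffs with identical isotropic covariance: the distance reduces
# to the Euclidean distance between the means.
d = w2_gaussian([0.2, 0.2], 0.01 * np.eye(2), [0.5, 0.6], 0.01 * np.eye(2))
print(d)  # -> 0.5 (up to floating-point rounding)
```

When both covariances coincide, the Bures term vanishes and only the translation between the means remains.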
4.3Correlation to the different error categories
In this subsection, we compare the behaviour of the metrics with respect to three error categories: the translation error, the orientation error, and the shape error. Note that the behaviour with
respect to the scale error cannot be compared since the w and w[F] distances use normalised images. We used the Pearson correlation as our main indicator of the strength of the link between the
behaviour of the metrics and the error category. The closer the absolute value of the Pearson correlation is to 1, the more linear the relation between the two quantities. If the Pearson correlation is positive, then an increase in an error category leads to an increase in the metric value; if negative, it leads to a decrease; and if nearly null, the two quantities appear to be independent.
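As a toy illustration with synthetic data of ours (not taken from the databases), a metric that depends near-linearly on an error category yields a Pearson correlation close to 1:

```python
import numpy as np

rng = np.random.default_rng(42)
t = rng.uniform(0.0, 1.0, 1000)                # stand-in error category (e.g. T)
metric = 2.0 * t + rng.normal(0.0, 0.1, 1000)  # metric driven by it, plus noise
r = np.corrcoef(t, metric)[0, 1]               # Pearson correlation coefficient
print(round(r, 2))  # close to 1: the relation is nearly linear
```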
For each pair of images in the database, we define T (for translation) as $‖{\mathbit{\mu }}_{B}-{\mathbit{\mu }}_{A}{‖}^{\mathrm{2}}$. This quantity represents the translation error between both
images. The correlation between T and the four distances is reported in the first column of Table 2.
As expected, the Wasserstein distance w is strongly correlated to T. The ℒ[2] norm d also shows a significant correlation of 0.33 to T, highlighting a likely dependency. Both d[F] and w[F] are
designed to be released from the position error and, in particular, the translation error. This property is confirmed by the very low correlation between T on the one hand and d[F] and w[F] on the
other hand. Additionally, the fact that T is much more correlated to d than to d[F] confirms that d indeed depends on the T quantity.
For each pair of images in the database, we define θ as $‖{\mathit{\theta }}_{B}-{\mathit{\theta }}_{A}‖$. This quantity represents the orientation error between both images. The correlation between
θ and the four distances is reported in the second column of Table 2. The results also show that there is no correlation between θ and any of the distances. In other words, for our database, none of the distances is sensitive to the orientation error.
For each pair of images in the database, we define H as the Hellinger distance between A and B, as given by Eq. (26). This is actually very similar to w[F], but with one exception: H uses the
theoretical values of Δ[A] and Δ[B] (i.e. the ones that have been drawn), while w[F] uses the sample covariance of the 32×32-pixel images. This quantity represents the shape error between both
images. The correlation between H and the four distances is reported in the third column of Table 2.
Both d and w show a low correlation to H, which is not the case for d[F] and w[F]. On the one hand, the correlation between w[F] and H was highly expected from the definition of H; the remaining difference is due to the finite resolution of the images. On the other hand, the proportionality of d[F] to H was desired but not guaranteed by design. By optimally superimposing the plumes, we removed the position error, but d[F] remains sensitive to H, meaning that we did not remove all errors. Such behaviour thus reflects our way of splitting the error. More generally, this comparison on the Gaussian puff database confirms that both d[F] and w[F] are freed from the position error and seem to be driven by the shape error.
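The shape error H of Eq. (26) can be evaluated directly from the eigenvalues of the two covariance matrices; a small sketch (function name ours):

```python
import numpy as np

def shape_distance(cov_a: np.ndarray, cov_b: np.ndarray) -> float:
    """Shape error of Eq. (26): sqrt(Tr[(Delta_A^1/2 - Delta_B^1/2)^2]),
    where Delta_A and Delta_B hold the sorted eigenvalues of the covariances,
    so translation, rotation, and scale differences are already removed."""
    ev_a = np.sort(np.linalg.eigvalsh(cov_a))[::-1]  # descending eigenvalues
    ev_b = np.sort(np.linalg.eigvalsh(cov_b))[::-1]
    return float(np.sqrt(np.sum((np.sqrt(ev_a) - np.sqrt(ev_b)) ** 2)))

# A rotated copy of the same ellipse has (numerically) zero shape error.
cov = np.array([[4.0, 0.0], [0.0, 1.0]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
print(shape_distance(cov, rot @ cov @ rot.T))  # -> 0.0 (up to rounding)
```

This matches the intent of the section: only a genuine difference in the ellipse semi-axes produces a non-zero value.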
5Comparison of the metric on realistic test cases
To go deeper in our analysis, we now compare the metrics using realistic plumes. This section follows the same organisation as Sect. 4: we present the experimental setup in Sect. 5.1, we validate the
computation of the non-local metrics in Sect. 5.2, and we compare the behaviour of the metrics on this specific database in Sect. 5.3.
5.1Simulation database and experimental setup
We use a simulation database of hourly 3D fields of CO[2] concentrations due to anthropogenic CO[2] emissions from the Neurath lignite-fired power plant (Germany, 51.04°N, 6.60°E). The database is extracted from a larger one, over western Europe, as described in Potier et al. (2022). Simulations were performed with the CHIMERE Eulerian transport model (Menut et al., 2013) driven by the Community Inversion Framework (CIF; Berchet et al., 2021). The resolution of the simulation (longitude: 6.82°W to 19.18°E; latitude: 42.0 to 56.39°N, Fig. 6; Santaren et al., 2021) goes from 50 down to 2 km. The Neurath power plant is located in the 2 km × 2 km resolution zoom (longitude: 1.25°W to 10.64°E; latitude: 47.45 to 53.15°N). The vertical grid is composed of 29 pressure layers extending from the surface to 300 hPa (approximately 9 km above ground level). CHIMERE is forced by meteorological variables at 9 km resolution (Agusti-Panareda, 2018), provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) for the CO[2] Human Emissions project (CHE; https://www.che-project.eu/, last access: 14 March 2023). The CO[2] emissions from the Neurath power plant are extracted from the ∼1 km ($\mathrm{1}/\mathrm{60}$°×$\mathrm{1}/\mathrm{120}$°) resolution inventory (TNO_GHGco_1x1km_v1_1) of the annual emissions produced by the Netherlands Organisation for Applied Scientific Research (TNO) over Europe for the year 2015 (Denier van der Gon et al., 2017; Super et al., 2020). The temporal disaggregation at the hourly scale is based on coefficients provided with the TNO inventories for sector A-Public Power in the Gridded Nomenclature For Reporting (GNFR) of the United Nations Framework Convention on Climate Change (UNFCCC). Emissions are projected onto the CHIMERE vertical grid with coefficients corresponding to this A GNFR sector (Bieser et al., 2011), also provided with the TNO inventories. We simulate 14 d of 1 h emission pulses at the Neurath power plant location, from 1 to 14 July 2015, i.e. 336 puffs. The transport of these puffs is simulated until midnight. Consequently, the later in the day the puffs are emitted, the shorter they are tracked. Assuming that the source continuously releases CO[2], the puffs are aggregated to create plume images for every hour. Due to this experimental setup, the plume follows a 24 h cycle: it appears after 0 h, grows until past 24 h of simulation, and then restarts.
We ensure that the daily evolution of the hourly emission rate from the source is the same for all plumes. Hence, for a given hour of the day, the difference between two simulated plumes from two
different days is due to the meteorology. We build a database that regroups per pair simulated plumes at a given hour but from different days (e.g. day 1 hour 10 versus day 3 hour 10). To get a
realistic two-dimensional concentration field, we compute the vertical mean of the concentration weighted by the width of the vertical levels. We ignore the first 2 h of the simulation to ensure that a plume appears in the image. This leaves 2093 pairs of distinct plumes. The images are cropped to 100×100 pixels (here, 1 pixel corresponds to one 2 km × 2 km cell of the simulation) to reduce the computer resource requirements.
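The level-width-weighted vertical mean mentioned above can be sketched as follows (array shapes and names are illustrative assumptions of ours):

```python
import numpy as np

def column_mean(conc: np.ndarray, level_width: np.ndarray) -> np.ndarray:
    """Vertical mean of a 3D concentration field with shape (levels, ny, nx),
    weighted by the width of each vertical level, yielding a 2D plume image."""
    return np.average(conc, axis=0, weights=level_width)

# Toy example with two levels: the thicker level dominates the average.
conc = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
width = np.array([3.0, 1.0])  # hypothetical level widths
image = column_mean(conc, width)
print(image)  # each pixel: (3*1 + 1*3) / 4 = 1.5
```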
5.2Comparison of the different estimations of the Wasserstein distance
We have applied all three methods, and the differences are shown in Fig. 7. The results show, as for the Gaussian puff database, that w[ϵ] and w[emd] are close to each other, which once again validates our algorithm. Moreover, the results show that w[num] is a reasonably good approximation of w as well. The distance w[num] makes the approximation that the images are Gaussian puffs, which is a strong assumption but allows for very quick computation. The values of w[num] are usually lower than those of w[ϵ]. This is in agreement with Theorem 2.1 of Gelbrich (1990), which shows that, for elliptic contour distributions with given mean and covariance matrices, the distance between the two Gaussians with these respective parameters (i.e. w[num]) is a lower bound of the Wasserstein distance between the two distributions. Our plumes are not guaranteed to be elliptic distributions, but this theorem seems a promising direction to explain and, if possible, quantify this negative bias. Understanding the discrepancies between w[num] and w[ϵ] is needed before w[num] can be substituted for w[ϵ]. For this reason, w[num] is not tested further in the following sections.
5.3Correlation to the different error categories
In this subsection, we compare the behaviour of the metrics with respect to the same three error categories as in Sect. 4.3: the translation, orientation, and shape error. To do so, we keep the same
quantities T, θ, and H, with the notable exception that H is now equal to w[F] because there is no theoretical covariance. The results are reported in Table 3.
While the correlation between w and T remains very strong, d shows less correlation to T than for the Gaussian puff database. Both d[F] and w[F] are less correlated to T than d and w, respectively,
but their correlation to T is here higher than with the Gaussian puff database. Hence for this realistic database, both d[F] and w[F] are only partially freed from the translation error.
In this case, the correlations between the metrics and θ are higher than for the Gaussian puff database but again do not prompt a clear conclusion.
By construction, w[F] is equal to H, which yields a correlation of 1. Both d and w show a small correlation to H, which was not the case in the Gaussian puff database. The correlation to H is still
higher for d[F], which was expected since d[F] is designed to be partially freed from the position error. This result, however, should be taken with caution because here, contrary to the Gaussian
puff database, H now only partially accounts for the shape error between two plumes.
This second study with realistic cases shows that the behaviour of each metric slightly differs from what has been seen in the Gaussian case. Nevertheless, the results confirm that both d[F] and w[F]
are partially freed from the position error while being still sensitive to the shape error, which is what we hoped for.
6Sensitivity to the meteorological conditions
As stated in the introduction, the goal of this article is to develop and test metrics that can discriminate errors stemming from imperfect meteorology from other sources of discrepancies. Therefore,
following the approach used in the previous sections, we define here four indicators that we consider representative of the difference in meteorological conditions between the two images. We then
examine the correlation between these indicators, the previous indicators (T, θ, and H), and the metrics in the case of the realistic database.
6.1Definition of meteorological indicators
To simplify the analysis, we define four scalar indicators that characterise the meteorological conditions. These indicators focus on the direction and the norm of the wind as experienced by the
pollutant during its transportation. For each image, we proceed as follows.
1. We first average the wind components (three-dimensional fields) in the vertical direction between the surface and the planetary boundary layer (PBL) height. Indeed, the realistic database has
been simulated with summer conditions, and hence the plumes are assumed to be dispersed within the PBL. This results in two-dimensional fields for each time snapshot.
2. We compute the norm and the direction of the averaged winds. This results in two two-dimensional fields for each time snapshot.
3. We average the norm and the direction over the 100×100-pixel grid. This results in two scalars for each time snapshot.
4. We finally compute the time average and time standard deviation of the averaged norm and direction between midnight (the time at which the emissions started) and the time of the image. This
results in four scalars: E[N] (averaged wind norm), E[D] (averaged wind direction), S[N] (deviation of wind norm), and S[D] (deviation of wind direction).
The meteorological indicators are then defined as the absolute differences in E[N], E[D], S[N], and S[D] between the two images that are compared, simply written ΔE[N], ΔE[D], ΔS[N], and ΔS[D].
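The four indicators can be sketched as below; note that averaging wind directions is done here via the mean wind components to sidestep the 2π wrap-around, an implementation choice of ours that the text does not specify:

```python
import numpy as np

def wind_indicators(u: np.ndarray, v: np.ndarray):
    """Sketch of the four scalar indicators of Sect. 6.1 from PBL-averaged
    wind components u, v with shape (time, ny, nx)."""
    norm = np.hypot(u, v)                    # step 2: wind speed field
    norm_t = norm.mean(axis=(1, 2))          # step 3: spatial average per snapshot
    # Mean direction per snapshot, from the spatially averaged components.
    dir_t = np.arctan2(v.mean(axis=(1, 2)), u.mean(axis=(1, 2)))
    e_n, s_n = norm_t.mean(), norm_t.std()   # step 4: time mean / std of the norm
    e_d, s_d = dir_t.mean(), dir_t.std()     # step 4: time mean / std of the direction
    return e_n, e_d, s_n, s_d

# Steady westerly wind: zero variability, direction 0 rad.
u = np.full((5, 4, 4), 2.0)
v = np.zeros((5, 4, 4))
e_n, e_d, s_n, s_d = wind_indicators(u, v)
print(e_n, e_d, s_n, s_d)  # -> 2.0 0.0 0.0 0.0
```

The difference indicators ΔE[N], ΔE[D], ΔS[N], and ΔS[D] are then the absolute differences of these scalars between the two compared images.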
6.2Correlation between the meteorological indicators and the error categories
Using the realistic database, we compute the correlation between ΔE[N], ΔE[D], ΔS[N], and ΔS[D] on the one hand and T, θ, and H on the other hand. The idea is to see how differences in the
meteorological conditions impact the position and amplitude errors. The results are reported in Table 4.
One can notice that T is mainly correlated to ΔE[D] and a little less to ΔE[N], while ΔS[D] and ΔE[D] are correlated to H. This means that differences in meteorology such as ΔE[D] will likely induce both position error and shape error. Therefore, by removing the position error, we only partially remove the meteorological impact on the differences. Explaining why ΔS[D] induces differences in shape is straightforward, but explaining why ΔE[D] induces differences in terms of translation instead of orientation is less so. A difference in the main direction of the plume (which translates into ΔE[D]) moves the centres of mass further away from each other and hence induces a larger T (which is the distance between the two centres of mass). It should be noted that a wind direction change that keeps the centres of mass superimposed would instead drive an orientation error.
6.3Correlation between the meteorological indicators and the metrics
To conclude our study, we now compare the different metrics to the meteorological indicators. The results are reported in Table 5.
According to the correlations shown in Table 5, the metric w is correlated to the ΔE[D] and ΔE[N] indicators. This is expected since these meteorological changes tend to move the centre of mass and thus increase the translation error. The results also show that w[F] sees a drop in correlation to ΔE[D] compared to w while gaining correlation with respect to ΔS[D]. For optimal transport metrics, we can see that removing the position error does not always remove the sensitivity to a change in meteorology. It should be noted that neither d nor d[F] seems to be correlated to our different meteorology indicators. Such a lack of correlation, compared to the optimal transport metrics, could result from the weight of the scale error in the distance definition. We normalised the plumes the same way as we do for w before computing the distances d and d[F], leading to the normalised image distances d^∗ and ${d}_{F}^{\ast }$. First, d^∗ and ${d}_{F}^{\ast }$ are more correlated than d and d[F] to ΔE[D] and ΔS[N], showing that the scale error masks the sensitivity of pixel-wise metrics with respect to meteorology. Second, ${d}_{F}^{\ast }$ gains significantly in correlation to ΔS[D] compared to d^∗ but remains as correlated to ΔE[D] as d^∗. The plane transformation applied in ${d}_{F}^{\ast }$ thus allows a better alignment of the compared plumes, giving more weight to the shape error induced by ΔS[D], but does not compensate for all the changes resulting from ΔE[D] or ΔE[N].
The lack of correlation to our meteorological indicators for d and d[F] seems appealing, but it is due to the amplitude error being held by a small number of highly concentrated pixels above the source in our cases (i.e. a hotspot). For similar cases, d remains a good metric for updating the inventories. If the hotspots of the two images have amplitudes close to each other, or if there is no hotspot but a large plume, d and d[F] become more correlated to several meteorological changes, making them less suitable. Pixel-wise metrics therefore seem better adapted to comparing hotspots than highly extended plumes. A more versatile metric would be a weighted combination of w[F] (or at least the normalised ${d}_{F}^{\ast }$), which is not sensitive to all changes in meteorology, and a term that represents the scale error between the two images.
7Conclusions
In this article, we discussed the use of new metrics for comparing passive gas plumes, practically CO[2] plumes, within an inverse framework aiming at the monitoring of pollutant emissions.
We first emphasised how critical the double penalty issue related to pixel-wise comparisons is. The traditional ℒ[2] norm tends to overweight position errors, mixing them with other sources of error. In the context of source inversion, this results in an over-penalised comparison of concentration fields that are slightly shifted from each other, and the mixing makes it difficult to evaluate the relative weight of the different types of error afterwards. Yet, for us, the identification of the relative weight of the errors is critical, since we want to level down the one due to meteorology and level up the one related to emissions. Assuming that most of the position error is driven by meteorology, we proposed to design metrics that are freed from position error. Following
this goal, a pixel-wise metric with an upstream position correction was designed. This new metric has the advantage of keeping the formalism of the ℒ[2] norm while being released from position
errors. In addition, it is proposed to use a metric based on the optimal transport theory: the Wasserstein distance. Focusing on the algorithmic aspects related to two-dimensional satellite images,
we derived and validated a method to compute this metric. The Wasserstein metric is more sensitive to position errors, but it is not hampered by the double penalty issue. To complete our catalogue of
metrics, an optimal transport metric freed from position errors is proposed. It can be easily computed with a Gaussian approximation. This metric coincides with the Hellinger distance. Nevertheless,
both optimal transport metrics rely on normalised images and are thus unaware of the difference in total mass between the plumes. The scale factor between the images is linearly related to the emission fluxes, which we want to estimate. This means that, within the inversion framework, the scale factor between the two images should be added and weighted independently.
These four metrics were compared on a specifically designed Gaussian puff database and evaluated according to their correlations with respect to translation error, orientation error, and shape error.
The numerical experiments showed that the resolution of the images tends to impact the optimal transport problem. As expected, the two metrics designed to be freed from position errors are not
correlated to translation and orientation errors. The ℒ[2] norm and Wasserstein metrics are both correlated to the translation error. From this, we extended our tests to a realistic plume database. This second test series showed that, for a more complex plume geometry, the new metrics are still somewhat correlated to the translation error, implying that they are only partially freed from position errors.
Then we discussed the link between a position error and a variation within the mesoscale meteorology using the same realistic database. Designing relevant scalar indicators related to meteorological
variance, we evaluated how specific changes in meteorological conditions lead to an increase in the distance between the plumes. We have seen that the meteorological changes can be correlated to
position errors as well as amplitude errors between plumes. This means that removing the position error from the metrics will not make the comparison insensitive to a meteorological change. However,
some metrics were found to be more sensitive to specific changes in meteorological conditions. For instance, while the Wasserstein metric is sensitive to changes in the main direction or intensity of
the winds, the Hellinger metric is more sensitive to changes in the spread of the wind direction in both time and space. This provides guidelines for choosing a metric in a given meteorological situation. By combining these new metrics, freed from position error, with additional scaling terms, we obtain more manageable metrics that level down the weight of the modelling error due to the meteorology used in the comparison.
These metrics are used to quantify the proximity of a couple of plumes and could hence be used in an inverse framework, in particular for processing XCO[2] images. The question of the impact of the
meteorological changes on the metrics discussed here can be translated into another question: what importance do we give to each error category? We know that meteorological changes can result in
position errors, and we strongly suspect that changes in the emission's temporal profile or vertical distribution can also yield position errors. In such a case, it would be interesting to evaluate
the impact of the removal of the position errors and whether the amplitude errors carry enough information to compensate. We have seen that amplitude errors can also emerge from changes in
meteorology. Thus, further studies have to be undertaken to evaluate the sensitivity of the metrics to either the emissions or the meteorology, in order to determine which error should be weighted more heavily from the perspective of monitoring the emissions. We have to make sure that, by removing some sensitivity with respect to meteorology, we are not levelling down by the same factor the sensitivity with respect to the emissions.
For an operational purpose, optimising on non-local metrics is much more difficult than on pixel-wise metrics because it requires the computation of non-trivial gradients. The three non-local metrics
that we proposed are parameterised. These parameters usually balance a trade-off between computational efficiency and accuracy. In the case of the pixel-wise distance with an upstream correction, the
choice of the optimal isometry to apply depends on these parameters. Even though this study could be done with a personal computer, further computation optimisation developments are needed for
operational use. Here we only consider passive tracers, but a natural extension of this study would be to apply these metrics to reactive pollutants. This, however, requires quantifying the relative impact of chemistry on the shape, scale, and position of the plume.
The key idea here is that meteorology is fixed and bounds our model predictions. We chose to develop metrics that aim to remove the weight of this bound from the comparison to the observations. We could instead consider that meteorology is not fixed and can be seen as additional degrees of freedom to estimate. The Wasserstein metric is interesting in this respect because it penalises the position error linearly, but it remains numerically costly compared to pixel-wise metrics. Yet, we have seen that approximating the plume by Gaussian puffs yields a cheap estimate of the true Wasserstein distance.
To ease the computation, we suggest using an approximation of the Wasserstein distance, assuming plumes that are Gaussian-puff-like or separable into a Gaussian mixture as in Chen et al. (2019) and Delon and Desolneux (2020). However, the relevance of these approximations has to be assessed for real, noisy, cloudy plume images. This paper was a first step towards the use of smarter metrics to compare plume images in order to monitor atmospheric gaseous compound emissions through an inverse method.
Appendix A:Log-Sinkhorn algorithm
Parameters: ϵ[0]=1, ${\mathit{ϵ}}_{*}={\mathrm{10}}^{-\mathrm{5}}$, δϵ=10, convergence criterion $\mathit{\zeta }=\mathrm{5}×{\mathrm{10}}^{-\mathrm{4}}$, maximum number of iterations k[max]=200
Input: cost matrix C, normalised concentration vectors ${\stackrel{\mathrm{^}}{\mathbit{x}}}_{A}$ and ${\stackrel{\mathrm{^}}{\mathbit{x}}}_{B}$
Initialise $\mathit{ϵ}←{\mathit{ϵ}}_{\mathrm{0}}$, $\mathbit{f}←\mathbf{0}$, $\mathbit{g}←\mathbf{0}$, k←0
while $\mathit{ϵ}\ge {\mathit{ϵ}}_{*}$ do
repeat
k←k+1 ▹ Number of iterations
${w}^{-}←w$ ▹ Previous value of w
${f}_{i}←-\mathit{ϵ}\mathrm{ln}\left[\sum _{j=\mathrm{1}}^{N}\mathrm{exp}\left\{\frac{{f}_{i}^{\left(l\right)}+{g}_{j}^{\left(l\right)}-{C}_{i,j}}{\mathit{ϵ}}\right\}\right]+{f}_{i}^{\left(l\right)}+\mathit{ϵ}\mathrm{ln}{\stackrel{\mathrm{^}}{x}}_{A,i}$
${g}_{j}←-\mathit{ϵ}\mathrm{ln}\left[\sum _{i=\mathrm{1}}^{N}\mathrm{exp}\left\{\frac{{f}_{i}^{\left(l+\mathrm{1}\right)}+{g}_{j}^{\left(l\right)}-{C}_{i,j}}{\mathit{ϵ}}\right\}\right]+{g}_{j}^{\left(l\right)}+\mathit{ϵ}\mathrm{ln}{\stackrel{\mathrm{^}}{x}}_{B,j}$
$\mathbf{P}←\mathrm{exp}\left\{\frac{\mathbit{f}{\mathbf{1}}^{\top }+\mathbf{1}{\mathbit{g}}^{\top }-\mathbf{C}}{\mathit{ϵ}}\right\}$
$w←\sqrt{\sum _{i,j=\mathrm{1}}^{N}{C}_{i,j}{P}_{i,j}}$ ▹ Current value of w
until $|{w}^{-}-w|/w<\mathit{\zeta }$ or k≥k[max] ▹ Convergence criterion
$\mathit{ϵ}←\mathit{ϵ}/\mathit{\delta }\mathit{ϵ}$ ▹ Progressively decrease ϵ
end while
return Wasserstein distance w
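A compact Python sketch of a log-domain Sinkhorn iteration with ϵ scaling, structured like the Appendix A algorithm and using SciPy's `logsumexp` (our own implementation outline, not the exact code used for the article):

```python
import numpy as np
from scipy.special import logsumexp

def log_sinkhorn(C, a, b, eps0=1.0, eps_min=1e-5, d_eps=10.0,
                 zeta=5e-4, k_max=200):
    """Log-domain Sinkhorn with epsilon scaling: C is the N x N cost matrix
    and a, b are the normalised concentration vectors."""
    f = np.zeros(len(a))
    g = np.zeros(len(b))
    log_a, log_b = np.log(a), np.log(b)
    w = np.inf
    eps = eps0
    while eps >= eps_min:
        for _ in range(k_max):
            w_prev = w
            # Stabilised dual updates (log-sum-exp over rows, then columns).
            f = -eps * logsumexp((f[:, None] + g[None, :] - C) / eps,
                                 axis=1) + f + eps * log_a
            g = -eps * logsumexp((f[:, None] + g[None, :] - C) / eps,
                                 axis=0) + g + eps * log_b
            P = np.exp((f[:, None] + g[None, :] - C) / eps)  # transport plan
            w = np.sqrt(np.sum(C * P))
            if abs(w_prev - w) / w < zeta:  # relative convergence criterion
                break
        eps /= d_eps  # progressively decrease the regularisation
    return w

# Toy check: all the mass must travel between two points at cost 1,
# so the distance should be close to sqrt(1) = 1.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.array([1.0, 1e-12]); a /= a.sum()
b = np.array([1e-12, 1.0]); b /= b.sum()
print(log_sinkhorn(C, a, b))
```

Working in the log domain keeps the updates stable even for the small final values of ϵ, which would overflow a naive kernel-matrix implementation.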
Appendix B:Notations
x Position vector in the image
X[A,B] Continuous interpolation of the concentration field
x[A,B] Discrete representation of the concentration field
${\stackrel{\mathrm{^}}{\mathbit{x}}}_{A,B}$ Normalised discrete concentration field
𝒩(μ,Σ) Normal distribution of mean μ and error covariance matrix Σ
𝒰[𝔼] Uniform distribution over the domain 𝔼
μ Always refers to a mean vector
Σ Always refers to an error covariance matrix
Δ Diagonal matrix with the eigenvalues of Σ
R(θ) Rotation matrix of angle θ
x[t] Translation vector
x[0] Centre of mass coordinate vector
F Transformation in the plane
d Usual pixel-wise Euclidean distance
d[F] Pixel-wise distance with an upstream position correction
w Wasserstein distance
w[F] Wasserstein distance with an upstream position correction
w[emd] Earth mover distance
w[ϵ] Log-Sinkhorn approximation of the Wasserstein distance
w[num] Wasserstein distance between two Gaussian puffs
w[th] Analytical Wasserstein distance between two Gaussian puffs
ϵ Weight of the entropic regularisation of the log-Sinkhorn algorithm
ζ Convergence criterion for the log-Sinkhorn algorithm
T Translation length between the centre of mass of two plumes
θ Rotation angle between the principal axes of two plumes
H Hellinger distance between the error covariance matrices of two plumes
E[N] Mean wind speed seen by the plume averaged over the image domain and time
E[D] Mean wind direction seen by the plume averaged over the image domain and time
S[N] Standard deviation of the wind speed seen by the plume across the image domain and time
S[D] Standard deviation of the wind direction seen by the plume across the image domain and time
Appendix C:Gradient of the cost function for d[F]
To minimise Eq. (8) we use the L-BFGS algorithm provided by the SciPy library. The algorithm explicitly uses the gradient of the cost function 𝒥 with respect to θ, x[t], and y[t]. The first term of
this gradient – corresponding to ${d}^{\mathrm{2}}\left({X}_{A},{X}_{B}\circ \mathbf{F}\right)$ – is given by
$$\frac{\partial \mathcal{J}}{\partial \alpha}=-2\int_{\mathbb{R}^{2}}\left[X_{A}\left(\mathbit{x}\right)-X_{B}\left(\mathbf{F}\left(\mathbit{x}\right)\right)\right]\left[\frac{\partial X_{B}}{\partial x}\left(\mathbf{F}\left(\mathbit{x}\right)\right)\frac{\partial F_{x}}{\partial \alpha}+\frac{\partial X_{B}}{\partial y}\left(\mathbf{F}\left(\mathbit{x}\right)\right)\frac{\partial F_{y}}{\partial \alpha}\right]\mathrm{d}\mathbit{x},\qquad\text{(C1)}$$
where α is either θ, x[t], or y[t], $\mathbit{x}={\left(x,y\right)}^{\top }$, and $\mathbf{F}={\left({F}_{x},{F}_{y}\right)}^{\top }$. The partial derivatives of X[B] are given by the second image
(using the interpolation method), and the partial derivatives of F[x] and F[y] are
$$\frac{\partial F_{x}}{\partial \theta}=-\left(x-x_{0}\right)\sin\theta-\left(y-y_{0}\right)\cos\theta,\qquad\frac{\partial F_{y}}{\partial \theta}=\left(x-x_{0}\right)\cos\theta-\left(y-y_{0}\right)\sin\theta,\qquad\text{(C2a)}$$
$$\frac{\partial F_{x}}{\partial x_{t}}=1,\qquad\frac{\partial F_{y}}{\partial x_{t}}=0,\qquad\text{(C2b)}$$
$$\frac{\partial F_{x}}{\partial y_{t}}=0,\qquad\frac{\partial F_{y}}{\partial y_{t}}=1.\qquad\text{(C2c)}$$
Appendix D:From the Wasserstein distance w to the Hellinger distance w[F]
Let us define the cost function
$\begin{array}{}\text{(D1)}& \mathcal{J}\left({x}_{t},{y}_{t},\mathit{\theta }\right)\triangleq {w}^{\mathrm{2}}\left({X}_{A},{X}_{B}\circ \mathbf{F}\right),\end{array}$
where ${w}^{\mathrm{2}}\left({X}_{A},{X}_{B}\circ \mathbf{F}\right)$ is given by Eq. (25). The goal is to minimise 𝒥. From Eq. (25), we remark that 𝒥 has three terms $\mathcal{J}={\mathcal{J}}_{\
mathrm{1}}+{\mathcal{J}}_{\mathrm{2}}+{\mathcal{J}}_{\mathrm{3}}$, with
Minimising $\mathcal{J}$ with respect to $(x_t, y_t, \theta)$ is equivalent to minimising $\mathcal{J}_2$ with respect to $(x_t, y_t)$ and minimising $\mathcal{J}_3$ with respect to $\theta$. The minimum of $\mathcal{J}_2$ is $0$ and is reached for $\mathbf{x}_t=\boldsymbol{\mu}_B-\boldsymbol{\mu}_A$. Let us now focus on the minimum of $\mathcal{J}_3$. For convenience, we define
$$\mathbf{M}(\theta)\triangleq \mathbf{\Delta}_A^{1/2}\,\mathbf{R}(\theta+\theta_B-\theta_A)\,\mathbf{\Delta}_B\,\mathbf{R}(\theta+\theta_B-\theta_A)^\top\,\mathbf{\Delta}_A^{1/2},\tag{D5}$$
in such a way that $\mathcal{J}_3(\theta)=-2\,\mathrm{Tr}\,\mathbf{M}(\theta)^{1/2}$. With our notation, we have
$$\mathbf{\Delta}_A=\begin{bmatrix}\sigma_{1,A}&0\\0&\sigma_{2,A}\end{bmatrix},\tag{D6a}$$
$$\mathbf{\Delta}_B=\begin{bmatrix}\sigma_{1,B}&0\\0&\sigma_{2,B}\end{bmatrix},\tag{D6b}$$
$$\mathbf{R}(\theta+\theta_B-\theta_A)=\begin{bmatrix}\cos\tilde{\theta}&-\sin\tilde{\theta}\\\sin\tilde{\theta}&\cos\tilde{\theta}\end{bmatrix},\tag{D6c}$$
where $\tilde{\theta}\triangleq\theta+\theta_B-\theta_A$, and hence
$$\mathbf{M}(\theta)=\begin{bmatrix}\sigma_{1,A}\sigma_{1,B}\cos^2\tilde{\theta}+\sigma_{1,A}\sigma_{2,B}\sin^2\tilde{\theta} & \sqrt{\sigma_{1,A}\sigma_{2,A}}\,(\sigma_{1,B}-\sigma_{2,B})\cos\tilde{\theta}\sin\tilde{\theta}\\ \sqrt{\sigma_{1,A}\sigma_{2,A}}\,(\sigma_{1,B}-\sigma_{2,B})\cos\tilde{\theta}\sin\tilde{\theta} & \sigma_{2,A}\sigma_{2,B}\cos^2\tilde{\theta}+\sigma_{2,A}\sigma_{1,B}\sin^2\tilde{\theta}\end{bmatrix}.\tag{D7}$$
By construction, $\mathbf{M}(\theta)$ is symmetric and positive definite; therefore it is diagonalisable with strictly positive eigenvalues $\lambda_\pm(\theta)$. As a consequence, we have
$$\mathrm{Tr}\,\mathbf{M}(\theta)^{1/2}=\sqrt{\lambda_+(\theta)}+\sqrt{\lambda_-(\theta)}.\tag{D8}$$
Let us now introduce the following ancillary quantities:
$$\alpha\triangleq\sigma_{1,A}\sigma_{1,B}+\sigma_{2,A}\sigma_{2,B},\tag{D9a}$$
$$\beta\triangleq\sigma_{1,A}\sigma_{2,B}+\sigma_{2,A}\sigma_{1,B},\tag{D9b}$$
$$\kappa(\theta)\triangleq\mathrm{Tr}\,\mathbf{M}(\theta)=\alpha\cos^2\tilde{\theta}+\beta\sin^2\tilde{\theta},\tag{D9c}$$
$$\gamma(\theta)\triangleq\kappa^2(\theta)-4\,\mathrm{det}\,\mathbf{M}(\theta)=\kappa^2(\theta)-4\,\sigma_{1,A}\sigma_{1,B}\sigma_{2,A}\sigma_{2,B}.\tag{D9d}$$
Note that γ(θ) is the discriminant of the characteristic polynomial of M(θ), which means that γ(θ)≥0 because M(θ) is symmetric and positive definite. With these quantities, we have
$$\lambda_\pm(\theta)=\frac{1}{2}\left(\kappa(\theta)\pm\sqrt{\gamma(\theta)}\right).\tag{D10}$$
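The closed forms in Eqs. (D9)–(D10) are easy to check numerically. The sketch below uses hypothetical values for the standard deviations and for $\tilde{\theta}$ (not values from the paper), builds $\mathbf{M}(\theta)$ as in Eq. (D5), and confirms that its eigenvalues match Eq. (D10) and that $\mathrm{Tr}\,\mathbf{M}(\theta)^{1/2}$ equals $\sqrt{\lambda_+}+\sqrt{\lambda_-}$ as in Eq. (D8):

```python
import numpy as np

# Hypothetical standard deviations and angle offset (not values from the paper).
s1A, s2A, s1B, s2B = 0.5, 2.0, 1.0, 3.0
tt = 0.3  # theta-tilde = theta + theta_B - theta_A

DA, DB = np.diag([s1A, s2A]), np.diag([s1B, s2B])
R = np.array([[np.cos(tt), -np.sin(tt)],
              [np.sin(tt),  np.cos(tt)]])
M = np.sqrt(DA) @ R @ DB @ R.T @ np.sqrt(DA)               # Eq. (D5)

alpha = s1A * s1B + s2A * s2B                              # Eq. (D9a)
beta = s1A * s2B + s2A * s1B                               # Eq. (D9b)
kappa = alpha * np.cos(tt) ** 2 + beta * np.sin(tt) ** 2   # Eq. (D9c)
gamma = kappa ** 2 - 4 * s1A * s1B * s2A * s2B             # Eq. (D9d)
lam_p = 0.5 * (kappa + np.sqrt(gamma))                     # Eq. (D10)
lam_m = 0.5 * (kappa - np.sqrt(gamma))

eigs = np.linalg.eigvalsh(M)  # eigenvalues in ascending order
assert np.allclose(eigs, [lam_m, lam_p])
# Eq. (D8): Tr M^(1/2) = sqrt(lam_+) + sqrt(lam_-)
assert np.isclose(np.sum(np.sqrt(eigs)), np.sqrt(lam_p) + np.sqrt(lam_m))
```

Since $\det\mathbf{M}(\theta)=\sigma_{1,A}\sigma_{2,A}\sigma_{1,B}\sigma_{2,B}$ does not depend on $\theta$, only the trace $\kappa(\theta)$ varies, which is what the derivative computation below exploits.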
Let us first consider the case $\gamma(\theta)=0$. In this case, $\lambda_+(\theta)=\lambda_-(\theta)\triangleq\lambda(\theta)$; in other words, $\mathbf{M}(\theta)=\lambda(\theta)\,\mathbf{I}$. From the definition of $\mathbf{M}(\theta)$, Eq. (D5), we deduce that
$$\mathbf{R}(\tilde{\theta})\,\mathbf{\Delta}_B\,\mathbf{R}(\tilde{\theta})^\top=\lambda(\theta)\,\mathbf{\Delta}_A^{-1},\tag{D11}$$
which enforces $\tilde{\theta}=0$ (modulo $\pi$). This means that Eq. (D7) simplifies into
$$\mathbf{M}(\theta)=\begin{bmatrix}\sigma_{1,A}\sigma_{1,B}&0\\0&\sigma_{2,A}\sigma_{2,B}\end{bmatrix},\tag{D12}$$
and hence $\lambda(\theta)=\sigma_{1,A}\sigma_{1,B}=\sigma_{2,A}\sigma_{2,B}$.
Without loss of generality, we can assume in the definition of $\mathbf{\Delta}_A$ and $\theta_A$ that $0<\sigma_{1,A}\le\sigma_{2,A}$, and the same for $B$.^4 This means that $\sigma_{1,A}\sigma_{1,B}=\sigma_{2,A}\sigma_{2,B}$ actually implies $\sigma_{1,A}=\sigma_{2,A}$ and $\sigma_{1,B}=\sigma_{2,B}$. In this case, the covariance matrices for $A$ and $B$ are isotropic, and $\mathcal{J}_3$ does not actually depend on $\theta$.
Let us now consider the non-isotropic case: $0<\sigma_{1,A}<\sigma_{2,A}$ and $0<\sigma_{1,B}<\sigma_{2,B}$, which is the only case where $\mathcal{J}_3$ depends on $\theta$. In this case, we necessarily have $\gamma(\theta)>0$. We can then take the derivative of $\mathcal{J}_3$ with respect to $\theta$:
$$-\frac{1}{2}\mathcal{J}_3'(\theta)=\frac{\lambda_+'(\theta)}{2\sqrt{\lambda_+(\theta)}}+\frac{\lambda_-'(\theta)}{2\sqrt{\lambda_-(\theta)}},\tag{D13}$$
$$=\frac{\sqrt{\lambda_+(\theta)}\,\lambda_+'(\theta)}{2\lambda_+(\theta)}+\frac{\sqrt{\lambda_-(\theta)}\,\lambda_-'(\theta)}{2\lambda_-(\theta)},\tag{D14}$$
$$=\frac{\sqrt{\lambda_+(\theta)}\left(\kappa'(\theta)+\frac{\gamma'(\theta)}{2\sqrt{\gamma(\theta)}}\right)}{4\lambda_+(\theta)}+\frac{\sqrt{\lambda_-(\theta)}\left(\kappa'(\theta)-\frac{\gamma'(\theta)}{2\sqrt{\gamma(\theta)}}\right)}{4\lambda_-(\theta)},\tag{D15}$$
$$=\frac{\frac{\sqrt{\lambda_+(\theta)}}{\sqrt{\gamma(\theta)}}\left(\kappa'(\theta)\sqrt{\gamma(\theta)}+\frac{1}{2}\gamma'(\theta)\right)}{4\lambda_+(\theta)}+\frac{\frac{\sqrt{\lambda_-(\theta)}}{\sqrt{\gamma(\theta)}}\left(\kappa'(\theta)\sqrt{\gamma(\theta)}-\frac{1}{2}\gamma'(\theta)\right)}{4\lambda_-(\theta)},\tag{D16}$$
$$=\frac{\frac{\sqrt{\lambda_+(\theta)}}{\sqrt{\gamma(\theta)}}\left(\kappa'(\theta)\sqrt{\gamma(\theta)}+\kappa'(\theta)\kappa(\theta)\right)}{4\lambda_+(\theta)}+\frac{\frac{\sqrt{\lambda_-(\theta)}}{\sqrt{\gamma(\theta)}}\left(\kappa'(\theta)\sqrt{\gamma(\theta)}-\kappa'(\theta)\kappa(\theta)\right)}{4\lambda_-(\theta)},\tag{D17}$$
$$=\frac{2\frac{\sqrt{\lambda_+(\theta)}}{\sqrt{\gamma(\theta)}}\kappa'(\theta)\lambda_+(\theta)}{4\lambda_+(\theta)}-\frac{2\frac{\sqrt{\lambda_-(\theta)}}{\sqrt{\gamma(\theta)}}\kappa'(\theta)\lambda_-(\theta)}{4\lambda_-(\theta)},\tag{D18}$$
$$=\kappa'(\theta)\cdot\frac{\sqrt{\lambda_+(\theta)}-\sqrt{\lambda_-(\theta)}}{2\sqrt{\gamma(\theta)}},\tag{D19}$$
$$=(\beta-\alpha)\cos\tilde{\theta}\sin\tilde{\theta}\cdot\frac{\sqrt{\lambda_+(\theta)}-\sqrt{\lambda_-(\theta)}}{\sqrt{\gamma(\theta)}},\tag{D20}$$
which is the product of three terms: $\beta-\alpha$, $\cos\tilde{\theta}\sin\tilde{\theta}$, and $\left(\sqrt{\lambda_+(\theta)}-\sqrt{\lambda_-(\theta)}\right)/\sqrt{\gamma(\theta)}$. The third term is always strictly positive because $\gamma(\theta)>0$. The first term is always strictly negative because we have assumed that $\sigma_{1,A}<\sigma_{2,A}$ and $\sigma_{1,B}<\sigma_{2,B}$. Hence we only need to consider the second term, $\cos\tilde{\theta}\sin\tilde{\theta}$, to conclude that the minima of $\mathcal{J}_3$ are met for $\tilde{\theta}=0$ (modulo $\pi$), or equivalently $\theta=\theta_A-\theta_B$ (modulo $\pi$). In this case, $\mathbf{M}(\theta)=\mathbf{\Delta}_A\mathbf{\Delta}_B$ and $\mathcal{J}_3(\theta)=-2\,\mathrm{Tr}\left(\mathbf{\Delta}_A\mathbf{\Delta}_B\right)^{1/2}$, which yields the correct formula for $w_F$, Eq. (26). Finally, note that this formula is also valid in the case where at least one of $A$ or $B$ is isotropic.
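The conclusion of the derivation can be verified numerically: scanning $\mathcal{J}_3(\theta)=-2\,\mathrm{Tr}\,\mathbf{M}(\theta)^{1/2}$ over a grid (with hypothetical angles and standard deviations, chosen here for illustration only) locates the minimum at $\theta=\theta_A-\theta_B$ modulo $\pi$, with minimal value $-2\,\mathrm{Tr}\,(\mathbf{\Delta}_A\mathbf{\Delta}_B)^{1/2}$. A minimal sketch:

```python
import numpy as np

def J3(theta, thA, thB, s1A, s2A, s1B, s2B):
    """J3(theta) = -2 Tr M(theta)^(1/2), with M(theta) built as in Eq. (D5)."""
    tt = theta + thB - thA
    DA, DB = np.diag([s1A, s2A]), np.diag([s1B, s2B])
    R = np.array([[np.cos(tt), -np.sin(tt)],
                  [np.sin(tt),  np.cos(tt)]])
    M = np.sqrt(DA) @ R @ DB @ R.T @ np.sqrt(DA)
    return -2.0 * np.sum(np.sqrt(np.linalg.eigvalsh(M)))

# Hypothetical angles and sorted, anisotropic standard deviations.
thA, thB = 0.7, 0.2
s1A, s2A, s1B, s2B = 0.5, 2.0, 1.0, 3.0

thetas = np.linspace(-np.pi, np.pi, 2001)
vals = np.array([J3(t, thA, thB, s1A, s2A, s1B, s2B) for t in thetas])
t_star = thetas[np.argmin(vals)]

# Minimiser: theta = theta_A - theta_B (modulo pi).
d = (t_star - (thA - thB)) % np.pi
assert min(d, np.pi - d) < 0.01
# Minimal value: -2 Tr (Delta_A Delta_B)^(1/2).
assert np.isclose(vals.min(), -2 * (np.sqrt(s1A * s1B) + np.sqrt(s2A * s2B)), atol=1e-3)
```

Because $\mathcal{J}_3$ is $\pi$-periodic in $\tilde{\theta}$, the grid search finds two equivalent minimisers in $[-\pi,\pi]$; the modulo-$\pi$ distance check accepts either.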
Author contributions
Writing process: mainly PJV and EP, with inputs from all co-authors. System and experiment design: PJV, JDLB, AF, MB, and YR. Implementation: PJV. Support in development and use of data: PJV and EP. Analysis: mainly PJV, JDLB, AF, and MB, with feedback from YR, EP, and GB.
Competing interests
The contact author has declared that none of the authors has any competing interests.
Disclaimer
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Acknowledgements
This study has been funded by the national research project ANR-ARGONAUT no. ANR-19-CE01-0007 (PollutAnts and gReenhouse Gases emissiOns moNitoring from spAce at high ResolUTion). Joffrey Dumont Le Brazidec is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 958927 (Prototype system for a Copernicus CO2 service). All the figures were drawn using CVD-friendly colour maps, made possible by a Python wrapper around Fabio Crameri's perceptually uniform colour maps (Crameri, 2021), available at https://www.fabiocrameri.ch/colourmaps/ (last access: 14 March 2023). CEREA is a member of Institut Pierre-Simon Laplace (IPSL). The authors jointly thank the associate editor, Lok Lamsal, and the two anonymous referees for the relevant comments they made during the review of this article.
Financial support
This research has been supported by the national research project ANR-ARGONAUT (grant no. ANR-19-CE01-0007, PollutAnt and gReenhouse Gases emissiOns moNitoring from spAce at high ResolUTion).
Review statement
This paper was edited by Lok Lamsal and reviewed by two anonymous referees.
References
Agusti-Panareda, A.: The CHE Tier1 Global Nature Run, Tech. rep., CO2 Human Emissions, H2020 European Project, https://www.che-project.eu/sites/default/files/2018-07/CHE-D2.2-V1-0.pdf (last access: 14 March 2023), 2018.a
Amodei, M., Sanchez, I., and Stein, J.: Deterministic and fuzzy verification of the cloudiness of High Resolution operational models, Meteorol. Appl., 16, 191–203, https://doi.org/10.1002/met.101, 2009.a
Benamou, J.-D. and Brenier, Y.: A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem, Numer. Math., 84, 375–393, https://doi.org/10.1007/s002110050002, 2000.a
Berchet, A., Sollum, E., Thompson, R. L., Pison, I., Thanwerdas, J., Broquet, G., Chevallier, F., Aalto, T., Berchet, A., Bergamaschi, P., Brunner, D., Engelen, R., Fortems-Cheiney, A., Gerbig, C.,
Groot Zwaaftink, C. D., Haussaire, J.-M., Henne, S., Houweling, S., Karstens, U., Kutsch, W. L., Luijkx, I. T., Monteil, G., Palmer, P. I., van Peet, J. C. A., Peters, W., Peylin, P., Potier, E.,
Rödenbeck, C., Saunois, M., Scholze, M., Tsuruta, A., and Zhao, Y.: The Community Inversion Framework v1.0: a unified system for atmospheric inversion studies, Geosci. Model Dev., 14, 5331–5354,
https://doi.org/10.5194/gmd-14-5331-2021, 2021.a
Bieser, J., Aulinger, A., Matthias, V., Quante, M., and Denier van der Gon, H.: Vertical emission profiles for Europe based on plume rise calculations, Environ. Pollut., 159, 2935–2946, https://
doi.org/10.1016/j.envpol.2011.04.030, 2011.a
Bonneel, N., van de Panne, M., Paris, S., and Heidrich, W.: Displacement Interpolation Using Lagrangian Mass Transport, Association for Computing Machinery, New York, NY, USA, 30, 1–6, https://
doi.org/10.1145/2070781.2024192, 2011.a
Broquet, G., Bréon, F.-M., Renault, E., Buchwitz, M., Reuter, M., Bovensmann, H., Chevallier, F., Wu, L., and Ciais, P.: The potential of satellite spectro-imagery for monitoring CO[2] emissions from
large cities, Atmos. Meas. Tech., 11, 681–708, https://doi.org/10.5194/amt-11-681-2018, 2018.a
Brunner, D., Kuhlmann, G., Marshall, J., Clément, V., Fuhrer, O., Broquet, G., Löscher, A., and Meijer, Y.: Accounting for the vertical distribution of emissions in atmospheric CO[2] simulations,
Atmos. Chem. Phys., 19, 4541–4559, https://doi.org/10.5194/acp-19-4541-2019, 2019.a
Cai, B., Cui, C., Zhang, D., Cao, L., Wu, P., Pang, L., Zhang, J., and Dai, C.: China city-level greenhouse gas emissions inventory in 2015 and uncertainty analysis, Appl. Energ., 253, 113579, https:
//doi.org/10.1016/j.apenergy.2019.113579, 2019.a
Calvo Buendia, E., Tanabe, K., Kranjc, A., Baasansuren, J., Fukuda, M., Ngarize, S., Osako, A., Pyrozhenko, Y., Shermanau, P., and Frederici, S.: Quality Assurance/Quality Control and Verification,
in: 2019 Refinement to the 2006 IPCC Guidelines for National Greenhouse Gas Inventories, vol. 1, IPCC, Switzerland, https://www.ipcc-nggip.iges.or.jp/public/2019rf/pdf/1_Volume1/
19R_V1_Ch03_Uncertainties.pdf (last access: 14 March 2023), 2019a.a
Calvo Buendia, E., Tanabe, K., Kranjc, A., Baasansuren, J., Fukuda, M., Ngarize, S., Osako, A., Pyrozhenko, Y., Shermanau, P., and Frederici, S.: Uncertainties, in: 2019 Refinement to the 2006 IPCC Guidelines for National Greenhouse Gas Inventories, vol. 1, IPCC, Switzerland, https://www.ipcc-nggip.iges.or.jp/public/2019rf/pdf/1_Volume1/19R_V1_Ch06_QA_QC.pdf (last access: 14 March 2023), 2019b.a
Chen, Y., Georgiou, T. T., and Tannenbaum, A.: Optimal Transport for Gaussian Mixture Models, IEEE Access, 7, 6269–6278, https://doi.org/10.1109/ACCESS.2018.2889838, 2019.a
Chizat, L., Peyré, G., Schmitzer, B., and Vialard, F.-X.: Scaling algorithms for unbalanced optimal transport problems, Math. Comput., 87, 2563–2609, https://doi.org/10.1090/mcom/3303, 2018.a, b
Crameri, F.: Scientific colour maps (7.0.1), Zenodo [code], https://doi.org/10.5281/zenodo.5501399, 2021.a, b
Davis, C. A., Brown, B. G., Bullock, R., and Halley-Gotway, J.: The Method for Object-Based Diagnostic Evaluation (MODE) Applied to Numerical Forecasts from the 2005 NSSL/SPC Spring Program, Weather
Forecast., 24, 1252–1267, https://doi.org/10.1175/2009WAF2222241.1, 2009.a
Delon, J. and Desolneux, A.: A Wasserstein-Type Distance in the Space of Gaussian Mixture Models, SIAM J. Imaging Sci., 13, 936–970, https://doi.org/10.1137/19M1301047, 2020.a
Denier van der Gon, H. A. C., Kuenen, J. J. P., Janssens-Maenhout, G., Döring, U., Jonkers, S., and Visschedijk, A.: TNO_CAMS high resolution European emission inventory 2000–2014 for anthropogenic
CO[2] and future years following two different pathways, Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2017-124, in review, 2017.a
Dumont Le Brazidec, J., Bocquet, M., Saunier, O., and Roustan, Y.: Quantification of uncertainties in the assessment of an atmospheric release source applied to the autumn 2017 ^106Ru event, Atmos.
Chem. Phys., 21, 13247–13267, https://doi.org/10.5194/acp-21-13247-2021, 2021.a
Ebert, E. E.: Fuzzy verification of high-resolution gridded forecasts: a review and proposed framework, Meteorol. Appl., 15, 51–64, https://doi.org/10.1002/met.25, 2008.a
Ebert, E. E. and McBride, J. L.: Verification of precipitation in weather systems: determination of systematic errors, J. Hydrol., 239, 179–202, https://doi.org/10.1016/S0022-1694(00)00343-7, 2000.a
Farchi, A., Bocquet, M., Roustan, Y., Mathieu, A., and Quérel, A.: Using the Wasserstein distance to compare fields of pollutants: application to the radionuclide atmospheric dispersion of the
Fukushima-Daiichi accident, Tellus B, 68, 31682, https://doi.org/10.3402/tellusb.v68.31682, 2016.a, b, c
Feyeux, N., Vidard, A., and Nodet, M.: Optimal transport for variational data assimilation, Nonlin. Processes Geophys., 25, 55–66, https://doi.org/10.5194/npg-25-55-2018, 2018.a, b, c
Flamary, R., Courty, N., Gramfort, A., Alaya, M. Z., Boisbunon, A., Chambon, S., Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud, N. T. H., Janati, H., Rakotomamonjy, A.,
Redko, I., Rolet, A., Schutz, A., Seguy, V., Sutherland, D. J., Tavenard, R., Tong, A., and Vayer, T.: POT: Python Optimal Transport, J. Mach. Learn. Res., 22, 1–8, http://jmlr.org/papers/v22/
20-451.html (last access: 14 March 2023), 2021.a
Gelbrich, M.: On a Formula for the L2 Wasserstein Metric between Measures on Euclidean and Hilbert Spaces, Math. Nachr., 147, 185–203, https://doi.org/10.1002/mana.19901470121, 1990.a
Gilleland, E.: Novel measures for summarizing high-resolution forecast performance, Adv. Stat. Clim. Meteorol. Oceanogr., 7, 13–34, https://doi.org/10.5194/ascmo-7-13-2021, 2021.a
Gilleland, E., Ahijevych, D., Brown, B. G., Casati, B., and Ebert, E. E.: Intercomparison of Spatial Forecast Verification Methods, Weather Forecast., 24, 1416–1430, https://doi.org/10.1175/
2009WAF2222269.1, 2009.a
Gilleland, E., Lindström, J., and Lindgren, F.: Analyzing the Image Warp Forecast Verification Method on Precipitation Fields from the ICP, Weather Forecast., 25, 1249–1262, https://doi.org/10.1175/
2010WAF2222365.1, 2010.a
Hakkarainen, J., Szeląg, M. E., Ialongo, I., Retscher, C., Oda, T., and Crisp, D.: Analyzing nitrogen oxides to carbon dioxide emission ratios from space: A case study of Matimba Power Station in
South Africa, Atmos. Environ. X, 10, 100110, https://doi.org/10.1016/j.aeaoa.2021.100110, 2021.a
Hergoualc'h, K., Mueller, N., Bernoux, M., Kasimir, A., van der Weerden, T. J., and Ogle, S. M.: Improved accuracy and reduced uncertainty in greenhouse gas inventories by refining the IPCC emission
factor for direct N[2]O emissions from nitrogen inputs to managed soils, Glob. Change Biol., 27, 6536–6550, https://doi.org/10.1111/gcb.15884, 2021.a
Hoffman, R. N. and Grassotti, C.: A Technique for Assimilating SSM/I Observations of Marine Atmospheric Storms: Tests with ECMWF Analyses, J. Appl. Meteorol. Clim., 35, 1177–1188, https://doi.org/
10.1175/1520-0450(1996)035<1177:ATFASO>2.0.CO;2, 1996.a
Hoffman, R. N., Liu, Z., Louis, J.-F., and Grassoti, C.: Distortion Representation of Forecast Errors, Mon. Weather Rev., 123, 2758–2770, https://doi.org/10.1175/1520-0493(1995)123<2758:DROFE>2.0.CO;
2, 1995.a
Horowitz, C. A.: Paris Agreement, International Legal Materials, 55, 740–755, https://doi.org/10.1017/S0020782900004253, 2016.a
Janssens-Maenhout, G., Crippa, M., Guizzardi, D., Muntean, M., Schaaf, E., Dentener, F., Bergamaschi, P., Pagliari, V., Olivier, J. G. J., Peters, J. A. H. W., van Aardenne, J. A., Monni, S.,
Doering, U., Petrescu, A. M. R., Solazzo, E., and Oreggioni, G. D.: EDGAR v4.3.2 Global Atlas of the three major greenhouse gas emissions for the period 1970–2012, Earth Syst. Sci. Data, 11,
959–1002, https://doi.org/10.5194/essd-11-959-2019, 2019.a
Kantorovich, L. V.: On mass transportation, C. R. (Doklady) Acad. Sci. URSS (N. S.), 37, 199–201, https://ci.nii.ac.jp/naid/10018386680/ (last access: 14 March 2023), 1942.a
Keil, C. and Craig, G. C.: A Displacement-Based Error Measure Applied in a Regional Ensemble Forecasting System, Mon. Weather Rev., 135, 3248–3259, https://doi.org/10.1175/MWR3457.1, 2007.a
Korsakissok, I. and Mallet, V.: Comparative Study of Gaussian Dispersion Formulas within the Polyphemus Platform: Evaluation with Prairie Grass and Kincaid Experiments, J. Appl. Meteorol. Clim., 48,
2459–2473, https://doi.org/10.1175/2009JAMC2160.1, 2009.a
Kuenen, J. J. P., Visschedijk, A. J. H., Jozwicka, M., and Denier van der Gon, H. A. C.: TNO-MACC_II emission inventory; a multi-year (2003–2009) consistent high-resolution European emission
inventory for air quality modelling, Atmos. Chem. Phys., 14, 10963–10976, https://doi.org/10.5194/acp-14-10963-2014, 2014.a
Kuhlmann, G., Broquet, G., Marshall, J., Clément, V., Löscher, A., Meijer, Y., and Brunner, D.: Detectability of CO[2] emission plumes of cities and power plants with the Copernicus Anthropogenic CO
[2] Monitoring (CO2M) mission, Atmos. Meas. Tech., 12, 6695–6719, https://doi.org/10.5194/amt-12-6695-2019, 2019.a
Kuhlmann, G., Brunner, D., Broquet, G., and Meijer, Y.: Quantifying CO[2] emissions of a city with the Copernicus Anthropogenic CO[2] Monitoring satellite mission, Atmos. Meas. Tech., 13, 6733–6754,
https://doi.org/10.5194/amt-13-6733-2020, 2020.a
Lian, J., Wu, L., Bréon, F.-M., Broquet, G., Vautard, R., Zaccheo, T. S., Dobler, J., and Ciais, P.: Evaluation of the WRF-UCM mesoscale model and ECMWF global operational forecasts over the Paris
region in the prospect of tracer atmospheric transport modeling, Elem. Sci. Anthr., 6, 64, https://doi.org/10.1525/elementa.319, 2018.a
Marzban, C. and Sandgathe, S.: Optical Flow for Verification, Weather Forecast., 25, 1479–1494, https://doi.org/10.1175/2010WAF2222351.1, 2010.a
Meinshausen, M., Meinshausen, N., Hare, W., Raper, S. C. B., Frieler, K., Knutti, R., Frame, D. J., and Allen, M. R.: Greenhouse-gas emission targets for limiting global warming to 2 °C, Nature, 458, 1158–1162, https://doi.org/10.1038/nature08017, 2009.a
Menut, L., Bessagnet, B., Khvorostyanov, D., Beekmann, M., Blond, N., Colette, A., Coll, I., Curci, G., Foret, G., Hodzic, A., Mailler, S., Meleux, F., Monge, J.-L., Pison, I., Siour, G., Turquety,
S., Valari, M., Vautard, R., and Vivanco, M. G.: CHIMERE 2013: a model for regional atmospheric composition modelling, Geosci. Model Dev., 6, 981–1028, https://doi.org/10.5194/gmd-6-981-2013, 2013.a
Monge, G.: Mémoire sur la théorie des déblais et des remblais, Histoire de l’Académie royale des sciences avec les mémoires de mathématique et de physique tirés des registres de cette Académie,
Imprimerie royale, 666–705, 1781.a
Nocedal, J. and Wright, S. J.: Large-scale unconstrained optimization, Numerical Optimization, Springer, 164–192, ISBN 978-0-387-30303-1, 2006.a
Peyré, G. and Cuturi, M.: Computational Optimal Transport: With Applications to Data Science, Foundations and Trends® in Machine Learning, 11, 355–607, https://doi.org/10.1561/2200000073, 2019.a, b, c
Pison, I., Berchet, A., Saunois, M., Bousquet, P., Broquet, G., Conil, S., Delmotte, M., Ganesan, A., Laurent, O., Martin, D., O'Doherty, S., Ramonet, M., Spain, T. G., Vermeulen, A., and Yver Kwok,
C.: How a European network may help with estimating methane emissions on the French national scale, Atmos. Chem. Phys., 18, 3779–3798, https://doi.org/10.5194/acp-18-3779-2018, 2018.a
Potier, E., Broquet, G., Wang, Y., Santaren, D., Berchet, A., Pison, I., Marshall, J., Ciais, P., Bréon, F.-M., and Chevallier, F.: Complementing XCO[2] imagery with ground-based CO[2] and ^14CO[2]
measurements to monitor CO[2] emissions from fossil fuels on a regional to local scale, Atmos. Meas. Tech., 15, 5261–5288, https://doi.org/10.5194/amt-15-5261-2022, 2022.a
Santaren, D., Broquet, G., Bréon, F.-M., Chevallier, F., Siméoni, D., Zheng, B., and Ciais, P.: A local- to national-scale inverse modeling system to assess the potential of spaceborne CO[2]
measurements for the monitoring of anthropogenic emissions, Atmos. Meas. Tech., 14, 403–433, https://doi.org/10.5194/amt-14-403-2021, 2021.a, b
Seigneur, C.: Air Pollution: Concepts, Theory, and Applications, Cambridge University Press, ISBN 9781108481632, 2019.a
Solazzo, E., Crippa, M., Guizzardi, D., Muntean, M., Choulga, M., and Janssens-Maenhout, G.: Uncertainties in the Emissions Database for Global Atmospheric Research (EDGAR) emission inventory of
greenhouse gases, Atmos. Chem. Phys., 21, 5655–5683, https://doi.org/10.5194/acp-21-5655-2021, 2021.a
Super, I., Dellaert, S. N. C., Visschedijk, A. J. H., and Denier van der Gon, H. A. C.: Uncertainty analysis of a European high-resolution emission inventory of CO[2] and CO to support inverse
modelling and network design, Atmos. Chem. Phys., 20, 1795–1816, https://doi.org/10.5194/acp-20-1795-2020, 2020.a
Tamang, S. K., Ebtehaj, A., van Leeuwen, P. J., Lerman, G., and Foufoula-Georgiou, E.: Ensemble Riemannian data assimilation: towards large-scale dynamical systems, Nonlin. Processes Geophys., 29,
77–92, https://doi.org/10.5194/npg-29-77-2022, 2022.a
Vanderbecken, P. J.: Passive gas plume database for metrics comparison (Version 0), Zenodo [data set], https://doi.org/10.5281/zenodo.6958047, 2022.a
Varon, D. J., Jacob, D. J., McKeever, J., Jervis, D., Durak, B. O. A., Xia, Y., and Huang, Y.: Quantifying methane point sources from fine-scale satellite observations of atmospheric methane plumes,
Atmos. Meas. Tech., 11, 5673–5686, https://doi.org/10.5194/amt-11-5673-2018, 2018. a
Varon, D. J., Jacob, D. J., Jervis, D., and McKeever, J.: Quantifying Time-Averaged Methane Emissions from Individual Coal Mine Vents with GHGSat-D Satellite Observations, Environ. Sci. Technol., 54,
10246–10253, https://doi.org/10.1021/acs.est.0c01213, 2020.a
Veefkind, J. P., Aben, I., McMullan, K., Förster, H., de Vries, J., Otter, G., Claas, J., Eskes, H. J., de Haan, J. F., Kleipool, Q., van Weele, M., Hasekamp, O., Hoogeveen, R., Landgraf, J., Snel,
R., Tol, P., Ingmann, P., Voors, R., Kruizinga, B., Vink, R., Visser, H., and Levelt, P. F.: TROPOMI on the ESA Sentinel-5 Precursor: A GMES mission for global observations of the atmospheric
composition for climate, air quality and ozone layer applications, Remote Sens. Environ., 120, 70–83, https://doi.org/10.1016/j.rse.2011.09.027, 2012.a
Villani, C.: Optimal Transport, vol. 338 of Grundlehren der mathematischen Wissenschaften, Springer, Berlin, Heidelberg, https://doi.org/10.1007/978-3-540-71050-9, 2009.a, b
For this specific value of $\epsilon$, the $d$ distance between $\mathcal{U}_{\mathbb{E}}$ and the first plume is similar to the $d$ distance between the two plumes.
The convergence speed is measured here by the number of iterations.
By construction, $X_A$ and $X_B$ are normalised, in such a way that we do not need to renormalise them to be able to compute the Wasserstein distance.
If this is not the case, we just have to change $\theta_A$ to $\theta_A+\pi$ to swap $\sigma_{1,A}$ and $\sigma_{2,A}$.
Economic System in Islam- Muslim Contribution to Mathematics & Physics - IslamOnline
Summary of 8.7 "Muslim Contribution to Astronomy & Chemistry"
Last week's program was a continuation of our discussion of the progress of the Islamic sciences in the Middle Ages and their effect on the Renaissance in Europe. We looked into some of the basic routes through which Muslim learning and sciences penetrated into Europe, including Europeans studying at Muslim universities as well as the Crusades. We started with specific examples of the historical manifestation of the attitude the Quran teaches towards science and learning. We discussed more particularly the areas of astronomy and chemistry, and gave various examples of chemicals whose names still come from Arabic words, for example alcohol.
8.8 Muslim Contribution to Mathematics and Physics
Host: How did the numerals that we use come to be called Arabic numerals?
Jamal Badawi:
The Arabic numerals that we use at the present time were originally used in India. However, to Muslims goes the credit of popularizing the use of these numerals, bringing them to Europe, and introducing them to the world at large. Along with spreading their use, the achievements of Muslims in the field of mathematics also helped these numerals become known as Arabic numerals. In addition, one of the great mathematicians, by the name of Muhammad Bin Ahmad, in the 10th century actually invented the concept of zero. The Arabic word for zero is sifr, which means void; in the opinion of some scholars this is related to the terms cipher and decipher. The discovery of the zero was not a simple thing, and according to many historians of science it revolutionized the discipline of mathematics. It made it possible to express all numbers using ten characters, giving them absolute value and a value by position. In fact, without this the whole later development of mathematics would have been stifled. The zero as a concept came to be known in Europe only in the 13th century, about 300 years after Muslims had used it.
Host: It is claimed that the term algebra comes from Arabic and is connected to the Islamic civilization; is this correct?
Jamal Badawi:
Algebra is a means of universal arithmetic. It is the use of numbers, letters and symbols to analyze and express relationships between quantities in terms of formulas and equations. Put more simply, algebra is calculation using symbols. The term is of Arabic origin: the Arabic term is al-jabr, which means to unite or to put pieces together. Algebra was initiated by a Muslim mathematician, Muhammad ibn Musa al-Khwarizmi, who lived in the 9th century and was connected with the House of Wisdom that was established in Baghdad. His book on the subject, Hisab al-Jabr wal-Muqabala, means calculation by symbols. This became a classic that for hundreds of years was used as the basic text in mathematics. The interesting thing is that the name al-Khwarizmi is similar to "algorithm", and the algorithm was actually named after al-Khwarizmi; this is the study of the decimal system of counting, which was introduced by him. In fact, one of the major historians of the history and development of science, George Sarton, says that al-Khwarizmi "is one of the founders of analysis or algebra as distinct from geometry." This made algebra a distinct discipline. In addition, we find many other mathematicians who contributed to the field of algebra, like Abu al-Wafa, who lived in the first part of the tenth century. He contributed a great deal of study which perfected the work of al-Khwarizmi, and he also worked on quadratic equations. In fact, according to Sarton, many of the works of the Muslims passed on to Europe through translation from Arabic to Latin by such people as Robert of Chester, Adelard of Bath and John of Seville.
Host: Did Muslim mathematicians contribute to geometry as well?
Jamal Badawi:
Yes. In fact, without the contribution of Muslim mathematicians in geometry, many of the golden treasures of the past could have been totally lost. Take for example the work of Euclid; without Muslims being keen about learning and preserving that heritage, it would have been totally lost to history. The first translation of Euclid's work into Arabic was done in the first half of the 9th century, and from this Arabic translation that heritage was passed on to Europe, being later translated from Arabic into Latin. It was not merely a matter of preserving the teachings of the past: many additions and commentaries were added from as early as the 9th century. One of the most important commentaries on Euclid's work came in the 13th century from Nasir al-Din al-Tusi. His commentary and critique of Euclid provided the impetus for the study of non-Euclidean geometry; it influenced people in the 18th century who were the forerunners of the so-called non-Euclidean geometry which emerged in the 19th century. George Sarton refers to this as one of the most tremendous works in science, and indicates that in the 13th century the leading books in geometry were in Arabic, or in Latin translated from Arabic. In fact, attention was not only given to the theoretical aspects; Muslims were practically oriented too, which led to the sub-area of trigonometry.
Host: Could you shed some light on the field of trigonometry?
Jamal Badawi:
Trigonometry is a very important field in mathematics, as it has a variety of applications in surveying, navigation and engineering. According to John Draper, Muslims were the first to develop trigonometry in its modern form. The Greeks of course had some knowledge of trigonometry, but Muslims developed it into its modern form and were the first to use the sine and cosine; al-Battani is credited with the sine and cosine functions. This is probably related to their deep interest in astronomy, since trigonometry has many applications there. It is interesting that some of the work done by Muslim specialists on tangents was not known in Europe until 500 years later; some areas passed on to Europe faster, but this is one that took much longer. They also produced works on spherical trigonometry, which again may be related to astronomy. George Sarton, in his Introduction to the History of Science (Volume 2, Part 1, page 12), says the development of trigonometry was entirely due to Muslim efforts during the Middle Ages. He also says: "This outline of trigonometry in the twelfth and thirteenth century cannot but give the reader a very high idea of Muslim science. All the progressive work to the very end of this period was published in Arabic. Latin trigonometry was but a pale reflection of the Arabic, and it was already a little behind the times when it was new, for the Arabic efforts did not stop but continued with increased efficiency." This is really a fascinating area in which a great deal of contributions were made by Muslim mathematicians, providing the impetus and inspiration for later mathematicians to continue.
Host: Are there additional examples of contributions to the field of mathematics that you may be able to share?
Jamal Badawi:
If we go back to the 10th century (particularly the second half), we find a Muslim mathematician, Abu al-Wafa, who was regarded by historians as the first to show the generality of the sine theorem relative to spherical triangles. He also gave a new method for constructing the sine tables. In the 11th century, very famous mathematicians, al-Biruni and Ibn Sina (known in the West as Avicenna), contributed work at a very high level and full of originality. In Egypt, a great astronomer and mathematician who lived in the 11th century and contributed a great deal was Ibn Yunus. One of the fascinating contributions came in the second part of the 11th century from a man who is better known as a poet than as a scientist, Omar al-Khayyam (Omar Khayyam in English). Al-Khayyam was so bright in mathematics that, according to Sarton, he conceived a very remarkable classification of equations. It is said that he recognized 13 different forms of cubic equations, which is rather complex for that time. He tried to solve all of them and even gave partial geometric solutions to some of the equations. In addition to this we find other people who contributed to the different branches of mathematics. For example, in the first half of the 11th century we find a Muslim mathematician, Ibn al-Samh, who contributed to the field of calculus. Others also contributed to commercial arithmetic, which is called al-muamalat in Arabic.
Host: What was the single most important Muslim contribution to the field of physics?
Jamal Badawi:
It is agreed that the most important single contribution of Muslim physicists was the science of optics. In fact, we cannot speak of optics without mentioning the name of Abu al-Hasan Ibn al-Haytham, whose Latinized name is Alhazen. He lived in the first half of the 11th century and was described by George Sarton as "the greatest Muslim physicist and one of the greatest students of optics of all times." Alhazen, according to Sarton, exerted a great deal of influence on Western science. Furthermore, he made great progress in the experimental method and was a forerunner of Bacon and Kepler. As a result of his work, the development and use of microscopes and telescopes followed at a later time. One of the main reasons for this influence is a classic that he wrote called Kitab al-Manazir, or Optics; indeed, this work is regarded by historians as the beginning of the modern science of optics. I will give examples of some of his bright discoveries, which were made in the first part of the 11th century. First of all, there was a common misconception in Greek science about how a person sees: the Greeks used to believe that a ray of light proceeds from the eye to the object. He corrected this, indicating that the ray of light actually travels from the object to the eye; for that time this created a major shift in the science of optics. In addition, he showed a great deal of understanding of light refraction and light reflection, which are both very important phenomena in physics. In fact, he made a very important discovery of the curvilinear path of a ray of light through the atmosphere. This discovery led him to explain the concept of twilight: the reason why we can see the sun and the moon before they rise and after they set. At that early age of scientific development, during the 11th century, he was able to determine that the retina is the seat of vision and that the impressions made by light upon the retina are conveyed along the optic nerve to the brain. He was also able to explain why we see a single image even though we use two eyes: his explanation was that this is due to the formation of the visual images on the symmetrical portions of both retinas. No wonder we find that many of his works, particularly the ones focused on optics, were translated and used over and over again for several centuries in Europe. Alhazen was definitely the most outstanding physicist in the area of optics, but he was not the only one; other Muslim physicists who introduced additional notable improvements include Nasir al-Din al-Tusi, Qutb al-Din al-Shirazi and Kamal al-Din al-Farisi.
Host: What other contributions did Muslims make to physics?
Jamal Badawi:
For example, one of the most important things that revolutionized trade and resulted in the improvement of navigation is the compass. It is true that the Greeks knew something about the properties of the magnet, and it is also true that the Chinese understood some of its basic directive properties. However, according to historians like George Sarton, these nations were not able to put their knowledge into practical application. It is believed that it was the Muslims who adopted these ideas and put them into use; they were the first to use the magnetic needle for the purpose of navigation.

Another interesting aspect was their investigation into the field of hydrostatics, which started as early as the 9th century. One of the interesting books on the subject, Mizan al-Hikmah (meaning The Book of the Balance of Wisdom), was written by Abd al-Rahman al-Khazini. George Sarton says the "standard work on this subject was written by al-Khazini and it is one of the main physical treatises of the Middle Ages." He also says that al-Khazini worked on tables of specific gravities of certain liquids and solids and discussed a variety of physical facts and theories. Another aspect that may be of interest to physicists and engineers is the area of hydraulics. The idea of a water wheel was known in past history, and there is some archaeological evidence to this effect. According to Sarton, the Muslim physicists introduced a great deal of improvement in the use of water wheels; he said "they made very remarkable use of them." This in fact might explain the great prosperity in agriculture: it contributed a great deal to various methods of agriculture and to improved production in this area. In fact, historians narrate evidence showing that, particularly in Syria, waterwheels were used quite frequently and efficiently. In the city of Hama in the 13th century there were as many as 32 efficient water wheels in use (this is the same city that was attacked by the army of Hafez al-Assad in Syria, in which 20-30 thousand people were slaughtered). It is overwhelming to see the strides that were made in this particular field and all the others which we discussed before. We are not saying that all the contributions were made in science; we will see later that there were contributions to geography, history, law and other fields that were just as remarkable. We started with the contribution to science because that stands clearly in the face of some of the erroneous arguments that might be predominant in the West, claiming that Islam stands against progress and is befitting only for nomadic people who ride camels and live a simple life in the desert. Forgotten are the great contributions to the development of the sciences, which manifested over several hundreds of years and paved the way for what we know today as modern scientific progress.
What is the molecular weight of sulfur if 35.5 grams of sulfur dissolve in 100.0 grams of CS2 to produce a solution that has a boiling point of 49.48°C?
Answer 1
The molecular mass of sulfur is 256 u.
As you might have guessed, this is a boiling point elevation problem. The formula for boiling point elevation is
ΔT_b = K_b · m
Answer 2
The molecular weight of dissolved sulfur can be calculated using the formula for boiling point elevation:

ΔT = K_b · m

Where: ΔT = boiling point elevation (in °C); K_b = ebullioscopic constant (for CS2, K_b = 2.42 °C·kg/mol); m = molality of the solution (moles of solute per kilogram of solvent).

First, find the boiling point elevation. Pure CS2 boils at 46.2 °C, so:

ΔT = 49.48 °C − 46.2 °C = 3.28 °C

Next, find the molality from the elevation:

m = ΔT / K_b = 3.28 °C / 2.42 °C·kg/mol = 1.355 mol/kg

The mass of the solvent is the full 100.0 g of CS2, i.e. 0.1000 kg, so the moles of dissolved sulfur are:

n = m × mass of solvent = 1.355 mol/kg × 0.1000 kg = 0.1355 mol

Finally, the molar mass is the mass of solute divided by the number of moles:

M = 35.5 g / 0.1355 mol ≈ 262 g/mol

Since 262 g/mol ≈ 8 × 32.07 g/mol, sulfur dissolves in CS2 as S8 molecules, consistent with Answer 1.
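As a sketch, the boiling-point-elevation arithmetic can be checked in a few lines of Python (the constants, K_b = 2.42 °C·kg/mol and a normal boiling point of 46.2 °C for CS2, are the values used in this answer):

```python
# Boiling point elevation: dT = Kb * m, where m = moles of solute per kg of solvent.
Kb = 2.42              # ebullioscopic constant of CS2, degC*kg/mol
T_solution = 49.48     # boiling point of the solution, degC
T_pure = 46.20         # boiling point of pure CS2, degC
mass_solute = 35.5     # grams of sulfur dissolved
mass_solvent = 0.1000  # kg of CS2 solvent (100.0 g)

dT = T_solution - T_pure            # 3.28 degC of elevation
molality = dT / Kb                  # mol of solute per kg of solvent
moles = molality * mass_solvent     # mol of dissolved sulfur
molar_mass = mass_solute / moles    # g/mol

print(round(molar_mass))  # -> 262, close to S8 (8 * 32.07 = 257)
```

The computed 262 g/mol matches the S8 molecular formula within experimental error.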
Optimal Mitigation of Slopes by Probabilistic Methods
Authors: D. De-León-Escobedo, D. J. Delgado-Hernández, S. Pérez
A probabilistic formulation to assess the safety of slopes under the hazard of strong storms is presented and illustrated through a slope in Mexico. The formulation is based on the classical safety factor (SF) used in practice to appraise slope stability, but uncertainties are treated explicitly, and the slope failure probability is calculated as the probability that SF < 1. As the main hazard is rainfall on the area, statistics of rainfall intensity and duration are considered and modeled with an exponential distribution. The expected life-cycle cost is assessed by assigning a monetary value to the consequences of slope failure. Alternative mitigation measures are simulated, and the formulation is used to identify the one that is optimal (minimum life-cycle cost). For the example, the optimal mitigation measure is a reduction of the slope inclination angle.
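A minimal Monte Carlo sketch of the failure-probability idea described in the abstract follows. All distributions, parameter values, and the toy safety-factor model below are illustrative assumptions, not the paper's data or its actual formulation:

```python
import random

random.seed(42)

def safety_factor(cohesion, tan_phi, rain):
    # Toy resisting/driving ratio; a stand-in for a real slope-stability SF model.
    resisting = cohesion + 50.0 * tan_phi
    driving = 30.0 + 2.0 * rain
    return resisting / driving

N = 100_000
failures = sum(
    safety_factor(
        random.gauss(25.0, 5.0),       # assumed soil cohesion (kPa)
        random.gauss(0.6, 0.08),       # assumed friction parameter
        random.expovariate(1 / 10.0),  # exponential rainfall intensity, assumed mean 10
    ) < 1.0
    for _ in range(N)
)

pf = failures / N                      # estimated P(SF < 1)
# Expected life-cycle cost = mitigation cost + failure cost * Pf (both assumed).
expected_cost = 50_000 + 1_000_000 * pf
print(f"P(SF<1) ~ {pf:.3f}")
```

Repeating this for each candidate mitigation measure (e.g. a smaller inclination angle, which would change the SF model's parameters) and comparing the expected life-cycle costs is the selection logic the abstract describes.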
Keywords: Expected life-cycle cost, failure probability, slopes failure, storms.
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1316546
10.2: The Hyperbola
• Locate a hyperbola’s vertices and foci.
• Write equations of hyperbolas in standard form.
• Graph hyperbolas centered at the origin.
• Graph hyperbolas not centered at the origin.
• Solve applied problems involving hyperbolas.
What do paths of comets, supersonic booms, ancient Grecian pillars, and natural draft cooling towers have in common? They can all be modeled by the same type of conic. For instance, when something
moves faster than the speed of sound, a shock wave in the form of a cone is created. A portion of a conic is formed when the wave intersects the ground, resulting in a sonic boom (Figure \(\PageIndex{1}\)).
Figure \(\PageIndex{1}\): A shock wave intersecting the ground forms a portion of a conic and results in a sonic boom.
Most people are familiar with the sonic boom created by supersonic aircraft, but humans were breaking the sound barrier long before the first supersonic flight. The crack of a whip occurs because the
tip is exceeding the speed of sound. The bullets shot from many firearms also break the sound barrier, although the bang of the gun usually supersedes the sound of the sonic boom.
Locating the Vertices and Foci of a Hyperbola
In analytic geometry, a hyperbola is a conic section formed by intersecting a right circular cone with a plane at an angle such that both halves of the cone are intersected. This intersection
produces two separate unbounded curves that are mirror images of each other (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\): A hyperbola
Like the ellipse, the hyperbola can also be defined as a set of points in the coordinate plane. A hyperbola is the set of all points \((x,y)\) in a plane such that the difference of the distances
between \((x,y)\) and the foci is a positive constant.
Notice that the definition of a hyperbola is very similar to that of an ellipse. The distinction is that the hyperbola is defined in terms of the difference of two distances, whereas the ellipse is
defined in terms of the sum of two distances.
As with the ellipse, every hyperbola has two axes of symmetry. The transverse axis is a line segment that passes through the center of the hyperbola and has vertices as its endpoints. The foci lie on
the line that contains the transverse axis. The conjugate axis is perpendicular to the transverse axis and has the co-vertices as its endpoints. The center of a hyperbola is the midpoint of both the
transverse and conjugate axes, where they intersect. Every hyperbola also has two asymptotes that pass through its center. As a hyperbola recedes from the center, its branches approach these
asymptotes. The central rectangle of the hyperbola is centered at the origin with sides that pass through each vertex and co-vertex; it is a useful tool for graphing the hyperbola and its asymptotes.
To sketch the asymptotes of the hyperbola, simply sketch and extend the diagonals of the central rectangle (Figure \(\PageIndex{3}\)).
Figure \(\PageIndex{3}\): Key features of the hyperbola
In this section, we will limit our discussion to hyperbolas that are positioned vertically or horizontally in the coordinate plane; the axes will either lie on or be parallel to the \(x\)- and \(y\)
-axes. We will consider two cases: those that are centered at the origin, and those that are centered at a point other than the origin.
Deriving the Equation of a Hyperbola Centered at the Origin
Let \((−c,0)\) and \((c,0)\) be the foci of a hyperbola centered at the origin. The hyperbola is the set of all points \((x,y)\) such that the difference of the distances from \((x,y)\) to the foci
is constant. See Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\)
If \((a,0)\) is a vertex of the hyperbola, the distance from \((−c,0)\) to \((a,0)\) is \(a−(−c)=a+c\). The distance from \((c,0)\) to \((a,0)\) is \(c−a\). The difference of the distances from the foci to the vertex is

\[(a+c)−(c−a)=2a\]
If \((x,y)\) is a point on the hyperbola, we can define the following variables:
\(d_2=\) the distance from \((−c,0)\) to \((x,y)\)
\(d_1=\) the distance from \((c,0)\) to \((x,y)\)
By definition of a hyperbola, \(d_2−d_1\) is constant for any point \((x,y)\) on the hyperbola. We know that the difference of these distances is \(2a\) for the vertex \((a,0)\). It follows that \
(d_2−d_1=2a\) for any point on the hyperbola. As with the derivation of the equation of an ellipse, we will begin by applying the distance formula. The rest of the derivation is algebraic. Compare
this derivation with the one from the previous section for ellipses.
\[\begin{align*} d_2-d_1&=2a\\ \sqrt{{(x-(-c))}^2+{(y-0)}^2}-\sqrt{{(x-c)}^2+{(y-0)}^2}&=2a\qquad \text{Distance Formula}\\ \sqrt{{(x+c)}^2+y^2}-\sqrt{{(x-c)}^2+y^2}&=2a\qquad \text{Simplify
expressions.}\\ \sqrt{{(x+c)}^2+y^2}&=2a+\sqrt{{(x-c)}^2+y^2}\qquad \text{Move radical to opposite side.}\\ {(x+c)}^2+y^2&={(2a+\sqrt{{(x-c)}^2+y^2})}^2\qquad \text{Square both sides.}\\ x^2+2cx+c^
2+y^2&=4a^2+4a\sqrt{{(x-c)}^2+y^2}+{(x-c)}^2+y^2\qquad \text{Expand the squares.}\\ x^2+2cx+c^2+y^2&=4a^2+4a\sqrt{{(x-c)}^2+y^2}+x^2-2cx+c^2+y^2\qquad \text{Expand remaining square.}\\ 2cx&=4a^2+4a\
sqrt{{(x-c)}^2+y^2}-2cx\qquad \text{Combine like terms.}\\ 4cx-4a^2&=4a\sqrt{{(x-c)}^2+y^2}\qquad \text{Isolate the radical.}\\ cx-a^2&=a\sqrt{{(x-c)}^2+y^2}\qquad \text{Divide by 4.}\\ {(cx-a^2)}^2&
=a^2{\left[\sqrt{{(x-c)}^2+y^2}\right]}^2\qquad \text{Square both sides.}\\ c^2x^2-2a^2cx+a^4&=a^2(x^2-2cx+c^2+y^2)\qquad \text{Expand the squares.}\\ c^2x^2-2a^2cx+a^4&=a^2x^2-2a^2cx+a^2c^2+a^2y^2\
qquad \text{Distribute } a^2\\ a^4+c^2x^2&=a^2x^2+a^2c^2+a^2y^2\qquad \text{Combine like terms.}\\ c^2x^2-a^2x^2-a^2y^2&=a^2c^2-a^4\qquad \text{Rearrange terms.}\\ x^2(c^2-a^2)-a^2y^2&=a^2(c^2-a^2)\
qquad \text{Factor common terms.}\\ x^2b^2-a^2y^2&=a^2b^2\qquad \text{Set } b^2=c^2-a^2.\\ \dfrac{x^2b^2}{a^2b^2}-\dfrac{a^2y^2}{a^2b^2}&=\dfrac{a^2b^2}{a^2b^2}\qquad \text{Divide both sides by } a^
2b^2\\ \dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}&=1\\ \end{align*}\]
This equation defines a hyperbola centered at the origin with vertices \((\pm a,0)\) and co-vertices \((0,\pm b)\).
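As a quick numerical sanity check of this derivation, one can pick a point on the curve \(\dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}=1\) and confirm that the difference of its distances to the foci \((\pm c,0)\) equals \(2a\). The values of \(a\), \(b\), and \(x\) below are arbitrary choices:

```python
import math

a, b = 3.0, 2.0
c = math.hypot(a, b)          # c^2 = a^2 + b^2 for a hyperbola

# Pick an x on the right branch and solve x^2/a^2 - y^2/b^2 = 1 for y.
x = 5.0
y = b * math.sqrt(x**2 / a**2 - 1)

d2 = math.dist((x, y), (-c, 0))   # distance to the focus (-c, 0)
d1 = math.dist((x, y), (c, 0))    # distance to the focus (c, 0)

print(abs((d2 - d1) - 2 * a) < 1e-9)  # True
```

The same check passes for any point on either branch (with the absolute value of the difference taken on the left branch).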
The standard form of the equation of a hyperbola with center \((0,0)\) and transverse axis on the \(x\)-axis is

\[\dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}=1\]

where:
• the length of the transverse axis is \(2a\)
• the coordinates of the vertices are \((\pm a,0)\)
• the length of the conjugate axis is \(2b\)
• the coordinates of the co-vertices are \((0,\pm b)\)
• the distance between the foci is \(2c\), where \(c^2=a^2+b^2\)
• the coordinates of the foci are \((\pm c,0)\)
• the equations of the asymptotes are \(y=\pm \dfrac{b}{a}x\)
See Figure \(\PageIndex{5a}\).
The standard form of the equation of a hyperbola with center \((0,0)\) and transverse axis on the \(y\)-axis is
• the length of the transverse axis is \(2a\)
• the coordinates of the vertices are \((0,\pm a)\)
• the length of the conjugate axis is \(2b\)
• the coordinates of the co-vertices are \((\pm b,0)\)
• the distance between the foci is \(2c\), where \(c^2=a^2+b^2\)
• the coordinates of the foci are \((0,\pm c)\)
• the equations of the asymptotes are \(y=\pm \dfrac{a}{b}x\)
See Figure \(\PageIndex{5b}\).
Note that the vertices, co-vertices, and foci are related by the equation \(c^2=a^2+b^2\). When we are given the equation of a hyperbola, we can use this relationship to identify its vertices and foci.
Figure \(\PageIndex{5}\): (a) Horizontal hyperbola with center \((0,0)\) (b) Vertical hyperbola with center \((0,0)\)
1. Determine whether the transverse axis lies on the \(x\)- or \(y\)-axis. Notice that \(a^2\) is always under the variable with the positive coefficient. So, if you set the other variable equal to
zero, you can easily find the intercepts. In the case where the hyperbola is centered at the origin, the intercepts coincide with the vertices.
□ If the equation has the form \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\), then the transverse axis lies on the \(x\)-axis. The vertices are located at \((\pm a,0)\), and the foci are located at \
((\pm c,0)\).
□ If the equation has the form \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\), then the transverse axis lies on the \(y\)-axis. The vertices are located at \((0,\pm a)\), and the foci are located at \
((0,\pm c)\).
2. Solve for \(a\) using the equation \(a=\sqrt{a^2}\).
3. Solve for \(c\) using the equation \(c=\sqrt{a^2+b^2}\).
Identify the vertices and foci of the hyperbola with equation \(\dfrac{y^2}{49}−\dfrac{x^2}{32}=1\).
The equation has the form \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\), so the transverse axis lies on the \(y\)-axis. The hyperbola is centered at the origin, so the vertices serve as the y-intercepts of
the graph. To find the vertices, set \(x=0\), and solve for \(y\).
\[\begin{align*} 1&=\dfrac{y^2}{49}-\dfrac{x^2}{32}\\ 1&=\dfrac{y^2}{49}-\dfrac{0^2}{32}\\ 1&=\dfrac{y^2}{49}\\ y^2&=49\\ y&=\pm \sqrt{49}\\ &=\pm 7 \end{align*}\]
The foci are located at \((0,\pm c)\). Solving for \(c\),
\[\begin{align*} c&=\sqrt{a^2+b^2}\\ &=\sqrt{49+32}\\ &=\sqrt{81}\\ &=9 \end{align*}\]
Therefore, the vertices are located at \((0,\pm 7)\), and the foci are located at \((0,\pm 9)\).
Identify the vertices and foci of the hyperbola with equation \(\dfrac{x^2}{9}−\dfrac{y^2}{25}=1\).
Vertices: \((\pm 3,0)\); Foci: \((\pm \sqrt{34},0)\)
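The steps above can be sketched in code. In this minimal illustration (the function name and input convention are ours, not from the text), we pass the denominators \(a^2\) and \(b^2\) and the orientation of the transverse axis, and get back the vertices and foci:

```python
import math

def hyperbola_features(a2, b2, transverse_axis):
    """Vertices and foci of x^2/a2 - y^2/b2 = 1 ('x') or y^2/a2 - x^2/b2 = 1 ('y')."""
    a = math.sqrt(a2)          # step 2: a = sqrt(a^2)
    c = math.sqrt(a2 + b2)     # step 3: c = sqrt(a^2 + b^2)
    if transverse_axis == 'x':
        vertices = [(-a, 0), (a, 0)]
        foci = [(-c, 0), (c, 0)]
    else:
        vertices = [(0, -a), (0, a)]
        foci = [(0, -c), (0, c)]
    return vertices, foci

# Example: y^2/49 - x^2/32 = 1  ->  vertices (0, ±7), foci (0, ±9)
v, f = hyperbola_features(49, 32, 'y')
```

Running it on the example above reproduces the vertices \((0,\pm 7)\) and foci \((0,\pm 9)\).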
Writing Equations of Hyperbolas in Standard Form
Just as with ellipses, writing the equation for a hyperbola in standard form allows us to calculate the key features: its center, vertices, co-vertices, foci, asymptotes, and the lengths and
positions of the transverse and conjugate axes. Conversely, an equation for a hyperbola can be found given its key features. We begin by finding standard equations for hyperbolas centered at the
origin. Then we will turn our attention to finding standard equations for hyperbolas centered at some point other than the origin.
Hyperbolas Centered at the Origin
Reviewing the standard forms given for hyperbolas centered at \((0,0)\),we see that the vertices, co-vertices, and foci are related by the equation \(c^2=a^2+b^2\). Note that this equation can also
be rewritten as \(b^2=c^2−a^2\). This relationship is used to write the equation for a hyperbola when given the coordinates of its foci and vertices.
1. Determine whether the transverse axis lies on the \(x\)- or \(y\)-axis.
□ If the given coordinates of the vertices and foci have the form \((\pm a,0)\) and \((\pm c,0)\), respectively, then the transverse axis is the \(x\)-axis. Use the standard form \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\).
□ If the given coordinates of the vertices and foci have the form \((0,\pm a)\) and \((0,\pm c)\), respectively, then the transverse axis is the \(y\)-axis. Use the standard form \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\).
2. Find \(b^2\) using the equation \(b^2=c^2−a^2\).
3. Substitute the values for \(a^2\) and \(b^2\) into the standard form of the equation determined in Step 1.
What is the standard form equation of the hyperbola that has vertices \((\pm 6,0)\) and foci \((\pm 2\sqrt{10},0)\)?
The vertices and foci are on the \(x\)-axis. Thus, the equation for the hyperbola will have the form \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\).
The vertices are \((\pm 6,0)\), so \(a=6\) and \(a^2=36\).
The foci are \((\pm 2\sqrt{10},0)\), so \(c=2\sqrt{10}\) and \(c^2=40\).
Solving for \(b^2\), we have
\[\begin{align*} b^2&=c^2-a^2\\ b^2&=40-36\qquad \text{Substitute for } c^2 \text{ and } a^2\\ b^2&=4\qquad \text{Subtract.} \end{align*}\]
Finally, we substitute \(a^2=36\) and \(b^2=4\) into the standard form of the equation, \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\). The equation of the hyperbola is \(\dfrac{x^2}{36}−\dfrac{y^2}{4}=1\),
as shown in Figure \(\PageIndex{6}\).
Figure \(\PageIndex{6}\)
What is the standard form equation of the hyperbola that has vertices \((0,\pm 2)\) and foci \((0,\pm 2\sqrt{5})\)?
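The relationship \(b^2=c^2−a^2\) is simple enough to check numerically. A small sketch (the function name is ours):

```python
import math

def b_squared(a, c):
    """b^2 = c^2 - a^2, used to complete the standard form of a hyperbola."""
    return c**2 - a**2

# Example above: vertices (±6, 0) and foci (±2*sqrt(10), 0), so a = 6 and c = 2*sqrt(10)
b2 = b_squared(6, 2 * math.sqrt(10))   # approximately 4
```

The same helper confirms the try-it case: with \(a=2\) and \(c=2\sqrt{5}\), it returns \(b^2=16\) (up to floating-point rounding).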
Hyperbolas Not Centered at the Origin
Like the graphs for other equations, the graph of a hyperbola can be translated. If a hyperbola is translated \(h\) units horizontally and \(k\) units vertically, the center of the hyperbola will be
\((h,k)\). This translation results in the standard form of the equation we saw previously, with \(x\) replaced by \((x−h)\) and \(y\) replaced by \((y−k)\).
The standard form of the equation of a hyperbola with center \((h,k)\) and transverse axis parallel to the \(x\)-axis is
• the length of the transverse axis is \(2a\)
• the coordinates of the vertices are \((h\pm a,k)\)
• the length of the conjugate axis is \(2b\)
• the coordinates of the co-vertices are \((h,k\pm b)\)
• the distance between the foci is \(2c\), where \(c^2=a^2+b^2\)
• the coordinates of the foci are \((h\pm c,k)\)
The asymptotes of the hyperbola coincide with the diagonals of the central rectangle. The length of the rectangle is \(2a\) and its width is \(2b\). The slopes of the diagonals are \(\pm \dfrac{b}{a}\), and each diagonal passes through the center \((h,k)\). Using the point-slope formula, it is simple to show that the equations of the asymptotes are \(y=\pm \dfrac{b}{a}(x−h)+k\). See Figure \(\PageIndex{7a}\).
The standard form of the equation of a hyperbola with center \((h,k)\) and transverse axis parallel to the \(y\)-axis is
• the length of the transverse axis is \(2a\)
• the coordinates of the vertices are \((h,k\pm a)\)
• the length of the conjugate axis is \(2b\)
• the coordinates of the co-vertices are \((h\pm b,k)\)
• the distance between the foci is \(2c\), where \(c^2=a^2+b^2\)
• the coordinates of the foci are \((h,k\pm c)\)
Using the reasoning above, the equations of the asymptotes are \(y=\pm \dfrac{a}{b}(x−h)+k\). See Figure \(\PageIndex{7b}\).
Figure \(\PageIndex{7}\): (a) Horizontal hyperbola with center \((h,k)\) (b) Vertical hyperbola with center \((h,k)\)
Like hyperbolas centered at the origin, hyperbolas centered at a point \((h,k)\) have vertices, co-vertices, and foci that are related by the equation \(c^2=a^2+b^2\). We can use this relationship
along with the midpoint and distance formulas to find the standard equation of a hyperbola when the vertices and foci are given.
1. Determine whether the transverse axis is parallel to the \(x\)- or \(y\)-axis.
□ If the \(y\)-coordinates of the given vertices and foci are the same, then the transverse axis is parallel to the \(x\)-axis. Use the standard form \(\dfrac{{(x−h)}^2}{a^2}−\dfrac{{(y−k)}^2}{b^2}=1\).
□ If the \(x\)-coordinates of the given vertices and foci are the same, then the transverse axis is parallel to the \(y\)-axis. Use the standard form \(\dfrac{{(y−k)}^2}{a^2}−\dfrac{{(x−h)}^2}{b^2}=1\).
2. Identify the center of the hyperbola, \((h,k)\),using the midpoint formula and the given coordinates for the vertices.
3. Find \(a^2\) by solving for the length of the transverse axis, \(2a\), which is the distance between the given vertices.
4. Find \(c^2\) using \(h\) and \(k\) found in Step 2 along with the given coordinates for the foci.
5. Solve for \(b^2\) using the equation \(b^2=c^2−a^2\).
6. Substitute the values for \(h\), \(k\), \(a^2\), and \(b^2\) into the standard form of the equation determined in Step 1.
What is the standard form equation of the hyperbola that has vertices at \((0,−2)\) and \((6,−2)\) and foci at \((−2,−2)\) and \((8,−2)\)?
The \(y\)-coordinates of the vertices and foci are the same, so the transverse axis is parallel to the \(x\)-axis. Thus, the equation of the hyperbola will have the form \(\dfrac{{(x−h)}^2}{a^2}−\dfrac{{(y−k)}^2}{b^2}=1\).
First, we identify the center, \((h,k)\). The center is halfway between the vertices \((0,−2)\) and \((6,−2)\). Applying the midpoint formula, we have
\((h,k)=\left(\dfrac{0+6}{2},\dfrac{−2+(−2)}{2}\right)=(3,−2)\)
Next, we find \(a^2\). The length of the transverse axis, \(2a\),is bounded by the vertices. So, we can find \(a^2\) by finding the distance between the \(x\)-coordinates of the vertices.
\[\begin{align*} 2a&=| 0-6 |\\ 2a&=6\\ a&=3\\ a^2&=9 \end{align*}\]
Now we need to find \(c^2\). The coordinates of the foci are \((h\pm c,k)\). So \((h−c,k)=(−2,−2)\) and \((h+c,k)=(8,−2)\). We can use the \(x\)-coordinate from either of these points to solve for \
(c\). Using the point \((8,−2)\), and substituting \(h=3\),
\[\begin{align*} h+c&=8\\ 3+c&=8\\ c&=5\\ c^2&=25 \end{align*}\]
Next, solve for \(b^2\) using the equation \(b^2=c^2−a^2\):
\[\begin{align*} b^2&=c^2-a^2\\ &=25-9\\ &=16 \end{align*}\]
Finally, substitute the values found for \(h\), \(k\), \(a^2\), and \(b^2\) into the standard form of the equation:
\(\dfrac{{(x−3)}^2}{9}−\dfrac{{(y+2)}^2}{16}=1\)
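The midpoint-and-distance procedure for hyperbolas not centered at the origin can be sketched as follows (the helper name and input order are ours); `math.dist` computes the distance between two points:

```python
import math

def standard_form_params(v1, v2, f1, f2):
    """Center (h, k), a^2, and b^2 from two vertices and two foci (given as (x, y) pairs)."""
    h = (v1[0] + v2[0]) / 2            # midpoint of the vertices
    k = (v1[1] + v2[1]) / 2
    a = math.dist(v1, v2) / 2          # 2a is the distance between the vertices
    c = math.dist(f1, f2) / 2          # 2c is the distance between the foci
    return (h, k), a**2, c**2 - a**2   # b^2 = c^2 - a^2

# Vertices (0,-2), (6,-2); foci (-2,-2), (8,-2)
center, a2, b2 = standard_form_params((0, -2), (6, -2), (-2, -2), (8, -2))
# -> center (3, -2), a^2 = 9, b^2 = 16
```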
What is the standard form equation of the hyperbola that has vertices \((1,−2)\) and \((1,8)\) and foci \((1,−10)\) and \((1,16)\)?
Graphing Hyperbolas Centered at the Origin
When we have an equation in standard form for a hyperbola centered at the origin, we can interpret its parts to identify the key features of its graph: the center, vertices, co-vertices, asymptotes,
foci, and lengths and positions of the transverse and conjugate axes. To graph hyperbolas centered at the origin, we use the standard form \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\) for horizontal
hyperbolas and the standard form \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\) for vertical hyperbolas.
1. Determine which of the standard forms applies to the given equation.
2. Use the standard form identified in Step 1 to determine the position of the transverse axis; coordinates for the vertices, co-vertices, and foci; and the equations for the asymptotes.
□ If the equation is in the form \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\), then
☆ the transverse axis is on the \(x\)-axis
☆ the coordinates of the vertices are \((\pm a,0)\)
☆ the coordinates of the co-vertices are \((0,\pm b)\)
☆ the coordinates of the foci are \((\pm c,0)\)
☆ the equations of the asymptotes are \(y=\pm \dfrac{b}{a}x\)
□ If the equation is in the form \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\), then
☆ the transverse axis is on the \(y\)-axis
☆ the coordinates of the vertices are \((0,\pm a)\)
☆ the coordinates of the co-vertices are \((\pm b,0)\)
☆ the coordinates of the foci are \((0,\pm c)\)
☆ the equations of the asymptotes are \(y=\pm \dfrac{a}{b}x\)
3. Solve for the coordinates of the foci using the equation \(c=\pm \sqrt{a^2+b^2}\).
4. Plot the vertices, co-vertices, foci, and asymptotes in the coordinate plane, and draw a smooth curve to form the hyperbola.
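To draw the smooth curve in step 4, one can sample points on each branch. A small sketch for a vertical hyperbola \(y^2/a^2−x^2/b^2=1\) (the helper name and sampling scheme are ours): solving the equation for \(y\) gives the upper branch, and the lower branch is its mirror image.

```python
import math

def branch_points(a2, b2, x_max, n=9):
    """Sample n points on the upper branch of y^2/a2 - x^2/b2 = 1.

    Solving for y gives y = a*sqrt(1 + x^2/b^2) for x in [-x_max, x_max].
    """
    a = math.sqrt(a2)
    pts = []
    for i in range(n):
        x = -x_max + 2 * x_max * i / (n - 1)
        y = a * math.sqrt(1 + x * x / b2)
        pts.append((x, y))
    return pts

# For y^2/64 - x^2/36 = 1 the upper branch passes through the vertex (0, 8)
pts = branch_points(64, 36, x_max=10)
```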
Graph the hyperbola given by the equation \(\dfrac{y^2}{64}−\dfrac{x^2}{36}=1\). Identify and label the vertices, co-vertices, foci, and asymptotes.
The standard form that applies to the given equation is \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\). Thus, the transverse axis is on the \(y\)-axis.
The coordinates of the vertices are \((0,\pm a)=(0,\pm \sqrt{64})=(0,\pm 8)\)
The coordinates of the co-vertices are \((\pm b,0)=(\pm \sqrt{36}, 0)=(\pm 6,0)\)
The coordinates of the foci are \((0,\pm c)\), where \(c=\pm \sqrt{a^2+b^2}\). Solving for \(c\), we have
\(c=\pm \sqrt{a^2+b^2}=\pm \sqrt{64+36}=\pm \sqrt{100}=\pm 10\)
Therefore, the coordinates of the foci are \((0,\pm 10)\)
The equations of the asymptotes are \(y=\pm \dfrac{a}{b}x=\pm \dfrac{8}{6}x=\pm \dfrac{4}{3}x\)
Plot and label the vertices and co-vertices, and then sketch the central rectangle. Sides of the rectangle are parallel to the axes and pass through the vertices and co-vertices. Sketch and extend
the diagonals of the central rectangle to show the asymptotes. The central rectangle and asymptotes provide the framework needed to sketch an accurate graph of the hyperbola. Label the foci and
asymptotes, and draw a smooth curve to form the hyperbola, as shown in Figure \(\PageIndex{8}\).
Figure \(\PageIndex{8}\)
Graph the hyperbola given by the equation \(\dfrac{x^2}{144}−\dfrac{y^2}{81}=1\). Identify and label the vertices, co-vertices, foci, and asymptotes.
vertices: \((\pm 12,0)\); co-vertices: \((0,\pm 9)\); foci: \((\pm 15,0)\); asymptotes: \(y=\pm \dfrac{3}{4}x\);
Figure \(\PageIndex{9}\)
Graphing Hyperbolas Not Centered at the Origin
Graphing hyperbolas centered at a point \((h,k)\) other than the origin is similar to graphing ellipses centered at a point other than the origin. We use the standard forms \(\dfrac{{(x−h)}^2}{a^2}−\
dfrac{{(y−k)}^2}{b^2}=1\) for horizontal hyperbolas, and \(\dfrac{{(y−k)}^2}{a^2}−\dfrac{{(x−h)}^2}{b^2}=1\) for vertical hyperbolas. From these standard form equations we can easily calculate and
plot key features of the graph: the coordinates of its center, vertices, co-vertices, and foci; the equations of its asymptotes; and the positions of the transverse and conjugate axes.
1. Convert the general form to standard form if necessary, and determine which of the standard forms applies to the given equation.
2. Use the standard form identified in Step 1 to determine the position of the transverse axis; coordinates for the center, vertices, co-vertices, foci; and equations for the asymptotes.
□ If the equation is in the form \(\dfrac{{(x−h)}^2}{a^2}−\dfrac{{(y−k)}^2}{b^2}=1\), then
☆ the transverse axis is parallel to the \(x\)-axis
☆ the center is \((h,k)\)
☆ the coordinates of the vertices are \((h\pm a,k)\)
☆ the coordinates of the co-vertices are \((h,k\pm b)\)
☆ the coordinates of the foci are \((h\pm c,k)\)
☆ the equations of the asymptotes are \(y=\pm \dfrac{b}{a}(x−h)+k\)
□ If the equation is in the form \(\dfrac{{(y−k)}^2}{a^2}−\dfrac{{(x−h)}^2}{b^2}=1\), then
☆ the transverse axis is parallel to the \(y\)-axis
☆ the center is \((h,k)\)
☆ the coordinates of the vertices are \((h,k\pm a)\)
☆ the coordinates of the co-vertices are \((h\pm b,k)\)
☆ the coordinates of the foci are \((h,k\pm c)\)
☆ the equations of the asymptotes are \(y=\pm \dfrac{a}{b}(x−h)+k\)
3. Solve for the coordinates of the foci using the equation \(c=\pm \sqrt{a^2+b^2}\).
4. Plot the center, vertices, co-vertices, foci, and asymptotes in the coordinate plane and draw a smooth curve to form the hyperbola.
Graph the hyperbola given by the equation \(9x^2−4y^2−36x−40y−388=0\). Identify and label the center, vertices, co-vertices, foci, and asymptotes.
Start by expressing the equation in standard form. Group terms that contain the same variable, and move the constant to the opposite side of the equation.
\((9x^2−36x)−(4y^2+40y)=388\)
Factor the leading coefficient of each expression.
\(9(x^2−4x)−4(y^2+10y)=388\)
Complete the square twice. Remember to balance the equation by adding the same constants to each side.
\(9(x^2−4x+4)−4(y^2+10y+25)=388+36−100\)
Rewrite as perfect squares.
\(9{(x−2)}^2−4{(y+5)}^2=324\)
Divide both sides by the constant term to place the equation in standard form.
\(\dfrac{{(x−2)}^2}{36}−\dfrac{{(y+5)}^2}{81}=1\)
The standard form that applies to the given equation is \(\dfrac{{(x−h)}^2}{a^2}−\dfrac{{(y−k)}^2}{b^2}=1\), where \(a^2=36\) and \(b^2=81\), or \(a=6\) and \(b=9\). Thus, the transverse axis is parallel to the \(x\)-axis. It follows that:
the center of the hyperbola is \((h,k)=(2,−5)\)
the coordinates of the vertices are \((h\pm a,k)=(2\pm 6,−5)\), or \((−4,−5)\) and \((8,−5)\)
the coordinates of the co-vertices are \((h,k\pm b)=(2,−5\pm 9)\), or \((2,−14)\) and \((2,4)\)
the coordinates of the foci are \((h\pm c,k)\), where \(c=\pm \sqrt{a^2+b^2}\). Solving for \(c\),we have
\(c=\pm \sqrt{36+81}=\pm \sqrt{117}=\pm 3\sqrt{13}\)
Therefore, the coordinates of the foci are \((2−3\sqrt{13},−5)\) and \((2+3\sqrt{13},−5)\).
The equations of the asymptotes are \(y=\pm \dfrac{b}{a}(x−h)+k=\pm \dfrac{3}{2}(x−2)−5\).
Next, we plot and label the center, vertices, co-vertices, foci, and asymptotes and draw smooth curves to form the hyperbola, as shown in Figure \(\PageIndex{10}\).
Figure \(\PageIndex{10}\)
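The completing-the-square computation above can also be sketched programmatically. This helper (ours, assuming a horizontal transverse axis and no \(xy\) term) collects the constants and returns the center and denominators:

```python
def general_to_standard(A, C, D, E, F):
    """Rewrite A*x^2 + C*y^2 + D*x + E*y + F = 0 (with A*C < 0, no xy term)
    as (x-h)^2/a2 - (y-k)^2/b2 = 1 by completing the square twice."""
    h = -D / (2 * A)
    k = -E / (2 * C)
    rhs = A * h**2 + C * k**2 - F   # constant collected on the right side
    return (h, k), rhs / A, rhs / (-C)

# 9x^2 - 4y^2 - 36x - 40y - 388 = 0  ->  center (2, -5), a^2 = 36, b^2 = 81
center, a2, b2 = general_to_standard(9, -4, -36, -40, -388)
```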
Graph the hyperbola given by the standard form of an equation \(\dfrac{{(y+4)}^2}{100}−\dfrac{{(x−3)}^2}{64}=1\). Identify and label the center, vertices, co-vertices, foci, and asymptotes.
center: \((3,−4)\); vertices: \((3,−14)\) and \((3,6)\); co-vertices: \((−5,−4)\) and \((11,−4)\); foci: \((3,−4−2\sqrt{41})\) and \((3,−4+2\sqrt{41})\); asymptotes: \(y=\pm \dfrac{5}{4}(x−3)−4\)
Figure \(\PageIndex{11}\)
Solving Applied Problems Involving Hyperbolas
As we discussed at the beginning of this section, hyperbolas have real-world applications in many fields, such as astronomy, physics, engineering, and architecture. The design efficiency of
hyperbolic cooling towers is particularly interesting. Cooling towers are used to transfer waste heat to the atmosphere and are often touted for their ability to generate power efficiently. Because
of their hyperbolic form, these structures are able to withstand extreme winds while requiring less material than any other forms of their size and strength (Figure \(\PageIndex{12}\)). For example,
a \(500\)-foot tower can be made of a reinforced concrete shell only \(6\) or \(8\) inches wide!
Figure \(\PageIndex{12}\): Cooling towers at the Drax power station in North Yorkshire, United Kingdom (credit: Les Haines, Flickr)
The first hyperbolic towers were designed in 1914 and were \(35\) meters high. Today, the tallest cooling towers are in France, standing a remarkable \(170\) meters tall. In Example \(\PageIndex{6}\)
we will use the design layout of a cooling tower to find a hyperbolic equation that models its sides.
The design layout of a cooling tower is shown in Figure \(\PageIndex{13}\). The tower stands \(179.6\) meters tall. The diameter of the top is \(72\) meters. At their closest, the sides of the tower
are \(60\) meters apart.
Figure \(\PageIndex{13}\): Project design for a natural draft cooling tower
Find the equation of the hyperbola that models the sides of the cooling tower. Assume that the center of the hyperbola—indicated by the intersection of dashed perpendicular lines in the figure—is the
origin of the coordinate plane. Round final values to four decimal places.
We are assuming the center of the tower is at the origin, so we can use the standard form of a horizontal hyperbola centered at the origin: \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\), where the branches
of the hyperbola form the sides of the cooling tower. We must find the values of \(a^2\) and \(b^2\) to complete the model.
First, we find \(a^2\). Recall that the length of the transverse axis of a hyperbola is \(2a\). This length is represented by the distance where the sides are closest, which is given as \(60\) meters. So, \(2a=60\). Therefore, \(a=30\) and \(a^2=900\).
To solve for \(b^2\),we need to substitute for \(x\) and \(y\) in our equation using a known point. To do this, we can use the dimensions of the tower to find some point \((x,y)\) that lies on the
hyperbola. We will use the top right corner of the tower to represent that point. Since the \(y\)-axis bisects the tower, our \(x\)-value can be represented by the radius of the top, or \(36\)
meters. The y-value is represented by the distance from the origin to the top, which is given as \(79.6\) meters. Therefore,
\[\begin{align*}
\dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}&=1\qquad \text{Standard form of horizontal hyperbola.}\\
b^2&=\dfrac{y^2}{\dfrac{x^2}{a^2}-1}\qquad \text{Isolate } b^2.\\
&=\dfrac{{(79.6)}^2}{\dfrac{{(36)}^2}{900}-1}\qquad \text{Substitute for } a^2,\: x, \text{ and } y.\\
&\approx 14400.3636\qquad \text{Round to four decimal places.}
\end{align*}\]
The sides of the tower can be modeled by the hyperbolic equation
\(\dfrac{x^2}{900}−\dfrac{y^2}{14400.3636}=1\),or \(\dfrac{x^2}{{30}^2}−\dfrac{y^2}{{120.0015}^2}=1\)
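The substitution used to isolate \(b^2\) can be checked with a short computation (the function name is ours):

```python
# Solve for b^2 in x^2/a^2 - y^2/b^2 = 1 using a known point (x, y) on the tower's side.
def tower_b_squared(a2, x, y):
    return y**2 / (x**2 / a2 - 1)

# Top right corner of the tower: radius 36 m, height above the center 79.6 m
b2 = tower_b_squared(900, 36, 79.6)   # approximately 14400.3636
```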
A design for a cooling tower project is shown in Figure \(\PageIndex{14}\). Find the equation of the hyperbola that models the sides of the cooling tower. Assume that the center of the
hyperbola—indicated by the intersection of dashed perpendicular lines in the figure—is the origin of the coordinate plane. Round final values to four decimal places.
Figure \(\PageIndex{14}\)
The sides of the tower can be modeled by the hyperbolic equation. \(\dfrac{x^2}{400}−\dfrac{y^2}{3600}=1\) or \(\dfrac{x^2}{{20}^2}−\dfrac{y^2}{{60}^2}=1\).
Access these online resources for additional instruction and practice with hyperbolas.
• Conic Sections: The Hyperbola Part 1 of 2
• Conic Sections: The Hyperbola Part 2 of 2
• Graph a Hyperbola with Center at Origin
• Graph a Hyperbola with Center not at Origin
Key Equations
Hyperbola, center at origin, transverse axis on x-axis \(\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\)
Hyperbola, center at origin, transverse axis on y-axis \(\dfrac{y^2}{a^2}−\dfrac{x^2}{b^2}=1\)
Hyperbola, center at \((h,k)\),transverse axis parallel to x-axis \(\dfrac{{(x−h)}^2}{a^2}−\dfrac{{(y−k)}^2}{b^2}=1\)
Hyperbola, center at \((h,k)\),transverse axis parallel to y-axis \(\dfrac{{(y−k)}^2}{a^2}−\dfrac{{(x−h)}^2}{b^2}=1\)
Key Concepts
• A hyperbola is the set of all points \((x,y)\) in a plane such that the difference of the distances between \((x,y)\) and the foci is a positive constant.
• The standard form of a hyperbola can be used to locate its vertices and foci. See Example \(\PageIndex{1}\).
• When given the coordinates of the foci and vertices of a hyperbola, we can write the equation of the hyperbola in standard form. See Example \(\PageIndex{2}\) and Example \(\PageIndex{3}\).
• When given an equation for a hyperbola, we can identify its vertices, co-vertices, foci, asymptotes, and lengths and positions of the transverse and conjugate axes in order to graph the
hyperbola. See Example \(\PageIndex{4}\) and Example \(\PageIndex{5}\).
• Real-world situations can be modeled using the standard equations of hyperbolas. For instance, given the dimensions of a natural draft cooling tower, we can find a hyperbolic equation that models
its sides. See Example \(\PageIndex{6}\).
Introduction to Martingales
This is a rather important topic for anyone interested in doing Finance.
Let's look at the definition first.
A martingale is a random process \(\{X_n\}\) satisfying two conditions: (1) \(E|X_n| < \infty\) for all \(n\), and (2) \(E[X_{n+1} \mid X_1, \ldots, X_n] = X_n\).
Martingales are used widely and one example is to model fair games, thus it has a rich history in modelling of gambling problems. If you google Martingale, you will get an image related to a Horse,
because it started with Horse-betting.
We define a submartingale by replacing the above condition 2 with \(E[X_{n+1} \mid X_1, \ldots, X_n] \geq X_n\),
and a supermartingale with \(E[X_{n+1} \mid X_1, \ldots, X_n] \leq X_n\).
Take note that a martingale is both a submartingale and a supermartingale. In layman's terms, a submartingale means the player expects more as time progresses, and vice versa for a supermartingale.
Let us try to construct a Martingale from a Random Walk now.
So what will a martingale betting strategy look like?
Here, we let
– player win
Consider further now a doubling strategy where we keep doubling the bet until we eventually win. Once we win, we stop and our initial bet is
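The doubling strategy can be illustrated numerically. In this sketch (a unit initial bet is assumed, and the wager doubles after every loss), the net profit once the win finally arrives always equals the initial bet, however long the losing streak:

```python
def doubling_profit(losing_streak):
    """Net profit of the doubling strategy after `losing_streak` losses and then one win."""
    bet, total_lost = 1, 0
    for _ in range(losing_streak):
        total_lost += bet
        bet *= 2             # double the wager after every loss
    return bet - total_lost  # the winning bet finally pays back the doubled stake

# Losses of 1 + 2 + ... + 2^(k-1) = 2^k - 1 are recovered by the winning bet of 2^k
print([doubling_profit(k) for k in range(5)])   # -> [1, 1, 1, 1, 1]
```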
the absolute value
Lesson goal: For loops and plotting the absolute value
Once you learn about absolute values, you may have to make plots of them. They often result in strange "V-shaped" plots. You can learn how to plot absolute value functions here.
Now you try. Try fixing the pset statement to plot the variable $x$ on the x-axis and $|x|$ on the y-axis.
This code will not run as given. First, set the \(y=\) line to the absolute value of \(x\); then fix the pset statement so that it plots \(x\) on the x-axis and \(|x|\) on the y-axis.
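The lesson's own `pset`-based plotting language isn't reproduced here, but the same for-loop idea can be sketched in Python, collecting the \((x, |x|)\) points instead of plotting them:

```python
# Build the points of the V-shaped graph y = |x| with a for loop.
points = []
for x in range(-10, 11):
    y = abs(x)              # the absolute value of x
    points.append((x, y))

# The V shape: y falls to 0 at x = 0 and rises symmetrically on both sides.
print(points[0], points[10], points[20])   # -> (-10, 10) (0, 0) (10, 10)
```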
1. Definition
A function \( f(x) \) is a set \( S \) of ordered pairs that map the first value of the ordered pair to the second value in the ordered pair, where the first value may not have duplicates in the set
\(S\). The map from the first value to the second value has the notation \(f(x) = y \) for some x and y, where \(f\) is the mapping. Note that we can also define rules for \(f\) and do not therefore
have to explicitly define all the mappings:
\begin{align*} S = \{(x, y): x^{2} = y, x, y \in \mathbb{R} \} \end{align*}
Which is an example of a parabolic function. \(x\) and \(y\) can both conceptually be any object, but usually they are mathematical objects. Some examples of such objects include tensors and scalars.
2. ordered pair
However, we must find a way to define what an ordered pair is. Sets have no order by default, so we need to add order by doing the following:
\begin{align*} (x_{0}, y_{0}) := \{x_{0}, \{x_{0}, y_{0}\}\} \end{align*}
Where the element that is not explicitly a set gives us the definition of the first element.
3. Function Group
Let \((S, \circ)\) define a group where \(S\) is the set of all invertible (bijective) functions and \(\circ\) is the composition binary operator; restricting to invertible functions is what guarantees that every element has an inverse. Then \(f(x) = x\) is the identity element, and the inverse of a function is defined by \( (f \circ f^{-1})(x) = (f^{-1} \circ f)(x) = x \).
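The group structure can be illustrated with a short sketch (the example functions are ours): composition as the operation, \(f(x)=x\) as the identity element, and an explicit inverse.

```python
def compose(f, g):
    """(f ∘ g)(x) = f(g(x)), the group operation on invertible functions."""
    return lambda x: f(g(x))

identity = lambda x: x          # the identity element f(x) = x
f = lambda x: 2 * x + 1         # an invertible function
f_inv = lambda x: (x - 1) / 2   # its inverse under composition

# (f ∘ f⁻¹)(x) = (f⁻¹ ∘ f)(x) = x for any x
for x in [-3, 0, 7.5]:
    assert compose(f, f_inv)(x) == x == compose(f_inv, f)(x)
```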
Unlock the World of Number Properties
As promised in the last post, we will investigate the number of properties tested frequently on the GMAT. We first begin with prime numbers.
A number is prime if it has only two factors: ‘1’ and the number itself, e.g. ‘2’. A number with more than two factors is called composite, e.g., ‘4’. The concept of prime numbers applies only to
positive integers. ‘1’ is itself neither prime nor composite. The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 and 31… Once you know these basic points about prime numbers, the next
important concept is prime factorization.
Every positive integer can be written uniquely as a product of prime numbers, called its prime factorization.
Let's say we want to find the prime factors of 120. We start testing small integers, beginning with 2, to see if and how often they divide 120 and the resulting quotients evenly.
120 ÷ 2 = 60, 60 ÷ 2 = 30, and 30 ÷ 2 = 15, all evenly
15 ÷ 2 = 7.5, not evenly, so try the next highest number, 3
15 ÷ 3 = 5, and 5 is itself prime, so the factorization stops here
Using a “factor tree,” your work would look like this
Prime factorization helps us deduce a lot of things about the number. Let’s look at one of the most important deductions.
Once we have the prime factorization of a number, we can quickly find the total number of factors (this would otherwise be quite tedious since we want ‘all’ the factors).
Let’s say you are asked to find the total number of factors of 120.
First of all, the prime factorization of 120 = 2^3 x 3 x 5
– Find the exponents of each prime number in the prime factorization ( here 3, 1, and 1)
– Add one to each value (so 4, 2, 2)
So the number of all factors of 120 = 4 × 2 × 2 = 16
Another variant is to find the number of odd and even factors. We leave out the exponent of 2 for odd factors since even a single ‘2’ on multiplication would give an even number. So, the number of
odd factors = 2×2= 4. To find the number of even factors, we subtract the number of odd factors from the total number of factors (here, 16 – 4 = 12).
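The factorization and factor-counting procedure above can be sketched in code (the function names are ours):

```python
def prime_factorization(n):
    """Return the prime factorization of n as a {prime: exponent} dict."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:                # divide out each prime as often as possible
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                            # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def count_factors(n):
    """Total, odd, and even factor counts via the (exponent + 1) product rule."""
    factors = prime_factorization(n)
    total = 1
    for exp in factors.values():
        total *= exp + 1
    odd = total // (factors.get(2, 0) + 1)   # drop the exponent of 2 for odd factors
    return total, odd, total - odd

# 120 = 2^3 * 3 * 5  ->  16 factors in total: 4 odd and 12 even
print(count_factors(120))   # -> (16, 4, 12)
```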
At Option Training Institute, we constantly endeavor to help people do well in Quant even if they found Math scary in grade school.
High Voltage Practical Robotic Two Port S Parameter Passive Circuit Using Spice
RF-controlled robot design requires S parameters to model and simulate various electronic passive resistors, inductors, and capacitor circuits represented as two-port networks. Examples demonstrating
the usefulness of the technique and models in simulation are provided. The procedure and technique for simulation of two ports using Z, H parameters are extended by similar steps using analogue
behavioural modelling, as is done for S parameters in a two-port with original SPICE2G for robotic circuits, using computer-aided simulation tools like SPICE/ Pspice at high frequencies.
In the present age of modern electronics, almost all engineers and scientists are familiar with robots as they are extensively used in human life. The RF-controlled Robot utilises a transmitting
device containing an RF transmitter and an RF Encoder, sending commands to the robots for specified tasks such as moving back and forth, reversing, turning right/left, and stopping. To design RF
transmitters in RF and microwave systems, S-parameters [1-6] are common, and these parameters can be measured with the help of network analysers.
SPICE [7-16] is employed to simulate various electronic circuits comprising different types of devices and linear circuits. The elements used in typical CAD device environments using SPICE include
resistors, capacitors, inductors (including mutual), voltage/current DC and AC sources, dependent linear and non-linear current/voltage sources, transmission lines, and electronic devices such as
diodes, BJT, JFET, MOSFET. Two-port circuit models for microwave passive networks, both in classroom theory and industrial practice, are important for students and researchers and are useful in
synthesising T-PI models [17-24]. There are two methods to simulate [18-24] active or passive circuits: the first is by using active or passive models for each component or group of components, and
the second is by representing input and output voltages and currents by suitable port parameter equations when that portion of the circuit does not contain independent voltage and current sources.
Models represented by the non-linear analogue behavioural options of PSpice are added to these equations, and the system is solved for voltages and currents, either manually or using sophisticated simulation software. To model a two-port of an RF network for robotic use with existing software programs such as SPICE2G and PSpice, where there are no built-in models to represent it, we must use existing model descriptions such as behavioural models and adapt them suitably. Attempts have been made to achieve this using only the SPICE2G program. The present method can be used in conjunction with original passive circuits to simulate networks. The analogue behavioural modelling option of PSpice includes additional descriptions of DC and AC voltages and currents using expressions, tables, Laplace transforms, and complex voltage and current multiplications with complex numbers.
Passive circuit modelling method
The power waves incident on and reflected from the two ports of a linear network are defined as a1, a2 and b1, b2 respectively, with ai = (Vi + Z0·Ii)/(2√Z0) and bi = (Vi − Z0·Ii)/(2√Z0). These are related by the scattering matrix that characterises the circuit:
b1 = S11·a1 + S12·a2, b2 = S21·a1 + S22·a2 …(1)
Here, both ports are assumed to have the same characteristic impedance Z0. Substituting for a1, a2, b1, b2 in equation (1),
V1 = S11(V1 + I1·Z0) + S12(V2 + I2·Z0) + I1·Z0 …(2)
putting S11 = Real{S11} + j Imag{S11} and S12 = Real{S12} + j Imag{S12} …(3)
V2 = S22(V2 + I2·Z0) + S21(V1 + I1·Z0) + I2·Z0 …(4)
putting S22 = Real{S22} + j Imag{S22}
and S21 = Real{S21} + j Imag{S21} …(5)
Now, equations (2) and (4), describing the two port voltages and currents (Fig. 1a), can be represented by Fig. 1b. Now,
V1, i.e., equation (2) with the decomposition (3), can be written as V1 = I1·Z0 + E1 + E2
V2, i.e., equation (4) with the decomposition (5), can be written as V2 = I2·Z0 + E7 + E8
where E7 = Real{S22}(V2 + I2·Z0) + j Imag{S22}(V2 + I2·Z0) = E9 + j·E10
E8 = Real{S21}(V1 + I1·Z0) + j Imag{S21}(V1 + I1·Z0) = E11 + j·E12
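The algebra above can be checked numerically. The sketch below (Python/NumPy, with an arbitrary illustrative S-matrix and incident waves, not values from this paper) builds the port voltages and currents from the standard power-wave definitions and verifies that equation (4) holds identically:

```python
import numpy as np

Z0 = 50.0
# Illustrative (assumed) S-matrix and incident power waves
S = np.array([[0.3 - 0.2j, 0.05 + 0.1j],
              [0.05 + 0.1j, 0.4 + 0.1j]])
a = np.array([1.0 + 0.0j, 0.2 - 0.1j])

b = S @ a  # equation (1): reflected power waves

# Power-wave definitions: a_i = (V_i + Z0*I_i)/(2*sqrt(Z0)),
#                         b_i = (V_i - Z0*I_i)/(2*sqrt(Z0))
V = np.sqrt(Z0) * (a + b)
Iport = (a - b) / np.sqrt(Z0)

# Equation (4): V2 = S22*(V2 + I2*Z0) + S21*(V1 + I1*Z0) + I2*Z0
rhs = (S[1, 1] * (V[1] + Iport[1] * Z0)
       + S[1, 0] * (V[0] + Iport[0] * Z0)
       + Iport[1] * Z0)
assert abs(V[1] - rhs) < 1e-9
```

The check is algebraic, so it passes for any S-matrix and any incident waves, which is exactly why the E-source decomposition of equations (2)-(5) can be wired into a sub-circuit.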
If the S parameters are defined in rectangular (Cartesian) coordinates, i.e., Sij = Real{Sij} + j Imag{Sij}, i, j ∈ {1, 2}, the sub-circuit in Table I should be used for simulating the two-port network between terminals 1 and 2.
If the S parameters are defined in polar coordinates, the sub-circuit in Table II should be used for defining the two-port network. If the SPICE2G version is used, a lossless transmission line of length NL, chosen according to the polar angle of each S parameter, realises the e^(j·phase angle) term. If the phase angle is negative, NL (the SPICE transmission-line parameter) is given by
And if the phase angle is positive, then NL is given by
The overall S parameters of the entire circuit (Figure 2), which is generally used in radio transmitters to control robots, are computed by the PSpice files below.
Figure 2. An Electronic Component in RF Robotic Transmitter Control
A two-port model for the measured S parameters of the radio-frequency transmitter component (Figure 2), with S11, S21, S12, and S22 as inputs, is obtained using both SPICE2G and PSpice circuit analysis and simulation software. The analogue behavioural modelling technique is used with PSpice. Tables I and II show the SPICE2G sub-circuits used to obtain a two-port equivalent from the scattering matrix of the radio-frequency transmitter component for robot control.
Table I. SPICE2G file model for Simulating Two Port with Cartesian S Parameters
.SUBCKT MODEL1 1 2
E3 10 0 POLY(2) 1 0 3 4 0 ReS11 ReS11
E4 20 0 POLY(2) 1 0 3 4 0 ImS11 ImS11
V13 1 3 AC 0
R34 3 4 50
*where Real{S11}=ReS11 and Imag{S11}=ImS11
*The following description gets jE4
E210 21 0 20 0 1
V22 22 0 AC 0
*CURRENT THROUGH CAPACITANCE C2122 = jωC2122·E210, WHERE ω IS THE ANGULAR FREQUENCY
*AT THE GIVEN S PARAMETER FREQUENCY, CHOOSE ωC2122 = 1, I.E. C2122 = 1/ω
C2122 21 22 1/ω
*CURRENT GENERATOR F230
F230 0 23 V22 1
R230 23 0 1
E1 4 5 POLY(2) 10 0 23 0 0 1 1
E5 11 0 POLY(2) 2 0 6 7 0 ReS12 ReS12
E6 12 0 POLY(2) 2 0 6 7 0 ImS12 ImS12
*where Real{S12}=ReS12 and Imag{S12}=ImS12
*TO GET jE6
E240 24 0 12 0 1
V25 25 0 AC 0
*CURRENT THROUGH CAPACITANCE C2425 = jωC2425·E240, WHERE ω IS THE ANGULAR FREQUENCY
*AT THE GIVEN S PARAMETER FREQUENCY, CHOOSE ωC2425 = 1, I.E. C2425 = 1/ω
C2425 24 25 1/ω
*CURRENT GENERATOR F260
F260 0 26 V25 1
R260 26 0 1
E2 5 0 POLY(2) 11 0 26 0 0 1 1
E9 30 0 POLY(2) 2 0 6 7 0 ReS22 ReS22
E10 40 0 POLY(2) 2 0 6 7 0 ImS22 ImS22
R67 6 7 50
V26 2 6 AC 0
*where Real{S22}=ReS22 and Imag{S22}=ImS22
*TO GET jE10
E310 31 0 40 0 1
V32 32 0 AC 0
*CURRENT THROUGH CAPACITANCE C3132 = jωC3132·E310, WHERE ω IS THE ANGULAR FREQUENCY
*AT THE GIVEN S PARAMETER FREQUENCY, CHOOSE ωC3132 = 1, I.E. C3132 = 1/ω
C3132 31 32 1/ω
*CURRENT GENERATOR F330
F330 0 33 V32 1
R330 33 0 1
E7 7 8 POLY(2) 30 0 33 0 0 1 1
E11 41 0 POLY(2) 1 0 3 4 0 ReS21 ReS21
E12 42 0 POLY(2) 1 0 3 4 0 ImS21 ImS21
*where Real{S21}=ReS21 and Imag{S21}=ImS21
*TO GET jE12
E430 43 0 42 0 1
V44 44 0 AC 0
*CURRENT THROUGH CAPACITANCE C4344 = jωC4344·E430, WHERE ω IS THE ANGULAR FREQUENCY
*AT THE GIVEN S PARAMETER FREQUENCY, CHOOSE ωC4344 = 1, I.E. C4344 = 1/ω
C4344 43 44 1/ω
*CURRENT GENERATOR F450
F450 0 45 V44 1
R450 45 0 1
E8 8 0 POLY(2) 41 0 45 0 0 1 1
.ENDS MODEL1
Table II. SPICE2G file model to simulate Two ports with Polar S parameters
.SUBCKT MODEL2 1 2
*S11M is the magnitude of the S11 of the two port
*F0 is the frequency at which S parameters are defined
E3 10 0 POLY(2) 1 0 3 4 0 S11M S11M
T1 10 0 11 0 Z0=50 F= F0 NL=ANG1 or ANG2 for S11 Phase
RZ0 11 0 50
*S12M is the magnitude of the S12 of the two port
E4 12 0 POLY(2) 2 0 6 7 0 S12M S12M
T2 12 0 13 0 Z0=50 F= F0 NL=ANG1 or ANG2 for S12 Phase
R130Z0 13 0 50
*PORT 1 DESCRIPTION
V13 1 3 AC 0
R34 3 4 50
E1 4 5 11 0 1
E2 5 0 13 0 1
*S22M is the magnitude of the S22 of the two port
E5 20 0 POLY(2) 2 0 6 7 0 S22M S22M
T3 20 0 21 0 Z0=50 F = F0 NL=ANG1 or ANG2 for S22 Phase
R210Z0 21 0 50
*S21M is the magnitude of the S21 of the two port
E6 30 0 POLY(2) 1 0 3 4 0 S21M S21M
T4 30 0 31 0 Z0=50 F= F0 NL= ANG1 or ANG2 for S21 Phase
R310Z0 31 0 50
*PORT 2 DESCRIPTION
V26 2 6 AC 0
R67 6 7 50
E7 7 8 21 0 1
E8 8 0 31 0 1
.ENDS MODEL2
Tables III & IV show the circuit netlists used to obtain the S parameters of Figure 2. Table V shows the equivalent two-port circuit for Figure 2 with PSpice software. Figures 3a & 3b show the output voltage of the transmitter component with 100 Ω source and load.
Table III. Netlist for S parameter determination S11, S21
**** 08/18/23 16:25:38 ****** PSpice Lite (October 2012) ****** ID# 10813 ****
S parameters of capacitive passive circuit
C13 1 3 2NF
C30 3 0 1NF
C34 3 4 3NF
C40 4 0 2NF
C42 4 2 1NF
R20 2 0 1E24
R40 4 0 1E24
R30 3 0 1E24
R10 1 0 1E24
R15 1 5 50
V10 5 0 AC 1
R220 2 0 50
E1 6 0 VALUE={2*V(1)-V(7)}
R60 6 0 1
V70 7 0 AC 1
R70 7 0 1
E2 8 0 2 0 2
R80 8 0 1
.AC LIN 10 1GHZ 10GHZ
*VOLTAGE AT NODE (8) GIVES S21
*VOLTAGE AT NODE (6) GIVES S11
.PRINT AC VM(8) VP(8) VM(6) VP(6)
.PRINT AC IM(V10) IP(V10)
**** AC analysis Temperature = 27.000 DEG C
FREQ VM(8) VP(8) VM(6) VP(6)
1.000E+09 1.736E-03 -8.958E+01 1.000E+00 -1.797E+02
2.000E+09 8.681E-04 -8.979E+01 1.000E+00 -1.798E+02
3.000E+09 5.787E-04 -8.986E+01 1.000E+00 -1.799E+02
4.000E+09 4.341E-04 -8.989E+01 1.000E+00 -1.799E+02
5.000E+09 3.472E-04 -8.992E+01 1.000E+00 -1.799E+02
6.000E+09 2.894E-04 -8.993E+01 1.000E+00 -1.799E+02
7.000E+09 2.480E-04 -8.994E+01 1.000E+00 -1.800E+02
8.000E+09 2.170E-04 -8.995E+01 1.000E+00 -1.800E+02
9.000E+09 1.929E-04 -8.995E+01 1.000E+00 -1.800E+02
1.000E+10 1.736E-04 -8.996E+01 1.000E+00 -1.800E+02
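These tabulated values can be cross-checked analytically. The Python sketch below chains ABCD matrices for the capacitive ladder described by the netlist above (series 2 nF, shunt 1 nF, series 3 nF, shunt 2 nF, series 1 nF, 50 Ω ports) and reproduces the simulated S11 and S21 at 1 GHz:

```python
import numpy as np

f, Z0 = 1e9, 50.0
w = 2 * np.pi * f

def series(Z):   # ABCD matrix of a series impedance
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt(Y):    # ABCD matrix of a shunt admittance
    return np.array([[1, 0], [Y, 1]], dtype=complex)

zc = lambda C: 1 / (1j * w * C)   # capacitor impedance

# Ladder of Table III, from port 1 to port 2
M = (series(zc(2e-9)) @ shunt(1j * w * 1e-9) @ series(zc(3e-9))
     @ shunt(1j * w * 2e-9) @ series(zc(1e-9)))
A, B, C, D = M.ravel()

den = A + B / Z0 + C * Z0 + D
S21 = 2 / den
S11 = (A + B / Z0 - C * Z0 - D) / den

print(abs(S21), np.degrees(np.angle(S21)))  # ~1.736e-3, ~-89.58 deg: matches VM(8), VP(8)
print(abs(S11), np.degrees(np.angle(S11)))  # ~1.000,    ~-179.7 deg: matches VM(6), VP(6)
```

The magnitudes also match the dB values in the FREQ expressions of Table V, since 20·log10(1.736e-3) ≈ -55.21 dB.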
Table IV Netlist for S parameter determination S12, S22
**** 08/18/23 16:36:51 ****** PSpice Lite (October 2012) ****** ID# 10813 ****
S parameters of capacitive passive (FILE2)
C13 1 3 2NF
C30 3 0 1NF
C34 3 4 3NF
C40 4 0 2NF
C42 4 2 1NF
R20 2 0 1E24
R40 4 0 1E24
R30 3 0 1E24
R10 1 0 1E24
R15 2 5 50
V10 5 0 AC 1
R220 1 0 50
E1 6 0 VALUE={2*V(2)-V(7)}
R60 6 0 1
V70 7 0 AC 1
R70 7 0 1
E2 8 0 1 0 2
R80 8 0 1
*VOLTAGE AT NODE (6) GIVES S22
*VOLTAGE AT NODE (8) GIVES S12
.AC LIN 10 1GHZ 10GHZ
.PRINT AC VM(8) VP(8) VM(6) VP(6)
TOTAL POWER DISSIPATION 0.00E+00 WATTS
**** AC analysis Temperature = 27.000 DEG C
FREQ VM(8) VP(8) VM(6) VP(6)
1.000E+09 1.736E-03 -8.958E+01 1.000E+00 -1.795E+02
2.000E+09 8.681E-04 -8.979E+01 1.000E+00 -1.798E+02
3.000E+09 5.787E-04 -8.986E+01 1.000E+00 -1.798E+02
4.000E+09 4.341E-04 -8.989E+01 1.000E+00 -1.799E+02
5.000E+09 3.472E-04 -8.992E+01 1.000E+00 -1.799E+02
6.000E+09 2.894E-04 -8.993E+01 1.000E+00 -1.799E+02
7.000E+09 2.480E-04 -8.994E+01 1.000E+00 -1.799E+02
8.000E+09 2.170E-04 -8.995E+01 1.000E+00 -1.799E+02
9.000E+09 1.929E-04 -8.995E+01 1.000E+00 -1.799E+02
1.000E+10 1.736E-04 -8.996E+01 1.000E+00 -1.800E+02
Job concluded
The results obtained by running the file in Table V show that the two-port representation is correct and valid at 1 GHz and 2 GHz, as verified against Tables III and IV.
Table V Two-port representation of the Transmitter Control Circuit
**** 08/18/23 16:49:19 ****** PSpice Lite (October 2012) ****** ID# 10813 ****
Capacitive passive circuit equivalent simulation
**** Circuit description
V50 9 0 AC 1
V93 9 3
R31 3 1 50
R12 1 11 50
E1 11 4 FREQ {V(1)+I(V93)*50}=(1GHZ, 0.,-179.7) (2GHZ,0.,-179.8)
E2 4 0 FREQ {V(2)+I(V27)*50}=(1GHZ,-55.209,-89.58) (2GHZ, -61.228604, -89.79)
V27 2 7
R25 7 5 50
E3 5 6 FREQ {V(2)+I(V27)*50} =(1GHZ,0.,-179.7) (2GHZ, 0.,-179.8)
E4 6 0 FREQ {V(1)+I(V93)*50} = (1GHZ, -55.209,-89.58) (2GHZ,-61.228604,-89.79)
R20 2 0 50
ER 8 0 2 0 2
R80 8 0 1
ERR1 10 0 VALUE ={2*V(1)-V(9)}
R100 10 0 1
.AC LIN 2 1GHZ 2GHZ
.PRINT AC VM(8) VP(8) VM(10) VP(10)
Total power dissipation 0.00E+00 WATTS
**** AC analysis Temperature = 27.000 DEG C
FREQ VM(8) VP(8) VM(10) VP(10)
1.000E+09 1.736E-03 -8.958E+01 1.000E+00 -1.797E+02
2.000E+09 8.681E-04 -8.979E+01 1.000E+00 -1.798E+02
Job concluded
Figure 3a. Output voltage of Transmitter component in mV
Figure 3b. Output voltage phase of transmitter component in degrees
For operating-temperature analysis, parameters sensitive to temperature have to be modified. They can be increased or decreased according to the difference between the measurement temperature and the operating temperature. Simulation packages like SPICE specify the temperature at which the analysis is done and give results at that temperature. For various temperatures, the linear component values and linearised small-signal equivalent model parameters are calculated prior to simulation. The variation of small-signal S, Z, and H parameters with temperature can be stored in the form of look-up tables, polynomial fits, or spline functions at a given frequency and bias. To evaluate the circuit at a given temperature, we load the stored two-port parameter values, in either rectangular or polar coordinates, and the rest of the simulation follows with no changes in the procedure for all three representations.
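The look-up-table idea can be sketched in a few lines. The temperature points and magnitudes below are assumed, purely illustrative data, not measurements from this work:

```python
import numpy as np

# Hypothetical look-up table: |S21| of a component at a fixed frequency and bias
temps   = np.array([-40.0, 25.0, 85.0])   # deg C (assumed data)
s21_mag = np.array([0.92, 0.90, 0.86])    # assumed magnitudes

def s21_at(temp_c):
    """Linearly interpolate the stored two-port parameter at a temperature."""
    return float(np.interp(temp_c, temps, s21_mag))

print(s21_at(55.0))  # 0.88: halfway between the 25 C and 85 C entries
```

Spline fits or per-temperature polynomials would replace `np.interp` without changing the surrounding simulation flow.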
Methods and procedures to predict the AC/RF responses of integrated circuits employed in robot design, in which parts of the circuit are described by two-ports (both active and passive) whose S parameters are known, have been described and explained. An example with capacitive loading showing the usefulness of the approach is provided. It is shown that from the definition of two-port parameters we obtain two equations in V1, V2, I1 and I2. The S, Z, and H parameters can be used to represent two-ports in terms of input and output voltages and currents. The complex terms in each of these circuit equations contain a linear combination of two-port Z, Y, and S parameters with the voltage/current at the input/output ports. As these are circuit/network equations, no change is needed in the method to obtain two-port models, explained here for S parameters, for the other two-port representations (combinations of V1, V2, I1, I2) such as Z and H parameters. Similar simulation procedures and techniques using other sophisticated RF simulation packages could also be employed for two-ports described by Z, transmission, and inverse transmission parameters.
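Switching between the representations mentioned above is a matter of standard matrix conversions. As an illustration (Python/NumPy, with an arbitrary assumed S-matrix), the sketch below converts S to Z parameters and back for equal port reference impedances Z0:

```python
import numpy as np

Z0 = 50.0
I2 = np.eye(2)

# Illustrative (assumed) S-matrix of a reciprocal two-port
S = np.array([[0.2 - 0.1j, 0.6 + 0.2j],
              [0.6 + 0.2j, 0.1 + 0.3j]])

# Standard conversions for equal reference impedance Z0 at both ports
Z = Z0 * (I2 + S) @ np.linalg.inv(I2 - S)            # S -> Z
S_back = (Z - Z0 * I2) @ np.linalg.inv(Z + Z0 * I2)  # Z -> S
assert np.allclose(S_back, S)
```

The round trip recovers the original S-matrix, so look-up tables stored in any one representation can feed sub-circuits written for another.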
1. Nidhi Agarwal, Electronics For You Express Magazine, Design: Circuit, Interesting Reference Designs of Wireless Chargers, pp. 66-67, July 2023
2. B. Epler, SPICE2 application notes for dependent sources, IEEE Circuits & Devices Magazine, pp. 36-44,3(5), 1984
3. K. Bharath Kumar,” Inverse ABCD parameter determination using SPICE”, International Journal of Analog Integrated Circuits, IJAIC, Vol-3, issue 1, pp. 1-6, 2017
4. K. Bharath Kumar, Using Substitution Theorem to Derive Thevenin Resistance Values with SPICE, EDN-Magazine, Aspencore, Design Ideas, September 10, 2021
5. K. Bharath Kumar, SPICE to derive Thevenin and Norton equivalent Circuits, February 14, 2022, EDN Magazine Online edition, Aspencore Media
6. K. Bharath Kumar, Two Port S Parameter Simulation Using SPICE2G, International Journal of Electronics Letters,DOI:10.1080/21681724.2017.1378374, published online 21 Sept. 2017
7. Hyunji Koo, Martin Salter, No-Weon Kang, Nick Ridler, Young-Pyo Hong, Uncertainty of S-Parameter Measurements on PCBs due to Imperfections in the TRL Line Standard, Journal of Electromagnetic
Engineering and Science. Vol 21(5), pp. 369-378, 2021
8. Xiaoyu Lu, Xiang Zhou, Calculation of Capacitance Matrix of Non-uniform Multi-conductor Transmission Lines based on S-parameters, 2021 DOI: 10.1109/itnec52019.2021.9586871.
9. K. Yanagawa, K. Yamanaka, T. Furukawa, A. Ishihara, A measurement of balanced transmission lines using S- parameters, IEEE Instrumentation and Technology Conference, Conference Proceedings, IMTC/
94, Advanced Technologies in I & M, 1994
10. Gorecki, K.(2008). Spice modeling and the analysis of the self-excited push-pull dc-dc converter with self-heating were taken into account. Mixed Design of Integrated Circuits and Systems, MIXDES
(6), pp. 19-21
11. Izydorczyk, J., Chojcan, J.(2008).Tuning of Coupled resonator LC filter aided by SPICE sensitivity analysis. Computational Technologies in Electrical and Electronics Engineering, SIBIRCON (7),
pp. 21-25
12. Radwan, A. G., Madian, A. H., Soliman, A. M.(2016). Two-port two impedances fractional order Oscillators. Microelectronics Journal,(9), pp. 40-52
13. Steenput,E.(1999). A Spice circuit can be synthesized with a specific set of S-Parameters. IEEE PES Winter Meeting, January 31
14. Yang, W. Y., and Lee, S. C.(2007) Circuit Systems with MATLAB and PSpice, Chapter X, State: John Wiley & Sons
15. Rashid, M. H. (1995) SPICE for Circuits and Electronics Using PSPICE. Prentice Hall
16. J. Keown, J.(2001). OrCAD PSpice and Circuit Analysis, Prentice Hall
17. Du, H., Gorcea, D., So, P.P.M, Hoefer, W. J. R.(2008). A Spice analog behaviour model of two-port devices with arbitrary port impedances based on the S- parameters extracted from time-domain
field responses. International Journal of Numerical Modelling Electronic Networks, Devices and Fields, 21(1-2), pp. 77-90
18. Xia, L., Bell, I. M., Wilkinson, A. J.(2011). Automated model conversion for analog simulation based on SPICE-level description. 6^th International Conference on Design & Technology of Integrated
Systems in Nanoscale Era(DTIS),pp 1-4
19. Roy, C. D., and Shail, B. J. Linear Integrated Circuits, New Age International Publisher, New Delhi
20. Dillard, W. C.(2004). Is Spice applicable across the ECE curriculum and proceedings ASEE Southeast Section References Conference
21. Kumar, R. and Kumar, K.(2015). Pspice and Matlab/SimElectronic based teaching of Linear Integrated Circuit: A New Approach. International Journal of Electronics and Electrical Engineering, 3(2),
pp. 34-37
22. Bharath Kumar, K.(1990). Multi-two port parameter simulation using Pspice, Technical Report, Semiconductor Research Laboratory, Oki Electric Industry, Japan
23. Timmermann, C. C. (1995). Exact S Parameter models boost SPICE, IEEE Circuits & Devices Magazine, (9), pp. 17-22
24. Kumar, K. B. and Wong, T. (1988). Methods to obtain Z,Y,G,H,S Parameters from SPICE program, IEEE Circuits and Devices Magazine, pages:30-31
K. Bharath Kumar obtained a B.Tech degree in E & CE with highest honours from JNT University, Anantapur, in 1981 and an M.Tech degree from the Indian Institute of Technology, Kharagpur, in the area of Microwave and Optical Communication in 1983. He later joined Indian Telephone Industries, Bangalore, and worked in the area of fiber optics; in 1990 he obtained an M.S. degree from the Illinois Institute of Technology, Chicago, USA, and joined Oki Electric, Japan, as a researcher. He has over thirty-four publications and is now retired. His current address is 10-365, Sarojini Road, Anantapur, AP 515 001, and he can be reached at email: [email protected]
How to Generate Random 10 Digit Number in Excel (6 Methods) - ExcelDemy
Method 1 – Combine ROUND and RAND Functions to Generate Random 10 Digit Number
• Enter the following formula in cell C5: =ROUND(RAND()*9999999999+1,0)
• Press Enter.
• It will return a random 10 digits number in cell C5.
• Drag the Fill Handle tool from cell C5 to cell C9.
• We will get the result as shown in the following image.
How Does the Formula Work?
• RAND()*9999999999+1: This part multiplies the random number generated by 9999999999 and adds 1 to it.
• ROUND(RAND()*9999999999+1,0): This part rounds the result that we get from the RAND function.
Method 2 – Use RANDBETWEEN Function to Create Random 10 Digit Number in Excel
• Enter the following formula in cell C5: =RANDBETWEEN(1000000000,9999999999)
• Press Enter.
• We will get a random 10 digits number in cell C5.
• Drag the Fill Handle tool for the remaining cells.
• The result will be as shown in the following image.
Method 3 – Generate Random 10 Digit Number Based on Number of Digits You Type in Different Cell
• Select the cells (D5:D9) and enter the following formula.
=LEFT(RANDBETWEEN(1,9)&RANDBETWEEN(0,999999999999999)&RANDBETWEEN(0,999999999999999), C5)
• Enter the value 10 in cell C5.
• Press Enter.
• We will get a random 10 digits number in cell D5.
• Enter the value 10 in cells (C6:C9). As a result, we also get random 10 digits numbers in cells (D6:D9).
How Does the Formula Work?
• RANDBETWEEN(0,999999999999999): This part returns a random integer of up to 15 digits, used as a pool of digits for the final number.
• LEFT(RANDBETWEEN(1,9)&RANDBETWEEN(0,999999999999999)&RANDBETWEEN(0,999999999999999), C5): Returns a random number of fixed digits in cell D5 that we type in cell C5.
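The concatenate-and-truncate idea behind this formula can be mimicked outside Excel. The hedged Python sketch below adds zero-padding so each RANDBETWEEN-style chunk always contributes 15 characters, which Excel's plain `&` concatenation does not guarantee:

```python
import random

def rand_digits(n):
    """Mimic =LEFT(RANDBETWEEN(1,9) & RANDBETWEEN(0,...) & RANDBETWEEN(0,...), n).

    A nonzero leading digit followed by two 15-digit random chunks
    (zero-padded here), truncated to n characters.
    """
    s = str(random.randint(1, 9))
    s += str(random.randint(0, 10**15 - 1)).zfill(15)
    s += str(random.randint(0, 10**15 - 1)).zfill(15)
    return s[:n]

x = rand_digits(10)
print(len(x), x[0] != "0")  # 10 True
```

As in the spreadsheet version, the first digit is drawn from 1-9 so the truncated result never starts with a zero.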
Method 4 – Apply RANDARRAY Function to Generate Random 10 Digit Number
• Enter the following formula in cell C5.
• Press Enter.
• We will get random numbers in cells (C5:D9).
Method 5 – Generate 10 Digit Number with Analysis Toolpak
• Select Options from the menu.
• A dialog box named ‘Excel Options’ will pop up.
• Click on the option Add-ins on the left side of the window.
• On the right side, scroll down to the bottom. Select the option ‘Excel Add-ins’ from the drop-down and click on Go.
• This will open a pop-up window with a list of all accessible Excel add-ins. Click OK after checking the box ‘Analysis ToolPak’.
• Select the ‘Data Analysis’ option from the Data tab.
• It will open a new pop-up window named ‘Data Analysis’.
• Scroll down the options in the ‘Analysis Tools’ section. Select the ‘Random Number Generation’ option and click OK.
• We will get a pop-up window named ‘Random Number Generation’. We will input values for different parameters to generate random 10 digits numbers.
• The ‘Number of Variables’ field specifies how many columns we want to fill with random data. We have used the value 1.
• The number of rows is indicated by the ‘Number of Random Numbers’. We have entered the value 5.
• In the Distribution field, we have chosen the option Uniform.
• Set the parameters to a range of 1 to 9999999999.
• Set the ‘Output Range’ to the array’s beginning, which is cell C5.
• Click on OK.
• Random 10 digits numbers will be generated in cells (C5:C9).
Method 6 – Insert VBA Code to Create 10 Digit Number in Excel
• Right-click on the active sheet and select the option ‘View Code’.
• The above command will open a new blank VBA code window for that worksheet.
• Enter the following code in the code window:
Sub Random_10_Digit()
Dim GRN As Integer
For GRN = 5 To 9
ActiveSheet.Cells(GRN, 3) = Round((Rnd() * 9999999999# - 1) + 1, 0)
Next GRN
End Sub
• Click on the Run or press the F5 key to run the code.
• We will get 10 digits numbers generated in cells (C5:C9).
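For comparison, both the ROUND(RAND()*…) and the RANDBETWEEN approaches translate directly to Python. Note that only the RANDBETWEEN-style call guarantees exactly 10 digits, since rounding a scaled RAND() can produce a shorter number when the random value is small:

```python
import random

# Method 1 style: scale a 0-1 random value; may fall below 10 digits
n1 = round(random.random() * 9_999_999_999 + 1)

# Method 2 style: draw directly from the 10-digit range
n2 = random.randint(1_000_000_000, 9_999_999_999)

print(len(str(n2)))  # 10
```

The same bounds (10^9 to 10^10 - 1) apply whichever language generates the numbers.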
OpenStax College Physics for AP® Courses, Chapter 23, Problem 47 (Problems & Exercises)
(a) What is the voltage output of a transformer used for rechargeable flashlight batteries, if its primary has 500 turns, its secondary 4 turns, and the input voltage is 120 V? (b) What input current
is required to produce a 4.00 A output? (c) What is the power input?
Question by
is licensed under
CC BY 4.0
Final Answer
1. $0.960 \textrm{ V}$
2. $0.0320 \textrm{ A}$
3. $3.84 \textrm{ W}$
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. A transformer is being plugged into a wall outlet at 120 volts; the number of turns in the primary is 500 and the number of turns in the secondary is four. So the question is what voltage will be in the secondary. Now since this transformer is being used to charge flashlight batteries, we're expecting the secondary voltage to be very small, because batteries are only one and a half volts. So we have voltage in the secondary divided by voltage in the primary equals the number of turns in the secondary divided by the number of turns in the primary. This is the transformer equation, and we'll solve it for Vs by multiplying both sides by Vp. And so the voltage in the secondary will be that of the primary times the secondary turns divided by the primary turns. So it's 120 volts times four divided by 500, which gives 0.960 volts. Now as for the currents, current in the primary divided by current in the secondary equals the number of turns in the secondary divided by the number of turns in the primary. And so we'll multiply both sides by Is to solve for Ip. We take four amps, because that's what we're told is the current in the secondary, the current in the output, and multiply by four turns in the secondary divided by 500 turns in the primary, which is 0.032 amps. The power input is the current in the primary times the voltage in the primary. So that's 0.032 amps times 120 volts, which is 3.84 watts.
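The three computations in the solution can be reproduced in a few lines of Python:

```python
Vp = 120.0    # primary (input) voltage, V
Np, Ns = 500, 4
Is = 4.0      # required secondary (output) current, A

Vs = Vp * Ns / Np   # (a) secondary voltage from the turns ratio
Ip = Is * Ns / Np   # (b) required primary current
P  = Ip * Vp        # (c) input power

print(round(Vs, 3), round(Ip, 4), round(P, 2))  # 0.96 0.032 3.84
```

Since the transformer is ideal here, the input power Ip·Vp equals the output power Is·Vs (0.96 V × 4 A = 3.84 W).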
In Exercises 1-10, classify the random variable \(X\) as finite, discrete infinite, or continuous, and indicate the values that \(X\) can take. [HINT: See Quick Examples 5-10.] Roll two dice; \(X=\)
the sum of the numbers facing up.
Short Answer
Expert verified
The random variable \(X\) is classified as finite, and it can take the values \(\{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}\).
Step by step solution
Determine the possible outcomes of rolling two dice
When rolling two dice, each die has 6 sides, numbered from 1-6. So for two dice, there are 6 x 6 = 36 possible outcomes.
Calculate the possible values for X
To find the possible values for the sum of the dice, we'll look at the lowest and highest possible outcomes:
• The minimum sum occurs when both dice show a 1; hence Xmin = 1 + 1 = 2.
• The maximum sum occurs when both dice show a 6; hence Xmax = 6 + 6 = 12.
Write the possible values for X
The possible values for X range from the minimum sum to the maximum sum (inclusive). Therefore, the values of random variable \(X\) can be: \[X = \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}\]
Classify the random variable X
Since there is a finite number of possible values for X, we can classify the random variable \(X\) as a finite random variable. In conclusion, the random variable \(X\) is classified as finite, and
it can take the values \(\{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}\).
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Discrete Random Variables
Random variables can be broadly classified into different types, with one of the main categories being discrete random variables. These are variables that can take on a finite or countably infinite
set of values. In simple terms, think of discrete random variables like digital steps on a staircase.When you roll two dice and add up the numbers that show on top, you're dealing with a discrete
random variable. The possible sums you can get from rolling two dice range from 2 to 12, making a countable list: \( \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\} \). Each number within this set represents
a possible distinct outcome when you roll the dice, showcasing the discrete nature of the variable.Unlike continuous variables, which can take on any value in a range (like measuring height or
temperature), discrete random variables stick to clearly identifiable values. They are perfect for situations involving counts or any scenario where data can only take specific points or integers.
Random Variable Classification
Understanding how to classify random variables is essential in probability and statistics. There are typically three main classifications:
• Finite: These variables take on a fixed number of possible values. The set of potential outcomes is countable and limited, like the sum of two rolled dice.
• Discrete Infinite: This type involves a countable number of values, but they are infinite. Prime numbers are an example since you can list them, but they go on indefinitely.
• Continuous: These variables assume an infinite number of values within a given range. Think of measurements like time or weight, where any fractional value is possible.
In the dice rolling example, the random variable \( X \) is classified as finite because there are only 11 possible distinct outcomes for the sum. This classification helps simplify the analysis
because you know exactly the number of values to consider.
Dice Probability
Dice probability is a classic example used to illustrate the principles of probability distributions. It revolves around predicting the outcomes when rolling dice, which is particularly useful for
understanding finite and discrete variables.When you roll two dice, several factors come into play:
• Each die has 6 sides, meaning 6 possible outcomes per roll from 1 to 6.
• The total number of combinations when rolling two dice is 36, as each die is independent: \(6 \times 6 = 36\).
• The probability of rolling a specific sum can be calculated based on the combinations that result in that sum. For instance, there is only one way to roll a 2 (both dice showing 1s), while there
are six ways to roll a 7 (1+6, 2+5, 3+4, etc.).
Understanding dice probability is not just about knowing the odds; it's essential for grasping how probability distribution works. It's a gateway to more complex probability topics and underlines the
importance of discrete finite variables in everyday probability scenarios.
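The counts quoted above are easy to verify by enumerating all 36 outcomes:

```python
from collections import Counter
from itertools import product

# Every ordered pair of faces for two six-sided dice
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))

assert sum(sums.values()) == 36        # 6 x 6 equally likely outcomes
assert set(sums) == set(range(2, 13))  # X takes the finite set {2, ..., 12}
assert sums[2] == 1 and sums[7] == 6   # one way to roll 2, six ways to roll 7
```

Dividing each count by 36 gives the probability distribution of the finite random variable X.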
The book under review provides a very first introduction to the ideas and methods of Galois theory. As the title suggests, its pedagogical goal is very special, and its methodological strategy also
differs from those underlying other introductory texts on the subject. More precisely, the author's intention is to develop elementary Galois theory in as accessible a manner as possible for
undergraduates, and that by an exploration-based approach. In other words, the text aims at building intuition and insight by experimenting with concrete examples from the theory of algebraic
numbers, thereby presenting a Galois theory balanced between abstract concepts and computational methods. Methodologically, the text grounds the presentation in the concept of algebraic numbers with
complex approximations, assuming just a modest background knowledge of abstract algebra, and the author develops the theory around elementary questions about algebraic numbers. This is done in a slow
and leisurely, however direct and perspective-oriented way, together with an introduction to those technological tools for hands-on experimentation that help students to acquire a more profound
familiarity with concrete number fields. Although working exclusively over the field of complex numbers, and number fields therein, the author outlines, at the end of the book, the generalization of
Galois theory to arbitrary fields for the purpose of further study. The theoretical part of the exposition culminates in a discussion of the Galois correspondence for subfields of ℂ and its classical
applications such as cyclotomic extensions, solvability questions for polynomial equations, and ruler-and-compass constructions. The practical part ends with explicit computations of concrete Galois
groups and Galois resolvents using “Maple” or “Mathematica” as technological tools.

As to the contents, the book consists of six chapters, each of which is subdivided into five or more sections.
Chapter 1 provides the necessary preliminaries from abstract algebra: polynomials, polynomial rings, prime factorization in polynomial rings, computational methods, general rings and fields, general
groups and, in particular, symmetric groups. Chapter 2 discusses algebraic numbers and subfields generated by algebraic numbers, together with their computational aspects. Chapter 3 illustrates the
practical work with algebraic numbers and number fields generated by one algebraic number, whereas Chapter 4 deals with multiply generated number fields. Again, the computational (or exploratory)
aspects, in particular the working with “Maple” or “Mathematica”, are strongly emphasized. Chapter 5 turns to normal field extensions, splitting fields, Galois groups invariant polynomials,
resolvents, and discriminants, together with the related computational methods and many concrete examples. Chapter 6 is devoted to some of the celebrated classical applications as mentioned above.
This chapter concludes with an outlook to Galois theory over arbitrary fields of any characteristic. Historical notes and an appendix on subgroups of symmetric groups (as used in the text) conclude
the entire exposition. The rich bibliography is ordered by topics and historical relevance, which is just as helpful as the vast amount of examples and well-guided exercises throughout the whole
book. Altogether, this is an excellent introduction to elementary Galois theory for absolute beginners. The exploration-based approach to the subject is very down-to-earth, entertaining, motivating, encouraging and enlightening, making the text highly suitable for undergraduate courses, for seminars, or for self-paced independent study by interested beginners. The computer-oriented practical aspects are rather novel in this context and for a textbook of this kind, and they are just as timely as they are enriching with regard to the current textbook literature. (Zentralblatt)
Bibliogr. p. 201-204. Index
How To Solve Trig Equations With A Graphing Calculator - Tessshebaylo
Ex Solve A Trigonometric Equation Using Graphing Calculator You
How To Use A Graphing Calculator Solve Trigonometric Equation Trigonometry Study Com
Solving Trig Equations On A Ti 84 Calculator You
Solving Trigonometric Equations Graphically You
Solving Trigonometric Equations With The Ti 84 Graphing Calculator You
Easily Find All The Solutions To Trigonometric Equations In A Given Interval Using Casio Fx Cg50 You
Solving Trig Equations On A Ti 84 Calculator Graphing Calculus
7 5 Solving Trigonometric Equations Precalculus 2e Openstax
Solving Trig Equations Angles Not On Unit Circle Pre Calculus You
Solving Trigonometric Equations Steps Examples Lesson Transcript Study Com
Solving Trigonometric Equations Math Hints
Trigonometry Desmos Help Center
How To Solve This Equation On My Ti Nspire Cx Cas Sin X 1 Quora
Solving Trig Equations By Graphing Approximate Solutions You
Find The Equation Of A Sine Or Cosine Graph Lessons Examples And Solutions
Exponential Trigonometry Notes
Calculus I Solving Trig Equations
Trigonometry Calculator Step By
How To Use Trigonometry Functions On The Ti 84 Calculator You
5 04 Solve Trigonometric Equations
Mfg Solving Basic Trigonometric Equations
RankCorrel(x, y, I, rankType, w, resultIfNoVariation)
RankCorrel(x, y) computes the Spearman rank correlation between two uncertain quantities «x» and «y».
RankCorrel(x, y, I) computes the Spearman rank correlation between two paired sets of data points, «x» and «y», sharing a common index «I».
The Spearman rank correlation coefficient is defined to be the Pearson's Correlation Coefficient of the ranks of the data points. In other words, to compute it, you find Rank(x, I) and Rank(y, I), the
ranks of the data points, and then compute the standard Correlation between these ranks. As an Analytica expression, this is written:
RankCorrel(x, y, I) := Correlation(Rank(x, I), Rank(y, I), I)
Rank correlation is a measure of how monotonic the relationship between two quantities appears to be. The measure is between -1 and 1, with larger negative numbers indicating a strong negative
monotonic tendency, larger positive numbers indicating a stronger positive monotonic tendency, and numbers near zero indicating little or no monotonic relationship. For example, a large positive
number (meaning close to 1) indicates that increases in one quantity tend to be associated with increases in the other quantity. A large negative number (meaning close to -1) indicates that
increases in one quantity tend to be associated with decreases in the other quantity.
The expected rank correlation between two statistically independent quantities, «x» and «y», is zero. The actual computed rank correlation may differ from zero slightly due to sampling error, but
would be expected to be very close to zero, and to approach zero as the sample size increases. Rank correlation is often used to test for statistical dependence, but you need to be careful with your
conclusions, since while a (statistically significant) non-zero rank correlation does imply statistical dependence, a zero rank correlation does not necessarily imply statistical independence. For
example, in the relation y = x^2, sampled uniformly from «x» in the interval [-a, a], the two quantities «x» and «y» are clearly dependent, yet the expected rank correlation will be zero.
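As an illustration (a Python sketch, not Analytica code — the helper names are ours), the definition above can be checked directly: Spearman's coefficient is the Pearson correlation of the mid-ranks, it equals 1 for any monotone relation, and it is exactly 0 for y = x² sampled symmetrically around zero:

```python
# Spearman rank correlation as Pearson correlation of mid-ranks (illustrative).
from math import sqrt

def mid_ranks(xs):
    """Ranks starting at 1; ties receive the average (mid) rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for idx in order[i:j + 1]:
            ranks[idx] = mid
        i = j + 1
    return ranks

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

def rank_correl(x, y):
    return pearson(mid_ranks(x), mid_ranks(y))

x = [-3, -2, -1, 0, 1, 2, 3]
print(rank_correl(x, [v ** 3 for v in x]))  # monotone relation: 1.0
print(rank_correl(x, [v ** 2 for v in x]))  # y = x^2 on [-a, a]: 0.0
```

The second call demonstrates the caveat in the text: a dependence that is not monotone can still yield zero rank correlation.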
Given a set of uncertain inputs, x[1], x[2], .., x[N], and a computed output, y = f(x[1],.., x[N]), the absolute value of the rank correlation, Abs(RankCorrel(x[i], y)), provides a good measure of
the relative degree to which the uncertainty in input x[i] contributes to the uncertainty in output «y». This measure is used by Analytica's Make Importance sensitivity analysis feature.
Rank correlation is often described as nonparametric correlation. This is because the correlation does not assume any functional form for the relationship between the quantities, other than the more
general concept that they are statistically related by some monotone function. In contrast, Pearson's correlation assumes that «x» and «y» are related by a linear function with Gaussian noise added,
in which there clearly are underlying parameters.
Rank Type
The optional «rankType» parameter controls how ties are treated. By default, when there are ties, they are assigned the same mid-rank (i.e., average rank among those points that are tied). The
«rankType» can also be set to -1 (= lower ranks), 0 (= mid ranks (default)), +1 (= upper ranks), or null (= unique ranks). See the Rank function for more details. The mid-rank is almost always used
for RankCorrel, so alternative values are rarely needed.
You should note that the Rank function, in contrast, uses lower-rank by default, but also accepts these same options. So strictly speaking, when ties are present, the real equivalent is:
RankCorrel(x, y, I) := Correlation(Rank(x, I, type: 0), Rank(y, I, type: 0), I)
Early versions of Analytica (Analytica 3.1 and earlier) used the lower-rank for ties when computing RankCorrel.
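The tie-handling options described above can be sketched as follows (a hypothetical Python helper for illustration, not the Analytica API):

```python
# Sketch of the «rankType» tie-handling options: lower, mid (default), upper.
def ranks(xs, tie="mid"):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        if tie == "lower":
            r = i + 1              # all tied points get the lowest rank
        elif tie == "upper":
            r = j + 1              # all tied points get the highest rank
        else:
            r = (i + j) / 2 + 1    # mid-rank: average of the tied positions
        for idx in order[i:j + 1]:
            out[idx] = r
        i = j + 1
    return out

data = [10, 20, 20, 30]
print(ranks(data, "lower"))  # [1, 2, 2, 4]
print(ranks(data, "mid"))    # [1.0, 2.5, 2.5, 4.0]
print(ranks(data, "upper"))  # [1, 3, 3, 4]
```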
Weighted Rank Correlation
The function can also be used to compute the weighted rank correlation, in which each data point (or uncertain sample) is assigned a different weighting, contributing unequally to the rank
correlation coefficient.
When computing the weighted rank correlation between two uncertain quantities, you need to provide a weighting for each Monte Carlo sample. This weighting is in the form of an array indexed by Run,
where each point indicates the relative importance of the point. In Bayesian analysis, the weighting is often equal to the likelihood of observed data given the uncertain inputs (so that computed
values are posterior values). You can specify this weighting explicitly using the «w» parameter, e.g.:
RankCorrel(x, y, w: likelihood)
Or, if «w» is not specified, the weighting specified by the global system variable SampleWeighting is used.
To compute the weighted rank correlation between two paired data sets, you need to provide a weighting for each pair in the form of an array of relative weights indexed by «I». This array is then
passed in the «w» parameter, e.g.:
RankCorrel(x, y, I, w: my_wts)
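A minimal sketch of what a weighted correlation of ranks computes (our assumed semantics for illustration — each pair contributes with relative weight w[i] to the means, variances and covariance; this is not necessarily Analytica's internal algorithm):

```python
# Weighted Pearson correlation; applied to ranks it gives a weighted
# rank correlation in the sense sketched above.
from math import sqrt

def weighted_pearson(a, b, w):
    s = sum(w)
    ma = sum(wi * ai for wi, ai in zip(w, a)) / s
    mb = sum(wi * bi for wi, bi in zip(w, b)) / s
    cov = sum(wi * (ai - ma) * (bi - mb) for wi, ai, bi in zip(w, a, b))
    va = sum(wi * (ai - ma) ** 2 for wi, ai in zip(w, a))
    vb = sum(wi * (bi - mb) ** 2 for wi, bi in zip(w, b))
    return cov / sqrt(va * vb)

# With equal weights this reduces to the unweighted coefficient; scaling all
# weights by a constant does not change the result (only relative weights matter).
a = [1, 2, 3, 4]  # ranks of x (no ties)
b = [2, 1, 4, 3]  # ranks of y
print(weighted_pearson(a, b, [1, 1, 1, 1]))
```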
Result if there is no variation
If either «X» or «Y» has no variation (among points with non-zero weights), the result is NaN.
If you prefer a different value for this case, such as 0 or Null, specify it in the optional parameter «resultIfNoVariation».
Null Values
Any data points having a Null value for «x» or «y» are ignored in the computation of RankCorrel.
Note that you must have at least 3 non-Null data points to get a meaningful rank correlation coefficient. When you have fewer than 3 points, the result is NaN.
Statistical Significance
A non-zero RankCorrel indicates that a statistical dependence exists between «x» and «y». But, with a finite sample size, the non-zero value may be nothing more than an artifact of sampling error. A
p-value quantifies the probability that a rank correlation as large as the value observed could have been observed if the quantities really are statistically independent. The p-Value is a standard
measure of statistical significance, with a small value (e.g., <5%) indicating that the evidence for statistical dependence is significant.
The p-Value for statistical dependence can be computed using a formula devised by Fisher as follows:
Var n := Sum(x <> Null and y <> Null, I);
Var rc := RankCorrel(x, y, I);
Var z := 0.4856*Ln((1 + rc)/(1 - rc)) * Sqrt(n - 3);
2*CumNormal(-Abs(z)) { the 2-sided p-value }
This is known as a 2-sided p-Value, meaning it measures the significance of the departure from 0 in either direction. You can also test for one-sided significance -- e.g., the significance of negative
dependence only, or the significance of positive dependence only, by changing the last line to:
CumNormal(z) { For negative dependence }
CumNormal(-z) { For positive dependence }
Generally you would not be justified in using a 1-sided test unless you had an a priori reason to believe that the only two possibilities are statistical independence or dependence in the given direction.
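The recipe above translates directly into a short sketch (`p_value` and `cum_normal` are our illustrative names; the standard normal CDF is expressed through the error function):

```python
# Fisher-style significance test for a Spearman rank correlation rc over n
# data points, following the formula in the text.
from math import log, sqrt, erf

def cum_normal(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_value(rc, n, sided=2):
    z = 0.4856 * log((1 + rc) / (1 - rc)) * sqrt(n - 3)
    if sided == 2:
        return 2 * cum_normal(-abs(z))  # departure from 0 in either direction
    return cum_normal(-z)               # one-sided: positive dependence only

print(p_value(0.6, 20))   # moderate correlation, n = 20: significant
print(p_value(0.05, 20))  # near-zero correlation: not significant
```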
Distribution of Expected Rank Correlation
Note: This topic is explained in more detail in the webinar on Rank Correlation by Lonnie Chrisman.
RankCorrel(x, y) computes the sample rank correlation -- the measure that exists among the finite set of data points. This provides information about the true underlying expected rank correlation,
but because of the finite sample size, the presence of sampling error means that the two are not equal. When computing rank correlation, we are really interested in what the underlying expected rank
correlation is, which of course cannot be known with certainty to us mere mortals. However, we can use Monte Carlo techniques to obtain a distribution on the underlying expected rank correlation,
with the distribution reflecting our degree of uncertainty. Larger sample sizes will result in tighter (lower-variance) distributions.
The following function generalizes RankCorrel, so that in Mid-mode, the sample rank correlation is returned, while in probabilistic evaluation mode, a distribution for the underlying expected rank
correlation is returned.
Function RankCorrel_Dist(x, y: ContextSamp[I]; I: Index = Run) :=
Index xy := ['x', 'y'];
Var sampleRc := RankCorrel(x, y, I);
If IsSampleEvalMode Then (
Var pt := BiNormal(0, 1, xy, sampleRc, over: I);
RankCorrel(pt[xy = 'x'], pt[xy = 'y'], I)
) Else sampleRc
Sensitivity Analysis library (Sensitivity Analysis Library.ana)
The library is not included with Analytica and needs to be downloaded and installed before it can be used.
See Also
1013 - Grid
There is a grid of size 1*N. A spanning tree of the grid connects all the vertices of the grid using only the edges of the grid, and every vertex has only one path to any other vertex.
Your task is to find out how many different spanning trees there are in a given grid.
Each line of input contains a single integer N (0 < N <= 1000000000); a line containing 0 marks the end of the input.
For every line of input you should print only the result; as the result may be very large, print it modulo 1000000007.
sample input
sample output
When N=1, the spanning trees are as follows: _ |_ , |_| , _ _| , _ | | . There are four ways to construct the spanning tree.
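These counts satisfy the well-known ladder-graph recurrence f(N) = 4·f(N−1) − f(N−2) with f(0) = 1 and f(1) = 4 (matching the four trees above). Since N can be as large as 10^9, one possible approach — a sketch, not the judge's reference solution — is fast matrix exponentiation modulo 1000000007:

```python
MOD = 1000000007

def mat_mul(A, B):
    """2x2 matrix product modulo MOD."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % MOD
             for j in range(2)] for i in range(2)]

def mat_pow(M, e):
    """M**e by binary exponentiation."""
    R = [[1, 0], [0, 1]]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def spanning_trees(n):
    """Spanning trees of the 1*n grid (a ladder of n squares), mod 1e9+7."""
    if n == 0:
        return 1
    # [f(n), f(n-1)] = [[4, -1], [1, 0]]^(n-1) . [f(1), f(0)]
    M = mat_pow([[4, MOD - 1], [1, 0]], n - 1)
    return (4 * M[0][0] + M[0][1]) % MOD

print(spanning_trees(1), spanning_trees(2), spanning_trees(3))  # 4 15 56
```

Matrix exponentiation runs in O(log N) multiplications, comfortably within limits for N up to 10^9.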
ASCII & ACman
Archive: forth/pfe/qfloating
The qfloating module is located in the dnw branch of Guido Draheim's PFE site at SourceForge. It implements an optional Quad Floating-Point Word Set. With a few natural exceptions, there is a quad
word with Q prefixed to the name for each word in the updated floating module. To see the scope of both modules, look at the synonyms in the file qfsynos.fs, which can be used for testing quad float
words with existing floating-point tests.
The code uses libquadmath, which is part of gcc. As far as we know, the only official way to install the library is as part of a gcc installation. The dnw branch ./configure step finds both the
library and its include file on our macos and linux systems, but the paths are not readily visible. Here's the current version of the include file at GitHub: quadmath.h. And here's the API
documentation: GCC libquadmath.
The floating point encoding corresponds to the IEEE 754-2008 binary128 format, which has a storage width of 128 bits, a significand precision p of 113 bits, including an implicit leading bit, a
maximum exponent emax of 16,383, and a minimum exponent emin of -16,382. We call these quad floats, or simply quads. Like the binary32 single float and binary64 double float formats, there are no
gaps in the binary representation of quad floats. Double floats are the PFE default.
Separate qfloat stack
The floating and qfloating modules are independent of each other, with one exception to be mentioned shortly, and can be loaded individually or simultaneously. The qfloat stack is separate from both
the float and data stacks.
Floating-point syntax
The syntax for floating-point data in text input and output is identical for the two modules, but with independent input conversion and output print precision.
The traditional PFE outer interpreter has one slot for floating-point interpretation, which can hold a null pointer for no interpretation, a pointer to the double-float interpreter for a separate
floating-point stack, or a pointer to a double-float interpreter for an integrated stack, implemented in Krishna Myneni's fpnostack module. The quad-float interpreter uses the same slot.
Switching the pointers between the double-float and quad-float interpreters when both modules are loaded is an experimental feature, provided by the floating module word SET-INTERPRET-FLOAT and the
qfloating module word SET-INTERPRET-QFLOAT. Further invocations of LOADM on the modules don't have that effect, because LOADM <module> literally becomes a noop once <module> is loaded.
Known problems
1. SEE does not work with quad floats in definitions, nor in QFCONSTANTs.
2. .S ignores quad floats.
3. Documentation is skimpy.
1. Introduction
The use and performance of chromatographic separation in today's laboratories and industry are greatly impeded by the pressure drop required by available particulate column packings. Chromatography today requires forcing flows through beds of extremely small particles, from 50 down to 1.7 μm in diameter, with extremely high resistance to the percolation of a fluid. This induces high pressure drops of several hundreds of bars [1].
In the analytical field, ultrahigh-performance liquid chromatography (UHPLC) technology is ultimately limited by the extremely high pressure drop of the ultrafine powders used as separating media.
The packing uniformity is of relevance to the chromatographic efficiency. The non-uniformities of the bed will cause irregular flow patterns leading to losses in efficiency. Particulate beds are
difficult to stack uniformly [2] due to their size and ultrafine nature and the wall effects of the container, and they suffer from a lack of long-term stability due to their slow deformations when
subjected to flow, pressure and cycles. In industrial liquid chromatography (LC), stabilizing and containing large column beds involve levels of complexity and high costs that prohibit the use of
this powerful separation method in most chemical processes.
Attempts to solve this problem have been directed toward the use of monolithic packings, manufactured both in polymers [3] and in silica gel [4]. Extrapolation to large sizes has proven a difficult
problem for both organic and inorganic monoliths.
The permeability of capillary tubes is much higher. In the field of gas chromatography (GC) the empty monocapillary Golay column [5], 50 to 500 μm diameter, has gained large acceptance. Its low
pressure drop allows it to be made exceptionally long and to develop a high number of plates, i.e., high efficiency. This technique has not been used in LC because of the small channel diameters that would be required, 1 to 10 μm. This difference, a consequence of the much lower molecular diffusion coefficient in liquids, makes the corresponding LC system impractical.
Another route has been to use packings composed of a multiplicity of capillary tubes working in parallel [6, 7, 8, 9, 10, 11]. The stationary phase is constituted by or covers the wall material and
exchanges mass by diffusion with the fluid in the channel core. The fluid flow path becomes perfectly linear with no obstacles. This geometry could combine the advantage of the low pressure drop of a
capillary, and the throughput of existing columns. It could thus be compatible with existing LC instruments.
Multicapillary packings are monolithic in nature and are potentially much easier to extrapolate due to their manufacturing processes.
Unfortunately, the small differences in dimensions between the individual capillaries, the difficulty of coating them evenly with the stationary phase and the differences in their individual aging
negatively affect their short- and long-term efficiency and have made this solution very limited in practice [12, 13].
Colocated monolith microstructures with highly interconnected capillary channels have been described and developed for analytical purposes [11].
The field of multicapillary chromatography is nevertheless currently attracting growing interest.
This interest stems from the fact that this approach offers an attractive solution to the pressure drop problem.
Schisla et al. [12], Sidelnikov [13, 14] and Jespers [15] analysed the behaviour of bundles of individual capillaries.
It has been derived theoretically by Giddings [16] that the band broadening due to mobile phase flow can be affected by the coupling between diffusional transverse flow between fluid elements and the
dispersion caused by independent fluid elements flowing at different velocities (or so-called eddy diffusion).
Their importance has been evaluated [17] for particulate packings. This effect has been studied by several researchers [18, 19, 20, 21, 22].
The simulation of radial diffusional effects in chromatographic columns has been undertaken. The focus of previous studies has been mainly on considering axial diffusion, in one-dimensional column
models, because of its direct effect on peak broadening [23, 24, 25, 26, 27]. Some studies [28, 29, 30] examined the effect of a radial transfer phenomenon, heat transfer, on the efficiency of UHPLC
columns. Studies did explore by simulation or random walk consideration the effect of radial diffusion through packed beds [18, 19, 20, 31]. Giddings considers that the exchange of mass between
different convective flow path can occur by mixing (leading to a partial HETP h[f]) and by diffusional exchange (leading to a partial HETP h[D]), both phenomena being coupled. The coupling of both
effects must lead to an efficiency better than that of each phenomenon taken separately.
The present multicapillary columns can be considered as limited to the mixing step only. It can thus be expected that the addition of a diffusional exchange between the independent fluid flow paths
constituted by the channel cores will improve the behaviour of those systems. To the best of our knowledge, the application of this coupling to the theory of multicapillary chromatography and as a
way of improvement of this technology has nevertheless never been identified and quantified.
The aim of this study is to estimate the effect of a diffusional flow passing through bridges of pores between channels, causing the eluting bands in the fast or slow channels to discharge by
molecular diffusion in channels flowing with an average velocity.
To quantify and check this effect, we thus conducted a short theoretical derivation based on Giddings Random Walk theory. As the quantitative predictions of this coupling were considered by Giddings
himself as estimates within an order of magnitude, we verified the results by two independent approaches, by transfer function analysis of the coupled system, and by computer dynamic simulations of a
multicapillary array based on the method of lines. The simulations of the residence time distribution (RTD) of a capillary array were performed, recorded, and analyzed, in presence or absence of an
interchannel diffusive effect, or diffusional bridging. This first work will focus mainly on the zero-retention case, as the case with solute retention appears to be still the object of discussions [
16, 18, 19, 20] and as it describes the case where the resistance to mass transfer of the stationary phase layer is negligible because its thickness is low, its mass diffusivity high, or the solute
retention factor is high.
The presented work will be divided in the following sections:
1. We will recall some basic findings regarding the pressure drop gain that can be expected using capillary packings as chromatographic beds instead of particulate packings or other existing packings.
2. We will present our results derived from Giddings theories applied to chromatography in multicapillary arrays. The effect of diffusional bridging will be estimated and its effect compared with the performance of present state-of-the-art multicapillary chromatography.
3. We will analyze the transfer function of a coupled system constituted of two coupled capillaries.
4. We will present our results obtained by the numerical simulation of multicapillary arrays by an ordinary differential equation (ODE) integration and compare them with the theoretical predictions.
5. We will conclude on the potential of this new approach for analytical and preparative chromatography.
2. Theoretical results
2.1. Fundamental equations
We adopted for this study the Golay equation [5] written:
$h = \frac{2}{u} + \frac{1+6k+11k^2}{96\,(1+k)^2}\,u + \frac{16\,k}{(1+k)^2}\,\frac{D_m}{D_s}\,\frac{e_c^2}{r_c^2}\,u$ (1)
where h is the reduced HETP, k is the retention factor, D[m] the solute diffusivity in the mobile phase, D[s] the solute diffusivity in the stationary phase layer. u is the true reduced mobile phase
velocity in the open channel’s core, e[c] the thickness of the stationary phase, r[c] the radius of the open core. This form is valid for thin layers of porous stationary phase.
Let us recall the expression of the Knox Van Deemter equation:
$h_p = A\,u_p^{1/3} + \frac{B'}{u_p} + C\,u_p$ (2)
With for $C$ the expression proposed by [32, 33], for example:
$C = \frac{(1+k-z)^2}{30\,P_{part}\,(1-z)\,(1+k)^2}\,\frac{D_m}{D_p}$ (3)
In this case the reduced velocity of the mobile phase in the bed of particles $up$ is relative to the retention time of an unretained species. h[p] is the reduced height of a theoretical plate in the
bed of particles. P[part] is the tortuosity coefficient of the particle stacking, z is the fraction of mobile phase outside of the particle, D[p] is the intraparticle diffusivity, D[m] is the
diffusivity in the mobile phase.
2.2. Geometry simulated
Figure 1.
Detailed morphological structure of the multicapillary packing.
Figure 1 shows the geometry simulated: the packing is composed of a porous mass containing the stationary phase or being constituted by it. This porous mass can be coated with a stationary liquid or
gel of thickness e[c], or constituted by a mesoporous, high specific surface solid like silica gel or a polymeric, cross linked, gel.
Straight empty channels of diameter d[c] run through this mass and conduct the convective flow of mobile phase through the packing. The porous mass allows diffusion of sample molecules from each
convective channel to its neighbors.
2.3. Pressure drop comparison between capillary and particulate packings
The comparison of pressure drop in capillaries and particulates beds can be established based on Darcy’s law:
$\Delta P = K\,\frac{\mu\,L\,v}{d^2}$ (4)
where $\Delta P$ is the pressure difference between inlet and outlet, K is a permeability coefficient, 𝜇 the mobile phase viscosity, L the column length, d a characteristic diameter, and v the empty drum (superficial) velocity.
For present chromatographic beds made by stacking spherical beads, the permeability coefficient K is comprised between 500 to 800 [1]. For empty capillaries, we derived from Poiseuille’s law:
$K = \frac{32}{\varepsilon_c}$ (5)
where $\varepsilon_c$ is the volume fraction of the channels in the multicapillary bed. In practice, it can vary from 0.4 to 0.8, which means K can vary from 80 to 40 for multicapillary beds, respectively. As
discussed above, the much higher permeability of multicapillary beds stems from their simpler, straight, minimally dissipative flow pattern.
We found that the ratio R[p] of the pressure drops through particulate beds and multicapillary packings at similar diffusional path length, particle diameter and channel diameter is simply given by
the ratio of their permeability coefficients and can be expressed as follows:
$R_p = \frac{\Delta P_p}{\Delta P_c} = \frac{K_p}{K_c}$ (6)
We consider here, as a simplification, that the characteristic diffusional distance governing the mass transfer resistance is the particle diameter for particulate beds, and the channel diameter in
the multicapillary case.
Within the validity of those assumptions, at equivalent efficiency, the operating pressure of the multicapillary system is from 10 to 30 times lower than that of the particulate packing system
depending on the thickness of the stationary phase.
At equivalent HETPs, identical stationary phase, identical void fraction and identical velocities the operating pressure of the multicapillary system $ΔPc$ is 15 times lower than that of the
particulate packing system ΔP[p], more than one order of magnitude (ΔP[p]∕ΔP[c] = 14.6).
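A back-of-the-envelope check of this permeability argument (representative values only; the capillary coefficient K = 32/ε_c is the Poiseuille-derived value implied by the 80-to-40 range quoted above):

```python
# Order-of-magnitude comparison of particulate vs. multicapillary permeability
# coefficients (illustrative values, not measured data).
def capillary_K(eps_c):
    """Permeability coefficient of a multicapillary bed (Poiseuille)."""
    return 32.0 / eps_c

def pressure_ratio(K_particulate, eps_c):
    """Delta P (particles) / Delta P (capillaries) at equal d, mu, L, v."""
    return K_particulate / capillary_K(eps_c)

print(capillary_K(0.4), capillary_K(0.8))  # 80.0 40.0, as quoted in the text
print(pressure_ratio(600, 0.65))           # roughly an order of magnitude
```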
The multicapillary monoliths offers an additional advantage over beds of fully porous particles: the void fraction of the monoliths can be changed by an adjustment of the thickness of the porous
layer surrounding the channel. This may give them a kinetic behavior like that of core shell microspheres, or oppositely allow them to carry higher loads of molecules to purify.
The core shell packings have the advantage over fully porous particles of a better kinetic behavior, due to the shortened diffusional path in the outer particle layer and better packing density [34].
With multicapillary packings, the kinetic behavior can be freely improved by making the stationary phase layer coating the tubes as thin as necessary, keeping the advantage of a one order of
magnitude gain in pressure drop at equivalent efficiency or speed.
New ordered packings like micropillar arrays [11, 15] have been found efficient but lack in several respects the benefits of radial diffusional dampening, and are subject to wall effects. Micropillar
arrays are today limited in resolution to micrometric structures [11] and need specific, microflow equipment. Multicapillary arrays can be manufactured with submicrometric channels with simple
processes [35, 36, 37, 38], down to 0.2 μm diameter channels or less. This allows theoretically unsurpassed performances on achievable number of plates and analysis velocity with existing equipment.
2.4. Case of independent channels
As stated previously, the theoretical performance of multicapillary packing has been up to now limited by the difference in individual behavior of the distinct capillaries.
The state of the art in multicapillary packings consists of bundles of independent capillaries.
Figure 2 presents a scheme of their behavior. Independent channels behave as independent chromatographic columns with unequal output signals due to differences in channels diameters, stationary phase
loading, length, and aging. This unevenness can be considered by an additional variance of the output signal.
Figure 2.
Hydrodynamic behavior of an array of independant parallel capillary columns. The different residence times cause a dispersion of the mixed signal at the column outlet.
Among those sources of variance, the most sensitive is the channel diameter, as the velocity of the mobile phase varies with its square, and the flow rate varies with its fourth power.
Schisla et al. [12] gave an analysis of the chromatographic behavior of a multiplicity of independent channels. This study is based on the Golay equation and considers a normalized distribution of
several parallel capillaries of radii distributed according to a function g(R) and of equal lengths.
Writing the Golay equation in partial contributions with a polydispersity term gives:
$\langle\sigma^2\rangle = \langle\sigma^2\rangle_{axial\ diffusion} + \langle\sigma^2\rangle_{Taylor\text{–}Aris\ dispersion} + \langle\sigma^2\rangle_{stationary\ phase} + \langle\sigma^2\rangle_{polydispersity}$ (7)
The conclusion of the study by Schisla et al. is that the polydispersity of the channels is a potentially devastating effect, particularly near the optimum velocity.
The Schisla equation can be rewritten in terms of partial height contributions $h$, as usual:
$h = h_{axial\ diffusion} + h_{Taylor\text{–}Aris\ dispersion} + h_{stationary\ phase} + h_{polydispersity}$
Sidelnikov [13] studied the case of a capillary array with a random distribution of diameters. The capillaries are supposed to have a diameter constant along their length, and the diameters are
distributed according to a Normal Law of standard deviation $σd$. Sidelnikov proposes the following law:
$H = H_c + L\,\frac{\sigma_d^2}{d_c^2}\,\frac{[2+(3-\alpha)\,k]^2}{(1+k)^2}$ (8)
where $𝛼$ is a coefficient close to 1 in the present case.
Consequently, the number of equivalent theoretical plates (NETP) increases with length from L∕H[c] for short lengths up to a limiting value NETP[limiting] that cannot be exceeded.
H is the HETP of the array, H[c] is the HETP of the single average capillary, the third term is a HETP H[d] attributable to the diameter dispersion of the channels
$NETP_{limiting} = \frac{L}{H_d} = \left(\frac{\sigma_d}{d_c}\right)^{-2}\frac{(1+k)^2}{[2+(3-\alpha)\,k]^2}$ (9)
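Equation (9) can be illustrated numerically (values are illustrative): with independent channels, even a 1% relative dispersion of channel diameters caps an unretained solute at 2500 plates, regardless of column length:

```python
# Plate-count ceiling imposed by channel-diameter polydispersity in
# *independent* capillaries, Equation (9), with alpha taken as 1.
def netp_limit(sigma_d_over_dc, k=0.0, alpha=1.0):
    return sigma_d_over_dc ** -2 * (1 + k) ** 2 / (2 + (3 - alpha) * k) ** 2

print(netp_limit(0.01))   # 1% dispersion, k = 0: 2500 plates maximum
print(netp_limit(0.001))  # 0.1% dispersion: 250000 plates maximum
```

This is why diffusive bridging between channels, which relaxes this ceiling, matters so much in what follows.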
2.5. Theory of diffusional bridging effect
If the capillaries are not independent but instead communicate by diffusion, the behavior becomes different.
Figure 3 gives a scheme of this disposition. Molecular diffusion is allowed between channels through a porous wall supporting the stationary phase.
Figure 3.
Effect of diffusive bridging on an array of parallel capillary columns. The coupling levels the individual differences.
2.5.1. Previous theoretical works: the Giddings random walk considerations
The phenomenon considered in this theoretical work can be considered as a coupling between molecular diffusivity and the diffusion caused by mobile phase velocity inequalities, or eddy diffusivity.
Giddings [16] presents in his book theoretical guidelines for this aspect of band spreading in particulate beds. He considers that the exchange of mass between different convective flow paths can occur by mixing (leading to a partial HETP $h_f$) and by diffusional exchange (leading to a partial HETP $h_D$), both phenomena being coupled. Giddings formulates his reasoning by analyzing the path of a diffusing molecule as a random walk in a column.
The reasoning of Giddings is based on three equations (see also Khirevitch, Tallarek et al. [20]):
1. Definition of plate height as the ratio of the variance of the analyte zone to the distance traveled by the center of the band.
2. Einstein's mean-square-displacement formula, which can be expressed through a characteristic time $t_{mp}$ associated with a characteristic diffusion length $l$.
3. The relation for the variance $\sigma$ of the displacement of a random walker, where $L$ is the column length, $l$ is the average length of the random-walker step and $n$ is the number of steps.
Giddings quantifies the effect through three parameters: $\omega_\alpha$ is the distance between extreme flow velocities in the packing, measured in particle diameter ($d_p$) units; $\omega_\beta$ is the ratio between one flow extreme and the average flow; $\omega_\lambda$ is the persistence-of-velocity distance over which two independent flow paths persist before remixing, measured in $d_p$ units.
This derivation gives for the partial HETP term due to this coupling:
where $h_{disp}$ is the reduced overall HETP resulting from the coupling of $h_D$, the reduced height resulting from diffusion, and $h_f$, the reduced height resulting from flow (eddy) mixing.
Diffusive exchange leads to the plate height expression:
$h_D = \frac{\omega_\alpha^2\,\omega_\beta^2}{2}\cdot\frac{d_c^2\, v_c}{D}$ (14)
with $v_c$ being the true mobile phase velocity in the channel's core; flow exchange leads to a corresponding term $h_f$. Giddings distinguishes five scales, corresponding to five contributions $h_f$, $h_D$ to the overall HETP:
1. The transchannel contribution. In the case of Golay columns, this corresponds to the dispersion due to the laminar parabolic velocity profile, or Taylor-Aris term.
2. The transparticle effect. In the case of a multicapillary packing, this corresponds to a transchannel effect. The sources of zone spreading are the inhomogeneities between the individual channels, arising from differences in diameter, stationary phase loading, length, and aging.
3. The short-range interchannel effect. We will suppose that the capillaries are regularly distributed with only purely random variations in diameter and stationary phase thickness. This effect then disappears.
4. The long-range interchannel effect. Multicapillary structures are monolithic in nature and are not subject to packing procedures; this effect will therefore be supposed to have no validity.
5. The transcolumn effect. Multicapillary structures present no defect near column walls, as they have their own fluidic characteristics with their own stationary phase layer; this effect will therefore be supposed to have no validity.
Other variables can possibly be affected by a random variation:
• the channels’ averaged ratios of eluent to stationary phase,
• the length of the channels
• the differences in curvatures between channels,
• the channels’ internal surface imperfection or asperities
• other mechanical inhomogeneities or imperfections of the stationary phase.
Those different factors can themselves vary locally along the channel length. We have restricted this first study to the transchannel effect linked to random variations of channels’ diameter at
constant retention factor.
$\omega_\alpha$ is the distance between flow extremes over which molecular diffusion occurs, leading to the dispersion phenomena, measured here in $d_c$ units. In our case we will take it equal to 1, the distance separating two adjacent channels.
$\omega_\beta$ is the ratio between the extreme and average flow velocities, measured in $v_c$ units. The velocity in a channel is proportional to the square of its diameter. Neglecting second-order terms, we will take $\omega_\beta$ equal to twice the diameter relative standard deviation, $2\,\sigma_{rel}$.
$\omega_\lambda$ is the persistence-of-velocity distance over which two independent flow paths persist before remixing, measured in $d_c$ units. In our case the persistence distance is simply the column length, so $\omega_\lambda$ is equal to the reduced length $l = L/d_c$.
Those values must be taken as simple starting hypotheses, as the underlying phenomenon is in fact constituted of a much more complex arrangement of randomly disposed channels. A complete establishment of their probabilistic relevance has not been done. The problem of a grid of channels with random diameters exchanging mass by diffusion with six neighbors is of much higher mathematical complexity, and a control of the result is not obvious. We have found it more productive to first conduct virtual experiments by computer simulation. In addition, the transfer function of a twin channel has been derived analytically and used to validate the general physical findings and the simulation results.
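The assumption $\omega_\beta \approx 2\,\sigma_{rel}$ can be checked with a quick Monte-Carlo sketch (ours, not the paper's): at equal pressure drop the velocity in a channel scales as the square of its diameter, so the relative spread of velocities is about twice that of the diameters:

```python
import random
import statistics

random.seed(42)
sigma_rel = 0.02                                   # 2 % relative SD of diameter
d = [random.gauss(1.0, sigma_rel) for _ in range(200_000)]  # normalized diameters
v = [x * x for x in d]                             # equal pressure drop -> v ~ d^2
rel_sd_v = statistics.pstdev(v) / statistics.fmean(v)
print(rel_sd_v)                                    # close to 2 * sigma_rel = 0.04
```

For small $\sigma_{rel}$ the exact second-order result is $\sqrt{4\sigma_{rel}^2 + 2\sigma_{rel}^4}/(1+\sigma_{rel}^2)$, which the $2\,\sigma_{rel}$ approximation matches to well under a percent at the 2% level used here.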
With those omega values h[D] and h[f] are written:
$h_D = \alpha_{disp}\cdot 2\,\sigma_{rel}^2\cdot\frac{d_c^2\, v_c}{D_m}$ (16)
$\alpha_{disp}$ is a geometrical factor accounting for the cylindrical nature of the channel and the number of exchanging neighbors. In a hypothetical planar and parallel configuration of the channels, $\alpha_{disp} = 1$.
Reintegrating those values into (13), one obtains:
$h_{disp} = \dfrac{1}{\dfrac{D_m}{\alpha_{disp}\cdot 2\,\sigma_{rel}^2\, d_c^2\, v} + \dfrac{1}{4\,\sigma_{rel}^2\, l}}$ (18)
This last value agrees with (8).
2.5.2. Assessment of packing performance with a stationary phase layer
The influence of the mass transfer in the stationary phase and its coupling with the Giddings eddy-mass transfer theory has been examined by various authors [16, 17, 18, 19, 20].
According to Giddings [16], the eddy dispersion is only slightly influenced by retention factors.
Writing in reduced terms:
$h_d = 2\,\sigma_{rel}^2\left[\frac{1}{D\cdot u} + \frac{1}{2\cdot l}\right]^{-1} = A$ (19)
When, according to Giddings, the resistance to transfer in the stationary phase can be neglected, the dispersion term $D$ is written (20). According to Sidelnikov [11, 14] and (9), we can write:
$l = \frac{L}{d_c}\cdot\frac{4\,(1+k)^2}{[2+(3-\alpha)\cdot k]^2}$ (21)
2.5.3. Final equation
We suggest writing the modified Golay equation with an additional term $A$:
For the present numerical computations, in agreement with Giddings' work, we took $\alpha_{disp} = 1$.
3. Transfer function analysis
3.1. Determination of the transfer function
Since the pioneering work of Martin and Synge [39], the transfer function of chromatographic systems has been examined by various authors: Lapidus and Amundson [40], Van Deemter, Zuiderweg and Klinkenberg [41], and others.
The basic method consists in writing the mass balances characteristic of the system in the time domain and transferring them into the Laplace domain to find the expression of the transfer function.
This expression is inverted back into the time domain whenever possible, or its moments are directly computed from the Van der Laan relationship:
$\mu_k$ being the moment of order $k$ and $G$ the transfer function.
An analytical solution of the transfer function is in practice possible only for systems limited to two or three differential equations.
We will consider and limit the problem to two adjacent channels exchanging mass by diffusion with an exchange coefficient $k_{12}$ through a fraction $f$ of their circumference $a_{12}$. This geometry is represented in Figure 4. The axial diffusion $D_m$ is not considered in the analysis. This limits the ODE order to two and has no impact on the final results, as the different contributions are additive in the limit of a high number of plates.
Figure 4.
Mass Exchange scheme of the transfer function: twin tubes exchanging mass through an arbitrary fraction f of their circumference.
The differential equations to be solved are:
$S_1 u_1 \frac{\partial C_1}{\partial z} + S_1 \frac{\partial C_1}{\partial t} = k_{12}\, a_{12}\,(C_2 - C_1)$ (24)
$S_2 u_2 \frac{\partial C_2}{\partial z} + S_2 \frac{\partial C_2}{\partial t} = k_{12}\, a_{12}\,(C_1 - C_2)$ (25)
The index 1 relates to the first channel, the index 2 to the second channel. $S_i$ is the section of channel $i$, $u_i$ is the velocity of the fluid in channel $i$, and $C_i$ is the molar concentration in channel $i$.
In the Laplace domain, Equations (24) and (25) become:
$S_1 u_1 \frac{\partial \overline{C_1}}{\partial z} + S_1\, s\, \overline{C_1} = k_{12}\, a_{12}\,(\overline{C_2} - \overline{C_1})$ (26)
$S_2 u_2 \frac{\partial \overline{C_2}}{\partial z} + S_2\, s\, \overline{C_2} = k_{12}\, a_{12}\,(\overline{C_1} - \overline{C_2})$ (27)
Introducing $\gamma = 1/(k_{12}\, a_{12})$ (28) and expressing (26) and (27) as functions of $\overline{C_2}$ and $\overline{C_1}$:
$\overline{C_2} = \gamma\, S_1 u_1 \frac{\partial \overline{C_1}}{\partial z} + \gamma\, S_1\, s\, \overline{C_1} + \overline{C_1}$ (29)
$\overline{C_1} = \gamma\, S_2 u_2 \frac{\partial \overline{C_2}}{\partial z} + \gamma\, S_2\, s\, \overline{C_2} + \overline{C_2}$ (30)
From (27) and (29):
$u_1 u_2 \frac{\partial^2 \overline{C_1}}{\partial z^2} + \left(u_2\, s + \frac{u_2}{\gamma S_1} + u_1\, s + \frac{u_1}{\gamma S_2}\right)\frac{\partial \overline{C_1}}{\partial z} + \overline{C_1}\left(s^2 + s\left(\frac{1}{\gamma S_1} + \frac{1}{\gamma S_2}\right)\right) = 0$ (31)
which is an ODE of the second order with two distinct roots.
The discriminant $\Delta$ of (31) is:
$\Delta = \left[(u_1 - u_2)\, s + \frac{u_1}{\gamma S_2} - \frac{u_2}{\gamma S_1}\right]^2 + \frac{4\, u_1 u_2}{\gamma^2 S_1 S_2}$ (32)
$\Delta$ is always positive.
The roots are written:
$r_i = \dfrac{-\left(u_2\, s + \dfrac{u_2}{\gamma S_1} + u_1\, s + \dfrac{u_1}{\gamma S_2}\right) \pm \sqrt{\left[(u_1-u_2)\, s + \dfrac{u_1}{\gamma S_2} - \dfrac{u_2}{\gamma S_1}\right]^2 + \dfrac{4\, u_1 u_2}{\gamma^2 S_1 S_2}}}{2\, u_1 u_2}$ (33)
When $u_1$ or $u_2$ approaches zero, the only definite root is the one with a plus sign before the square root, noted $r_1$.
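The algebra of (31)-(33) can be spot-checked numerically (our sketch, not part of the paper): the closed form of $\Delta$ must coincide with $b^2 - 4ac$ of the quadratic in $r$, and it is manifestly positive, so both roots are always real:

```python
import random

random.seed(1)

def discriminant_closed(u1, u2, S1, S2, g, s):
    # Closed form of the discriminant, Eq. (32); g stands for gamma
    return ((u1 - u2) * s + u1 / (g * S2) - u2 / (g * S1)) ** 2 \
           + 4 * u1 * u2 / (g * g * S1 * S2)

max_err = 0.0
min_delta = float("inf")
for _ in range(1000):
    u1, u2, S1, S2, g, s = (random.uniform(0.1, 2.0) for _ in range(6))
    a = u1 * u2                                          # r^2 coefficient of (31)
    b = u2 * s + u2 / (g * S1) + u1 * s + u1 / (g * S2)  # r coefficient
    c = s * s + s * (1.0 / (g * S1) + 1.0 / (g * S2))    # constant term
    delta = discriminant_closed(u1, u2, S1, S2, g, s)
    max_err = max(max_err, abs(delta - (b * b - 4 * a * c)) / max(1.0, b * b))
    min_delta = min(min_delta, delta)
```

Over a thousand random parameter sets the closed form matches the quadratic discriminant to machine precision and never goes negative, confirming the statement above (33).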
We obtain for the transfer function of a column with a length L:
The first moment $\mu_1$ and the second centered moment $\mu_2'$ will be evaluated with the Van der Laan relationship in the particular case of a two-channel system. $S_i$ and $u_i$ are set apart from their average values by a negative and a positive standard deviation $\sigma$, assuming two channels of equal length and pressure drop:
$u_1 = v\,(1-\sigma)\quad S_1 = S\,(1-\sigma)\quad u_2 = v\,(1+\sigma)\quad S_2 = S\,(1+\sigma)$ (35)
We pose that the standard deviation of the circular channel's section is twice that of the channel's diameter. After straightforward manipulation of (23), (33), (34), (35), (38) and (39), we obtain the moments from the Van der Laan theorem. The retention time is identical for each channel. For the second moment:
$\mu_2 = \left(\frac{L}{v\,(1-\sigma^2)}\right)^2 + \frac{L\, S\, \sigma^2}{v\, k_{12}\, a_{12}}$ (38)
and for the corresponding central moment:
$\mu_2' = \frac{L\, S\, \sigma^2}{v\, k_{12}\, a_{12}}$ (39)
The $k_{12}$ term can be expressed with the help of the Sherwood number definition, $d$ being here the average channel diameter, and taking $a_{12}$ as a fraction $f$ of the channel's circumference. Taking into account that mass transfer occurs between two tubes flowing in laminar regime in series, we obtain from (36) and (37):
$\frac{S}{k_{12}\, a_{12}} = \frac{d^2}{2\, Sh\, f\, D_m}$ (42)
Dividing $\mu_2'$ by $\mu_1^2$ we get:
$\frac{\mu_2'}{\mu_1^2} = \frac{1}{N} = \frac{2\, u\, \sigma_d^2}{l\, f\, Sh}$ (43)
And for the reduced partial plate height of the dispersion phenomena (44). From comparison of (18) and (44) we get:
3.2. Domain of validity of the transfer coefficient
$k_{12}$ in (42) depends on the sum of the mass-transfer resistances of the mobile phase in tubes 1 and 2 and of the mass-transfer resistance of an eventual stationary phase in between. $k_{12}$ will approximate the resistance of the mobile phase when the mass-transfer resistance of the stationary phase can be neglected. This means that in most practical cases of retention factors, with significant surface diffusion on the stationary phase, in liquid-liquid partition chromatography, or for small $e_c/d_c$, formula (42) must give a good approximation of the losses due to channel dispersion. This agrees with Giddings' estimates.
Quantitatively, this can be estimated by comparing the transfer coefficient $k_m$ in the flowing mobile phase of diameter $d_c$ and the transfer coefficient $k_s$ in the stationary phase layer of thickness $e_c$, considering the partition coefficient $K_s$.
$K_s$ is related to the average stationary phase layer concentration, and $D_s$ is the effective diffusivity in the stationary phase layer. The effect of the interstitial material (see Figure 1) is neglected.
It follows:
If $k_s/k_m > 10$, the right part of (48) reduces to $k_m/2$. If we consider that $D_s = 0.1\,D_m$, $e_c = 0.1\,d_c$, and as a simplifying assumption $Sh_{Tube} = Sh_{Wall}$, Equation (49) is valid for $K_s > 10$. For $e_c = 0.1\,d_c$, this means that the retention factor $k$ must be greater than 2, a condition which is in general realized.
It must be underlined that this constitutes an overestimate, as the transfer to the stationary phase occurs partly in an unsteady-state manner. This will be the object of further work.
4. Modeling
4.1. Simulation methods and starting hypothesis
The system to model is an infinite array of parallel chromatographic capillary columns with randomly variable diameters.
An abundant literature describes the computation and simulation of chromatographic performance of beds of particles based on either purely mathematical or computational calculations [23, 24, 25, 26,
27, 42]. Computational models are based on different classical schemes, generally propagation through a grid, or an integrated set of differential equations. In this last category of computational
models are general rate models, lumped pore diffusion models, and equilibrium dispersive and transport dispersive models. The present modeling approach differs from previous attempts in that it
adopts as its starting point a discretized model based on the method of lines [43]. The method of lines consists in writing locally on a grid the partial differential equations to be integrated,
separating the space terms and the time-dependent terms. The space terms are calculated with the differential terms approximated by algebraic equations linearly interpolated from the grid values, and the integration in time is conducted by an ODE solver.
This allows us to rely on a purely physical integration of the first principles of chemical engineering, describing basic physical laws (diffusion, fluid dynamics and thermodynamics) at a very small scale where they can be assumed linear, in an explicit way that is easier to understand, correct, and debug. This requires minimal hypotheses. It relies on the discretization to build the integrated result, naturally taking into account the various effects such as Taylor-Aris dispersion, dominant mass transfer phenomena, and purely diffusional effects. The approach can provide quantitative and precise results with the lowest possible risk due to erroneous preliminaries.
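As a minimal sketch of this approach (ours; the paper's code is a full 3-D C++ model), a cascade of N well-mixed cells integrated explicitly in time reproduces the textbook tanks-in-series behavior: first moment $N\tau$, variance $N\tau^2$, i.e. N theoretical plates:

```python
N = 50            # number of CSTR cells in the cascade
tau = 0.1         # mean residence time of one cell (s)
dt = 1e-3         # explicit Euler time step, dt << tau
T = 25.0          # total simulated time (s)

C = [0.0] * N
C[0] = 1.0        # unit pulse injected into the first cell at t = 0
t = 0.0
m0 = m1 = m2 = 0.0
while t < T:
    out = C[-1] / tau                  # outlet flux of the last cell
    m0 += out * dt                     # zeroth, first and second moments
    m1 += t * out * dt                 # of the outlet peak (rectangle rule)
    m2 += t * t * out * dt
    for i in range(N - 1, 0, -1):      # explicit Euler, cells N-1 .. 1
        C[i] += dt * (C[i - 1] - C[i]) / tau
    C[0] += dt * (-C[0] / tau)         # cell 0 only empties
    t += dt

mu1 = m1 / m0                          # about N * tau
var = m2 / m0 - mu1 * mu1              # about N * tau**2
plates = mu1 * mu1 / var               # about N theoretical plates
```

The discretization error shows up as a small bias on the variance of order $dt/\tau$ per cell, which is the same kind of numerical-dispersion control discussed in Section 4.3.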
Our aim will be to check the diffusional bridging phenomena limited to the resistance contribution of the channels' cores. Two resistances can be expected to limit the mass transfer between adjacent channels: the mobile phase flowing in the core channel and the stationary phase separating them. Sections 2.5.2 and 3 above discussed this limitation. We will focus on the analysis of the effect of the mobile phase resistance, which must be the predominant one for practical cases of retention factors, diffusivities, and stationary phase thicknesses. To study the basis of the effect of radial diffusion on column efficiency, the simplest geometry is sufficient, and we will restrict the analysis to the peak at zero retention (residence time distribution) of a bundle of capillaries without stationary phase accumulation and with channels directly exchanging mass.
4.2. Mass balances and initial and boundary conditions
Channels are considered circular with a hexagonal arrangement. Mass is exchanged through one sixth of their circumference with each of the six adjacent channels. The channel diameter is assumed to be
uniform along each capillary, and the individual values vary according to a normal law with a standard deviation 𝜎[d] around a mean value d[c]. The array of channels is discretized along its length
in N equidistant slices defining the cells (Figure 5).
Figure 5.
Axial (a) and radial (b) discretization scheme of the multicapillary array.
The model is a cascade of continuously stirred tank reactors (CSTRs) (one CSTR of molar accumulation X per cell) (Figure 5 (a)).
The capillary array is arranged in a regular geometry with a hexagonal pattern (Figure 5 (b)). The side effect of a limited size system on corners (two neighbors) and sides (three neighbors) are
taken into consideration. The overall simulated system is approximately a square of N × N channels.
N ∗ (ComponentNumber − 1) mass balances (one for each cell) are written in differential form:
Ordinary differential equation set
$\frac{dX^n_{l,i,j}}{dt} = F_{l-1,i,j}\, C^n_{l-1,i,j} - F_{l,i,j}\, C^n_{l,i,j} + D_{axial}\,(C^n_{l-1,i,j} - C^n_{l,i,j})\,\frac{S_{axial,i,j}}{l} + D_{axial}\,(C^n_{l+1,i,j} - C^n_{l,i,j})\,\frac{S_{axial,i,j}}{l} + \sum_{m=1}^{6} k_{radial,l,i_m,j_m}\,(C^n_{l,i_m,j_m} - C^n_{l,i,j})\,S_{radial,l,i_m,j_m}$ (50)
$X$ is the molar accumulation in one cell, $F$ is the convective flow, $C$ is the molar concentration, $D$ is the diffusion coefficient, $S$ is the exchange surface, $l$ is the effective diffusional distance, and $k_{radial}$ is the radial exchange coefficient. Index $l$ is the rank of the cell in the axial direction, indices $i$ and $j$ are the algebraic coordinates of the cell in each slice, and $n$ is the component index.
The initial and boundary conditions are:
$t \in ]0,+\infty[:\ F_{l,i,j} = F_{0,i,j}$
$t \in ]-\infty,0[:\ X^n_{l,i,j} = X^n_{0,i,j},\ X^0_{0,i,j} = a_{i,j},\ X^{n>0}_{0,i,j} = 0$
$t = 0:\ X^0_{0,i,j} = 0,\ X^{n>0}_{0,i,j} = b_n,\ X^0_{l>0,i,j} = a_{i,j},\ X^{n>0}_{l>0,i,j} = 0$
$t \in ]0,+\infty[:\ X^0_{0,i,j} = a_{i,j},\ X^{n>0}_{0,i,j} = 0$
$(D_{axial})_{x=0} = 0,\ (D_{axial})_{x=l} = 0$ (51)
The pressure drop is imposed and identical for every channel, the convective flow in each channel varies accordingly.
$k_{radial}$ is defined by the following equation:
$k_{radial} = \frac{D^{eff}_{radial}}{e^{eff}_{radial}}$ (52)
where $e^{eff}_{radial}$ is the effective diffusion length. It is fitted by considering a Sherwood number equal to 3.66 for a laminar cylindrical flow, as usually done in chemical engineering:
$k_{radial} = \frac{D^{eff}_{radial}\cdot 3.66}{d_c}$ (53)
Each individual channel diameter is calculated according to data calculated by a pseudorandom algorithm generating a normal distribution.
The molar volume of the mixture of solvent and analytes is assumed to be equal to the molar volume of the pure mobile phase, which means that only the case of small molecules (molar weight < 500 g/mol) is considered in this first study.
4.3. Numerical parameters adjustments
Three main sources of error must be considered.
• The discretization grid
• The time step
• The arrays dimension (number of channels N × N).
The dynamic simulation has been conducted on a meshed model of an array of 41 × 41 capillary columns. This dimension has been chosen after a sensitivity study of this parameter. The simulation results are fully stabilized for 41 × 41 grids, with a difference from the asymptotic infinitely wide array of less than 2%.
The elementary cell volume axial thickness is taken to be equal to a fraction of the final HETP (10 to 20%) in order to limit numerical dispersion effects to less than 2% on the final measured
asymptotic value. The mobile phase is considered as a molar accumulation in each cell.
The differential equations are integrated with an explicit Runge-Kutta algorithm with a sufficiently small time step to avoid numerical instabilities and inaccuracy, according to the method of lines. In preliminary testing we found the algorithm to be stable when the cell accumulation exceeds the convective flow through one cell during one time step by a factor of at least 20, a criterion analogous to a Courant number. In practice, we typically used time steps of 1 × 10^−4 s for simulating 10 μm channels. In this case, the simulation result is overestimated by less than 2% with respect to the limiting case of an infinitely small time step.
4.4. Software testing
Several tests have been done on the final code.
• The unsteady-state radial diffusivity from the central channel of the grid has been successfully matched with the Fourier heat equation, with a difference on the standard deviation of the signal better than 5%.
• The first and second moments of the theoretical transfer function of a single input–single output (SISO) two channels system have been matched with results from a two channels simulation with a
difference lower than 4.0%.
4.5. Hardware
The computation was conducted on an Intel Core i7-6700K (4 cores/8 threads) at 4 GHz. The average duration of a run was 30 min.
The code was written in object-oriented C++ on the Microsoft Visual Studio 2010 platform.
5. Numerical results
5.1. Case of independent channels
The following table (Table 1) gives the evaluation of the number of theoretical plates achievable in the case of independent, unbridged, channels, for random distribution of their individual
diameters and for increasing relative standard deviation.
Table 1.
Numerical values of NETP[limiting] calculated according to (9)
RSD = $\sigma_r/r_0$ (%) k = 0 k = 5
0.5 10000 4981
10 25 12.5
The quantitative results of (8) and (9) are well correlated with the simulation findings reported in Figure 6, as seen from their fit with the theoretical asymptotic values of Table 1.
Figure 6.
Simulation of the limitation of NETP due to dispersive phenomena in non diffusive multicapillary array; channel diameter d[c], 10 μm; mobile phase velocity v, 940 μm/s.
Table 2.
Theoretical partial height of a multicapillary array with diffusional bridging from Giddings' RW theory, as a function of the relative standard deviation of channel diameter
𝜎[rel] (%) H[D] (μm) H[f] (μm) H[disp] (μm) H (μm) % Dispersive
0 0 0 0 3.11 0
1 0.0188 40.02 0.0188 3.1288 0.601%
2 0.0752 160.09 0.0752 3.1852 2.36%
5 0.47 1000.55 0.47 3.58 13.1%
10 1.88 4002.20 1.88 4.99 37.7%
20 7.52 16008.81 7.52 10.63 70.8%
Average channel diameter d[c] 10 μm; average velocity v 940 μm/s; column length L 100 mm; diffusion coefficient in mobile phase D[m] 1 × 10^−9 m^2/s; 𝛼[disp] = 1.
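Table 2 can be reproduced from (16) together with the coupling relation (18) (our sketch; consistently with the tabulated $H_f$ column, we assume a flow term $H_f = 4\,\sigma_{rel}^2\, L$, which is what (18) implies in absolute units):

```python
dc, v, Dm, L = 10e-6, 940e-6, 1e-9, 0.1   # Table 2 conditions, SI units
Hc = 3.11e-6                               # HETP of the perfect array (sigma = 0)

def heights(sigma_rel, alpha_disp=1.0):
    """Partial and total plate heights of the bridged array (Table 2)."""
    HD = alpha_disp * 2 * sigma_rel ** 2 * dc ** 2 * v / Dm   # Eq. (16)
    Hf = 4 * sigma_rel ** 2 * L                               # assumed flow term
    Hdisp = 0.0 if sigma_rel == 0 else 1 / (1 / HD + 1 / Hf)  # coupling, Eq. (18)
    return HD, Hf, Hdisp, Hc + Hdisp

# sigma_rel = 10 %: HD ~ 1.88 um, Hf ~ 4000 um, total H ~ 4.99 um
```

Because $H_f \gg H_D$ at these conditions, the coupled $H_{disp}$ is essentially equal to $H_D$, which is why the $H_D$ and $H_{disp}$ columns of Table 2 coincide.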
A relative standard deviation of 2% can be considered achievable: it is the standard value for commercial threads of artificial textiles. This caps the efficiency at only 625 to 300 plates, which is very modest for analytical and preparative purposes, where values from 5000 to 50000 plates are commonly achieved with present particulate packings. A relative standard deviation of 0.5% would be required to reach 5000 theoretical plates, which is the best performance obtained in monofilament stretching for the optical fiber industry. For a 5 μm channel, this represents 25 nm of average tolerance, in the size range of one nanoparticle. For the smaller channels required to take advantage of the superior characteristics of multicapillary packings in terms of pressure drop in analytical HPLC, the tolerance would become close to molecular dimensions.
5.2. Effect of diffusional bridging
5.2.1. Theoretical computations from Giddings RW analysis
Table 2 below reports the values of theoretical plate heights computed by (16) and (17) in the simplest case of virtual channels with separating walls having negligible mass transfer resistance and no retention characteristics.
The main observation is that the loss in efficiency induced by the channels' diameter dispersion remains a moderate fraction of the total height at low relative standard deviations (RSD).
Table 3.
Correlation of theoretical dispersion height h[D] ((16) and (17)) of multicapillary packings with diffusional bridging, given by RW Theory with simulated values
d[c] (m) RSD v (m/s) D[m] (m^2/s) Theor. h[d] (μm) Sim. h[d] (μm)
1 × 10^−5 0.01 9.40 × 10^−4 1 × 10^−9 0.0188 0.0168
1 × 10^−5 0.02 9.40 × 10^−4 1 × 10^−9 0.0752 0.0714
1 × 10^−5 0.05 9.40 × 10^−4 1 × 10^−9 0.47 0.4636
1 × 10^−5 0.1 9.40 × 10^−4 1 × 10^−9 1.88 1.9431
1 × 10^−5 0.05 4.70 × 10^−4 1 × 10^−9 0.235 0.1967
1 × 10^−5 0.05 1.88 × 10^−3 1 × 10^−9 0.94 0.8997
5 × 10^−6 0.1 9.40 × 10^−4 1 × 10^−9 0.47 0.4780
2 × 10^−5 0.025 9.40 × 10^−4 1 × 10^−9 0.47 0.4946
1 × 10^−5 0.05 9.40 × 10^−4 5 × 10^−10 0.94 0.8997
1 × 10^−5 0.05 9.40 × 10^−4 2 × 10^−9 0.235 0.1967
5.2.2. Comparison of simulated and random walk (RW) results
Table 3 reports the results of our simulation tool and the corresponding partial height of dispersion computed according to (16) and (17) with $\alpha_{disp}$ = 1 under different conditions, namely by varying:
• The channel's diameter RSD at 1, 2, 5, 10%
• The diffusion coefficient D[m] at 0.5 × 10^−9, 1.0 × 10^−9, 2.0 × 10^−9 m^2/s
• The mobile phase velocity v at 470, 940, 1880 μm/s
• The channel diameter d[c] at 5, 10, 20 μm.
All runs use the same column length of 1 mm.
Table 4.
Correlation of theoretical dispersion height h[D] (Equation (44)) from the transfer function of a twin channel geometry with simulated values; Sh = 3.66; f = 1∕6
d[c] (m) RSD v (m/s) D[m] (m^2/s) Theor. h[D] (μm) Sim. h[D] (μm)
0.00001 0.01 0.00094 1 × 10^−9 0.031 0.031
0.00001 0.02 0.00094 1 × 10^−9 0.123 0.122
0.00001 0.05 0.00094 1 × 10^−9 0.770 0.747
0.00001 0.05 0.00094 2 × 10^−9 0.385 0.325
0.00001 0.05 0.00047 1 × 10^−9 0.385 0.365
0.00001 0.05 0.00188 1 × 10^−9 1.541 1.59
The relation between theoretical and simulated data is linear with a correlation coefficient R^2 of 0.9976. The slope shows less than 3% deviation from perfect equality.
5.2.3. Comparison of simulated and transfer function results
Table 4 reports the results of the simulation of the twin-channel configuration studied in Section 3 and the corresponding partial height of dispersion computed according to (44) under different conditions, namely by varying:
• The channel's diameter RSD at 1, 2, 5, 10%
• The mobile phase velocity v at 470, 940, 1880 μm/s
• The diffusion coefficient D[m] at 1 × 10^−9 and 2 × 10^−9 m^2/s.
All runs use the same column length of 1 mm.
The relation between theoretical and simulated data is linear with a correlation coefficient R^2 of 0.998. The slope shows less than 1% deviation from perfect equality. The simulated twin-tube h[disp] is in 61% excess over the 41 × 41 channel array value, a factor equal to f⋅Sh.
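The theoretical column of Table 4 follows from (43) with Sh = 3.66 and f = 1/6 (our sketch; we take $u = v\,d_c/D_m$ and $l = L/d_c$ as the reduced velocity and length, consistent with Section 2.5):

```python
SH, F, L = 3.66, 1.0 / 6.0, 1e-3        # Sherwood number, exchange fraction, length (m)

def h_d_twin(dc, rsd, v, Dm):
    """Dispersion plate height of the twin-channel model, Eq. (43)."""
    u = v * dc / Dm                      # reduced velocity
    l = L / dc                           # reduced column length
    inv_N = 2 * u * rsd ** 2 / (l * F * SH)   # 1/N from Eq. (43)
    return L * inv_N                     # H_d = L / N

# First row of Table 4: dc = 10 um, RSD = 1 %, v = 940 um/s, Dm = 1e-9 m^2/s
# gives about 0.031 um.
```

The quadratic scaling in RSD and the linear scalings in v and 1/D_m visible in Table 4 follow directly from this expression.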
Figure 7 summarizes the fundamental difference in behavior between systems with and without diffusional bridging. Diffusional bridging restores the chromatographic functionality of the multicapillary array.
Figure 7.
NETP vs length of multicapillary packings with diffusive or non diffusive walls d[c] = 10 μm; v = 940 μm∕s; D[m] = 1 × 10^−9 m^2∕s; RSD 5%; e[c] = 0; k = 0.
NETP values obtained for channel lengths up to 25 mm with σ = 5% of the average diameter showed linearity up to at least 5000 plates. No deviation from linearity has been noted.
The agreement of the theoretical results, the transfer function results, and the simulation results is surprisingly good in view of the simplicity of the starting hypotheses of the theoretical interpretation. The three models thus support each other very efficiently. Their very similar numerical predictions lead to good confidence in the quantitative results of this study.
5.3. Discussion
Several conclusions can be drawn from the analysis of (20) and of this numerical example:
• First, the previous limitation attached to the Shisla-Sidelnikov formula disappears. Due to the coupling between eddy diffusion and transverse molecular diffusivity, the height of a theoretical plate is reduced at short column lengths and tends to a constant value for infinite column lengths. The NETP increases linearly with the length, and the separative power of chromatography is restored.
• A constant loss of efficiency with respect to the single average column occurs: the polydispersity term in the overall variance forecast by the Shisla analysis. This correction is strongly dependent on the channels' diameter dispersity and increases as the square of the RSD.
• The question arises as to what relative dispersity can reasonably be expected. For ordinary textile threads, the usual relative standard deviation is 2%. The loss of efficiency attributable to the random distribution of channel diameters will in this case be lower than 10%. The counterpart is a slightly higher pressure drop, without changing the order of magnitude of this advantage.
• For very irregular distributions of channel diameters, the loss in NETP can be compensated by an increase in column length or a decrease in channel diameter. Due to the large potential gain in pressure drop, the solution remains attractive.
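The contrast summarized in these conclusions can be illustrated with a short sketch (ours) under the Figure 7 conditions: without bridging the NETP saturates near $1/(4\sigma_{rel}^2)$, while with bridging it keeps growing essentially linearly with length:

```python
dc, v, Dm, srel = 10e-6, 940e-6, 1e-9, 0.05   # Figure 7 conditions (RSD 5 %)
Hc = 3.11e-6                                   # HETP of the average channel
HD = 2 * srel ** 2 * dc ** 2 * v / Dm          # bridged dispersion height, Eq. (16)

def netp(L, bridged):
    """NETP of the array at column length L (m), with or without bridging."""
    Hf = 4 * srel ** 2 * L                     # unbridged dispersion grows with L
    Hdisp = 1 / (1 / HD + 1 / Hf) if bridged else Hf
    return L / (Hc + Hdisp)

# bridged: NETP grows ~10x when L grows 10x; unbridged: capped near 1/(4*srel**2) = 100
```

This is exactly the qualitative behavior of Figure 7: the unbridged curve flattens at the Shisla-Sidelnikov ceiling, the bridged curve does not.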
This suggests that, provided there is suitable diffusional bridging between capillaries, multicapillary packings can deliver the throughput capacity and efficiency of conventional particulate packings. Given that multicapillary arrays have a pressure drop lower by one order of magnitude, they should be a superior candidate for separation applications.
6. Conclusion
Starting from a short theoretical basis, we have used computer simulation to study quantitatively the behavior of multicapillary arrays with statistical variation in their diameters. In the absence
of interchannel radial diffusion, only a slight statistical dispersion between channel dimensions induces a purely hydrodynamic limitation of the efficiency of the packing in terms of NETP. This
effect and its magnitude are verified by computer simulation. A 2% standard deviation limits the NETP to a value of 625. This limit cannot be exceeded even with increased packing length. This effect
has to date precluded the use of these potentially powerful and very low-pressure-drop systems in analytical and preparative chromatography.
Our theoretical and simulated results show that superimposing a radial diffusive term between adjacent channels, or diffusional bridging, removes this limitation. In this case, the multicapillary
array behaves like particulate packing, with an NETP increasing linearly with packing length. In the case where the multicapillary array has comparable effective diffusivity to classical LC porous
stationary phases (i.e. silica gel, PS-DVB gels), this effect is strong enough that the efficiency loss due to inhomogeneity in the capillary diameters, which is catastrophic for a non-diffusive
array, becomes negligible for typical standard deviations in the capillary diameter. The excellent numerical agreement between the purely mathematical transfer function of a twin-tube system and its simulation results, without the need for any adjustable parameter, constitutes sound proof of the existence and order of magnitude of this diffusional bridging effect.
The 5 to 10% loss in resolving power can be compensated by an increased packing length. Due to the large gain in pressure drop over conventional packings, even very imperfect arrays, which are easier to manufacture, will still show an advantage in operation.
This result could have numerous practical consequences for chromatography and chemical engineering, stemming from the fact that at an identical characteristic dimension (particle diameter versus
channel diameter), the pressure drop in multicapillary packing is one order of magnitude lower.
In LC-UHPLC, analytical chromatography has reached its technological limit by decreasing particle sizes to 1.7 μm and working at extremely high pressures, up to 1500 bar. Diffusive multicapillary packing could be an important advance in this field. With multicapillary packing and for a given available pressure drop, the achievable efficiency in LC can increase by one order of magnitude, approaching with standard instruments the 100000 plates of Golay columns. The analysis time can decrease by one order of magnitude, making control ultrafast. The high flow rate of the multicapillary structure allows the use of the existing range of detectors, injectors, and pumps.
In industrial separation, this approach will allow chromatography to be conducted with high efficiency using low-pressure pumps, injectors, lines, and other fluidic appliances. Investment and operating costs will be reduced by a significant factor.
The difficulty of stabilizing large particulate beds in columns disappears naturally due to the monolithic and rigid nature of multicapillary packings [28]. Multicapillary packings may be used as modules in parallel or in series for achieving large-scale separations.
|
{"url":"https://comptes-rendus.academie-sciences.fr/chimie/articles/en/10.5802/crchim.37/","timestamp":"2024-11-05T20:00:27Z","content_type":"text/html","content_length":"203020","record_id":"<urn:uuid:064a9c51-a00d-4014-a603-b46339138c57>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00891.warc.gz"}
|
साधारण ब्याज कैलकुलेटर
Byaj Calculator Online: Calculate Simple Interest Easily
A Byaj Calculator Online is a tool for calculating simple interest. Simple interest is the interest earned on an investment or paid on a loan, computed from the principal amount, the interest rate, and the length of time.
To use a Byaj Calculator Online, you will need to enter the following information:
• The principal amount
• The interest rate
• The length of time
Once you have entered this information, the calculator will calculate the simple interest and display the results.
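The arithmetic behind such a tool is the standard simple-interest formula SI = P × R × T / 100. A minimal sketch, assuming an annual rate quoted in percent (the function name and figures are illustrative):

```python
# Simple interest from principal, annual rate in percent, and time in years.
def simple_interest(principal, annual_rate_percent, years):
    """Return (interest, total amount) under simple interest."""
    interest = principal * annual_rate_percent * years / 100
    return interest, principal + interest

# Example: 10,000 at 7.5% per year for 3 years.
interest, total = simple_interest(10000, 7.5, 3)
print(interest)  # 2250.0
print(total)     # 12250.0
```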
Online byaj calculators are a convenient way to calculate simple interest. They are also accurate, as they use the same formulas that banks and other financial institutions use.
Here are some of the benefits of using a Byaj Calculator Online:
• They are easy to use.
• They are accurate.
• They are free to use.
If you need to calculate simple interest, a Byaj Calculator Online is a great option: easy to use, accurate, and free.
Here are some of the websites that offer Byaj Calculator Online:
These websites offer a variety of features, such as the ability to calculate simple interest in different currencies, with different interest rates, and for different lengths of time.
|
{"url":"https://interestcalculator.gkfriend.com/","timestamp":"2024-11-08T12:12:47Z","content_type":"text/html","content_length":"12801","record_id":"<urn:uuid:ef07b140-4847-4dd3-8833-2b8842ca0af3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00844.warc.gz"}
|
Diffraction by a half-plane immersed in a moving anisotropic plasma
The boundary value problem of plane-wave diffraction by a conducting half-plane immersed in a moving anisotropic plasma is solved, using the method of diffraction functions. The E-polarization case
yields solutions similar to that of a half-plane in a stationary medium and does not give any new features. The H-polarization problem is solved with the assumption that the plasma motion is
nonrelativistic. The solution shows that under these circumstances the half-plane scatters as if its surfaces have an equivalent surface reactance. The solution then contains a surface wave
propagating along both the half-plane surfaces.
International Journal of Electronics
Pub Date: February 1976
Keywords: Anisotropic Media; Boundary Value Problems; Half Planes; Magnetohydrodynamic Flow; Plasma-Electromagnetic Interaction; Wave Diffraction; Electric Conductors; Electromagnetic Scattering; Electromagnetic Surface Waves; Plane Waves; Polarization Characteristics; Communications and Radar
|
{"url":"https://ui.adsabs.harvard.edu/abs/1976IJE....40..137T/abstract","timestamp":"2024-11-10T15:20:35Z","content_type":"text/html","content_length":"35039","record_id":"<urn:uuid:4066f927-e68c-4ca1-8ac8-6127a6111772>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00337.warc.gz"}
|
Temporal Logic
First published Mon Nov 29, 1999; substantive revision Thu Feb 7, 2008
The term Temporal Logic has been broadly used to cover all approaches to the representation of temporal information within a logical framework, and also more narrowly to refer specifically to the
modal-logic type of approach introduced around 1960 by Arthur Prior under the name of Tense Logic and subsequently developed further by logicians and computer scientists.
Applications of Temporal Logic include its use as a formalism for clarifying philosophical issues about time, as a framework within which to define the semantics of temporal expressions in natural
language, as a language for encoding temporal knowledge in artificial intelligence, and as a tool for handling the temporal aspects of the execution of computer programs.
1.1 Tense Logic
Tense Logic was introduced by Arthur Prior (1957, 1967, 1969) as a result of an interest in the relationship between tense and modality attributed to the Megarian philosopher Diodorus Cronus (ca.
340-280 BCE). For the historical context leading up to the introduction of Tense Logic, as well as its subsequent developments, see Øhrstrøm and Hasle, 1995.
The logical language of Tense Logic contains, in addition to the usual truth-functional operators, four modal operators with intended meanings as follows:
P “It has at some time been the case that …”
F “It will at some time be the case that …”
H “It has always been the case that …”
G “It will always be the case that …”
P and F are known as the weak tense operators, while H and G are known as the strong tense operators. The two pairs are generally regarded as interdefinable by way of the equivalences Pp ≡ ¬H¬p and Fp ≡ ¬G¬p.
On the basis of these intended meanings, Prior used the operators to build formulae expressing various philosophical theses about time, which might be taken as axioms of a formal system if so
desired. Some examples of such formulae, with Prior's own glosses (from Prior 1967), are:
Gp→Fp “What will always be, will be”
G(p→q)→(Gp→Gq) “If p will always imply q, then if p will always be the case, so will q”
Fp→FFp “If it will be the case that p, it will be — in between — that it will be”
¬Fp→F¬Fp “If it will never be that p then it will be that it will never be that p”
Prior (1967) reports on the extensive early work on various systems of Tense Logic obtained by postulating different combinations of axioms, and in particular he considered in some detail what light a
logical treatment of time can throw on classic problems concerning time, necessity and existence; for example, “deterministic” arguments that have been advanced over the ages to the effect that “what
will be, will necessarily be”, corresponding to the modal tense-logical formula Fp→□Fp.
Of particular significance is the system of Minimal Tense Logic K[t], which is generated by the four axioms
p→HFp “What is, has always been going to be”
p→GPp “What is, will always have been”
H(p→q)→(Hp→Hq) “Whatever has always followed from what always has been, always has been”
G(p→q)→(Gp→Gq) “Whatever will always follow from what always will be, always will be”
together with the two rules of temporal inference:
RH: From a proof of p, derive a proof of Hp
RG: From a proof of p, derive a proof of Gp
and, of course, all the rules of ordinary Propositional Logic. The theorems of K[t] express, essentially, those properties of the tense operators which do not depend on any specific assumptions about
the temporal order. This characterisation is made more precise below.
Tense Logic is obtained by adding the tense operators to an existing logic; above this was tacitly assumed to be the classical Propositional Calculus. Other tense-logical systems are obtained by
taking different logical bases. Of obvious interest is tensed predicate logic, where the tense operators are added to classical First-order Predicate Calculus. This enables us to express important
distinctions concerning the logic of time and existence. For example, the statement A philosopher will be a king can be interpreted in several different ways, such as
∃x(Philosopher(x) & F King(x)) Someone who is now a philosopher will be a king at some future time
∃xF(Philosopher(x) & King(x)) There now exists someone who will at some future time be both a philosopher and a king
F∃x(Philosopher(x) & F King(x)) There will exist someone who is a philosopher and later will be a king
F∃x(Philosopher(x) & King(x)) There will exist someone who is at the same time both a philosopher and a king
The interpretation of such formulae is not unproblematic, however. The problem concerns the domain of quantification. For the last two formulae above to bear the interpretations given to them, it
is necessary that the domain of quantification is always relative to a time: thus in the semantics it will be necessary to introduce a domain of quantification D(t) for each time t. But this can lead
to problems if we want to establish relations between objects existing at different times, as for example in the statement “One of my friends is descended from a follower of William the Conqueror”.
These problems are related to the so-called Barcan formulae of modal logic, a temporal analogue of which is
F∃xp(x)→∃xFp(x) (“If there will be something that is p, then there is now something that will be p”)
This formula can only be guaranteed to be true if there is a constant domain that holds for all points in time; under this assumption, bare existence (as expressed by the existential quantifier) will
need to be supplemented by a temporally restricted existence predicate (which might be read 'is extant') in order to refer to different objects existing at different times. For more on this and
related matters, see van Benthem, 1995, Section 7.
1.2 Extensions to Tense Logic
Soon after its introduction, the basic “PFGH” syntax of Tense Logic was extended in various ways, and such extensions have continued to this day. Some important examples are the following:
The binary temporal operators S and U (“since” and “until”). These were introduced by Kamp (1968). The intended meanings are
Spq “q has been true since a time when p was true”
Upq “q will be true until a time when p is true”
It is possible to define the one-place tense operators in terms of S and U as follows:
Pp ≡ Sp(p∨¬p)
Fp ≡ Up(p∨¬p)
The importance of the S and U operators is that they are expressively complete with respect to first-order temporal properties on continuous, strictly linear temporal orders (which is not true for
the one-place operators on their own).
Metric tense logic. Prior introduced the notation Fnp to mean “It will be the case the interval n hence that p”. We do not need a separate notation Pnp, since we can write F(-n)p for “It was the case
the interval n ago that p”. The case n=0 gives us the present tense. We can define the general, non-metric operators by
Pp ≡ ∃n(n<0 & Fnp)
Fp ≡ ∃n(n>0 & Fnp)
Hp ≡ ∀n(n<0→Fnp)
Gp ≡ ∀n(n>0→Fnp)
The “next time” operator O. This operator assumes that the time series consists of a discrete sequence of atomic times. The formula Op is then intended to mean that p is true at the immediately
succeeding time step. Given that time is discrete, it can be defined in terms of the “until” operator U by
Op ≡ Up(p&¬p)
which says that p will be true at some future time, between which and the present time nothing is true. This can only mean the time immediately following the present in a discrete temporal order.
In discrete time, the future-tense operator F is related to the next-time operator by the equivalence
Fp ≡ Op ∨ OFp.
Indeed, F can here be defined as the least fixed point of the transformation which maps an arbitrary propositional operator X onto the operator λp.Op∨OXp.
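In a finite discrete frame, this fixed-point characterisation yields a simple iterative computation of the times satisfying Fp: start from the empty set and repeatedly add every time whose immediate successor satisfies p or is already in the set. The encoding below (a successor map over named times) is an illustrative choice, not a standard one.

```python
# Least-fixed-point computation of Fp in a finite discrete temporal frame,
# following Fp == Op v OFp: t satisfies Fp iff its immediate successor
# satisfies p, or itself already satisfies Fp.

def eventually(p_times, successor):
    """Return the set of times at which Fp holds.

    p_times   -- times at which the atomic proposition p is true
    successor -- maps each time to its immediate successor (if any)
    """
    f_times = set()
    changed = True
    while changed:
        changed = False
        for t, nxt in successor.items():
            if t not in f_times and (nxt in p_times or nxt in f_times):
                f_times.add(t)
                changed = True
    return f_times

# Discrete linear time 0 -> 1 -> 2 -> 3, with p true only at time 3.
successor = {0: 1, 1: 2, 2: 3}
print(sorted(eventually({3}, successor)))  # [0, 1, 2]
```

Note that time 3 itself is excluded: F is a strictly future operator, and 3 has no successor.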
One could similarly define a past-time version of O; but since the main usefulness of this particular operator has been in relation to the logic of computer programming, where one is mainly
interested in execution sequences of programs extending into the future, this has not so often been done.
1.3 Semantics of Tense Logic
The standard model-theoretic semantics of Tense Logic is closely modelled on that of Modal Logic. A temporal frame consists of a set T of entities called times together with an ordering relation < on
T. This defines the “flow of time” over which the meanings of the tense operators are to be defined. An interpretation of the tense-logical language assigns a truth value to each atomic formula at
each time in the temporal frame. Given such an interpretation, the meanings of the weak tense operators can be defined using the rules
Pp is true at t if and only if p is true at some time t′ such that t′<t
Fp is true at t if and only if p is true at some time t′ such that t<t′
from which it follows that the meanings of the strong operators are given by
Hp is true at t if and only if p is true at all times t′ such that t′<t
Gp is true at t if and only if p is true at all times t′ such that t<t′
We can now provide a precise characterisation of system K[t] of Minimal Tense Logic. The theorems of K[t] are precisely those formulae which are true at all times under all interpretations over all
temporal frames.
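The four semantic clauses above translate directly into a small model checker over a finite temporal frame. The encoding below (nested tuples for formulas, a set of pairs for the earlier-than relation) is an illustrative sketch, not a standard library.

```python
# A minimal model checker for Priorean tense logic over a finite frame.
# Formulas are nested tuples such as ("G", ("P", "p")); the frame is the
# set of pairs `earlier`, where (t1, t2) means t1 < t2.

def true_at(formula, t, earlier, valuation):
    """Return True iff `formula` is true at time t in the given frame."""
    if isinstance(formula, str):                      # atomic proposition
        return t in valuation.get(formula, set())
    op, *args = formula
    if op == "not":
        return not true_at(args[0], t, earlier, valuation)
    if op == "and":
        return all(true_at(a, t, earlier, valuation) for a in args)
    times = {x for pair in earlier for x in pair}
    sub = args[0]
    if op == "P":   # true at some earlier time
        return any(true_at(sub, u, earlier, valuation) for u in times if (u, t) in earlier)
    if op == "F":   # true at some later time
        return any(true_at(sub, u, earlier, valuation) for u in times if (t, u) in earlier)
    if op == "H":   # true at all earlier times
        return all(true_at(sub, u, earlier, valuation) for u in times if (u, t) in earlier)
    if op == "G":   # true at all later times
        return all(true_at(sub, u, earlier, valuation) for u in times if (t, u) in earlier)
    raise ValueError(f"unknown operator: {op!r}")

# Linear frame 0 < 1 < 2 < 3 (transitively closed), with p true only at 2.
earlier = {(i, j) for i in range(4) for j in range(4) if i < j}
valuation = {"p": {2}}

print(true_at(("F", "p"), 0, earlier, valuation))         # True: p lies ahead of 0
print(true_at(("G", ("P", "p")), 2, earlier, valuation))  # True: instance of p -> GPp
```

The second query checks the consequent of the axiom p→GPp at the time where p holds, illustrating how the axioms of Minimal Tense Logic come out true in any frame.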
Many tense-logical axioms have been suggested as expressing this or that property of the flow of time, and the semantics gives us a precise way of defining this correspondence between tense-logical
formulae and properties of temporal frames. A formula p is said to characterise a set of frames F if
• p is true at all times under all interpretations over any frame in F.
• For any frame not in F, there is an interpretation which makes p false at some time.
Thus any theorem of K[t] characterises the class of all frames.
A first-order formula in < determines a class of frames, namely those in which the formula is true. A tense-logical formula p corresponds to a first-order formula q just so long as p characterises
the class of frames for which q is true. Some well-known examples of such pairs of formulae are:
Hp→Pp ∀t∃t′(t′<t) (unbounded in the past)
Gp→Fp ∀t∃t′(t<t′) (unbounded in the future)
Fp→FFp ∀t, t′(t<t′ → ∃t″(t<t″<t′)) (dense ordering)
FFp→Fp ∀t, t′(∃t″(t<t″<t′) → t<t′) (transitive ordering)
FPp → Pp∨p∨Fp ∀t, t′, t″((t<t″ & t′<t″) → (t<t′ ∨ t=t′ ∨ t′<t)) (linear in the past)
PFp → Pp∨p∨Fp ∀t, t′, t″((t″<t & t″<t′) → (t<t′ ∨ t=t′ ∨ t′<t)) (linear in the future)
However, there are tense-logical formulae (such as GFp→FGp) which do not correspond to any first-order temporal frame properties, and there are first-order temporal frame properties (such as
irreflexivity, expressed by ∀t¬(t<t)) which do not correspond to any tense-logical formula. For details, see van Benthem (1983).
2.1 The method of temporal arguments
In this method, the temporal dimension is captured by augmenting each time-variable proposition or predicate with an extra argument-place, to be filled by an expression designating a time, as for example
Kill(Brutus, Caesar, 44BCE).
If we introduce into the first-order language a binary infix predicate < denoting the temporal ordering relation “earlier than”, and a constant “now” denoting the present moment, then the tense
operators can be readily simulated by means of the following correspondences, which not surprisingly bear more than a passing resemblance to the formal semantics for Tense Logic given above. Where p(
t) represents the result of introducing an extra temporal argument place to the time-variable predicates occurring in p, we have:
Pp ∃t(t<now & p(t))
Fp ∃t(now<t & p(t))
Hp ∀t(t<now → p(t))
Gp ∀t(now<t → p(t))
Before the advent of Tense Logic, the method of temporal arguments was the natural choice of formalism for the logical expression of temporal information.
2.2 Hybrid approaches
The reification of time instants implied by the method of temporal arguments may be regarded as philosophically suspect, instants being rather artificial constructs unsuited to playing a foundational
role in temporal discourse. Following a suggestion of Prior (1968, Chapter XI), one might equate an instant with ‘the conjunction of all those propositions which would ordinarily be said to be true
at that instant’. Instants are thus replaced by propositions which uniquely characterise them. A statement of the form “True(p, t)”, saying that proposition p is true at instant t, can then be
paraphrased as “□ (t→ p)”, i.e., the instant-proposition t necessarily implies p.
This kind of manoeuvre lies at the heart of hybrid temporal logics in which the standard apparatus of propositions and tense operators is supplemented by propositions which are true at unique
instants, thereby effectively naming those instants without invoking philosophically dubious reification. This can give one some of the expressive power of a predicate-logic approach while retaining
the modal character of the logic. (See Areces and Ten Cate, 2006)
2.3 State and event-type reification
The method of temporal arguments encounters difficulties if it is desired to model aspectual distinctions between, for example, states, events and processes. Propositions reporting states (such as
“Mary is asleep”) have homogeneous temporal incidence, in that they must hold over any subintervals of an interval over which they hold (e.g., if Mary is asleep from 1 o'clock to 6 o'clock then she
is asleep from 1 o'clock to 2 o'clock, from 2 o'clock to 3 o'clock, and so on). By contrast, propositions reporting events (such as “John walks to the station”) have inhomogeneous temporal incidence;
more precisely, such a proposition is not true of any proper subinterval of an interval of which it is true (e.g., if John walks to the station over the interval from 1 o'clock to a quarter past one,
then it is not the case that he walks to the station over the interval from 1 o'clock to five past one — rather, over that interval he walks part of the way to the station).
The method of state and event-type reification was introduced to cater for distinctions of this kind. It is an approach that has been especially popular in Artificial Intelligence, where it is
particularly associated with the name of James Allen, whose influential paper (Allen 1984) is often cited in this connection. In this approach, state and event types are denoted by terms in a
first-order theory; their temporal incidence is expressed using relational predicates “Holds” and “Occurs”, as for example,
Holds(Asleep(Mary), (1pm, 6pm))
Occurs(Walk-to(John, Station), (1pm, 1.15pm))
where terms of the form (t, t′) denote time intervals in the obvious way.
The homogeneity of states and inhomogeneity of events is secured by axioms such as
∀s, i, i′(Holds(s, i) & In(i′, i) → Holds(s, i′))
∀e, i, i′(Occurs(e, i) & In(i′, i) → ¬Occurs(e, i′))
where “In” expresses the proper subinterval relation.
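A sketch of how these axioms play out computationally (the representation and all names are illustrative): a Holds query succeeds for any subinterval of a recorded interval, whereas an Occurs query fails for any proper subinterval of the recorded occurrence.

```python
# Homogeneity of states vs inhomogeneity of events, over intervals (a, b).

def holds(holds_facts, state, interval):
    """Holds(s, i): true if some recorded state fact covers interval i
    (homogeneity: a state holds over all subintervals)."""
    a, b = interval
    return any(s == state and a0 <= a and b <= b0
               for s, (a0, b0) in holds_facts)

def occurs(occurs_facts, event, interval):
    """Occurs(e, i): true only for exactly the recorded interval
    (inhomogeneity: an event does not occur over proper subintervals)."""
    return (event, interval) in occurs_facts

holds_facts = [("asleep_mary", (13, 18))]            # Holds(Asleep(Mary), (1pm, 6pm))
occurs_facts = {("walk_john_station", (13, 13.25))}  # Occurs(Walk-to(John, Station), (1pm, 1.15pm))

print(holds(holds_facts, "asleep_mary", (14, 15)))            # True: subinterval
print(occurs(occurs_facts, "walk_john_station", (13, 13.1)))  # False: proper subinterval
```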
2.4 Event-token reification
The method of event-token reification was proposed by Donald Davidson (1967) as a solution to the so-called “variable polyadicity” problem. The problem is to give a formal account of the validity of
such inferences as
John saw Mary in London on Tuesday.
Therefore, John saw Mary on Tuesday.
The key idea is that each event-forming predicate is endowed with an extra argument-place to be filled with a variable ranging over event-tokens, that is, particular dated occurrences. The inference
above is then cast in logical form as
∃e(See(John, Mary, e) & Place(e, London) & Time(e, Tuesday)),
Therefore, ∃e(See(John, Mary, e) & Time(e, Tuesday)).
In this form, the inference does not require any additional logical apparatus over and above standard first-order predicate logic; on that basis, the validity of the inference is considered to be
explained. This approach has also been used in a computational context in the Event Calculus of Kowalski and Sergot (1986).
Prior's motivation for inventing Tense Logic was largely philosophical, his idea being that the precision and clarity afforded by a formal logical notation was indispensable for the careful
formulation and resolution of philosophical issues concerning time. See the article on Arthur Prior for a discussion of some of these.
3.1 Realist vs reductionist approaches to tense
The rivalry between the modal and first-order approaches to formalising the logic of time reflects an important set of underlying philosophical issues related to the work of McTaggart. This work is
especially well-known, in the context of temporal logic, for introducing the distinction between the “A-series” and the “B-series”. By the “A-series” is meant, essentially, the characterisation of
events as Past, Present, or Future. By contrast, the “B-series” involves their characterisation as relatively “Earlier” or “Later”. A-series representations of time inescapably single out some
particular moment as present; of course, at different times, different moments are present — a circumstance which, followed to what appeared to be its logical conclusion, led McTaggart to assert that
time itself was unreal (see Mellor, 1981). B-series representations have no place for a concept of the present, instead taking the form of a synoptic view of all time and the (timeless)
interrelations between its parts.
There is a clear affinity between the A-series and the modal approach and between the B-series and the first-order approach. In the terminology of Massey (1969), adherents of the former approach are
called “tensers” while adherents of the latter are called “detensers”. This issue is related in turn to the question of how seriously to take the representation of space-time as a single
four-dimensional entity in which the four dimensions are at least in some respects on a similar footing. In view of the Theory of Relativity, it might be argued that this issue is not so much a
matter for Philosophy as for Physics.
3.2 Determinism vs non-determinism
The choice of flow of time can be of philosophical significance. For example, one way of capturing the distinction between deterministic and non-deterministic theories is to model the former using a
strictly linear flow of time, and the latter with a temporal structure which allows branching into the future. If we adopt the latter approach, then it is helpful in describing the semantics of tense
and other operators to introduce the idea of a history, which is a maximal linearly-ordered set of instants. The branching future model will then stipulate that for any two histories there is an
instant such that both histories share all the times up to and including that instant, but do not share any times after it. For each history containing a given instant, the times in that history
which are later than the instant constitute a “possible future” for that instant.
In branching time semantics it is natural to evaluate formulae with respect to an instant and a history, rather than just an instant. With respect to the pair (h, t), we might interpret “Fp” to be
true so long as “p” is true at some time in the future of t as determined by the history h. A separate operator ◊ can be introduced to allow, in effect, quantification over histories: “◊p” is true at
(h, t) so long as there is some history h′ such that “p” is true at (h′, t). Then “◊Fp” says that “p” holds in some possible future, and “□Fp” (where “□” is the strong modal operator dual to “◊”)
says that “p” is inevitable (i.e., holds in all possible futures). Prior calls this kind of interpretation “Ockhamist”.
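The Ockhamist evaluation can be sketched over a small branching frame in which histories are the maximal paths through a successor tree; “◊Fp” then asks whether p holds in some possible future, and “□Fp” whether it holds in all of them. The encoding below is an illustrative sketch.

```python
# Ockhamist evaluation of <>Fp and []Fp over a small branching frame.

def histories(tree, root):
    """All maximal paths from `root` through the children map `tree`."""
    kids = tree.get(root, [])
    if not kids:
        return [[root]]
    return [[root] + rest for k in kids for rest in histories(tree, k)]

def F_along(history, t, p_times):
    """Ockhamist Fp at (history, t): p holds later on that history."""
    i = history.index(t)
    return any(u in p_times for u in history[i + 1:])

tree = {"t0": ["a1", "b1"], "a1": ["a2"], "b1": ["b2"]}  # two possible futures of t0
p_times = {"a2"}                                         # p holds only on the a-branch

hs = histories(tree, "t0")
possibly_Fp = any(F_along(h, "t0", p_times) for h in hs)    # <>Fp
inevitably_Fp = all(F_along(h, "t0", p_times) for h in hs)  # []Fp
print(possibly_Fp, inevitably_Fp)  # True False
```

Since p lies on only one of the two possible futures, “p will be” holds on the Ockhamist reading relative to the a-history, but is not inevitable.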
Another interpretation (called “Peircean” by Prior) takes “Fp” to be equivalent to the Ockhamist “□Fp”, i.e., “p” is true at some time in every possible future. Under this interpretation there is no
formula equivalent to the Ockhamist “Fp”; hence Peircean tense logic is a proper fragment of Ockhamist tense logic. It was favoured by Prior on the grounds that future contingent propositions really
do lack truth value: only if a future-tense proposition is inevitable (all possible futures) or impossible (no possible futures) can we ascribe a truth value to it now. For Prior's discussion of
these issues, see Prior 1967, Chapter VII. Further discussion can be found in Øhrstrøm, and Hasle 1995, chapters 2.6 and 3.2.
The non-determinism implicit in branching time frames has led to their being used to support theories of action and choice. An important example is the STIT logics of Belnap and Perloff (1988), with
many subsequent variants (see Xu, 1995). The primitive expression of agency in STIT theories is that an agent a “sees to it that” some proposition P holds, written [a stit: P]. The meaning of this
construction is specified in relation to a branching time structure, in which the choices made by agents are represented by means of sets of possible futures branching forward from the choice
point. The precise interpretation of [a stit: P] varies from one system to another, but typically it is specified to be true at a particular moment if P holds in all histories selected by the agent's
choice function at that moment, with the further condition usually added that P fails to hold in at least one history not so selected (this is in order to avoid the unwelcome conclusion that an agent
sees to it that some tautology holds).
4.1 Applications to natural language
Prior (1967) lists amongst the precursors of Tense Logic Hans Reichenbach's (1947) analysis of the tenses of English, according to which the function of each tense is to specify the temporal
relationships amongst a set of three times related to the utterance, namely S, the speech time, R, the reference time, and E, the event time. In this way Reichenbach was neatly able to distinguish
between the simple past “I saw John”, for which R=E<S, and the present perfect “I have seen John”, for which E<R=S, the former statement referring to a past time coincident with the event of my
seeing John, the latter referring to the present time, relative to which my seeing John is past.
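Reichenbach's analysis can be sketched as a classification on the relative order of the three times. Only the two tenses mentioned above, plus the past perfect, are covered here, and the encoding is an illustrative assumption rather than Reichenbach's own notation.

```python
# Classifying a tense from the relative order of E (event), R (reference),
# and S (speech) times, given as comparable numbers.

def reichenbach_tense(E, R, S):
    """Return a tense name for the orderings discussed in the text."""
    if E == R < S:
        return "simple past"        # "I saw John":      R=E<S
    if E < R == S:
        return "present perfect"    # "I have seen John": E<R=S
    if E < R < S:
        return "past perfect"       # "I had seen John":  E<R<S
    return "other"

print(reichenbach_tense(1, 1, 2))  # simple past
print(reichenbach_tense(1, 2, 2))  # present perfect
```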
Prior notes that Reichenbach's analysis is inadequate to account for the full range of tense usage in natural language. Subsequently much work has been done to refine the analysis, not only of tenses
but also other temporal expressions in language such as the temporal prepositions and connectives (“before”, “after”, “since”, “during”, “until”), using the many varieties of temporal logic. For some
examples, see Dowty (1979), Galton (1984), Taylor (1985), Richards et al. (1989). A useful collection of landmark papers in this area is Mani et al. (2005).
4.2 Applications in artificial intelligence
We have already mentioned the work of Allen (1984), which is concerned with finding a general framework adequate for all the temporal representations required by AI programs. The Event Calculus of
Kowalski and Sergot (1986) is pursued more specifically within the framework of logic programming, but is otherwise similarly general in character. A useful survey of issues involving time and
temporal reasoning in AI is Galton (1995), and a comprehensive recent coverage of the area is Fisher et al. (2005).
Much of the work on temporal reasoning in AI has been closely tied up with the notorious frame problem, which arises from the necessity for any automated reasoner to know, or be able to deduce, not
only those properties of the world which do change as the result of any event or action, but also those properties which do not change. In everyday life, we normally handle such facts fluently
without consciously adverting to them: we take for granted without thinking about it, for example, that the colour of a car does not normally change when one changes gear. The frame problem is
concerned with how to formalise the logic of actions and events in such a way that indefinitely many inferences of this kind are made available without our having to encode them all explicitly. A
seminal work in this area is McCarthy and Hayes (1969). A useful recent reference for the frame problem is Shanahan, 1997.
4.3 Applications in computer science
Following Pnueli (1977), the modal style of Temporal Logic has found extensive application in the area of Computer Science concerned with the specification and verification of programs, especially
concurrent programs in which the computation is performed by two or more processors working in parallel. In order to ensure correct behaviour of such a program it is necessary to specify the way in
which the actions of the various processors are interrelated. The relative timing of the actions must be carefully co-ordinated so as to ensure that integrity of the information shared amongst the
processors is maintained. Amongst the key notions here is the distinction between “liveness” properties of the tense-logical form Fp, which ensure that desirable states will obtain in the course of
the computation, and “safety” properties of the form Gp, which ensure that undesirable states will never obtain.
Non-determinism is an important issue in computer science applications, and hence much use has been made of branching time models. Two important such systems are CTL (Computation Tree Logic) and a
more expressive system CTL*; these correspond very nearly to the Peircean and Ockhamist semantics, respectively, discussed above.
Further information may be found in Galton (1987), Goldblatt (1987), Kroger (1987), Bolc and Szalas (1995).
• Allen, J. F., 1984,“Towards a general theory of action and time”, Artificial Intelligence, volume 23, pages 123-154.
• Areces, C., and ten Cate, B., 2006, “Hybrid Logics”, in Blackburn et al., 2006.
• Belnap, N. and Perloff, M., 1988, “Seeing to it that: A canonical form for agentives”, Theoria, volume 54, pages 175-199; reprinted with corrections in H. E. Kyburg et al. (eds.), Knowledge
Representation and Defeasible Reasoning, Dordrecht: Kluwer, 1990, pages 167-190.
• van Benthem, J., 1983, The Logic of Time, Dordrecht, Boston and London: Kluwer Academic Publishers, first edition (second edition, 1991).
• van Benthem, J., 1995, “Temporal Logic”, in D. M. Gabbay, C. J. Hogger, and J. A. Robinson, Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 4, Oxford: Clarendon Press,
pages 241-350.
• Blackburn, P., van Benthem, J, and Wolter, F., 2006, Handbook of Modal Logics, Elsevier.
• L. Bolc and A. Szalas (eds.), 1995, Time and Logic: A Computational Approach, London: UCL Press.
• Davidson, D., 1967, “The Logical Form of Action Sentences”, in N. Rescher (ed.), The Logic of Decision and Action, University of Pittsburgh Press, 1967, pages 81-95. Reprinted in D. Davidson,
Essays on Actions and Events, Oxford: Clarendon Press, 1990, pages 105-122.
• Dowty, D., 1979, Word Meaning and Montague Grammar, Dordrecht: D. Reidel.
• Fisher, M., Gabbay, D., and Vila, L., 2005, Handbook of Temporal Reasoning in Artificial Intelligence, Amsterdam: Elsevier.
• Gabbay, D. M., Hodkinson, I., and Reynolds, M., 1994, Temporal Logic: Mathematical Foundations and Computational Aspects, Volume 1, Oxford: Clarendon Press.
• Galton, A. P., 1984, The Logic of Aspect, Oxford: Clarendon Press.
• Galton, A. P., 1987, Temporal Logics and their Applications, London: Academic Press.
• Galton, A. P., 1995, “Time and Change for AI”, in D. M. Gabbay, C. J. Hogger, and J. A. Robinson, Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 4, Oxford: Clarendon
Press, pages 175-240.
• Goldblatt, R., 1987, Logics of Time and Computation, Center for the Study of Language and Information, CSLI Lecture Notes 7.
• Hodkinson, I. and Reynolds, M., 2006, “Temporal Logic”, in Blackburn et al., 2006.
• Kamp, J. A. W., 1968. Tense Logic and the Theory of Linear Order, Ph.D. thesis, University of California, Los Angeles.
• Kowalski, R. A. and Sergot, M. J., 1986, “A Logic-Based Calculus of Events”, New Generation Computing, volume 4, pages 67-95.
• Kroger, F., 1987, Temporal Logic of Programs, Springer-Verlag.
• Mani, I., Pustejovsky, J., and Gaizauskas, R., 2005, The Language of Time: A Reader, Oxford: Oxford University Press.
• Massey, G., 1969, “Tense Logic! Why Bother?”, Noûs, volume 3, pages 17-32.
• McCarthy, J. and Hayes, P. J., 1969, “Some Philosophical Problems from the Standpoint of Artificial Intelligence”, in D. Michie and B. Meltzer (eds.), Machine Intelligence 4, Edinburgh University
Press, pages 463-502.
• Mellor, D. H., 1981, Real Time, Cambridge: Cambridge University Press. (Chapter 6 reprinted with revisions as “The Unreality of Tense” in R. Le Poidevin and M. MacBeath (eds.), The Philosophy of
Time, Oxford University Press, 1993.)
• Øhrstrøm, P. and Hasle, P., 1995, Temporal Logic: From Ancient Ideas to Artificial Intelligence, Dordrecht, Boston and London: Kluwer Academic Publishers.
• Pnueli, A., 1977, “The temporal logic of programs”, Proceedings of the 18th IEEE Symposium on Foundations of Computer Science, pages 46-67.
• Prior, A. N., 1957, Time and Modality, Oxford: Clarendon Press.
• Prior, A. N., 1967, Past, Present and Future, Oxford: Clarendon Press.
• Prior, A. N., 1969, Papers on Time and Tense, Oxford: Clarendon Press.
• Reichenbach, H., 1947, Elements of Symbolic Logic, New York: Macmillan.
• Rescher, N. and Urquhart, A., 1971, Temporal Logic, Springer-Verlag.
• Richards, B., Bethke, I., van der Does, J., and Oberlander, J., 1989, Temporal Representation and Inference, London: Academic Press.
• Shanahan, M., 1997, Solving the Frame Problem, Cambridge MA and London: The MIT Press.
• Taylor, B., 1985, Modes of Occurrence, Aristotelian Society Series, Volume 2, Oxford: Basil Blackwell.
• Xu, M., 1995, “On the basic logic of STIT with a single agent”, Journal of Symbolic Logic, volume 60, pages 459-483.
artificial intelligence: logic and | frame problem | logic: hybrid | logic: modal | Prior, Arthur | time
|
{"url":"https://plato.stanford.edu/ARCHIVES/WIN2009/entries/logic-temporal/","timestamp":"2024-11-13T22:14:48Z","content_type":"application/xhtml+xml","content_length":"45934","record_id":"<urn:uuid:2fc9a3ae-6957-4be4-9285-6d449032d564>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00689.warc.gz"}
|
vector-th-unbox vs singletons-presburger - compare differences and reviews? | LibHunt
Deriver for unboxed vectors using Template Haskell (by liyang)
Presburger arithmetic solver for built-in type-level naturals (by konn)
| | vector-th-unbox | singletons-presburger |
| Mentions | - | - |
| Stars | - | - |
| Activity | 0.0 | 7.4 |
| Last commit | about 5 years ago | 5 months ago |
| Language | Haskell | Haskell |
| License | BSD 3-clause "New" or "Revised" License | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts with mentions or reviews of vector-th-unbox and singletons-presburger are used to build the list of alternatives and similar projects. No posts mentioning either package have been tracked yet; tracking mentions began in Dec 2020.
What are some alternatives?
When comparing vector-th-unbox and singletons-presburger you can also consider the following projects:
statistics - A fast, high quality library for computing with statistics in Haskell.
vector-binary-instances - Instances for the Haskell Binary class, for the types defined in the popular vector package.
ghc-plugs-out - Type checker plugins without the type checking.
nimber - Finite nimber arithmetic
dimensional - Dimensional library variant built on Data Kinds, Closed Type Families, TypeNats (GHC 7.8+).
levmar - An implementation of the Levenberg-Marquardt algorithm
vector - An efficient implementation of Int-indexed arrays (both mutable and immutable), with a powerful loop optimisation framework .
vector-space-points - A type for points, as distinct from vectors.
semigroups - Haskell 98 semigroups
vector-heterogenous - Arbitrary size tuples in Haskell
linear - Low-dimensional linear algebra primitives for Haskell.
Generate Random Numbers & Normal Distribution Plots - Analytics Yogi
In this blog post, we'll be discussing how to generate random number samples from a normal distribution and create normal distribution plots in Python. We'll go over the different techniques for random number generation from a normal distribution available in Python libraries such as SciPy, NumPy and Matplotlib. We'll also create normal distribution plots from these numbers.
Generate random numbers using Numpy random.randn
NumPy is a Python library that contains built-in functions for generating random numbers. The numpy.random.randn function generates random numbers from a normal distribution. This function takes the number of samples N to be generated as input and returns an array of N random numbers. The elements of the output array are normally distributed with a mean of 0 and a standard deviation of 1. This distribution is also termed the standard normal distribution. Note that the random numbers need to be sorted for creating a smooth plot. Also, note that SciPy's norm.pdf method is used to evaluate the probability density of the normal distribution. norm.pdf can also be passed a mean (loc) and standard deviation (scale) to create a normal distribution plot of given mean and standard deviation.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
# Draw random samples from standard normal distribution
x = np.random.randn(100)
# Plot probability distribution function
x_sorted = np.sort(x)
plt.figure(figsize=(7, 5))
plt.plot(x_sorted, norm.pdf(x_sorted))
plt.title("Normal distribution", fontsize=16)
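A quick sanity check of the "mean 0, standard deviation 1" claim can be done with only the standard library, using random.gauss as a stand-in for numpy.random.randn (an assumption made here for portability, not the post's method):

```python
import random
import statistics

# Draw 100,000 standard-normal samples with the standard library's
# random.gauss, which plays the role of numpy.random.randn here.
random.seed(42)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Sample statistics should be close to the nominal mean 0 and
# standard deviation 1 of the standard normal distribution.
mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)
```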
Generate random numbers using Numpy random.normal
The numpy.random.normal function generates random numbers from a normal distribution. The parameters of the function specify the mean and standard deviation of the distribution. The function also takes a size parameter, which specifies the shape of the array of generated numbers. Unlike the random.randn method, NumPy's random.normal method generates random samples of a normal distribution with a given mean and standard deviation. In the code below, loc is used to specify the mean and scale is used to specify the standard deviation. The same values are also passed to the norm.pdf function.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
# Draw random samples from normal distribution
# with a predefined mean and standard deviation
x = np.random.normal(loc=5, scale = 2, size=1000)
# Plot probability distribution function
x_sorted = np.sort(x)
plt.figure(figsize=(7, 5))
plt.plot(x_sorted, norm.pdf(x_sorted, loc=5, scale=2))
plt.title("Normal distribution", fontsize=16)
Generating Numbers in Standard Normal Distribution using SciPy Norm.ppf
Numbers are generated using Numpy linspace method. The input to linspace method is lower and upper range which are passed as output of SciPy norm.ppf function. Scipy Norm.ppf function takes as input
a desired probability, and returns a number that has that probability of occurring under a standard normal distribution. This happens when no mean and standard deviation values are passed to norm.ppf
method. The default mean is 0 and standard deviation is 1 for standard normal distribution. For example, if we want to generate a number that has a 90% chance of occurring, we would use Scipy
Norm.ppf(0.9). This would return a value of 1.28, which means that there is a 90% chance that a randomly generated number will be less than or equal to 1.28. Scipy Norm.ppf can be used to generate
numbers with any desired probability, making it a versatile tool for statisticians.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
# Generate numbers in well-defined range using
# norm.ppf method
x = np.linspace(norm.ppf(0.01), norm.ppf(0.99), 100)
# Create standard normal distribution plot
plt.figure(figsize=(7, 5))
plt.plot(x, norm.pdf(x))
plt.title("Standard normal distribution", fontsize=16)
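SciPy's norm.ppf quantiles can be cross-checked with only the standard library: statistics.NormalDist exposes the same inverse CDF (a sketch assuming Python 3.8+, not part of the original post):

```python
from statistics import NormalDist

# Standard-library equivalent of SciPy's norm.ppf for the standard
# normal distribution (mean 0, standard deviation 1).
p90 = NormalDist().inv_cdf(0.9)   # the ~1.28 value quoted above

# Round trip: the CDF of the 90% quantile must be 0.9 again.
check = NormalDist().cdf(p90)
```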
Generating Numbers in Normal Distribution using Scipy Norm.ppf
The difference from the previous method is that specific values of mean and standard deviation are passed to both the norm.ppf and norm.pdf methods.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
# Generate numbers in well-defined range for a given mean and std using
# norm.ppf method
mean = 3
std = 4
x = np.linspace(norm.ppf(0.01, loc=mean, scale=std),
                norm.ppf(0.99, loc=mean, scale=std),
                100)
# Create normal distribution plot
plt.figure(figsize=(7, 5))
plt.plot(x, norm.pdf(x, loc=mean, scale=std))
plt.title("Normal distribution", fontsize=16)
In this blog post, we've shown you how to generate random numbers from a normal distribution and create plots of the normal distribution in Python. We hope you found this helpful!
Ajitesh Kumar
Leading digits of powers of 2
The first digit of a power of 2 is a 1 more often than any other digit. Powers of 2 begin with 1 about 30% of the time. This is because powers of 2 follow Benford’s law. We’ll prove this below.
When is the first digit of 2^n equal to k? When 2^n is between k × 10^p and (k + 1) × 10^p for some positive integer p. By taking logarithms base 10 we find that this is equivalent to the fractional part of n log₁₀ 2 being between log₁₀ k and log₁₀ (k + 1).
The map
x ↦ (x + log₁₀ 2) mod 1
is ergodic. I wrote about irrational rotations a few weeks ago, and this is essentially the same thing. You could scale x by 2π and think of it as rotations on a circle instead of arithmetic mod 1 on an interval. The important thing is that log₁₀ 2 is irrational.
Repeatedly multiplying by 2 corresponds to adding log₁₀ 2 on the log scale. So powers of two correspond to iterates of the map above, starting with x = 0. Birkhoff's Ergodic Theorem tells us that the proportion of iterates of this map that fall in the interval [a, b] equals b − a. So for k = 1, 2, 3, … 9, the proportion of powers of 2 that start with k is equal to log₁₀ (k + 1) − log₁₀ (k) = log₁₀ ((k + 1) / k).
This is Benford's law. In particular, the proportion of powers of 2 that begin with 1 is equal to log₁₀ 2 = 0.301.
Note that the only thing special about 2 is that log₁₀ 2 is irrational. Powers of 3 follow Benford's law as well because log₁₀ 3 is also irrational. For what values of b do powers of b not follow Benford's law? Those with log₁₀ b rational, i.e. powers of 10. Obviously powers of 10 don't follow Benford's law because their first digit is always 1!
[Interpret the "!" above as factorial or exclamation as you wish.]
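For reference, the nine Benford proportions are a one-line computation, and they telescope to a total of exactly 1:

```python
from math import log10

# Benford proportions for leading digits k = 1..9: log10((k+1)/k).
benford = [log10((k + 1) / k) for k in range(1, 10)]
# The sum telescopes: log10(2/1) + ... + log10(10/9) = log10(10) = 1,
# so the nine proportions form a probability distribution.
```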
Let’s look at powers of 2 empirically to see Benford’s law in practice. Here’s a simple Python program to look at first digits of powers of 2.
count = [0]*10
N = 10000

def first_digit(n):
    return int(str(n)[0])

for i in range(N):
    n = first_digit( 2**i )
    count[n] += 1
Unfortunately this only works for moderate values of N. It ran in under a second with N set to 10,000 but for larger values of N it rapidly becomes impractical.
Here’s a much more efficient version that ran in about 2 seconds with N = 1,000,000.
from math import log10

N = 1000000
count = [0]*10

def first_digit_2_exp_e(e):
    r = (log10(2.0)*e) % 1
    for i in range(2, 11):
        if r < log10(i):
            return i-1

for i in range(N):
    n = first_digit_2_exp_e( i )
    count[n] += 1
You could make it more efficient by caching the values of log10 rather than recomputing them. This brought the run time down to about 1.4 seconds. That’s a nice improvement, but nothing like the
orders of magnitude improvement from changing algorithms.
Here are the results comparing the actual counts to the predictions of Benford’s law (rounded to the nearest integer).
| Leading digit | Actual | Predicted |
| 1 | 301030 | 301030 |
| 2 | 176093 | 176091 |
| 3 | 124937 | 124939 |
| 4 | 96911 | 96910 |
| 5 | 79182 | 79181 |
| 6 | 66947 | 66948 |
| 7 | 57990 | 57992 |
| 8 | 51154 | 51153 |
| 9 | 45756 | 45757 |
The agreement is almost too good to believe, never off by more than 2.
Are the results correct? The inefficient version relied on integer arithmetic and so would be exact. The efficient version relies on floating point, and so it's conceivable that limits of precision caused a leading digit to be calculated incorrectly, but I doubt that happened. Floating point is precise to about 15 significant figures. We start with log10(2), multiply it by numbers up to 1,000,000 and take the fractional part. The result is good to around 9 significant figures, enough to correctly determine which digit logarithms the result falls between.
Update: See Andrew Dalke’s Python script in the comments. He shows a way to efficiently use integer arithmetic.
4 thoughts on “Leading digits of powers of 2”
1. The performance limitation in your first version is likely the quadratic time to convert from Python’s native integer representation to base-10. If you use a decimal.Decimal then that overhead
disappears. The following takes about the same time as your second, float version, and doesn’t have the accuracy concern. (Both return the same values for N=1 million.)
import decimal

count = [0]*10
N = 1000000
a = decimal.Decimal(1)
count[1] += 1
for i in range(N-1):
    a = a*2
    count[int(str(a)[0])] += 1
2. Andrew: Very nice. I haven’t used decimal before.
3. Your proof is flawed. The ergodic theorem doesn't assert anything about a specific orbit (in this case the orbit of 0), only about almost every orbit.
You need to use the fact that exp(2πi n α) is uniformly distributed around the circle when α is irrational.
4. Andrew: Nice and fast. But it will fail if N gets too high and if you want to have not only the first but e.g. the leading 4 digits.
rsback -- Program to backup file trees in rotating archives on Unix-based hosts
To start one or more backup tasks:
rsback [options] list-of-tasks
To get help:
rsback -h
rsback makes rotating backups using the common rsync program and some standard file utilities on Unix-based backup hosts. Its purpose is to mirror certain file trees from a remote host or from the
local system and to store them as rotating archives in backup repositories on the local backup host. The file structure, permissions, ownerships and time stamps of the mirrored data are the same as
in the original sources.
rsback is a kind of front end to rsync, written in Perl, which allows a system administrator to configure and execute backups of different file trees located on remote hosts or on the local system (e.g. tasks for hourly, daily, weekly, monthly, ... backups).
If rsback is executed at regular intervals (preferably scheduled by cron jobs), it maintains rotating backup archives. To restore files from the backup repository no special restore procedure is
necessary. To recover files or directories, you just copy them from the archive tree back to the original location or wherever you want to place them.
The combination of rsync's powerful capabilities and the extensive use of hard links for copying archives within the local file system results in a fast and disk space saving backup technique.
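The hard-link trick behind that space saving can be illustrated with the standard library (the file names below are invented for the demo):

```python
import os
import tempfile

# A hard link is a second directory entry for the same inode, so the
# linked "copy" shares the original's data blocks on disk. This is why
# unchanged files carried over between rotated archives cost almost no
# extra space.
d = tempfile.mkdtemp()
original = os.path.join(d, "report.txt")
with open(original, "w") as f:
    f.write("unchanged payload")

linked = os.path.join(d, "report-link.txt")
os.link(original, linked)

# Both names now refer to one inode with a link count of 2.
same_inode = os.stat(original).st_ino == os.stat(linked).st_ino
link_count = os.stat(original).st_nlink
```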
rsback runs on Unix-based hosts. I tested it on some Linux boxes running different distributions. It should also run on other Unix-based systems if the following programs and utilities are installed:
• rsync: I recommend the most recent version [1]. If you want to mirror file trees from remote Windows boxes you also need Cygwin.
• Perl 5.005 (or higher version)
• The common file utilities cp, rm, mv, and mkdir
• cron or similar program to execute scheduled commands (recommended)
• You should have some knowledge about rsync
• You need root privileges to install and run rsback on a backup host
I was looking for a backup solution suitable for a workgroup server (Linux box) where some project folders (10 to 20 Gigabytes) have to be mirrored daily.
For a while I tried several different backup techniques. But I was not really happy with any of them.
By accident I found rsync on my disk and tried to find out what it could be used for ... looks good ;)
Searching for a ready-to-use backup solution based on rsync in the net I found Mike Rubel's examples of rotating rsync snapshots [6]. It seemed to be a solution to my problem. To handle configurations of different backup tasks more comfortably I finally made a kind of front end or wrapper based on Mike's sample scripts:
The result was rsback (RSync BACKup ... hmm).
How it works
The explanations below will refer to a typical example like this:
We want to maintain a rotating backup repository of a file tree which resides on a remote host workbox. The remote host runs rsync in daemon mode on TCP port 873. The file tree on workbox consists of all subdirectories and files of /var/projects, which is accessible via workbox::work. The corresponding entry in the rsyncd configuration file workbox:/etc/rsyncd.conf may look like this (the most simple case):
[work]
path = /var/projects
comment = project directories
Backup steps
The backup concept of rsback is based on two steps:
1. Rotation
2. Backup
The repetitive combination of rotation and backup results in backup archives which are comparable to classic combinations of full and incremental backups with respect to the content of the archives.
Task work-daily
All files and directories under workbox::work (or workbox:/var/projects respectively) should be saved every workday night to our local machine backbox. The five latest daily backup sets should be
kept in the backup repository on backbox.
Task work-weekly
Additionally, a weekly backup of the most recent local archive should be made on Saturday. The four latest weekly backup sets should also be kept in the repository on backbox.
That way we should have the data of the last five working days and the weekly snapshots of the last four weeks (taken every Friday) in our backup repository.
Tasks are not restricted to be processed at daily or weekly intervals as in this example. It's up to you how often you perform backups and how many archives you keep in your repositories.
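If cron is used as recommended, the two example tasks could be scheduled like this (the run times and the rsback path are illustrative assumptions, not rsback defaults):

```
# /etc/crontab entries (illustrative)
# daily backup every workday night at 22:30
30 22 * * 1-5  root  /root/bin/rsback work-daily
# weekly snapshot on Saturday at 23:30
30 23 * * 6    root  /root/bin/rsback work-weekly
```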
Backup repositories
Let us assume that our local host backbox has a large disk which is mounted at /backup. The directory /backup will hold our local backup repositories.
Archive structure
The backup repository on backbox in our example looks like this:
| history.work-daily history of task work-daily
| history.work-weekly history of task work-weekly
| +--/daily.0 most recent daily archive tree
| +--/daily.1 \
| +--/daily.2 |
| +--/daily.3 |-previous daily archives
| +--/daily.4 |
| +--/daily.5 /
| +--/weekly.0 most recent weekly archive tree
| +--/weekly.1 \
| +--/weekly.2 |-previous weekly archives
| +--/weekly.3 |
| +--/weekly.4 /
The directories ../daily.0 to ../daily.5 contain copies of the original data of the most recent daily backup run (daily.0), of the backup run one day before (daily.1), ..., and of the backup run five days ago (daily.5), respectively. The directories ../weekly.0 to ../weekly.4 are the archives of the most recent weekly task and of the previous weekly tasks, respectively.
History file
A history file for each backup task keeps track of the time stamps of the archives. A history file consists of a table of two (tab-separated) columns. For each consecutive backup run there is a row with the backup number in column one and the date and time in ISO format in column two:
# rsback-0.4.0 (hjb -- 2002-07-16)
0 2002-07-17 22:24:05
1 2002-07-16 22:24:13
2 2002-07-15 22:24:30
3 2002-07-12 22:25:28
4 2002-07-11 22:24:20
5 2002-07-10 22:24:16
6 2002-07-09 20:15:37
The history file is read before a backup task is processed. If no history file exists it will be created using the time stamps of the existing archive tree (if there is any). After the backup task
has finished, the recent history will be written to the history file.
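As a sketch (not rsback's actual Perl code), the history format is straightforward to parse: '#' comment lines are skipped and each remaining row is a tab-separated backup number and ISO timestamp:

```python
def read_history(text):
    """Parse rsback-style history: '#' comments, then 'number<TAB>timestamp' rows."""
    entries = []
    for line in text.splitlines():
        line = line.rstrip()
        if not line or line.startswith("#"):
            continue
        number, stamp = line.split("\t", 1)
        entries.append((int(number), stamp.strip()))
    return entries

# The sample history shown above, with tab-separated columns.
sample = ("# rsback-0.4.0 (hjb -- 2002-07-16)\n"
          "0\t2002-07-17 22:24:05\n"
          "1\t2002-07-16 22:24:13\n")
history = read_history(sample)
```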
Daily rotation
When a backup task is executed, first the previous backup archives in the repository are rotated: the oldest archive is removed and the remaining ones are renamed. In our example:
rm -rf daily.5
mv daily.4 daily.5
mv daily.3 daily.4
mv daily.2 daily.3
mv daily.1 daily.2
The backup set daily.1 is replaced by hard links to the most recent backup set daily.0:
cp -al daily.0 daily.1
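Put together, the rotation step can be rehearsed end-to-end with dummy archives in a scratch directory (a sketch assuming GNU coreutils, so that cp supports -al):

```shell
# Build a throwaway repository with six dummy daily archives.
set -e
repo=$(mktemp -d)
cd "$repo"
for i in 0 1 2 3 4 5; do
  mkdir "daily.$i"
  echo "set $i" > "daily.$i/stamp"
done

rm -rf daily.5          # drop the oldest archive
mv daily.4 daily.5      # shift the remaining archives up by one
mv daily.3 daily.4
mv daily.2 daily.3
mv daily.1 daily.2
cp -al daily.0 daily.1  # replace daily.1 with a hard-linked copy of daily.0
```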
Daily backup
Using rsync, the source tree is mirrored from a remote or local file system to the local backup repository. The default behaviour is that only files and directories are copied which differ from their counterparts in the backup repository. Different means: the size, time stamp, or ownership of a file/directory has changed since the last backup to the same repository, or a file/directory doesn't (yet) exist in the repository. Items in the backup repository which do not exist in the source tree are removed from the backup repository.
This action is launched by invoking rsync like
rsync -al --delete
Weekly rotation
This is done in the same manner as the daily rotation, except that (in our example) the archives from weekly.0 to weekly.4 are rotated.
Weekly backup
We want to make a snapshot of the most recent daily backup archive in our backup repository. Both the source and the destination are local directories. Therefore this backup is executed by hard-linking daily.0 to weekly.0:
rm -rf weekly.0
cp -alf daily.0 weekly.0
To install rsback on a backup host, login as root and proceed as follows.
Copy the downloaded archive rsback-x.y.z.tar.gz (x.y.z is the actual version) to a installation directory, e.g. /usr/local/src. Change to this directory and unpack the archive:
# cd /usr/local/src
# tar zxvf rsback-x.y.z.tar.gz
Copy rsback to a bin directory in root's path, e.g.
# cp rsback-x.y.z/bin/rsback /root/bin
Make sure that rsback is executable only by root:
# chmod 700 /root/bin/rsback
Create a configuration directory and copy the sample configuration files from ../rsback-x.y.z/etc into it:
# mkdir /etc/rsback
# cp rsback-x.y.z/etc/* /etc/rsback
Be sure that only root has access to rsback.conf:
# chown root.root /etc/rsback/rsback.conf
# chmod 600 /etc/rsback/rsback.conf
Now you may delete the archive:
# rm rsback-x.y.z.tar.gz
Some configuration parameters will just be passed as options to rsync. Therefore it is strongly recommended that you consult the rsync documentation [5] and the man pages (rsync(1), rsyncd.conf(5)),
if you are not sure, what rsback does. Before you run your configuration with production data, make some tests with dummy data first. Compare the results carefully with that, what you have expected.
You should consider some general precautions, if your machines can be accessed by more people than only you.
• Don't allow data transfers to and from remote machines without authentication or other access restrictions.
• Don't transfer clear text passwords.
• Don't transfer unencrypted sensible data.
• Don't give write access to the backup repositories to anyone else than root@backupbox.
• Don't give read access to backup repositories to anyone else than the owner of the original data.
• Don't give read or even write access to rsback.conf to anybody else than root@backupbox:
# chown root.root /etc/rsback/rsback.conf
# chmod 600 /etc/rsback/rsback.conf
Configuration file
Edit rsback.conf to customize rsback and to define your backup tasks.
Default location
If you want to have the default configuration file somewhere else than /etc/rsback/rsback.conf, edit the variable $rsback_conf in rsback to match your preferences. Or use option -c to tell rsback where to find the configuration file (see the section on "Usage").
File format
The file format is similar to that of rsyncd.conf(5).
The file is line-based - that is, each newline-terminated line represents either a comment, a section name or a parameter. Any line beginning with a hash # or a semicolon ; is ignored, as are lines
containing only whitespace. The file consists of sections and parameters. A section begins with the name of the section in square brackets and continues until the next section begins. Sections
contain parameters of the form name = list-of-values, where list-of-values is a list of one or more strings.
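A minimal parser for this rsyncd.conf-like format might look as follows (an illustrative sketch, not rsback's own parser):

```python
def parse_conf(text):
    """Parse '#'/';' comments, [section] headers, and 'name = list-of-values'."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line[0] in "#;":
            continue                      # comment or blank line
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # new section begins
            sections.setdefault(current, {})
        elif "=" in line and current is not None:
            name, _, value = line.partition("=")
            sections[current][name.strip()] = value.split()
    return sections

sample = """
# a comment
[global]
tasks = work-daily work-weekly
[work-daily]
rotate = daily 5
"""
conf = parse_conf(sample)
```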
Global section
In the section [global] some general configuration parameters are defined. If not noted explicitly as optional, all parameters are mandatory.
rsback needs to know where to find some programs. Set the paths with the parameters rsync_cmd, cp_cmd, mv_cmd, rm_cmd, and mkdir_cmd according to your system. The default settings in the sample configuration file coming with rsback are:
parameter: __rsync_cmd__
rsync_cmd = /usr/bin/rsync
parameter: __cp_cmd__
cp_cmd = /bin/cp
parameter: __mv_cmd__
mv_cmd = /bin/mv
parameter: __rm_cmd__
rm_cmd = /bin/rm
parameter: __mkdir_cmd__
mkdir_cmd = /bin/mkdir
parameter: tasks
tasks is a list of all backup tasks you want to execute. A backup task in this context is just an arbitrary word to denote a certain backup job. The specific parameters of each backup task listed in tasks have to be defined in a separate task section (see below).
parameter: __exclude_file__ (optional)
exclude_file points to a file containing global exclude patterns for rsync. 'global' means: these patterns are applied to all backup tasks which are executed with mode=rsync (see the section on
"task_sections"). Please refer to the rsync documentation (look for "exclude patterns") or to the man page (rsync(1)). The value given here will be passed to rsync with the command option
--exclude-from as it is.
parameter: __rsync_options__
The parameter rsync_options defines additional options which will be passed to rsync. For example you may choose rsync_options = --stats to tell rsync to report some statistics on the file transfer.
This parameter applies to all backup tasks. You can also define additional options which will only applied to certain tasks within the task sections.
parameter: __lock_dir__
The directory where lock files will be created. Specify a directory by giving an absolute path without a trailing slash.
parameter: if_locked_retry (optional)
If the current task is locked by an other task (see lock_dir above) then this parameter may be used to try to restart the task after some time of delay. The default delay value is 10 minutes.
Example: Retry three times with a delay of 10 minutes.
if_locked_retry = 3 10m
parameter: ignore_rsync_errors (optional)
Give a list of rsync error codes (see rsync EXIT VALUES) which should be ignored.
This example ignores rsync exit values of 12 (Error in rsync protocol data stream) and 24 (Partial transfer due to vanished source files):
ignore_rsync_errors = 12 24
parameter: if_error_continue (optional)
This defines how to proceed if a system command used by rsback fails. A value of 'no' means that subsequent tasks given on the command line are skipped (this is the default behaviour). If this parameter is set to 'yes' then only the actual task is aborted, but following tasks, given on the command line, are started as if no error had occurred.
parameter: if_error_undo (optional)
Undo (remove defective and/or incomplete backup set) in case of errors?
parameter: use_link_dest (optional)
Use rsync's option --link-dest, yes or no (default is yes).
Task sections
Parameters specific to certain backup tasks are declared within corresponding task sections. There should be one task section for each backup task listed with the global parameter tasks (see global
section). E.g., if you have declared
tasks = work-daily work-weekly misc
the task sections [work-daily], [work-weekly], and [misc]
must be present.
parameter: mode
This parameter controls what backup mode will be used for execution of this task. Use mode=rsync, if you want to backup the original source tree either from a remote host or from the local machine using rsync.
mode=link is intended to be used for local copies on the backup host. This makes sense only if both the source and the destination reside on the same physical partition, because hard links will be used instead of copying the data.
source designates the location of the source data to be saved. The format depends on the backup mode and the location of the source files. This parameter will be passed as source to rsync if mode=
rsync is selected or to cp if mode=link is selected. Please refer to the man pages rsync(1) and cp(1) to select the right one for your purpose.
E.g. if the source data resides on the remote host workbox which is running rsync in daemon mode (as in our example above) then source is something like this
source = workbox::work/
If mode=link the parameter source designates the source directory on the local host. The task work-weekly in our example above needs a line like
source = /backup/work/daily.0
in its task section.
parameter: destination
destination is the directory within the local backup repository. It is not a bad idea to use directory names in the destination path which can easily be related to a backup task (or vice versa). E.g.
if we refer to the task work-daily of our example then it is something like
destination = /backup/work
The definition for the task work-weekly of our example is also
destination = /backup/work
This may be confusing, but consider, that the final archive directory will always be a subdirectory of this path, named according to your selection in the first rotate parameter (see below).
parameter: rotate
This parameter consists of a list of two values: the first value is an arbitrary name to designate the archive directory in the local repository. The second value is a positive integer number, which defines how many backup sets have to be kept in the repository.
rotate = daily 5
parameter: __rsync_options__ ( optional )
Same as parameter rsync_options in the [global] section, but applies only to this task.
parameter: __exclude_file__ ( optional )
This parameter has the same purpose as in the global section. The only difference is, that it is applied to this task only (see also below).
Example: exclude_file = /etc/rsback/work-daily.exclude
parameter: __suspend_file__ (optional)
Suspend tasks (temporarily) if this file exists.
Example: suspend_file = /var/work/_dont_backup_now_
parameter: __trigger_file__ (optional)
Trigger tasks (once only). This trigger_file should be writable by rsback, because it will be removed after a successful backup.
Example: trigger_file = /home/fritz/_backup_please_
parameter: use_link_dest (optional)
Use rsync's option --link-dest, yes or no (default is yes).
parameter: update (optional, in mode = rsync only)
This causes just a "simple" rsync on the most recent backup set of the referred task (no rotation). No other parameters are necessary; they are taken from the referred task section.
To get consistent backups of live systems (databases, virtual machines, ...) these have to be shut down first. To speed up backups of such systems, this procedure may help in some cases:
Step 1: Run a backup task of the running system (live-backup). Use 'ignore_rsync_errors = 24' (vanished source files).
Step 2: Shutdown the system.
Step 3: Run a task with 'update = live-backup' to take an updated snapshot of the backup from Step 1.
Step 4: Restart the system.
The downtime (during step 3) should be considerably shorter compared to step 1, because only those files which have changed in the meantime have to be transferred.
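The four steps above can be sketched as a small shell script; the task names and the service commands here are hypothetical placeholders for your own setup:

```shell
#!/bin/sh
# Step 1: backup of the running system; rsync exit code 24
#         (vanished source files) is ignored via the task's
#         'ignore_rsync_errors = 24' setting
rsback -v live-backup

# Step 2: shut the service down so its data files are consistent
systemctl stop mydb

# Step 3: quick update pass over the snapshot from step 1
#         (a task configured with 'update = live-backup')
rsback -v live-update

# Step 4: bring the service back up
systemctl start mydb
```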
parameter __pre_hook__ (optional)
parameter __post_hook__ (optional)
Run a system command before (pre_hook) and/or after (post_hook) a task execution.
Example to mount and unmount a Proxmox container which should be backed up:
pre_hook = pct mount 113
post_hook = pct unmount 113
Exclude files
Patterns to exclude files or directories from being rsync'd are collected in separate files; see parameter exclude_file above. Because these exclude files are passed directly to rsync with the
option --exclude-from=FILE, they must have the format that rsync expects. Please consult the section "EXCLUDE PATTERNS" in rsync(1).
Global and task specific exclude files are cumulative: both the exclude patterns in the global exclude file and the patterns in the exclude file defined in a task section will be applied to the
source tree when a backup task is processed.
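For example, the file /etc/rsback/work-daily.exclude referenced above could contain ordinary rsync patterns, one per line (these particular patterns are illustrative, not part of rsback):

```
# /etc/rsback/work-daily.exclude -- rsync exclude patterns
*.tmp
*.bak
/cache/
.thumbnails/
```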
To start a backup task invoke
# rsback [options] task-list
where task-list is a list of one or more backup tasks as defined in the configuration file.
The possible options are
-h Display a help message (usage)
-v Be verbose
-d Run rsync with option '--dry-run' (simulation mode). That means: rsync does not copy anything,
it just displays what it would do.
-i Initialize the backup repositories to be used for the specified tasks. This isn't really necessary,
because rsback will try to create the necessary directories, if a backup repository does not yet exist,
when a backup task is processed.
-c configuration-file. Use this option to use a configuration file other than the default one.
Example: rsback -vc /etc/rsback/test.conf work-daily misc
Scheduling backup tasks
rsback is supposed to be executed by cron jobs at regular intervals. crontab entries in our example may look like
0 22 * * 1-5 /root/bin/rsback -v work-daily >>/var/log/rsback/work-daily.log
0 22 * * 6 /root/bin/rsback -v work-weekly >>/var/log/rsback/work-weekly.log
The daily backup task work-daily will be executed every workday night at 22:00. The weekly backup task work-weekly will run on Saturday night.
see CHANGELOG
Copyright (C) 2002-2024 by Hans-Jürgen Beie
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Feet (US survey) to Fingers (cloth) Converter
Enter Feet (US survey)
Fingers (cloth)
Switch to Fingers (cloth) to Feet (US survey) Converter
How to use this Feet (US survey) to Fingers (cloth) Converter
Follow these steps to convert given length from the units of Feet (US survey) to the units of Fingers (cloth).
1. Enter the input Feet (US survey) value in the text field.
2. The calculator converts the given Feet (US survey) into Fingers (cloth) in real time using the conversion formula, and displays the result under the Fingers (cloth) label. You do not need to
click any button. If the input changes, the Fingers (cloth) value is re-calculated automatically.
3. You may copy the resulting Fingers (cloth) value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Feet (US survey) to Fingers (cloth)?
The formula to convert given length from Feet (US survey) to Fingers (cloth) is:
Length[(Fingers (cloth))] = Length[(Feet (US survey))] / 0.3749992499962613
Substitute the given value of length in feet (us survey), i.e., Length[(Feet (US survey))] in the above formula and simplify the right-hand side value. The resulting value is the length in fingers
(cloth), i.e., Length[(Fingers (cloth))].
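A minimal sketch of the same formula in a shell one-liner (using awk for the floating-point division; the divisor is the constant given in the formula above):

```shell
# Convert feet (US survey) to fingers (cloth):
#   fingers = feet / 0.3749992499962613
feet=500
fingers=$(awk -v ft="$feet" 'BEGIN { printf "%.3f", ft / 0.3749992499962613 }')
echo "$feet ft = $fingers finger"   # 500 ft = 1333.336 finger
```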
Consider that a land parcel is measured as 500 feet (US survey) in length.
Convert this length from feet (US survey) to Fingers (cloth).
The length in feet (us survey) is:
Length[(Feet (US survey))] = 500
The formula to convert length from feet (us survey) to fingers (cloth) is:
Length[(Fingers (cloth))] = Length[(Feet (US survey))] / 0.3749992499962613
Substitute the given length Length[(Feet (US survey))] = 500 in the above formula.
Length[(Fingers (cloth))] = 500 / 0.3749992499962613
Length[(Fingers (cloth))] = 1333.336
Final Answer:
Therefore, 500 ft is equal to 1333.336 finger.
The length is 1333.336 finger, in fingers (cloth).
Consider that a boundary wall is 250 feet (US survey) long.
Convert this distance from feet (US survey) to Fingers (cloth).
The length in feet (us survey) is:
Length[(Feet (US survey))] = 250
The formula to convert length from feet (us survey) to fingers (cloth) is:
Length[(Fingers (cloth))] = Length[(Feet (US survey))] / 0.3749992499962613
Substitute the given length Length[(Feet (US survey))] = 250 in the above formula.
Length[(Fingers (cloth))] = 250 / 0.3749992499962613
Length[(Fingers (cloth))] = 666.668
Final Answer:
Therefore, 250 ft is equal to 666.668 finger.
The length is 666.668 finger, in fingers (cloth).
Feet (US survey) to Fingers (cloth) Conversion Table
The following table gives some of the most used conversions from Feet (US survey) to Fingers (cloth).
Feet (US survey) (ft) Fingers (cloth) (finger)
0 ft 0 finger
1 ft 2.6667 finger
2 ft 5.3333 finger
3 ft 8 finger
4 ft 10.6667 finger
5 ft 13.3334 finger
6 ft 16 finger
7 ft 18.6667 finger
8 ft 21.3334 finger
9 ft 24 finger
10 ft 26.6667 finger
20 ft 53.3334 finger
50 ft 133.3336 finger
100 ft 266.6672 finger
1000 ft 2666.672 finger
10000 ft 26666.72 finger
100000 ft 266667.2 finger
Feet (US survey)
A foot (US survey) is a unit of length used in land surveying and mapping in the United States. One foot (US survey) is defined as exactly 1200/3937 meters, which is approximately 0.3048006096
meters.
The US survey foot is slightly different from the international foot, which is defined as exactly 0.3048 meters. The difference is due to historical measurement standards and is used in specific
contexts such as land surveying and engineering in the United States.
US survey feet are used primarily in the United States for property measurement, land surveying, and mapping, ensuring consistency in measurements within these fields.
Fingers (cloth)
A finger (cloth) is a historical unit of length used in textiles and cloth measurement. One finger (cloth) is equal to 4.5 inches (one-eighth of a yard), or 0.1143 meters, which is consistent with the conversion factor used above.
The finger (cloth) is based on the width of a person's finger and was used for finer measurements in fabric and textiles.
Finger (cloth) measurements were utilized in the textile industry for detailing and cutting fabric. Although it is not commonly used today, the unit provides insight into traditional textile
measurement practices and historical standards.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Feet (US survey) to Fingers (cloth) in Length?
The formula to convert Feet (US survey) to Fingers (cloth) in Length is:
Feet (US survey) / 0.3749992499962613
2. Is this tool free or paid?
This Length conversion tool, which converts Feet (US survey) to Fingers (cloth), is completely free to use.
3. How do I convert Length from Feet (US survey) to Fingers (cloth)?
To convert Length from Feet (US survey) to Fingers (cloth), you can use the following formula:
Feet (US survey) / 0.3749992499962613
For example, if you have a value in Feet (US survey), you substitute that value in place of Feet (US survey) in the above formula, and solve the mathematical expression to get the equivalent value in
Fingers (cloth).
Did I Fry My Motherboard & A Few Other Component Questions ...
My PC Stats:
MGE Viper Gaming Case
550W Power Supply
AMD64 3500+ (2.2GHz)
MSI K8N Neo2 Platinum Motherboard (NVidia nForce3 Ultra Chipset Based)
2G OCZ RAM (4x512)
eVGA 256mb 6800 Graphics Card (AGP)
Short Version Of My Story:
About a week ago, my 256MB 6800 eVGA graphics card or my Insignia monitor was having some type of pixelation issue. Lines of pixels ran from top to bottom on the monitor screen, completely
static. The only way I got rid of the issue was to shut my PC off for five minutes, then start it back up and it would work fine. A few days later, the same issue happened again.
This time I decided that cleaning all the dust out of the inside of my PC might help.
I cleaned the dust (and there was a ton of it) from within the PC. I noticed the processor heatsink fan was cluttered with dust so I used the duster spray (of air) but it only pushed the dust
further into the metal casing that lies between the fan and the processor (the heatsink?). I removed the fan and the metal casing that lies between the fan and processor. The processor was stuck to
the metal heatsink by the grease used to keep the processor cool. I removed the processor from this heatsink carefully and cleaned the metal heatsink from all the dust within it. I laid the
processor back onto the motherboard without snapping it into the plastic casing right under the heatsink, then put the heatsink and fan on top and tried to snap the heatsink clamps back into place, which was
pretty hard because taking it off was a lot easier than putting it back on.
After I clamped it back into place and turned the PC on. I heard two long beep sounds. I didn't listen to see if there were any more beeps. After the two beeps, I immediately shut the power off
thinking there was something wrong with the way I put the heatsink and fan back on. Well, after pulling the fans and heatsink off, I had problems getting the processor to fit properly. I also
noticed the plastic square (between the processor and motherboard) where the processor snaps into place was broken into two pieces. I fit the plastic piece back into place and before I put the
processor back on, I noticed a few bent pins. I checked online and saw that a credit card could be used to slide between the pins to straighten the pins. Well, in my case, a few pins snapped off (I
didn't use much force but I learned just how weak the processor pins can be) and a few pins were still bent. Fearing any further damage to the processor, I decided to snap the processor in on the
plastic part that had broken into two that I put back into place. I powered the PC up not realizing AMD chips can burn out within seconds without the heatsink attached. The fans, power supply and
fan, and graphics card fan worked, but nothing else did, so I shut the power off again. I noticed my processor was super hot, hotter than any shower I've ever taken. We're talking oven hot. I
put the heatsink back into place, screwed it in, screwed the removable side of the PC case back on, and hoped for the best.
Since that time, there are no more beeps to tell me what's wrong with the PC (I looked up some information after I screwed up my PC). All fans but one side fan connected to the case of the PC for
airflow are working. The temperature gauge on the PC is working and says the hard drive is operational, but nothing is booting up ... I assume it's a default display? Well, after doing some reading
online, I assume I've fried my processor and also fried my motherboard in the process of my stupidity.
What I'd like to know is if there is a way to test my motherboard to see if I fried it along with my processor? Is it possible only the processor is fried and the motherboard may still be good?
Also, the fan on my graphics card still works. Does this mean my graphics card is still good or will the fan on the graphics card still work even if I fried that also? ... As for the RAM, I assume
I can have them checked at my local Best Buy to see if I screwed them up also, or do the fried components have nothing to do with the RAM? ... I am also wondering: if I indeed fried my processor and
mobo, would I be better off replacing the 3500+ with a new one (since they run about $40) and possibly a new Mobo (if its fried)?
Any help would be greatly appreciated and sorry for the newbie questions.
Wow.. Computer 101 is to never, I mean absolutely never, run a modern PC without an active cooling heatsink.. I am afraid that your processor is done, which I am sure you already figured out.
I also noticed the plastic square (between the processor and motherboard) where the processor snaps into place was broken into two pieces.
If the motherboard is physically damaged I wouldn't dump another processor into that motherboard. The chance of a short is too high in my opinion. I would replace it. Also, after a processor has
cooked, most of the time some of the plastic below the processor melts, depending on how long it was run. Again, I would replace it.
As far as the rest of your stuff... It should be alright, assuming that there was not a power surge caused by a short or anything. Frying a processor doesn't do anything to the rest of your system. I
am kinda surprised that your BIOS didn't see that the processor was approaching critical temp and shut down. Most of the time you would have to set it up, but it is well worth it! Instead of the PC
being up for 5-10 seconds, as soon as it reaches a dangerous temp the BIOS would throw the "O' shit flag" and shut the PC down.
I wouldn't spend any money on testing your RAM at BB.. It isn't worth it, and my guess is that they will say it is all bad after they hear your story...
I agree...you killed the processor...Swimmer's right...you should replace the motherboard and processor. As for the RAM, don't test it at Best Buy; just test it when you get a new motherboard and
processor. But your other stuff should be fine: the power supply, the graphics card, and any other PCI or PCIe cards you may have added.
Most likely, if there was damage to the motherboard, then anything that was plugged into it (except the hard drive, power supply, and optical drive) is fried, most likely from a short.
So look to replace your motherboard, processor, and RAM. You may also have to replace your video card.
Just because the fan turns on doesn't mean that the card is working.
If your motherboard is not damaged then everything is fine except the processor.
Right.. But normally a thermal event doesn't fry other components.
If it did damage to the motherboard it could very well have.
If it gets hot enough it will melt what's around the processor. If it smells burnt, it probably is.
From what you are saying it looks like you broke the zif socket in the beginning and after that broke some pins off the cpu. I would say at this point the motherboard and cpu are useless and while it
should not have affected any other hardware it would depend on if anything shorted together.
I plan on it, i've been reading for a while, just never thought I had anything relevant to say. And this is one thing I am good at. PC repair that is.
Lol, you came to the right place then. Welcome aboard.
From what you are saying it looks like you broke the zif socket in the beginning and after that broke some pins off the cpu. I would say at this point the motherboard and cpu are useless and
while it should not have affected any other hardware it would depend on if anything shorted together.
that's what i was thinking. after physical damage to both socket and cpu... dead. throw in a new board and cpu and you should be ok.
Yep fifogigo, welcome to the little corner of the internet called testmy.net!!!
But yaeh the motherboard and CPU are trashed...
yeah, welcome, but this is a growing corner so don't worry if you're crowded...the big guy just got us a new server to play on.
DeadSurvivor welcome to the forum ...ur gonna get the best advice with these guys they really know their stuff...enjoy ur stay and be sure to let us know how it turns out...can't wait to hear the
end of this story...but alas...it's time for beddie bye...nitey nite
and fifogigo welcome to the forum ur help will be very appreciated here...just what the doctor ordered
Thanks for all the helpful replies.
What I'm doing now is determining whether I should try to get my hands on a s939 AGP mobo with 4 DIMMs for my 2G RAM. The system is only three years old; it was still good before I worked my dark
magic on it, heh. So far, I haven't found a new s939 AGP board and doubt I will. The ones I found on eBay ... errr, I'm not paying $60+ for something that may not work; most of them have something wrong with them.
I think I'll have to spring for a new PCI-E card. I'm not sure what mobo I want yet but I've found a few where I can still use my 4x512 RAM modules (I believe they are 184 pin). Right now, I don't
do much online gaming (EQII) any longer but I may get back into it again in the future so I'm going with something that can handle it.
Anyone have any suggestions while I continue my search? I plan on spending a few hundred, nothing major so I'm checking NewEgg and a few other places online and locally.
have you looked at the new Asus board that is coming out? support for all the latest Intel multicore stuff, tons of peripherals, iirc 2x LAN and a small Linux in the BIOS. so instead of booting to your
full OS install you can have it boot to a Linux in a few seconds, which gives you Skype and a browser. and since it resides in the BIOS, the best thing is that if you bork your main OS you can still get online.
Can I get some opinions on both these motherboards? I'm not sure what kind of reputation ECS has with regard to motherboards. Also, I couldn't find out what the difference was between the LITE and
the other motherboard. Is there a difference?
Right now, I'm looking for something that I'll likely pair with a 3500+, a 4000+, or a lower-end AMD X2 processor. I'll probably buy a decent gaming machine in a year or two. For now, I'm looking for
something that'll get me through the next year or two but isn't a total piece of crap. My main reason for going with an AGP board is to keep using my graphics card.
I have never used an ECS motherboard, so I cannot comment on how good they are. The difference between the two motherboards is that the ECS KV2 K8T800 Pro S939 w/1394 board has a FireWire port
on the rear panel.
Differential Equations with Boundary
Differential Equations with Boundary Value Problems - Bokus
2013-06-18 · Recently, much attention has been focused on the study of the existence and multiplicity of solutions or positive solutions for boundary value problems of fractional differential
equations with local boundary value problems by the use of techniques of nonlinear analysis (fixed-point theorems, Leray-Schauder theory, the upper and lower solution method, etc.); see [7–17]. In
this paper, we investigate the existence of solutions for boundary value problems of nonlinear impulsive conformable fractional differential equations with delay. By establishing the associate
Green’s function and a comparison result for the linear impulsive problem, we obtain that the lower and upper solutions converge to the extremal solutions via the monotone iterative technique.
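For orientation only (this is a generic model problem, not the impulsive fractional equation studied in the paper above), the lower and upper solution framework is usually introduced on a two-point boundary value problem:

```latex
% Generic two-point boundary value problem:
-u''(t) = f(t, u(t)), \quad t \in (0, 1), \qquad u(0) = a, \quad u(1) = b.
% A lower solution \alpha and an upper solution \beta satisfy the
% reversed differential inequalities
-\alpha''(t) \le f(t, \alpha(t)), \qquad -\beta''(t) \ge f(t, \beta(t)),
% together with the matching boundary inequalities
\alpha(0) \le a \le \beta(0), \qquad \alpha(1) \le b \le \beta(1).
```

When \alpha \le \beta, the monotone iterative technique generates monotone sequences starting from \alpha and \beta that converge to the minimal and maximal solutions between them, which is the kind of extremal-solution result described above.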
Definition of the derivative · Rules of differentiation · Derivative as a rate of change · First derivative and increasing/decreasing · Second derivative. Author of numerous technical papers in boundary value
problems and random differential equations and their applications. He is the author of several textbooks, including two differential equations texts, and is the coauthor (with M.H. Holmes, J.G.
Paperback. Article number: 9781337559881. By M. Bazarganzadeh, 2012. Keywords: Partial differential equations, Numerical analysis, Free boundary. By a ''free boundary problem'' we mean a boundary value
problem in which we. The course committee minutes are available here in .html and in .pdf format.
Mathematical Methods For Physicists And Engineers
8 Methods of Teaching: Lectures, discussions, solving selected problems. Evaluation and Grading: Midterm Exam 40% Quizzes Differential Equations with Boundary Value Problems Authors: Dennis G. Zill,
Michael R. Cullen Exercise 1.1 In Problems 1–8 state the order of the given ordinary differential equation. Determine whether the equation is linear or nonlinear.
Bessel Equation and Its Solution - YouTube
E-book, 2013. Available for immediate download. Buy the book Elementary Differential Equations with Boundary Value Problems: Pearson New International Edition. Price: SEK 649. E-book, 2013. Available for immediate download. Buy the book
Differential Equations with Boundary Value Problems: Pearson New International Edition by John · Maple and ODE, example file as pdf · Preview the document · W.E. Boyce, R.C. DiPrima, "Elementary
differential equations and boundary value problems" · A method based on finite differences for initial-boundary value problems · Approximate solution of the fuzzy fractional Bagley-Torvik equation by the
RBF method · Differential Equations and Boundary Value Problems: Computing and Modeling (Tech Update): Edwards C. Henry: Amazon.se: Books.
Pearson Education Inc. 5.0 COURSE IMPLEMENTATIONS Lectures will be conducted during 14 weeks of academic studies and conducted in English.
13 states that the time rate of change of linear momentum of a given set of particles is. Election 2016 PDF · Elementary Differential Equations and Boundary Value Problems Web Site PDF · Encyclopedia
Brown and the Case of the Midnight Visitor · formulate boundary conditions for elastostatic and thermal problems. Prerequisites: Calculus II, parts 1 + 2, Linear algebra, Differential equations and
transforms. Nagle, R.K., Saff, E.B., Snider, A.D., Fundamentals of Differential Equations and Boundary Value Problems, International Edition, 6th ed.
Paperback, 2016. Ships within 10-15 weekdays. Buy Differential Equations with Boundary Value Problems, International Metric Edition by Dennis Zill. Download free e-books: Elementary Differential
Equations and Boundary Value Problems.pdf 0470458313 by eBook Reader. Elementary Differential Equations with Boundary-Value Problems, International Metric. Dennis G.
Localized orthogonal decomposition techniques for boundary
Description Combining traditional differential equation material with a modern qualitative and systems approach, this new edition continues to deliver flexibility of use and extensive problem sets.