Numbers
The product of two positive numbers is 11520 and their quotient is 9/5. Find the difference of the two numbers.
Options: 60, 74, 70, 64
Solution 1
To find the difference between the two numbers, we first need to determine what those numbers are.
Let's call the two numbers x and y. We are given two pieces of information about these numbers:
1. The product of the two numbers is 11520: xy = 11520
2. The quotient of the two numbers is 9/5: x/y = 9/5
From the second equation, x = (9/5)y. Substituting into the first equation gives (9/5)y² = 11520, so y² = 6400 and y = 80. Then x = (9/5)(80) = 144, and the difference is x − y = 144 − 80 = 64.
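The arithmetic can be double-checked with a few lines of Python (purely a verification sketch, not part of the original solution):

```python
import math

product = 11520
# quotient x/y = 9/5, so x = (9/5) * y and (9/5) * y**2 = product
y = math.sqrt(product * 5 / 9)
x = product / y

print(x, y, x - y)  # 144.0 80.0 64.0
```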
|
{"url":"https://knowee.ai/questions/96249261-numbersthe-product-of-two-positive-numbers-is-and-their-quotient-is","timestamp":"2024-11-10T05:35:21Z","content_type":"text/html","content_length":"364231","record_id":"<urn:uuid:a8427e45-0b2a-40b8-a0ac-9033586af846>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00704.warc.gz"}
|
D.-M. Nguyen
Duc-Manh NGUYEN (Version française)
IMB Bordeaux,
Université de Bordeaux, Bât. A33,
351, cours de la Libération
33405 Talence Cedex
email: duc-manh.nguyen -at- math.u-bordeaux.fr
I am a Maître de Conférences (Assistant Professor) at the University of Bordeaux, and a member of the Geometry group of the Institute of Mathematics of Bordeaux (IMB Bordeaux).
More information about me can be found in this short CV.
Research Interest
Flat surfaces, Translation surfaces and their moduli spaces, Teichmüller theory, Mapping Class Groups, and related topics.
• Representations of braid groups via cyclic covers of the sphere: Zariski closure and arithmeticity, joint with Gabrielle Menet (arXiv).
• On the volumes of linear subvarieties in moduli spaces of projectivized Abelian differentials (arXiv).
• Volumes of the sets of translation surfaces with small saddle connections in rank one affine submanifolds (arXiv).
• Intersection theory and volumes of the moduli spaces of flat metrics on the sphere (with an appendix joint with Vincent Koziarz), Geometriae Dedicata (special volume in honor of François Labourie) (to appear) (arXiv)
• The incidence variety compactification of strata of d-differentials in genus 0, International Mathematics Research Notices (to appear) (arXiv)
• Volume form on moduli spaces of d-differentials, Geometry & Topology (2022) (arXiv)
• Topological Veech dichotomy and tessellations of the hyperbolic plane, Israel Journal of Math. (2022) (arXiv)
• Variation of Hodge structure and enumerating tilings of surfaces by triangles and squares, joint with Vincent Koziarz, Journal de l'Ecole Polytechnique (2021) (arXiv).
• Weierstrass Prym eigenforms in genus four, joint with Erwan Lanneau, Journal of the Inst. Math. Jussieu (2020) (arXiv).
• Rank two affine manifolds in genus three, joint with David Aulicino, Jour. Diff. Geom. (2020) (arXiv).
• Existence of closed geodesics through a regular point on translation surfaces, joint with Huiping Pan and Weixu Su (arXiv), Math. Annalen (2020).
• Complex hyperbolic volume and intersections of boundary divisors in moduli spaces of pointed genus zero curves, joint with Vincent Koziarz, (arXiv) Ann. Sci. de l'E.N.S. (2018).
• Connected components of Prym eigenform loci in genus three, joint with Erwan Lanneau, Math. Annalen (2018) (arXiv).
• Translation surfaces and the curve graph in genus two (arXiv), Alg. & Geom. Top. (2017).
• Finiteness of Teichmüller curves in non-arithmetic rank one orbit closures, joint with Erwan Lanneau and Alex Wright (arXiv), Amer. J. of Math. (2017).
• Rank two affine submanifolds in H(2,2) and H(3,1), joint with David Aulicino (arXiv), Geometry & Topology (2016).
• GL(2,R)-orbits in Prym eigenform loci, joint with Erwan Lanneau, Geometry & Topology (2016) (arXiv)
• Complete periodicity of Prym eigenforms, joint with Erwan Lanneau, Annales Scientifiques de l'E.N.S. (2016) (arXiv)
• Classification of higher rank orbit closures in H^{odd}(4), joint with David Aulicino and Alex Wright, Journal of the European Mathematical Society (JEMS) (2016) (arXiv)
• Non-Veech surfaces in H^{hyp}(4) are generic, joint with Alex Wright, Geometric and Functional Analysis (GAFA) (2014) (arXiv)
• On the topology of H(2), Groups, Geometry, and Dynamics (2014) (arXiv).
• Teichmueller curves generated by Weierstrass Prym eigenforms in genus three and genus four, joint with Erwan Lanneau, Journal of Topology (2014) (arXiv)
• Energy functions on moduli space of flat surfaces with erasing forest, Mathematische Annalen (2012) (arXiv).
• Parallelogram decompositions and generic surfaces in H^{hyp}(4), Geometry & Topology (2011) (arXiv).
• Triangulations and volume forms on moduli spaces of flat surfaces, Geometric and Functional Analysis (GAFA) (2010) (arXiv).
• Moduli spaces of flat surfaces and their volume form, Ph.D. dissertation defended in December 2008, Université Paris Sud XI (apart from the Introduction chapter, which is in French, the rest of the thesis is written in English).
|
{"url":"https://www.math.u-bordeaux.fr/~ducnguye/index-En.html","timestamp":"2024-11-14T18:26:54Z","content_type":"text/html","content_length":"13476","record_id":"<urn:uuid:ff1057af-7b77-4227-9aca-24a627b55bc2>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00120.warc.gz"}
|
Decode String
Problem statement:
Given an encoded string, return its decoded string.
The encoding rule is: k[encoded_string], where the encoded_string inside the square brackets is being repeated exactly k times. Note that k is guaranteed to be a positive integer.
You may assume that the input string is always valid; there are no extra white spaces, square brackets are well-formed, etc. Furthermore, you may assume that the original data does not contain any
digits and that digits are only for those repeat numbers, k. For example, there will not be input like 3a or 2[4].
The test cases are generated so that the length of the output will never exceed 10^5.
Example 1:
Input: s = "3[a]2[bc]"
Output: "aaabcbc"
Example 2:
Input: s = "3[a2[c]]"
Output: "accaccacc"
Example 3:
Input: s = "2[abc]3[cd]ef"
Output: "abcabccdcdcdef"
* 1 <= s.length <= 30
* s consists of lowercase English letters, digits, and square brackets '[]'.
* s is guaranteed to be a valid input.
* All the integers in s are in the range [1, 300].
Solution explanation
1. Initialize two stacks counts and results, a pointer ptr, and a current string.
2. Loop through the input string s.
3. If the current character is a digit, calculate the number and push it onto the counts stack.
4. If the current character is an opening bracket [, push the current string into the results stack and reset the current string.
5. If the current character is a closing bracket ], pop the top string from the results stack and the top count from the counts stack, then set the current string to the popped string followed by the current string repeated count times.
6. If the current character is a letter, add it to the current string.
7. After looping through s, return the current string, which is now the decoded version of the input string.
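The procedure above can be sketched in Python (a hedged reconstruction of the stack-based approach described in the explanation; the function name decode_string is my own choice):

```python
def decode_string(s: str) -> str:
    counts = []    # stack of repeat counts
    results = []   # stack of partial strings built before each '['
    current = ""
    ptr = 0
    while ptr < len(s):
        ch = s[ptr]
        if ch.isdigit():
            # accumulate a possibly multi-digit count
            num = 0
            while s[ptr].isdigit():
                num = num * 10 + int(s[ptr])
                ptr += 1
            counts.append(num)
            continue
        elif ch == "[":
            results.append(current)
            current = ""
        elif ch == "]":
            prev = results.pop()
            current = prev + current * counts.pop()
        else:
            current += ch
        ptr += 1
    return current

print(decode_string("3[a2[c]]"))  # accaccacc
```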
|
{"url":"https://www.freecodecompiler.com/tutorials/dsa/decode-string","timestamp":"2024-11-07T13:50:49Z","content_type":"text/html","content_length":"39826","record_id":"<urn:uuid:d04cb52c-5883-48b1-a561-7d6a6007d732>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00480.warc.gz"}
|
Confidence Interval
A confidence interval is an interval for which we have a certain degree of confidence that it has captured the true mean. From this point of view, the name confidence interval is rather misleading.
We will use upper cases for the abstract variable and lower cases for the actual numbers.
Why is Confidence Interval Needed?
Suppose I sample the population multiple times and calculate the mean value $\mu_i$ of each sample. It is a good question to ask how different these $\mu_i$ are from the true mean $\mu_p$ of the population.
In this article, we would need to specify several notations.
1. $X$ is the quantity we are measuring.
2. $\bar X$ is the mean of the quantity $X$.
Confidence Interval
This result states that the probability for the true mean $\mu_p$ to fall into a specific range can be calculated using
$$ P( \mu_p \in [\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x} ] ) = 1-\alpha. $$
Here $z_{\alpha/2}$ is the $z$ value for which the probability of falling within $[-z_{\alpha/2}, z_{\alpha/2}]$ is $1-\alpha$, as shown in the following figure. $\sigma_{\bar x}$ is the standard error of the mean for the sample.
1. It is NOT the standard deviation of the sample data.
2. It is NOT the population standard deviation.
3. It is NOT the standard error calculated using only the sample data.
4. It depends on the sample size. The larger the sample size, the smaller it will be.
The derivation of this formula is easy. It can be found in many textbooks such as the one we are using as a reference (chapter 14).
Does it make sense?
Let's go to the extreme case, again! Suppose we have an infinite sample size; then we are actually calculating the true mean $\mu_p$, so $\bar X=\mu_p$. That being said, in this extreme case,
$$ [\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x} ] $$
should become
$$ [\bar X, \bar X]. $$
We must have $z_{\alpha/2} \sigma_{\bar x} \to 0$ as sample size $n\to \infty$. $z_{\alpha/2}$ can not be always 0 so $\sigma_{\bar x} \to 0$ as $n\to \infty$.
In fact, $\sigma_{\bar x} = \sigma_p / \sqrt{n}$ if our population is infinite. This is part of the central limit theorem.
(Figure: the definition of $\alpha$ for a normal distribution. In a probability distribution, the area under the curve, i.e. the integral from $-\infty$ to $\infty$, should be 1. $\alpha$ is the sum of the two red tail areas. In this example, we actually have $\alpha=0.05$.)
The confidence level is a weird measurement of our statistical confidence.
Imagine we draw 100 samples from the population and calculate the range $[\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x}]$ for each. We would have 100 different ranges. How many of them actually contain the true mean? The answer is around a $(1-\alpha)$ fraction of all 100 calculations. When we have a huge number of samples, this fraction becomes quite faithful to $1-\alpha$.
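This repeated-sampling interpretation can be checked with a small simulation (a sketch; the normal population, sample size, and seed are arbitrary choices of mine):

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
true_mean, sigma, n, alpha = 10.0, 2.0, 50, 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)    # critical value, ~1.96 for alpha = 0.05

trials = 1000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    x_bar = mean(sample)
    se = stdev(sample) / math.sqrt(n)      # estimated standard error of the mean
    if x_bar - z * se <= true_mean <= x_bar + z * se:
        hits += 1

coverage = hits / trials
print(coverage)  # lands near 1 - alpha = 0.95
```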
Why is this a representation of our confidence? Imagine we choose one sample from these 100 calculations. The probability that this specific interval $[\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x}]$ (with all the numbers substituted using the sample values, for example $[-1,2]$, which is different for every other sample calculation) contains the true mean is also $1-\alpha$.
In other words, $[\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x}]$ is the interval that we are $1-\alpha$ confident has captured the true mean. From this point of view, the name confidence interval is rather misleading.
Be Professional and Call Them by the Names
1. Confidence limits or fiducial limits are the endpoints of the range $[\bar X - z_{\alpha/2} \sigma_{\bar x}, \bar X + z_{\alpha/2} \sigma_{\bar x}]$.
2. $1-\alpha$ is the confidence coefficient.
3. $z_{\alpha/2}$ is the critical value.
What If the Confidence Limits Span a Huge Range?
As we are talking about the confidence limit, we would imagine it is different for different distributions even for the same $\alpha$.
If the confidence limits are large, it indicates that we won’t be able to pin down the true mean very well. In this case, our estimation is less precise.
Thus the width of the confidence limits
$$ \bar X + z_{\alpha/2} \sigma_{\bar x} - \left(\bar X - z_{\alpha/2} \sigma_{\bar x} \right) = 2 z_{\alpha/2} \sigma_{\bar x} $$
is a measure of the precision of our estimation; it is defined as twice the margin of error, i.e.,
$$ 2 E = 2 z_{\alpha/2} \sigma_{\bar x}. $$
The larger the margin of error $E$, the harder it is to pin down the true mean. (Figure: in the two panels, $E$ is larger for the lower panel, whose sample size is approximately 1/4 of the upper panel's. This is expected, since smaller samples lead to a larger $\sigma_{\bar x}$ and thus a wider distribution. In this example, $\alpha=0.05$.)
What If We Don’t Know the Population Standard Deviation?
Wait a minute. To calculate $\sigma_{\bar x}$ we need the population standard deviation $\sigma_p$,
$$ \sigma_{\bar x} = \sigma_p/\sqrt{n}. $$
As we mentioned, this is NOT what you calculate using the sample data only.
Suppose we have a sample with a sample size $n$. We could calculate a standard deviation $S$ and even a standard error of the mean $S_{\bar x}=S/\sqrt{n}$ using this sample data. However, $S_{\bar x}$ is only an estimate of the standard error of the mean $\sigma_{\bar x}$, since we do not know the actual distribution of the population.
For this problem, we have a macroscopic view and microscopic view.
Macroscopic View
As the sample size approaches infinity, we expect to recover the population distribution. If we have a large sample size, it is good enough to use the estimated standard error $S_{\bar x}$ to represent the actual standard error $\sigma_{\bar x}$.
In this case, we
1. calculate the standard deviation of the sample, $S$,
2. estimate the standard error of the mean $S_{\bar x}$ using $S_{\bar x}=S/\sqrt{n}$ where $n$ is the sample size,
3. use $S_{\bar x}$ in place of (the unknown) $\sigma_{\bar x}$, together with the critical value $z_{\alpha/2}$, to calculate the confidence limits.
Basically, we assume that
$$ \frac{\bar X - \mu}{\sigma_{\bar x}} \sim \frac{\bar X - \mu}{ S_{\bar x} } $$
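As a concrete sketch of this large-sample procedure (the sample data and function name are made up for illustration; Python's standard library `statistics.NormalDist` supplies the critical value):

```python
import math
from statistics import NormalDist, mean, stdev

def z_confidence_interval(data, alpha=0.05):
    """Large-sample confidence interval for the mean, using S_xbar for sigma_xbar."""
    n = len(data)
    x_bar = mean(data)
    s_xbar = stdev(data) / math.sqrt(n)        # estimated standard error of the mean
    z = NormalDist().inv_cdf(1 - alpha / 2)    # critical value z_{alpha/2}
    return (x_bar - z * s_xbar, x_bar + z * s_xbar)

# made-up measurements, purely for illustration
sample = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.0, 9.9]
low, high = z_confidence_interval(sample)
print(low, high)
```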
Microscopic View
When we have a small sample size, we do not have the macroscopic view to neglect the “glitches”.
We have been assuming the approximation
$$ \frac{\bar X - \mu}{\sigma_{\bar x}} \sim \frac{\bar X - \mu}{ S_{\bar x} } . $$
If this is not valid, then how good is the approximation? To answer this question, we need to know the distribution of $\frac{\bar X - \mu}{S_{\bar x}}$. It is called the t distribution. Since this distribution is known, we simply replace the assumed normal distribution of the sample mean with this t distribution. We will still have our confidence limits and confidence levels, now computed using the t distribution.
What Sample Size is Required for Macroscopic View?
If the sample size is larger than 30!
Planted: by L Ma;
L Ma (2019). 'Confidence Interval', Datumorphism, 01 April. Available at: https://datumorphism.leima.is/wiki/statistical-estimation/confidence-interval/.
|
{"url":"https://datumorphism.leima.is/wiki/statistical-estimation/confidence-interval/?ref=footer","timestamp":"2024-11-02T15:47:38Z","content_type":"text/html","content_length":"120362","record_id":"<urn:uuid:87d81b9f-8dc1-4a2b-9f86-63ce89bdbb30>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00724.warc.gz"}
|
Mumford curves covering p-adic Shimura curves and their fundamental domains
We give an explicit description of fundamental domains associated with the p-adic uniformisation of families of Shimura curves of discriminant Dp and level N ≥ 1, for which the one-sided ideal class number h(D, N) is 1. The results obtained generalise those in "Schottky groups and Mumford curves" (Springer, Berlin, 1980) for Shimura curves of discriminant 2p and level N = 1. The method we present here enables us to find Mumford curves covering Shimura curves, together with a free system of generators for the associated Schottky groups, p-adic good fundamental domains, and their stable reduction-graphs. The method is based on a detailed study of the modular arithmetic of an Eichler order of level N inside the definite quaternion algebra of discriminant D, for which we generalise the classical results of Hurwitz. As an application, we prove general formulas for the reduction-graphs with lengths at p of the families of Shimura curves considered.
• Shimura curves
• Mumford curves
• p-adic fundamental domains
• UNIFORMIZATION
• CONSTRUCTION
• POINTS
|
{"url":"https://research.aalto.fi/en/publications/mumford-curves-covering-p-adic-shimura-curves-and-their-fundament","timestamp":"2024-11-03T09:11:03Z","content_type":"text/html","content_length":"58551","record_id":"<urn:uuid:c8ee8bb4-cf41-4b34-bc01-9fd937a636da>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00259.warc.gz"}
|
MTBF is Not a Duration - No MTBF
MTBF is Not a Duration
Despite standing for the mean time between failures, and despite having units of hours (or months, cycles, etc.), MTBF is not a duration-related metric.
This little misunderstanding seems to cause major problems.
MTBF Calculation From Data
If I have 10 pieces of equipment that have each run for a year (8,760 hours), and during that year we experienced 5 failures, all quickly repaired, what is the MTBF of that equipment?
10 units running for 8,760 hours gives a total operating time of 87,600 hours, and the 5 failures are the only other bit of information needed for the calculation: 87,600 divided by 5 is 17,520 hours MTBF.
MTBF, Duration, and Confusion
Of the ten pieces of equipment that each operated for a year, with 5 failures in total, how is a mean time between failures of 17,520 hours consistent with the (mistaken) idea that each piece of equipment should only fail once every 17,520 hours?
Well, it is consistent if you consider that we expect 1 failure every 17,520 hours of total operating time, and 17,520 hours divided by 8,760 hours is 2 years. Therefore we expect each piece of equipment to average 0.5 failures per year. 10 times 0.5 is 5, which is what we experienced.
The confusion occurs when some expect all 10 units to run for 2 years with only one failure. Or that each unit should operate 17,520 hours and then fail (it is less common to consider MTBF a failure-free period, yet it occurs).
MTBF is an Inverse Failure Rate
Keep in mind that MTBF is an inverse failure rate. Unit-wise, its reciprocal is the chance of failure per hour.
In the example above we have a 1 in 17,520 chance of failure every hour, ignoring early-life and wear-out patterns (which one should never do, by the way). The more hours the equipment runs, the more times we face that 1 in 17,520 chance of failure. Run for two years and you are pretty much certain to have at least one failure.
MTBF does provide a chance of failure per unit of time (in many cases an hour); it doesn't mean the failure rate is accurate or fixed over any period of time we want to use.
In the example above we have data for one year of operation of the 10 units. We do not have information over two years (17,520 hours), nor over 10 years. The MTBF value we calculated only represents a failure rate that is valid for that one year. As the equipment breaks in or wears out, the value will most likely become less and less accurate.
MTBF is not all that useful, first because we rarely encounter a constant failure rate pattern with equipment. Second, MTBF is just a fancy way of representing a failure rate. It does provide information on the chance of failure per hour per piece of equipment. It does not suggest the equipment will have a two-year life with no failures, or that the equipment will run for two years with only one failure.
MTBF is not all that helpful for many reasons; one is that we often work with people who do not understand what MTBF is or is not. MTBF is not a duration, it is an inverse failure rate, that is all.
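Under the constant failure rate assumption, the numbers in this example can be reproduced with a short script. This is an illustration of the exponential model only, not a claim that the model fits real equipment:

```python
import math

total_hours = 10 * 8760        # 10 units, each running one year
failures = 5
mtbf = total_hours / failures  # 17,520 hours

# Under a constant failure rate, time to failure is exponential:
# P(failure within t hours) = 1 - exp(-t / MTBF)
p_one_year = 1 - math.exp(-8760 / mtbf)
p_by_mtbf = 1 - math.exp(-1)   # probability of failing by t = MTBF

print(mtbf)                    # 17520.0
print(round(p_one_year, 3))    # 0.393, not 50%
print(round(p_by_mtbf, 3))     # 0.632
```

Note how this reproduces the point made in the comments below: an expected 0.5 failures per unit-year corresponds to a 39% probability of failure in a year, not 50%.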
5 thoughts on “MTBF is Not a Duration”
1. The CRE BOK has many examples of probability of failure without duration.
This is why it annoys me when I see the definition that reliability is the probability of success over a period of time, which of course it is not.
It is the probability of success for a given scenario, which may involve time, but might not.
2. Much confusion in the calculation…For a constant failure rate, the probability of failure after 1 year is
1-R(1year) or 1-exp(-0.5)= 0.39
so 39% and not 50%. And it decreases the following year…
1. Hi Marie,
Remember the exponential distribution is not a normal distribution. We learned in school with a normal distribution that the average or mean sits at the 50th percentile. Other distributions, especially skewed distributions, do not have their mean at the 50th percentile. It will vary.
This is a common confusion with the constant failure rate assumption coupled with our stats knowledge based on the normal distribution.
The math you did is right and if the population has a 50% failure rate over a year, then the probability of failure is as you calculated. Failure rate (or MTBF) is not the same as the
probability of failure.
1. Hi all.
To reiterate, there are a few good examples of non-duration reliability in O'Connor. He uses cable strength, expressed as a mean and standard deviation, versus known loads. The stress/strain interference gives the probability of success.
3. If we even consider basic statistics, is the mean, or 'average', the most robust measurement of 'expectation' (or the 'middle tendency') of a distribution of continuous data? The mean is very sensitive to outliers and is not the most robust measurement for non-symmetrical or skewed probability distributions. The median is more often appropriate. Notwithstanding that, if we are only provided with an MTBF figure and no other data, we have to assume the underlying distribution is exponential, and then the mean equates to a 63.2% probability of failure, not 50% as many people assume. For a symmetrical normal distribution both the median and mean sit at 50%. Why do we persist in using the mean, when the median is a better choice? Even the median by itself is not enough; we need to know how the data is distributed to understand reliability.
|
{"url":"https://nomtbf.com/2018/04/mtbf-is-not-a-duration/","timestamp":"2024-11-08T23:41:44Z","content_type":"text/html","content_length":"77727","record_id":"<urn:uuid:c35ed285-5905-45fa-b57b-036d82b708e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00291.warc.gz"}
|
Rotation and Circular Positioning
So far, in most of my blog posts we've been dealing with arrangement of elements on a grid, however, creating circular motion should be another weapon in your creative coding arsenal. This light blog
post explains how to achieve it in P5JS in a number of different ways. Starting with the simplest form: rotating a point around a circle.
To create a rotating point, we usually need 3 parameters: the center of the circle around which we are rotating, the radius, and the angle. To make this point actually move around the circle, the parameter for the angle needs to be a linearly increasing value, which, as always, is time in the form of the millis() function. The snippet of code that follows the comment defines the rotating point:
function setup() {
  createCanvas(400, 400);
  centerX = width / 2;
  centerY = height / 2;
  radius = 100;
}
function draw() {
  point(centerX, centerY);
  // for what follows simply replace this with the provided snippet
  point(
    centerX + radius * cos(millis() / 1000),
    centerY + radius * sin(millis() / 1000)
  );
  // end of what you should replace
}
The coordinates of the rotating point are defined in terms of sin() and cos(), plugging time into the two functions representing the angle(divided by some arbitrary value to slow down the animation),
the visualization that we obtain looks like this:
Now, If you're not very familiar with sine and cosine I recommend reading up on them in one of my previous blog posts that explains them in a little more detail here. In simple terms, all you need to
know is that given a center point and a radius, you can draw a point on the circle (defined by this center and radius) using the sine and cosine functions for a specific angle.
// ----
point(centerX + radius * cos(0), centerY + radius * sin(0))
point(centerX + radius * cos(HALF_PI), centerY + radius * sin(HALF_PI))
point(centerX + radius * cos(PI), centerY + radius * sin(PI))
point(centerX + radius * cos(PI + QUARTER_PI), centerY + radius * sin(PI + QUARTER_PI))
Intuition and Math behind rotational movement
You might ask now for a more detailed definition of sin() and cosine(), and how they allow you to find points on a circle. Essentially it has something to do with triangles, angles and ratios. First
we should look at the definition of sine as a trigonometric function:
Sine: The trigonometric function that is equal to the ratio of the side opposite a given angle (in a right-angled triangle) to the hypotenuse.
In this case we already have the angle (we simply choose one, depending on where we want to position the point on the circle) and the hypotenuse (the radius), which allow us to find the opposite side. Since we're dealing with ratios here, we still need to multiply by the radius to find the exact position, and add an offset, which is the center of the circle. This gives us one of the coordinates for our point; we repeat the same thing for the cosine. Putting them together allows us to rotate a point around a circle.
To explain it in another way, we're calculating the vertical offset from the horizontal line that cuts the circle in two parts, as well as the horizontal offset from the vertical line that cuts the
circle in two equal parts, and draw a point at these coordinates. This animation from wikipedia was too compelling not to include, as it perfectly exemplifies what is going on:
Great, now you might be either satisfied that you can position something on a circle, OR you might be more confused and still curious as to how we found out that we can do such positioning with sine
and cosine. I'd like to include this excerpt from Robert Cruikshank's Quora Answer:
Given the Pythagorean theorem and the definition of a circle, one can discover (and prove) that the ratio of circumference to diameter is π.
The definition of cosine and sine for arbitrary angles is just that—a definition, an invention. We chose it that way because it is useful. Likewise, I always get school kids arguing with me that
1^0 should be 0, not 1. I explain to them that they CAN define 1^0 to be 0, it’s just that the math they get will be less streamlined, more clunky, the rules for exponents will have exceptions,
If all of this is still hazy, then I recommend playing around with sin() and cos() to internalize their behaviour, soon enough you'll have an intuitive understanding, and when you do, I recommend
approaching the math behind it again and I promise that it will feel much easier.
This is kind of the basis for a lot of other things that you can do. For example, what would you have to do to have a 3rd point rotate around the already rotating point? The snippet I provided
already has the answer to it!
point(
  centerX + radius * sin(millis() / 1000),
  centerY + radius * cos(millis() / 1000)
);
point(
  centerX + radius * sin(millis() / 1000) + radius / 2 * sin(millis() / 500),
  centerY + radius * cos(millis() / 1000) + radius / 2 * cos(millis() / 500)
);
You need to be mindful about the speed of the new point; here I'm dividing the new point's time by 500. If both were in sync we wouldn't observe an additional orbiting motion. Very simple, yet very effective.
Positioning items around a circle
Let's go back a little and see how to position an arbitrary number of points around a circle in an equidistant manner. The circumference of a full circle can be expressed in either 2xPI RADIANS or
360 Degrees. Positioning an arbitrary number of points around a circle requires splitting 2xPI evenly among them.
numPoints = 5;
for (i = 0; i < numPoints; i++) {
  point(
    centerX + radius * sin(2 * PI / numPoints * i + millis() / 1000),
    centerY + radius * cos(2 * PI / numPoints * i + millis() / 1000)
  );
}
In this manner we simply divide 2xPI by the number of points and multiply by the number of the current point. If you prefer doing this in degrees, P5JS has a function that allows you to set this.
This concept also applies if you want to position certain elements along the arc of a circle, for example fanning out a number of lines:
// ------
// note: reconstructed loop body; fans 20 lines over a quarter of the circle
for (i = 0; i < 20; i++) {
  line(
    centerX, centerY,
    centerX + radius * cos(QUARTER_PI / 20 * i),
    centerY + radius * sin(QUARTER_PI / 20 * i)
  );
}
As you can see P5JS also conveniently provides pre-defined constant that define different portions of PI (QUARTER_PI, HALF_PI, PI and TWO_PI).
Rotation in the WEBGL mode
Alternatively, you could simply achieve rotation by using the WEBGL mode, which provides a convenient rotate function. This is quite fun and very easy, yet allows you to create incredibly cool sketches. I'm only going to brush briefly over 3D rotation, as it is a topic that requires much more exploration in future blog posts. I'll showcase a simple sketch that looks very impressive:
function setup() {
  createCanvas(500, 500, WEBGL);
}
function draw() {
  background(220);
  // note: reconstructed continuation; a row of spinning boxes
  rotateY(millis() / 2000);
  for (x = 0; x < 5; x++) {
    push();
    translate((x - 2) * 80, 0, 0);
    rotateX(millis() / 500);
    box(40);
    pop();
  }
}
|
{"url":"https://www.gorillasun.de/blog/rotation-and-circular-positioning/","timestamp":"2024-11-01T22:06:21Z","content_type":"text/html","content_length":"33107","record_id":"<urn:uuid:e5446eeb-10fb-41ad-bc75-dce8065f174c>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00199.warc.gz"}
|
FINA 6223 Summer 2020 Problem Set 3
Problem #1 [18 points]
On blackboard you will find the data file named FINA_6223_Summer2020_PS3.xlsx. Tab Q1
contains monthly returns data for 3 US ETFs, the market portfolio, and the risk-free return.
A. [6 points] Compute the Sharpe, Treynor, and Jensen measures for the 3 funds and the market (no Jensen measure is necessary for the market, as it would be zero). Evaluate the funds based on these ratios. If you select one fund for your entire portfolio, which do you prefer and why?
B. [6 points] Compute the M2 and T2 measures. What information do these measures add to
your performance analysis from Part A?
C. [4 points] Compute the tracking error for each fund as the return deviation from the market. Compute the information ratio for each fund as the ratio of the average tracking error to the standard deviation of that tracking error. What do you learn from the information ratio?
D. [2 points] The TUSA charges 1.57% annual expenses to actively re-weight the universe of US stocks. Based on the performance analysis, is there any evidence that those fees are justified?
Problem 2: Computing returns [12 points]
The following table provides the end of year prices and dividend information for GDUB stock.
At the end of each year 2014 to 2018, an investor purchases 100 shares of GDUB. At the end of 2019,
the investor collects the last dividend and sells all shares.
A. [4 points] What is the arithmetic average time-weighted annual rate of return?
B. [4 points] What is the geometric average time-weighted annual rate of return?
C. [4 points] What is the dollar-weighted annual rate of return? A suggestion is to construct a chart
of all cash flows.
Problem 3: Computing more returns [12 points]
A company makes a $50 million investment. At the end of each of the first 3 quarters, they withdraw $5 million. Using the information below, answer the following questions:
A. [4 points] Compute the simple Holding Period Return (HPR)
B. [4 points] Compute the annual time-weighted return
C. [4 points] Compute the annual dollar-weighted return
Problem 2 table:

Year | End of year price | Dividend paid at end of year

Problem 3 table:

Time (years) | Cash flow   | Investment value (inclusive of withdrawals and ...)
0            |  50,000,000 | 50,000,000
0.25         |  -5,000,000 | 47,000,000
0.5          |  -5,000,000 | 41,000,000
0.75         |  -5,000,000 | 35,000,000
1            |           0 | 38,000,000
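The three return measures in Problem 3 can be sketched as follows; the bisection IRR solver is my own, and I assume the tabulated values are measured just after each withdrawal, so the withdrawal is added back before computing each sub-period return.

```python
def holding_period_return(start, end, net_withdrawals):
    """Simple HPR over the whole year: (end value + withdrawals - start) / start."""
    return (end + net_withdrawals - start) / start

def time_weighted_return(values, flows):
    """Chain sub-period returns. values[i] is the portfolio value just AFTER
    the i-th cash flow; flows[i] (negative for a withdrawal) occurs at the
    end of sub-period i, so it is added back before computing that return."""
    growth = 1.0
    for prev, after, f in zip(values, values[1:], flows):
        growth *= (after - f) / prev
    return growth - 1.0

def dollar_weighted_return(cashflows, times, lo=-0.99, hi=10.0, tol=1e-10):
    """Money-weighted (IRR) return, found by bisection on the NPV."""
    def npv(rate):
        return sum(cf / (1.0 + rate) ** t for cf, t in zip(cashflows, times))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

hpr = holding_period_return(50, 38, 15)                            # 6% simple HPR
twr = time_weighted_return([50, 47, 41, 35, 38], [-5, -5, -5, 0])
dwr = dollar_weighted_return([-50, 5, 5, 5, 38], [0, 0.25, 0.5, 0.75, 1.0])
```

The dollar-weighted cash flows are written from the investor's perspective: minus the initial outlay, plus each withdrawal, plus the terminal value.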
Problem 4: Comparing performance [7 points]
Two investment professionals are comparing their return performance. The first professional managed
portfolios with an average return of 10% and the second professional managed portfolios with a 12% rate
of return. The beta of the first portfolio was 0.8 while the beta of the second was 1.1. The risk-free rate
of return was 2% and the expected market return is 8%.
A. [5 points] Which manager was a better selector of individual stocks, and why?
B. [2 points] Plot both of the portfolios on the security market line.
Problem 5: Closed End Funds [10 points]
See the worksheet named “Closed End Fund” for the prices and NAV of a recently IPO-ed closed end
fund, HGLB.
A. [5 points] For each day, compute the premium (discount) of the price to the NAV. Compute the
average daily premium.
B. [3 points] Plot the premium (discount) through time on a graph.
C. [2 points] Explain if this fund exhibits the same premium(discount) pattern around its IPO and
post-IPO period as the example we discussed in class.
Problem 6: Mutual Fund Returns [15 points]
You are considering investing in a mutual fund’s A share class that has a 3% sales charge (front-end
load) and annual expense ratio of 1.0%. Alternatively, you might invest in that fund’s C shares that
have no load and an annual expense ratio of 1.5%. Assume the fund’s assets return 9% annually.
a. [3 points] For a 2-year holding period, which share class will you prefer?
b. [3 points] For a 50-year holding period, which share class will you prefer?
c. [5 points] After how many years will the two mutual fund share classes result in the same
future wealth amounts?
d. [2 points] How does your answer to part c. change if the fund's assets return 14% per year?
e. [2 points] What accounts for the greater expenses of the C-shares, compared to the A-shares?
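The A-versus-C comparison can be sketched as below; as a simplification it assumes the expense ratio is simply subtracted from the gross return each year, which is the usual textbook treatment.

```python
import math

def wealth_a(years, gross=0.09):
    """A shares per $1 invested: 3% front-end load, then 1.0% expenses netted yearly."""
    return (1 - 0.03) * (1 + gross - 0.010) ** years

def wealth_c(years, gross=0.09):
    """C shares per $1 invested: no load, 1.5% expenses netted yearly."""
    return (1 + gross - 0.015) ** years

def breakeven_years(gross=0.09):
    """Solve 0.97 * (1 + gross - 0.010)^n = (1 + gross - 0.015)^n for n."""
    return math.log(1 / 0.97) / math.log((1 + gross - 0.010) / (1 + gross - 0.015))
```

With the 9% assumption the two classes tie between six and seven years; raising the gross return to 14% pushes the breakeven slightly later, because the fixed 0.5% expense gap becomes a smaller fraction of the larger net returns.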
Study Guide - Theorems Used to Analyze Polynomial Functions
Theorems Used to Analyze Polynomial Functions
Learning Objectives
• Use the remainder theorem to evaluate a polynomial at a specific x-value
• Use the rational zeros theorem to find rational zeros of a polynomial
• Use the Factor Theorem to solve a polynomial equation
In the last section, we learned how to divide polynomials. We can now use polynomial division to evaluate polynomials using the Remainder Theorem. If the polynomial is divided by x – k, the remainder
may be found quickly by evaluating the polynomial function at k, that is, f(k). Let's walk through the proof of the theorem. Recall that the Division Algorithm states that, given a polynomial dividend
f(x) and a non-zero polynomial divisor d(x) where the degree of d(x) is less than or equal to the degree of f(x), there exist unique polynomials q(x) and r(x) such that
If the divisor, d(x), is x – k, this takes the form
Since the divisor x – k is linear, the remainder will be a constant, r. And, if we evaluate this for x = k, we have
[latex]\begin{array}{l}f\left(k\right)=\left(k-k\right)q\left(k\right)+r\hfill \\ \text{ }=0\cdot q\left(k\right)+r\hfill \\ \text{ }=r\hfill \end{array}[/latex]
In other words, f(k) is the remainder obtained by dividing f(x) by x – k.
A General Note: The Remainder Theorem
If a polynomial [latex]f\left(x\right)[/latex] is divided by x – k, then the remainder is the value [latex]f\left(k\right)[/latex].
How To: Given a polynomial function [latex]f[/latex], evaluate [latex]f\left(x\right)[/latex] at [latex]x=k[/latex] using the Remainder Theorem.
1. Use synthetic division to divide the polynomial by [latex]x-k[/latex].
2. The remainder is the value [latex]f\left(k\right)[/latex].
Example: Using the Remainder Theorem to Evaluate a Polynomial
Use the Remainder Theorem to evaluate [latex]f\left(x\right)=6{x}^{4}-{x}^{3}-15{x}^{2}+2x - 7[/latex] at [latex]x=2[/latex].
Answer: To find the remainder using the Remainder Theorem, use synthetic division to divide the polynomial by [latex]x - 2[/latex].
[latex]\begin{array}{c|rrrrr} 2 & 6 & -1 & -15 & 2 & -7 \\ & & 12 & 22 & 14 & 32 \\ \hline & 6 & 11 & 7 & 16 & 25 \end{array}[/latex]
The remainder is 25. Therefore, [latex]f\left(2\right)=25[/latex].
Analysis of the Solution
We can check our answer by evaluating [latex]f\left(2\right)[/latex].
[latex]\begin{array}{l}f\left(x\right)=6{x}^{4}-{x}^{3}-15{x}^{2}+2x-7\hfill \\ f\left(2\right)=6{\left(2\right)}^{4}-{\left(2\right)}^{3}-15{\left(2\right)}^{2}+2\left(2\right)-7\hfill \\ \phantom{f\left(2\right)}=25\hfill \end{array}[/latex]
Try It
Use the Remainder Theorem to evaluate [latex]f\left(x\right)=2{x}^{5}-3{x}^{4}-9{x}^{3}+8{x}^{2}+2[/latex] at [latex]x=-3[/latex].
Answer: [latex]f\left(-3\right)=-412[/latex]
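The synthetic-division step used throughout this page can be sketched as a short routine (the function name is mine); the bottom row of the tableau is exactly the list the loop builds.

```python
def synthetic_division(coeffs, k):
    """Divide the polynomial with these coefficients (highest degree first)
    by (x - k). Returns (quotient_coeffs, remainder); by the Remainder
    Theorem, the remainder equals f(k)."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + k * row[-1])   # multiply down by k, then add the next coefficient
    return row[:-1], row[-1]

# the worked example: f(x) = 6x^4 - x^3 - 15x^2 + 2x - 7 divided by x - 2
quotient, remainder = synthetic_division([6, -1, -15, 2, -7], 2)
```

Running the same routine on the Try It polynomial with k = –3 reproduces its remainder as well.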
Use the Rational Zero Theorem to find rational zeros
Another use for the Remainder Theorem is to test whether a rational number is a zero for a given polynomial. But first we need a pool of rational numbers to test. The Rational Zero Theorem helps us
to narrow down the number of possible rational zeros using the ratio of the factors of the constant term and factors of the leading coefficient of the polynomial. Consider a quadratic function with
two zeros, [latex]x=\frac{2}{5}[/latex] and [latex]x=\frac{3}{4}[/latex]. By the Factor Theorem, these zeros have factors associated with them. Let us set each factor equal to 0, and then construct
the original quadratic function absent its stretching factor.
A General Note: The Rational Zero Theorem
The Rational Zero Theorem states that, if the polynomial [latex]f\left(x\right)={a}_{n}{x}^{n}+{a}_{n - 1}{x}^{n - 1}+...+{a}_{1}x+{a}_{0}[/latex] has integer coefficients, then every rational zero
of [latex]f\left(x\right)[/latex] has the form [latex]\frac{p}{q}[/latex] where p is a factor of the constant term [latex]{a}_{0}[/latex] and q is a factor of the leading coefficient [latex]{a}_{n}[/
latex]. When the leading coefficient is 1, the possible rational zeros are the factors of the constant term.
How To: Given a polynomial function [latex]f\left(x\right)[/latex], use the Rational Zero Theorem to find rational zeros.
1. Determine all factors of the constant term and all factors of the leading coefficient.
2. Determine all possible values of [latex]\frac{p}{q}[/latex], where p is a factor of the constant term and q is a factor of the leading coefficient. Be sure to include both positive and negative
3. Determine which possible zeros are actual zeros by evaluating each case of [latex]f\left(\frac{p}{q}\right)[/latex].
Example: Listing All Possible Rational Zeros
List all possible rational zeros of [latex]f\left(x\right)=2{x}^{4}-5{x}^{3}+{x}^{2}-4[/latex].
Answer: The only possible rational zeros of [latex]f\left(x\right)[/latex] are the quotients of the factors of the last term, –4, and the factors of the leading coefficient, 2. The constant term is
–4; the factors of –4 are [latex]p=\pm 1,\pm 2,\pm 4[/latex]. The leading coefficient is 2; the factors of 2 are [latex]q=\pm 1,\pm 2[/latex]. If any of the four real zeros are rational zeros, then
they will be of one of the following factors of –4 divided by one of the factors of 2.
[latex]\begin{array}{l}\frac{p}{q}=\pm \frac{1}{1},\pm \frac{1}{2}\text{ }& \frac{p}{q}=\pm \frac{2}{1},\pm \frac{2}{2}\text{ }& \frac{p}{q}=\pm \frac{4}{1},\pm \frac{4}{2}\end{array}[/latex]
Note that [latex]\frac{2}{2}=1[/latex] and [latex]\frac{4}{2}=2[/latex], which have already been listed. So we can shorten our list.
[latex]\frac{p}{q}=\frac{\text{Factors of the last}}{\text{Factors of the first}}=\pm 1,\pm 2,\pm 4,\pm \frac{1}{2}[/latex]
Example: Using the Rational Zero Theorem to Find Rational Zeros
Use the Rational Zero Theorem to find the rational zeros of [latex]f\left(x\right)=2{x}^{3}+{x}^{2}-4x+1[/latex].
Answer: The Rational Zero Theorem tells us that if [latex]\frac{p}{q}[/latex] is a zero of [latex]f\left(x\right)[/latex], then p is a factor of 1 and q is a factor of 2.
[latex]\begin{array}{l}\frac{p}{q}=\frac{\text{factor of constant term}}{\text{factor of leading coefficient}}\hfill \\ \text{ }=\frac{\text{factor of 1}}{\text{factor of 2}}\hfill \end{array}[/
The factors of 1 are [latex]\pm 1[/latex] and the factors of 2 are [latex]\pm 1[/latex] and [latex]\pm 2[/latex]. The possible values for [latex]\frac{p}{q}[/latex] are [latex]\pm 1[/latex] and
[latex]\pm \frac{1}{2}[/latex]. These are the possible rational zeros for the function. We can determine which of the possible zeros are actual zeros by substituting these values for x
in [latex]f\left(x\right)[/latex].
[latex]\begin{array}{l}\text{ }f\left(-1\right)=2{\left(-1\right)}^{3}+{\left(-1\right)}^{2}-4\left(-1\right)+1=4\hfill \\ \text{ }f\left(1\right)=2{\left(1\right)}^{3}+{\left(1\right)}^{2}-4\left(1\
right)+1=0\hfill \\ \text{ }f\left(-\frac{1}{2}\right)=2{\left(-\frac{1}{2}\right)}^{3}+{\left(-\frac{1}{2}\right)}^{2}-4\left(-\frac{1}{2}\right)+1=3\hfill \\ \text{ }f\left(\frac{1}{2}\right)=2{\
left(\frac{1}{2}\right)}^{3}+{\left(\frac{1}{2}\right)}^{2}-4\left(\frac{1}{2}\right)+1=-\frac{1}{2}\hfill \end{array}[/latex]
Of those, [latex]-1,-\frac{1}{2},\text{ and }\frac{1}{2}[/latex] are not zeros of [latex]f\left(x\right)[/latex]. 1 is the only rational zero of [latex]f\left(x\right)[/latex].
Try It
Use the Rational Zero Theorem to find the rational zeros of [latex]f\left(x\right)={x}^{3}-5{x}^{2}+2x+1[/latex].
Answer: The only possible rational zeros are [latex]\pm 1[/latex], and neither is a zero, so [latex]f\left(x\right)[/latex] has no rational zeros.
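Both steps of the procedure, listing the candidates and testing them, can be sketched with exact rational arithmetic; this brute-force search is illustrative only and assumes integer coefficients with a nonzero constant term.

```python
from fractions import Fraction

def rational_zero_candidates(coeffs):
    """All ±p/q with p a factor of the constant term and q a factor of the
    leading coefficient (coefficients listed highest degree first)."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    candidates = set()
    for p in divisors(coeffs[-1]):
        for q in divisors(coeffs[0]):
            candidates.add(Fraction(p, q))
            candidates.add(Fraction(-p, q))
    return sorted(candidates)

def rational_zeros(coeffs):
    """Test every candidate exactly using Horner's rule with Fractions."""
    def f(x):
        value = Fraction(0)
        for c in coeffs:
            value = value * x + c
        return value
    return [r for r in rational_zero_candidates(coeffs) if f(r) == 0]
```

Using `Fraction` keeps the evaluation exact, so a candidate is rejected only when it genuinely fails to be a zero, with no floating-point rounding involved.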
Use the Factor Theorem to solve a polynomial equation
The Factor Theorem is another theorem that helps us analyze polynomial equations. It tells us how the zeros of a polynomial are related to the factors. Recall that the Division Algorithm tells us
[latex]f\left(x\right)=\left(x-k\right)q\left(x\right)+r[/latex]
If k is a zero, then the remainder r is [latex]f\left(k\right)=0[/latex] and [latex]f\left(x\right)=\left(x-k\right)q\left(x\right)+0[/latex] or [latex]f\left(x\right)=\left(x-k\right)q\left(x\right)
[/latex]. Notice, written in this form, x – k is a factor of [latex]f\left(x\right)[/latex]. We can conclude if k is a zero of [latex]f\left(x\right)[/latex], then [latex]x-k[/latex] is a factor of
[latex]f\left(x\right)[/latex]. Similarly, if [latex]x-k[/latex] is a factor of [latex]f\left(x\right)[/latex], then the remainder of the Division Algorithm [latex]f\left(x\right)=\left(x-k\right)q\
left(x\right)+r[/latex] is 0. This tells us that k is a zero. This pair of implications is the Factor Theorem. As we will soon see, a polynomial of degree n in the complex number system will have n
zeros. We can use the Factor Theorem to completely factor a polynomial into the product of n factors. Once the polynomial has been completely factored, we can easily determine the zeros of the
A General Note: The Factor Theorem
According to the Factor Theorem, k is a zero of [latex]f\left(x\right)[/latex] if and only if [latex]\left(x-k\right)[/latex] is a factor of [latex]f\left(x\right)[/latex].
How To: Given a factor and a third-degree polynomial, use the Factor Theorem to factor the polynomial.
1. Use synthetic division to divide the polynomial by [latex]\left(x-k\right)[/latex].
2. Confirm that the remainder is 0.
3. Write the polynomial as the product of [latex]\left(x-k\right)[/latex] and the quadratic quotient.
4. If possible, factor the quadratic.
5. Write the polynomial as the product of factors.
Example: Using the Factor Theorem to Solve a Polynomial Equation
Show that [latex]\left(x+2\right)[/latex] is a factor of [latex]{x}^{3}-6{x}^{2}-x+30[/latex]. Find the remaining factors. Use the factors to determine the zeros of the polynomial.
Answer: We can use synthetic division to show that [latex]\left(x+2\right)[/latex] is a factor of the polynomial. Dividing by [latex]x+2[/latex] gives a quotient of [latex]{x}^{2}-8x+15[/latex] with a remainder of 0.
We can factor the quadratic factor to write the polynomial as
[latex]\left(x+2\right)\left(x - 3\right)\left(x - 5\right)[/latex]
By the Factor Theorem, the zeros of [latex]{x}^{3}-6{x}^{2}-x+30[/latex] are –2, 3, and 5.
Try It
Use the Factor Theorem to find the zeros of [latex]f\left(x\right)={x}^{3}+4{x}^{2}-4x - 16[/latex] given that [latex]\left(x - 2\right)[/latex] is a factor of the polynomial.
Answer: The zeros are 2, –2, and –4.
Now use Desmos to graph the function [latex]f(x)[/latex]. On the next line, enter [latex]g(x) = \frac{f(x)}{(x-2)}[/latex]. Where does [latex]g(x)[/latex] cross the x-axis? How do those roots compare
to the solution we found using the Factor Theorem?
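The Factor Theorem workflow above (divide out the known factor, confirm the zero remainder, then solve the quadratic quotient) can be sketched as follows; the helper name and the assumption that all three zeros are real are mine.

```python
import math

def cubic_zeros(coeffs, known_zero):
    """Given a cubic (coefficients highest degree first) and one known zero,
    divide out (x - known_zero) synthetically, then finish the quadratic
    quotient with the quadratic formula. Assumes all three zeros are real."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + known_zero * row[-1])
    quotient, remainder = row[:-1], row[-1]
    assert remainder == 0, "known_zero is not actually a zero"
    a, b, c = quotient
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted([known_zero, (-b + disc) / (2 * a), (-b - disc) / (2 * a)])
```

Applied to the worked example with the known zero –2, this recovers the zeros –2, 3, and 5, and it reproduces the Try It answer from the known zero 2.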
Licenses & Attributions
CC licensed content, Original
CC licensed content, Shared previously
• College Algebra. Provided by: OpenStax Authored by: Abramson, Jay et al.. Located at: https://openstax.org/books/college-algebra/pages/1-introduction-to-prerequisites. License: CC BY: Attribution
. License terms: Download for free at http://cnx.org/contents/[email protected].
• Question ID 2639. Authored by: Anderson,Tophe. License: CC BY: Attribution. License terms: IMathAS Community License CC-BY + GPL.
Identification of determinants of exposure: consequences for measurement and control strategies
Exposure to chemical, physical, and biological agents in the workplace is difficult to characterise. A worker’s exposure is never constant over time. Workers within groups with similar tasks and
working environments are rarely uniformly exposed. Hence, assigning workers to “exposed” and “unexposed” groups or to exposure categories becomes a problem. Insight is required into the reasons why
exposure variability exists, how large this variability is, and which factors determine differences in exposure levels among workers. This knowledge is essential in design, conduct, and
interpretation of epidemiological studies and workplace intervention programmes. In recent years statistical techniques have become available that allow simultaneous evaluation of the magnitude of
variance components as well as determinants of this variability. These techniques are powerful instruments in the design of measurement strategies in epidemiological studies and in implementation of
control and prevention strategies to reduce hazardous exposure.^1
Assessment of exposure to hazardous substances at the workplace has shown that exposure is rarely constant. In workplaces tasks, activities, work processes, and locations change over time, resulting
in occupational exposures that vary both within workers over time and between workers in the same job. Figure 1 depicts the variability in exposure to inhalable flour dust among and between workers
in Swedish bakeries,^2 illustrating that the (geometric) mean and (geometric) standard deviation of an exposure parameter in an occupational group present only limited information on the underlying
exposure pattern among workers in this occupational group. The phenomenon of exposure variability needs to be understood for a number of purposes, including planning exposure measurements, assigning
estimates of exposure to subjects in a study, identifying determinants of exposure, evaluating compliance with exposure limits, and establishing efficiency of control measures.
The first approach to evaluate the influence of particular factors on exposure levels is a linear regression model. This model requires a dataset consisting of single measurements on various workers
and additional information for each worker on their work characteristics (see box 1). This information may be derived from questions in a self-administered questionnaire on job descriptions or may be
collected during walk-through surveys. Assuming a log normal exposure distribution, the exposure level of worker i is predicted by the model:
Box 1 Examples of determinants of exposure in selected studies
• Physical characteristics (e.g. vapour pressure, pH level)
• Process characteristics
• Intermittent/continuous process
• Level of automation
• Confinement (enclosed/open)
• Type of process (e.g. welding v thermal cutting)
• Technology development
• General ventilation
• Local exhaust ventilation
Worker characteristics
• Job activities
• Work techniques
• Mobile/stationary
Environmental characteristics
• Indoor/outdoor
• Temperature
lnC[i] = a + b[1]X[1i] + b[2]X[2i] + ... + b[k]X[ki] + ε[i]
where a is the intercept representing the exposure concentration when all independent variables equal zero, b[1] to b[k] are regression coefficients of k independent variables X describing the fixed
effects of these determinants on the exposure level, and ε represents the random deviation with mean 0. Among others, crucial assumptions are that the measurements (lnC) are independent of one
another (this requires a single measurement per worker) and that the variance of lnC is the same for any fixed combination of X[1]...X[k] (equal covariances). A linear regression model presents the
investigator with information on the relative importance of various determinants of exposure. For example, a regression coefficient of 0.50 for the presence of local exhaust ventilation indicates
that the exposure among workers with local exhaust ventilation is a factor 1.65 [exp(0.50)] higher than those workers without ventilation. This insight in exposure determinants will greatly
facilitate decisions about the type of control measures that are deemed most appropriate at specific workplaces. The linear regression model may also be translated into a mathematical expression to
assign exposure levels to all individual workers in a population for which only information on determinants of exposure is available. One of the first applications of this approach was a linear
regression model assessing the impact of different plant processes, dust control measures, and job assignments on historical exposure levels in an asbestos textile plant. Subsequently, all workers in
the cohort were assigned exposure levels based on the regression equation.^8
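A simulation sketch of such a log-linear exposure model follows; every number and determinant is invented for illustration, and plain least squares is valid here only because there is a single measurement per worker.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Hypothetical binary determinants (the names are illustrative only)
ventilation = rng.integers(0, 2, n)   # local exhaust ventilation present
enclosed = rng.integers(0, 2, n)      # enclosed process

# True model on the log scale: lnC = a + b1*ventilation + b2*enclosed + eps
a_true, b_vent, b_encl, sigma = 1.0, -0.50, -0.90, 0.40
lnC = a_true + b_vent * ventilation + b_encl * enclosed + rng.normal(0, sigma, n)

# Ordinary least squares, one measurement per worker
X = np.column_stack([np.ones(n), ventilation, enclosed])
coef, *_ = np.linalg.lstsq(X, lnC, rcond=None)

# exp(b) gives the multiplicative effect on the geometric mean exposure,
# the same interpretation the text uses for the factor exp(0.50) = 1.65
effect_vent = np.exp(coef[1])
```

With 400 simulated workers the fitted coefficients land close to the true values, so exponentiating them recovers the multiplicative effect of each determinant on the geometric mean exposure.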
An important disadvantage of linear regression analysis is that it cannot take into account repeated measurements on workers. When repeated measurements are available (at least two measurements for a
proportion of workers) an analysis of variance (ANOVA) technique can be employed. The basic information in an ANOVA is the estimates of variance and this approach may be used to evaluate and optimise
the grouping of workers into comparable exposure groups. In a simple one-way ANOVA model with repeated measurements on workers in an occupational group the measured exposure of worker i at time j,
assuming a lognormal exposure distribution, is expressed in the formula:
lnC[ij] = μ + α[i] + ε[ij]
where μ is the long-term group mean exposure, α[i] is the random deviation of the mean exposure of person i from the group mean (contributing to the between-worker variance), and ε[ij] represents the
random deviation of the exposure of person i on day j from the mean exposure of person i (contributing to the within-worker variance). This error term also includes the measurement error due to
analytical errors in the measurement technique. The total variance in exposure is the sum of the between-worker variance and the within-worker variance. The ANOVA model assumes that α[i] and ε[ij]
are normally distributed and independent and that two important conditions are met: measurements have equal variances at each of the repeated measurements, and pairs of measurements on the same
subject are equally correlated, regardless of the time lag between individuals. The latter two conditions are known as the “compound symmetry” assumption. Violation of this restrictive assumption of
homogeneous between-worker variance and homogeneous within-worker variance may result in invalid estimates and invalid standard errors for these estimates.
The ANOVA model can easily be expanded to include occupational groups in the analysis, allowing to partition the exposure variability into three variance components: between-group variance (do a
priori defined groups in a study population really differ in mean exposure?), between-worker variance (are subjects a priori assigned to an exposure group really similar?), and within-worker variance
(do repeated samples on an individual show similar exposure levels?). These random effects ANOVA models have been used in many occupational groups to illustrate the presence of substantial
variability in exposure to chemical substances,^3,^4 physical agents,^5 aero-allergens,^6 and physical workload.^7 A comprehensive evaluation of exposure to chemical substances among 165 occupational
groups showed that the individual mean exposures within a group varied considerably. In fact, only in 25% of the groups was the 95% range of individual mean exposures within a factor of 2, almost 30%
were within a 10-fold range, and 10% of the groups showed a range of over 50-fold. In general, the differences in exposure among activities and shift within workers exceeded the differences in
exposure between workers in the same job in the same factory, suggesting that grouping strategies solely based on job titles may result in considerable misclassification and, hence, in attenuation of
the true association between exposure and health effect.^4
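The decomposition into between-worker and within-worker variance can be sketched for a balanced design; this method-of-moments estimator is a simplification of what ANOVA software reports, and the simulated check data are invented.

```python
import numpy as np

def variance_components(lnC):
    """One-way random-effects ANOVA for a balanced design.
    lnC has shape (workers, repeats) of log-transformed exposures.
    Returns (between_worker_var, within_worker_var) by method of moments."""
    k, n = lnC.shape
    worker_means = lnC.mean(axis=1)
    # within-worker: pooled variance of repeats around each worker's own mean
    ms_within = ((lnC - worker_means[:, None]) ** 2).sum() / (k * (n - 1))
    # between-worker: variance of worker means, corrected for within-worker noise
    ms_between = n * worker_means.var(ddof=1)
    s2_between = max((ms_between - ms_within) / n, 0.0)
    return s2_between, ms_within
```

The ratio of the two components is the quantity used later in the text to judge grouping strategies and expected attenuation.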
Random effects ANOVA models and linear regression models have been applied in a rich diversity of occupational settings, showing that estimating and modelling of exposure determinants is a suitable
addition to the classical exposure assessment strategies. An extensive review of studies on determinants of exposure concluded that observational studies can be used to identify sources of exposure
and guide towards appropriate control measures which can be tested in experimental studies and, subsequently, re-evaluated in observational studies at the workplace.^9 However, a word of caution is
necessary since both statistical techniques only produce valid results within the constraints of their mathematical assumptions. In several publications with exposure models based on linear
regression analysis the available data for the analysis was not limited to a single measurement per worker, thus, disregarding the correlation between repeated measurements.^10 In some publications
ANOVA models have been described without a formal evaluation of the required compound symmetry covariance structure, whereas the description of the dataset suggests that important assumptions may
have been ignored. The assumption of equal between-worker variance should certainly be tested when including occupational groups from a wide range of situations, such as working outdoors versus
working indoors or workers in intermittent processes versus operators in continuous processes. The within-worker variance may be influenced by the timeframe of the measurements, since measurements
taken close in time have higher correlations than those taken further apart in time.
In the classical linear regression model the effects of work characteristics on observed exposure levels are determined without taking into account the role of within-worker variability. The random
effects ANOVA model does not present information on the influence of particular factors on the actual measured personal exposure level and has very restrictive assumptions on the variance components.
Hence, there is a need for a statistical technique that can combine both models and is less restrictive on the specific covariance structure of repeated measurements. With the recent introduction of
the linear mixed-effects model in common statistical packages, such as SPSS (mixed model), SAS (Proc Mixed) and S-Plus, a powerful tool has become available to combine the prediction of workers’
exposure by process characteristics, job title, or other exposure determinants (fixed effects), while accounting for the within-worker and between-worker variance (random effects). The basic idea of
a mixed effects model is that the variance in measured exposure is partly explained by fixed determinants of exposure, thereby reducing the remaining random variance between and within workers (see
fig 2). The primary objective is to make inferences about the fixed effects in the model, for example to estimate differences between occupational groups at specific times, differences between
exposure conditions averaged over time, or changes over time in specific exposure conditions.^11
A straightforward linear mixed effects model describes the exposure level of worker i on day j, assuming a lognormal exposure distribution, in the model:
lnC[ij] = μ + b[1]X[1ij] + ... + b[k]X[kij] + α[i] + ε[ij]
where μ is the intercept representing the true underlying exposure concentration (fixed) averaged over all workers, b[1] to b[k] are regression coefficients of the fixed effects of particular
determinants of exposure, α[i] represents the random effect of the i-th worker corresponding to the difference between his mean exposure and the overall mean exposure, and ε[ij] represents the random
effect of the j-th day for the i-th worker. The underlying model assumptions play a crucial role in the estimation of the parameters and, hence, to specify a model for the covariance structure is an
essential first step in the analysis. This involves evaluation of different structures in the selection of the best model.^11 In the most restricted model all workers have the same within-worker
variance (correlations between repeated measurements are equal) and the same between-worker variance (variance between workers is equal across all fixed determinants of exposure); the aforementioned
compound symmetry covariance structure. A less restricted model only assumes that the within-worker variance is constant, whereas in the least restrictive model the variance in repeated measurements
within workers may vary as well as the variance between workers for different fixed effects. In this last model with an unstructured covariance each worker has his own regression model with different
regression coefficients and different true mean exposure. In table 1 the estimated parameters are presented for a theoretical situation. The analysis will present estimates of the variance
components, which may be similar across jobs depending on the assumptions on the variance structure. The fixed determinants of exposure can be interpreted similar to regression coefficients in a
linear regression model. In general, more restrictions in a linear mixed effects model are required when less measurements are available, for example with only 2–3 repeated samples per worker the
assumption of a common within-worker variance is essential for fitting an appropriate model (that is, in the example of table 1 the error term is equal across jobs).
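The central idea of figure 2, fixed effects absorbing part of the random variance, can be illustrated by simulation; plain OLS is used below as a rough stand-in for a fitted mixed model, which is adequate only because the design is balanced, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
k, n = 150, 4                           # workers, repeats per worker

# Hypothetical worker-level fixed determinant (e.g. ventilation present)
x = rng.integers(0, 2, k).astype(float)
alpha = rng.normal(0, 0.5, k)           # random worker effects
lnC = 1.0 - 0.8 * x[:, None] + alpha[:, None] + rng.normal(0, 0.6, (k, n))

def components(y):
    means = y.mean(axis=1)
    ms_w = ((y - means[:, None]) ** 2).sum() / (k * (n - 1))
    s2_b = max((n * means.var(ddof=1) - ms_w) / n, 0.0)
    return s2_b, ms_w

# naive variance components, ignoring the determinant
sb_raw, sw_raw = components(lnC)

# estimate the fixed effect by OLS on all observations, then re-estimate
# the components on the residuals
X = np.column_stack([np.ones(k * n), np.repeat(x, n)])
b = np.linalg.lstsq(X, lnC.ravel(), rcond=None)[0]
resid = (lnC.ravel() - X @ b).reshape(k, n)
sb_adj, sw_adj = components(resid)
# sb_adj < sb_raw: the fixed effect absorbs part of the between-worker
# variance, while a worker-level determinant leaves the within-worker
# variance untouched
```

This mirrors the Burstyn and Rappaport findings quoted in the text: worker-level determinants mainly shrink the between-worker component, not the day-to-day component.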
A good illustration of a mixed effects model with restrictions on both within- and between-worker variance was recently presented by Burstyn and colleagues.^12 A large database with over 1500
exposure measurements among asphalt workers from 37 different sources in eight European countries was constructed. This database enabled the researchers to present three models on the important
determinants of bitumen fume, bitumen vapour, and polycyclic aromatic hydrocarbons (PAH) exposure intensity among paving workers. The geometric mean for bitumen fume among about 1200 workers was 0.28
mg/m^3 with a wide range between 0.02 and 260 mg/m^3. The exposure model identified as important determinants: mastic laying (+0.88 mg/m^3), recycling operations (+0.89 mg/m^3), oil gravel paving
(−1.51 mg/m^3), and years before 1997 (+0.06 mg/m^3). The specific sampling techniques applied also were significantly associated with the exposure level of bitumen fume. It appeared that the fixed
effects explained about 41% of the total variability in exposure to bitumen fume. These fixed effects reduced the within-worker variance by only 8% but the between-worker variance by about 56%. This
exposure model was used for exposure assessment in a historical cohort study of asphalt paving in Western Europe. A validation of the empirical exposure model with exposure information from the USA
showed large systematic differences in predicted bitumen fume exposures between Western European and USA paving practices.^13 This finding argues for caution in the application of an exposure model
to occupational populations not included in the development of the original model, since the relative importance of determinants of exposure may vary across workplaces and populations.
A comprehensive evaluation of the application of mixed effects models with different variance structures was performed by Rappaport and colleagues.^14 Almost 200 measurements on total particulates
were performed at nine workplaces among boilermakers, ironworkers, pipefitters, and welder-fitters in the USA. Most workers were repeatedly sampled with a range of 3–14 measurements per worker, and
six process and task related parameters were collected during the measurements. The comparison between three different models showed that it was reasonable to pool the within-worker variance across
jobs and, hence, increasing the statistical power. The between-worker variance was sufficiently different among the jobs with welder-fitters showing the largest between-worker variance and
boilermakers and ironworkers the lowest. Thus, in the mixed effects model a fixed term was introduced for job title, allowing the between-worker variance to vary across the four jobs. With regard to
the fixed effects the exposure was significantly lower with the use of ventilation or when less than half of the day involved hot processes. The interaction of both fixed effects was also
significant. This analysis of important determinants of exposure and sources of variability suggests that control of particulate exposure among boilermakers and ironworkers (with low between-worker
variance) should focus on broad environmental changes, such as engineering or administrative controls, and among welder-fitters and pipefitters should address individual personal environments and
working techniques.^14
The need to apply a linear mixed effects model rather than the classical linear regression analysis is largely determined by the influence of fixed effects on the between-worker and within-worker
variance. A comparison of both statistical models on two datasets showed that in a study on endotoxins exposure among pig farmers the results from both analyses were very similar, due to the fact
that inclusion of 12 fixed farm characteristics in the exposure model reduced the between-worker variance by 82% and the overall between-worker variance was very low. However, in a dataset on
exposure to inhalable dust among workers in the rubber manufacturing industry, the mixed effects model resulted in fewer exposure determinants with a lower estimated effect on exposure. In this
example the between-worker variance was large and could only be reduced by 35% through inclusion of fixed effects.^10
What are the advantages of these new statistical techniques in the evaluation of exposure patterns at the workplace? The mixed effects models require a substantial number of measurements, their
application needs a thorough statistical evaluation of underlying assumptions on variance structures, and specific software packages are required for the calculations. Then why bother to use these models?
The main advantages of a full exploration of exposure variability are that planning of measurement campaigns can be improved substantially and that interpretation of results is less biased by
neglected sources of variation in exposure. These advantages can be illustrated in three key areas of exposure assessment: dose-response relations in epidemiological studies, testing for compliance
with exposure limits, and design and evaluation of control measures.
The consequences of exposure variability have been explored in detail in the context of their effects on the exposure-response relation in epidemiological studies and subsequent adjustment of the
exposure assessment strategy. Random error in exposure measurements usually biases the risk estimate (for example, odds ratio, relative risk, regression coefficient) towards unity—that is, no
association. The within-worker variability is an important component of the total measurement error and the ratio of the within-worker variance over the between-worker variance is directly linked to
the expected attenuation in the observed risk estimate.^15 Given the reported magnitude of the within-worker component in the exposure variability in several occupational groups, the true risk in an
epidemiological study may be missed by a large margin. The consequences are that in these situations a group based exposure strategy may be preferred over an individual based strategy. Such a group
strategy will result in little or no attenuation in the dose-response relation if workers can be assigned to exposure groups that sufficiently differ in their average exposures.^15 The information on
the relative magnitude of sources of exposure variability can be used to evaluate the effect of different grouping strategies on observed associations between exposure and health outcomes. Equations
using variance components have been developed to predict the effect of different strategies on the risk estimate and standard error.^16 In a large study among carbon black workers it was shown that
differences in grouping schemes had a large effect on the slope and standard error of the regression coefficient of dust exposure on lung function parameters. The similarities in predicted and
observed attenuation of risk prompted the authors to conclude that these equations appear to be a useful tool in establishing the most efficient way of utilising exposure measurements.^17 The same
information may also guide towards an optimum sampling scheme for exposure measurements since the efficiency of increasing the number of repeated measurements per subject or, vice versa, increasing
the number of subjects, is partly determined by the ratio of within-worker over between-worker variance.^15,^18
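The attenuation effect described here can be sketched with a small calculation. In this hedged example (the function name and the variance values are illustrative, not taken from the cited studies), the observed slope under an individual-based strategy shrinks by the classical attenuation factor 1/(1 + λ/k), where λ is the within- to between-worker variance ratio and k the number of repeated measurements per worker:

```python
def attenuation_factor(var_within, var_between, k):
    """Fraction of the true regression slope expected to survive when each
    worker's exposure is the mean of k repeated measurements (classical
    measurement-error model)."""
    lam = var_within / var_between  # within/between variance ratio
    return 1.0 / (1.0 + lam / k)

# Illustrative values: within-worker variance twice the between-worker
# variance, a pattern reported for several occupational groups.
for k in (1, 5, 20):
    print(k, round(attenuation_factor(2.0, 1.0, k), 2))
```

With a single measurement per worker the observed slope retains only a third of its true value; even 20 repeats recover about 90%, which is why grouping strategies can be preferable to chasing precision at the individual level.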
The first test for compliance of workplaces against occupational exposure limits assumed that all exposure measurements were less than the limit. Subsequent testing procedures incorporated
information on exposure variability, either by using a predetermined value for the geometric standard deviation in the observed occupational group or by requesting a few measurements to estimate the
geometric standard deviation of the exposure distribution within a particular group. These strategies implicitly assumed that for each worker in the group the same exposure distribution was present,
hence ignoring the presence of substantial between-worker variance. A new compliance testing procedure for agents with chronic health effects has been proposed that accounts for both within-worker
and between-worker sources of variability.^19 This procedure starts with two shift-long measurements randomly collected from each of 10 randomly chosen workers from an occupational group. In the
first step it is evaluated whether the selected occupational group may be regarded as a monomorphic group. A random effects analysis of variance model is fitted to the data to evaluate whether the
individual mean exposures of workers in that group can be described by a log normal distribution, and thus whether these workers can be regarded as members of the same group. For these monomorphic
groups the probability is assessed that a randomly selected worker’s mean exposure is above the occupational exposure limit. For non-monomorphic groups alternative grouping should be attempted since
the initially identified group of workers most likely constitutes several different groups, or particular workers were assigned to the wrong group. For occupational groups with unacceptable exposure
levels, re-sampling is suggested to increase the power of the compliance test. If it appears that workers in the occupational group are uniformly exposed to unacceptable levels, engineering or
administrative controls are recommended. For non-uniformly exposed workers in a group, interventions at individual level should be considered, such as modifications of tasks and work practices. This
compliance testing strategy combines a compliance protocol showing that exposure at the workplace will not exceed the threshold limit value with the analysis of type of control measures best suited
to reduce exposure levels at the workplace.
In the design and evaluation of control measures, traditional methods of analysis may regard exposure variability as a nuisance since it will diminish the power of the exposure survey. These methods
fail to exploit the fact that the sources of exposure variability in itself contain important information as to the most appropriate control measures. For example, the presence of substantial
between-worker variance may suggest that a few workers have exposures well in excess of their co-workers and, thus, generic controls affecting everyone (such as engineering or administrative
controls) are far less effective than specific measures to modify work practices of the highest exposed individual workers. A linear mixed effect model is the best method to derive all possible
information from exposure variability, as was illustrated in the aforementioned study on determinants of exposure during hot processes in the construction industry.^14 Another example is a survey
among 19 small machine shops where determinants of exposure to water based metalworking fluids (MWF) were examined using a mixed effects multiple regression analysis. Contamination of MWF with tramp
oil, MWF pH, MWF temperature, and type of MWF were all significant predictors for sump fluid endotoxin concentration. The high within-shop correlation of sump fluid endotoxin levels indicated that
contamination of one sump is a sign to change the metalworking fluids in all sumps.^20
The application of these new techniques will require measurement strategies that accommodate estimation of all relevant sources of exposure variability, including repeated measurements on workers
under various working conditions.^21 This argument is not particularly new, but the application of new techniques such as the linear mixed effects model offers great opportunities for a more meaningful
analysis of determinants of exposure. This will greatly enhance the design of exposure assessment strategies in occupational epidemiology and the decision process on appropriate control measures.
Box 2 Sources of exposure variability
Fixed effects
Variable with a constant and repeatable effect on the exposure level across workplaces and workers
Random effects
Differences in exposure among a classification variable
• Between-group variance: random deviation of the mean exposure of group k from the mean exposure of all measurements in the total sample of workers
• Between-worker variance: random deviation of the mean exposure of person i from the mean exposure of group k
• Within-worker variance: random deviation of the exposure of person i on day j from the mean exposure of person i
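As a hedged numerical illustration of these definitions (the simulation and its variance values are mine, not from the text), the between- and within-worker components can be estimated from a balanced table of repeated log-exposure measurements with a one-way random-effects analysis of variance, using method-of-moments estimators:

```python
import numpy as np

def variance_components(table):
    """Method-of-moments estimates of (between, within) worker variance
    from a (workers x repeats) array of log-transformed exposures under a
    balanced one-way random-effects model."""
    table = np.asarray(table, dtype=float)
    n_workers, n_reps = table.shape
    worker_means = table.mean(axis=1)
    # Mean square within workers estimates the within-worker variance
    ms_within = ((table - worker_means[:, None]) ** 2).sum() / (n_workers * (n_reps - 1))
    # Mean square between workers has expectation n_reps*var_between + var_within
    ms_between = n_reps * ((worker_means - table.mean()) ** 2).sum() / (n_workers - 1)
    var_between = max((ms_between - ms_within) / n_reps, 0.0)
    return var_between, ms_within

# Simulate 200 workers with 4 repeats each and known variance components.
rng = np.random.default_rng(0)
true_between, true_within = 0.5, 1.0
means = rng.normal(0.0, true_between ** 0.5, size=200)
table = means[:, None] + rng.normal(0.0, true_within ** 0.5, size=(200, 4))
vb, vw = variance_components(table)
```

The estimates recover the simulated components to within sampling error; in practice a fitted mixed effects model would be used, but the moment estimators show where the two variances come from.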
QUESTIONS (SEE ANSWERS ON P 289)
1. An analysis of variance with repeated measurements can be used to:
1. Evaluate the grouping of workers into comparable exposure groups.
2. Estimate the differences in mean exposure between workers.
3. Calculate the effect of determinants of exposure on grouping strategies.
4. Demonstrate systematic misclassification in exposure.
2. Which violation of underlying assumptions in an analysis of variance with repeated measurements may invalidate the results:
1. A large analytical error in the measurement technique.
2. A substantial correlation between repeated measurement within the same worker.
3. A between-worker variance that exceeds the within-worker variance.
4. A heterogeneous variance both within and between workers.
3. What is the basic idea behind a mixed effect model for the analysis of exposure patterns?
1. To take into account repeated measurements on the same workers.
2. To introduce fixed determinants of exposure in the model that will reduce random variation between workers.
3. The estimation of temporal and individual variability in exposure.
4. To predict the worker’s exposure by process characteristics.
4. In a measurement strategy repeated measurements are conducted within workers in different occupational groups. The mixed effect model shows large between-worker variance and small within-worker
variance within specified jobs. What conclusion may be drawn with regard to the most appropriate control measures?
1. Specific measures are needed to modify work practices of the highest exposed individuals.
2. Engineering controls are needed to reduce the mean exposure of workers within the same job.
3. General ventilation is probably not sufficient to reduce exposure among workers in the same workplace.
4. Personal protective equipment is required in jobs with the highest exposure.
5. A prediction model for exposure is used to assign exposure levels to workers in a retrospective cohort study. What situation will hamper the application of such a model?
1. Fixed determinants of exposure that are interrelated.
2. A large variance between workers in the same occupational group.
3. A large variance within workers in the same occupational group.
4. Differences in work processes over time.
This paper is based partly on a book chapter on analysis and modelling of personal exposure from the book Exposure assessment in occupational and environmental epidemiology, edited by Mark
Nieuwenhuijsen for Oxford University Press.^1
Intensity – The Physics Hypertextbook
intensity vs. amplitude
The amplitude of a sound wave can be quantified in several ways, all of which are a measure of the maximum change in a quantity that occurs when the wave is propagating through some region of a medium.
• Amplitudes associated with changes in kinematic quantities of the particles that make up the medium
□ The displacement amplitude is the maximum change in position.
□ The velocity amplitude is the maximum change in velocity.
□ The acceleration amplitude is the maximum change in acceleration.
• Amplitudes associated with changes in bulk properties of arbitrarily small regions of the medium
□ The pressure amplitude is the maximum change in pressure (the maximum gauge pressure).
□ The density amplitude is the maximum change in density.
Measuring displacement might as well be impossible. For typical sound waves, the maximum displacement of the molecules in the air is only a hundred or a thousand times larger than the molecules
themselves — and what technologies are there for tracking individual molecules anyway? The velocity and acceleration changes caused by a sound wave are equally hard to measure in the particles that
make up the medium.
Density fluctuations are minuscule and short lived. The period of a sound wave is typically measured in milliseconds. There are some optical techniques that make it possible to image the intense
compressions and rarefactions associated with shock waves in air, but these are not the kinds of sounds we deal with in our everyday lives.
Pressure fluctuations caused by sound waves are much easier to measure. Animals (including humans) have been doing it for several hundred million years with devices called ears. Humans have also been
doing it electromechanically for about a hundred years with devices called microphones. All types of amplitudes are equally valid for describing sound waves mathematically, but pressure amplitudes
are the one we humans have the closest connection to.
In any case, the results of such measurements are rarely ever reported. Instead, amplitude measurements are almost always used as the raw data in some computation. When done by an electronic circuit
(like the circuits in a telephone that connect to a microphone) the resulting value is called intensity. When done by a neuronal circuit (like the circuits in your brain that connect to your ears)
the resulting sensation is called loudness.
The intensity of a sound wave is a combination of its rate and density of energy transfer. It is an objective quantity associated with a wave. Loudness is a perceptual response to the physical
property of intensity. It is a subjective quality associated with a wave and is a bit more complex. As a general rule the larger the amplitude, the greater the intensity, the louder the sound. Sound
waves with large amplitudes are said to be "loud". Sound waves with small amplitudes are said to be "quiet" or "soft". The word "low" is sometimes also used to mean quiet, but this should be avoided.
Use "low" to describe sounds that are low in frequency. Loudness will be dealt with at the end of this section, after the term level and its unit the decibel have been defined.
By definition, the intensity (I) of any wave is the time-averaged power (⟨P⟩) it transfers per area (A) through some region of space. The traditional way to indicate the time-averaged value of a
varying quantity is to enclose it in angle brackets (⟨⟩). These look similar to the greater than and less than symbols but they are taller and less pointy. That gives us an equation that looks like this…

I = ⟨P⟩/A

The SI unit of power is the watt, the SI unit of area is the square meter, so the SI unit of intensity is the watt per square meter — a unit that has no special name.
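As a quick worked example (the source power and distances are my own illustrative numbers, not from the text), the definition can be applied to a small source radiating uniformly in all directions: its power spreads over a sphere, so intensity falls off as the inverse square of distance.

```python
import math

def intensity(avg_power, area):
    """Time-averaged power per unit area, in W/m^2."""
    return avg_power / area

def sphere_area(r):
    """Surface area of a sphere of radius r, in m^2."""
    return 4 * math.pi * r ** 2

# 1 W of sound power, measured 1 m and 2 m from the source
I1 = intensity(1.0, sphere_area(1.0))  # about 0.080 W/m^2
I2 = intensity(1.0, sphere_area(2.0))  # one quarter of I1
```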
intensity and displacement
For simple mechanical waves like sound, intensity is related to the density of the medium and the speed, frequency, and amplitude of the wave. This can be shown with a long, horrible, calculation. If
you don't care to see the sausage being made below, jump to the equation just before the vibrant table.
Start with the definition of intensity. Replace power with energy (both kinetic and elastic) over time (one period, for convenience sake).
Since kinetic and elastic energies are always positive we can split the time-averaged portion up into two parts.
Mechanical waves in a continuous medium can be thought of as an infinite collection of infinitesimal coupled harmonic oscillators. Little masses connected to other little masses with little springs
as far as the eye can see. On average, half the energy in a simple harmonic oscillator is kinetic and half is elastic. The time-averaged total energy is then either twice the average kinetic energy
or twice the average potential energy.
Let's work on the kinetic energy and see where it takes us. It has two important parts — mass and velocity.
K = ½mv^2
The particles in a longitudinal wave are displaced from their equilibrium positions by a function that oscillates in time and space. Use the one-dimensional traveling wave for this.
∆s(x,t) = ∆s sin[2π(x/λ − ft)]
∆s(x,t) = instantaneous displacement at any position (x) and time (t)
∆s = displacement amplitude
ƒ = frequency
λ = wavelength
π = everyone's favorite mathematical constant
Take the time derivative to get the velocity of the particles in the medium (not the velocity of the wave through the medium).
∆v(x,t) = 2πf∆s cos[2π(x/λ − ft)]
Then square it.
∆v^2(x,t) = 4π^2f^2∆s^2 cos^2[2π(x/λ − ft)]
On to the mass. Density times volume is mass. The volume of material we're concerned with is a box whose area (A) is the surface through which the wave is traveling and whose length is one wavelength
(λ) — the distance the wave travels in one period.
m = ρV = ρAλ
In the volume spanned by a single wavelength, all the bits of matter are moving with different speeds. Calculus is needed to combine a multitude of varying values into one integrated value. We're
dealing with a periodic system here, one that repeats itself over and over again. We can choose to start our calculation at any time we wish as long as we finish one cycle later. For convenience sake
let's choose time to be zero — the beginning of a sinusoidal wave.
⟨K⟩ = ∫₀^λ ½(ρA dx) ∆v^2(x,0)

⟨K⟩ = ∫₀^λ ½(ρA)(4π^2f^2∆s^2) cos^2(2πx/λ) dx
Clean up the constants.
½(ρA)(4π^2f^2∆s^2) = 2π^2ρAf^2∆s^2
Then work on the integral. It may look hard, but it isn't. Just visualize the cosine squared curve traced out over one cycle. See how it divides the rectangle bounding it into equal halves?
The height of this rectangle is one (as in the number 1 with no units) and its width is one wavelength. That gives an area of one wavelength and a half-area of half a wavelength.
∫₀^λ cos^2(2πx/λ) dx = ½λ
Put the constants together with the integral and divide by one period to get the time-averaged kinetic energy. (Remember that wavelength divided by period is wave speed.)
⟨K⟩/T = (2π^2ρAf^2∆s^2)(½λ)(1/T) = π^2ρAf^2v∆s^2
That concludes the hard part. Double the equation above and divide by area…
One last bit of algebra and we're done.
I = 2π^2ρf^2v∆s^2
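Plugging typical numbers into this result shows just how small the displacements are. In this hedged sketch (the air density and sound-speed values are my assumptions, roughly sea-level room-temperature air), the formula is inverted to get the displacement amplitude of a conversation-level sound:

```python
import math

def displacement_amplitude(I, rho=1.21, f=1000.0, v=343.0):
    """Invert I = 2 pi^2 rho f^2 v ds^2 for the displacement amplitude (m)."""
    return math.sqrt(I / (2 * math.pi ** 2 * rho * f ** 2 * v))

# A 60 dB sound (about 1 microwatt per square meter) at 1,000 Hz in air
ds = displacement_amplitude(1e-6)  # on the order of ten nanometers
```

A displacement of roughly ten nanometers, which is why measuring it directly "might as well be impossible".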
We now have an equation that relates intensity (I) to displacement amplitude (∆s).
Does this formula make sense? Let's check to see how each of the factors affect intensity.
Factors affecting the intensity of sound waves
factor     comments
I ∝ ρ      The denser the medium, the more intense the wave. That makes sense. A dense medium packs more mass into any volume than a rarefied medium and kinetic energy goes with mass.
I ∝ f^2    The more frequently a wave vibrates the medium, the more intense the wave is. I can see that one with my mind's eye. A lackluster wave that just doesn't get the medium moving isn't going to carry as much energy as one that shakes the medium like crazy.
I ∝ v      The faster the wave travels, the more quickly it transmits energy. This is where you have to remember that intensity doesn't so much measure the amount of energy transferred as it measures the rate at which this energy is transferred.
I ∝ ∆s^2   The greater the displacement amplitude, the more intense the wave. Just think of ocean waves for a moment. A hurricane-driven, wall-of-water packs a lot more punch than ripples in the bathtub. The metaphor isn't visually correct, since sound waves are longitudinal and ocean waves are complex, but it is intuitively correct.
Particle motion can be described in terms of displacement, velocity, or acceleration. Intensity can be related to these quantities as well. We've just completed the hard work of relating intensity (I
) to displacement amplitude (∆s). For a sense of completeness (and for the sake of why not), let's also derive the equations for intensity in terms of velocity amplitude (∆v) and acceleration
amplitude (∆a).
intensity and velocity
How does intensity relate to maximum velocity (the velocity amplitude)? Let's find out. Start with the one-dimensional traveling wave.
∆s(x,t) = ∆s sin[2π(x/λ − ft)]
Recall that velocity is the time derivative of displacement.
∆v(x,t) = 2πf∆s cos[2π(x/λ − ft)]
The stuff in front of the cosine function is the velocity amplitude.
∆v = 2πf∆s
Solve this for the displacement amplitude.

∆s = ∆v/(2πf)
Just a little while ago, we derived an equation for intensity in terms of displacement amplitude.
I = 2π^2ρf^2v∆s^2
Combine these two equations…
I = 2π^2ρf^2v (∆v/(2πf))^2
and simplify.

I = ½ρv∆v^2

We now have an equation that relates intensity (I) to velocity amplitude (∆v).
intensity and acceleration
How does intensity relate to maximum acceleration (the acceleration amplitude)? Let's find out. Once again, start with the one-dimensional traveling wave.
∆s(x,t) = ∆s sin[2π(x/λ − ft)]
Recall that velocity is the time derivative of displacement…
∆v(x,t) = 2πf∆s cos[2π(x/λ − ft)]
and that acceleration is the time derivative of velocity.
∆a(x,t) = −4π^2f^2∆s sin[2π(x/λ − ft)]
The acceleration amplitude is the stuff in front of the sine function (and ignoring the minus sign).
∆a = 4π^2f^2∆s
Rearrange this to make displacement amplitude the subject.

∆s = ∆a/(4π^2f^2)
Time to bring back our equation for intensity in terms of displacement amplitude.
I = 2π^2ρf^2v∆s^2
Combine the previous two equations…
I = 2π^2ρf^2v (∆a/(4π^2f^2))^2
and simplify.

I = ρv∆a^2/(8π^2f^2)

We now have an equation that relates intensity (I) to acceleration amplitude (∆a).
intensity and pressure
The amplitude of a sound wave can be measured much more easily with pressure (a bulk property of a material like air) than with displacement (the displacement of the submicroscopic molecules that
make up air). Here's a quick and dirty derivation of a more useful intensity-pressure equation from an effectively useless intensity-displacement equation.
Start with the equation that relates intensity to displacement amplitude.
I = 2π^2ρf^2v∆s^2
Now let's play a little game with the symbols — a game called algebra. Note that many of the symbols in the equation above are squared. Make all of them squared by multiplying the numerator and denominator by 2ρv.

I = (2π^2ρf^2v∆s^2)(2ρv)/(2ρv) = (4π^2ρ^2f^2v^2∆s^2)/(2ρv)

Write the numerator as a quantity squared.

I = (2πρfv∆s)^2/(2ρv)

Look at the pile of symbols in the parenthesis.
Look at the units of each physical quantity.
[(kg/m^3)(1/s)(m/s)(m)]
Do some more magic — not algebra this time, but dimensional analysis.
[kg/(m·s^2) = (kg·m/s^2)/m^2 = N/m^2 = Pa]
The units of that mess are pascals, so the parenthetical quantity in the earlier equation is pressure — maximum gauge pressure to be more precise. We now have an equation that relates intensity to pressure amplitude.

I = ∆P^2/(2ρv)
I = intensity [W/m^2]
∆P = pressure amplitude [Pa]
ρ = density [kg/m^3]
v = wave speed [m/s]
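As a hedged sanity check (the air density of 1.21 kg/m^3 and wave speed of 343 m/s are assumed values, not stated here), this relation reproduces the standard intensities at the extremes of human hearing:

```python
def intensity_from_pressure(dP, rho=1.21, v=343.0):
    """I = dP^2 / (2 rho v), with dP the pressure amplitude in Pa."""
    return dP ** 2 / (2 * rho * v)

I_hearing = intensity_from_pressure(20e-6)  # 20 uPa -> about 0.5 pW/m^2
I_pain = intensity_from_pressure(20.0)      # 20 Pa  -> about 0.5 W/m^2
```

Because intensity goes as pressure amplitude squared, the millionfold pressure range between these thresholds corresponds to a trillionfold intensity range.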
Here's a slow and clean derivation of the intensity-pressure equation. Start from the version of Hooke's law that uses the bulk modulus (K).

F/A = K(∆V/V[0])
The fraction on the left is the compressive stress, also known as the pressure (P). The fraction on the right is the compressive strain, also known as the fractional change in volume (θ). The latter
of these two is the one we're interested in right now. Imagine a sound wave that only stretches and compresses the medium in one direction. If that's the case, then the fractional change in volume is
effectively the same as a fractional change in length.
We have to use calculus here to get that fractional change, since the infinitesimal bits and pieces of the medium are squeezing and stretching at different rates at different points in space. Length
changes are described by a one-dimensional traveling wave.
∆s(x,t) = ∆s sin[2π(x/λ − ft)]
Its spatial derivative is the same as the fractional change in volume.
θ = ∂∆s(x,t)/∂x = −(2π/λ)∆s cos[2π(x/λ − ft)]
It's interesting to note that the volume changes are out of phase from the displacements, since taking the derivative changed sine to negative cosine. Volume changes are 90° behind displacement,
since negative cosine is 90° behind sine. The most extreme volume changes occur at locations where the particles are back in their equilibrium positions.
Interesting, but not so useful right now. We care more about what these extreme values are than where they occur. For that, we replace the negative cosine expression with its extreme absolute value
+1. Doing that leaves us with this expression for the maximum strain (∆θ).

∆θ = (2π/λ)∆s
Plugging this back into the bulk modulus equation gives us the maximum gauge pressure.

∆P = K(2π/λ)∆s
And now for the dirty work. Recall these two equations for the speed of sound.

v = √(K/ρ)  and  v = fλ
Substitute into the previous equation…

∆P = ρv^2(2πf/v)∆s

and simplify.

∆P = 2πρfv∆s
Familiar? It's in the numerator of an expression that appeared earlier.
Replace the pile of symbols in the parenthesis and behold. We get this thing again — the intensity-pressure amplitude relationship.

I = ∆P^2/(2ρv)
I = intensity [W/m^2]
∆P = pressure amplitude [Pa]
ρ = density [kg/m^3]
v = wave speed [m/s]
intensity and density
The density changes in a medium associated with a sound wave are directly proportional to the pressure changes. The relationship is as follows…

v = √(∆P/∆ρ)

This looks similar to the Newton-Laplace equation for the speed of sound in an ideal gas but it's missing the heat capacity ratio γ (gamma). Why?
Assuming the first equation is the right one, solve it for ∆ρ.

∆ρ = ∆P/v^2
Take the pressure amplitude-displacement amplitude relation…
∆P = 2πρfv∆s
and simplify to get the density-displacement amplitude relation.

∆ρ = 2πρf∆s/v
Mildly amusing. Let's try something else.
Again, assuming the first equation is the right one, solve it for ∆P.
∆P = ∆ρv^2
Take the equation that relates intensity to pressure amplitude…

I = ∆P^2/(2ρv)

make a similar substitution…

I = (∆ρv^2)^2/(2ρv)

and simplify to get the equation that relates intensity to density amplitude.

I = ∆ρ^2v^3/(2ρ)
Not very interesting, but now our list is complete.
Sound intensity-amplitude relationships
ρ = average density, ∆ρ = density amplitude, ∆a = acceleration amplitude, f = frequency, I = intensity, ∆P = pressure amplitude, ∆s = displacement amplitude, v = wave speed, ∆v = velocity amplitude

amplitude      intensity               connection
displacement   I = 2π^2ρf^2v∆s^2
velocity       I = ½ρv∆v^2             ∆v = 2πf∆s
acceleration   I = ρv∆a^2/(8π^2f^2)    ∆a = 2πf∆v
pressure       I = ∆P^2/(2ρv)          ∆P = ρv∆v
density        I = ∆ρ^2v^3/(2ρ)        ∆ρ = ∆P/v^2
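Since all of these formulas describe the same wave, they must agree numerically. This hedged check (an arbitrary displacement amplitude and assumed air properties, chosen by me for illustration) chains the amplitude connections together and confirms that every intensity expression gives the same answer:

```python
import math

rho, v, f = 1.21, 343.0, 1000.0   # assumed air density, sound speed, frequency
ds = 1e-8                          # arbitrary displacement amplitude (m)

dv = 2 * math.pi * f * ds          # velocity amplitude
da = 2 * math.pi * f * dv          # acceleration amplitude
dP = rho * v * dv                  # pressure amplitude
drho = dP / v ** 2                 # density amplitude

# Intensity computed five different ways
I_s = 2 * math.pi ** 2 * rho * f ** 2 * v * ds ** 2
I_v = 0.5 * rho * v * dv ** 2
I_a = rho * v * da ** 2 / (8 * math.pi ** 2 * f ** 2)
I_P = dP ** 2 / (2 * rho * v)
I_rho = drho ** 2 * v ** 3 / (2 * rho)
```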
What is a level?
Types of levels.
I'm getting rid of all my furniture. All of it. And I'm going to build these different levels, with steps, and it'll all be carpeted with a lot of pillows. You know, like ancient Egypt.
Given a periodic signal of any sort, its intensity level (L[I]) in bels [B] is defined as the base ten logarithm of the ratio of its intensity to the intensity of a reference signal. Since this unit is a bit large for most purposes, it is customary to divide the bel into tenths or decibels [dB]. The bel is a dimensionless unit.

L[I] = log(I/I[0]) [B] = 10 log(I/I[0]) [dB]

When the signal is a sound wave, this quantity is called the sound intensity level, frequently abbreviated SIL.
sound pressure level, SPL

L[P] = log((∆P^2/(2ρv)) / (∆P[0]^2/(2ρv))) = 2 log(∆P/∆P[0]) [B] = 20 log(∆P/∆P[0]) [dB]
• By convention, sound has a level of 0 dB at a pressure amplitude of 20 μPa and frequency of 1,000 Hz. This is the generally agreed upon threshold of hearing for humans. Sounds with intensities below this value are inaudible to (quite possibly) every human.
• For sound in water and other liquids, a reference pressure of 1 μPa is used.
• The range of audible sounds is so great that it spans six orders of magnitude in pressure amplitude (and twelve in intensity) from the threshold of hearing (20 μPa ~ 0.5 pW/m^2) to the threshold of pain (20 Pa ~ 0.5 W/m^2).
• The bel was invented by engineers of the Bell telephone network in 1923 and named in honor of the inventor of the telephone, Alexander Graham Bell.
• A level of 0 dB is not the same as an intensity of 0 W/m^2, or a pressure amplitude of 0 Pa, or a displacement amplitude of 0 m.
• Signals below the threshold or reference value are negative. Silence has a level of negative infinity.
• Since the base ten log of 2 is approximately 0.3, every additional 3 dB of level corresponds to an approximate doubling of intensity.
• A 10 decibel increase is perceived by people as sounding roughly twice as loud.
• Other examples of logarithmic scales include: earthquake magnitudes (often called by its obsolete name, the Richter scale), pH, stellar magnitudes, electromagnetic spectrum charts, … any more?
• Transform the decibel equation for level from a ratio to a difference.

  ∆L = L[1] − L[2] = 10 log(I[1]/I[2])
• The 1883 eruption at Krakatau, Indonesia (often misspelled Krakatoa) had an intensity of 180 dB and was audible 5,000 km away in Mauritius. The Krakatoa explosion registered 172 decibels at 100
miles from the source.
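These decibel facts are easy to verify numerically. The sketch below assumes the conventional reference intensity of 10^−12 W/m^2 (which corresponds to the 20 μPa pressure reference in air); the function name is my own:

```python
import math

def sound_level(I, I0=1e-12):
    """Sound intensity level in dB relative to I0 = 1 pW/m^2."""
    return 10 * math.log10(I / I0)

quiet = sound_level(1e-12)    # 0 dB at the threshold of hearing
doubled = sound_level(2e-12)  # doubling intensity adds about 3 dB
pain = sound_level(1.0)       # 120 dB near the threshold of pain
```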
It would be equally reasonable to use natural logarithms in place of base ten, but this is far, far less common. Given a periodic signal of any sort, the natural logarithm of the ratio of its intensity to that of a reference signal is a measure of its intensity level (L) in nepers [Np]. As with the bel, it is customary to divide the neper into tenths or decinepers [dNp]. The neper is also a dimensionless unit.
L[P] = 20 ln(∆P/∆P[0])
The neper and decineper are so rare in comparison to the bel and decibel that they are essentially the answer to a trivia question.
Notes and quotes.
• Quote from Russ Rowlett of UNC: "The [neper] recognizes the British mathematician John Napier, the inventor of the logarithm. Napier often spelled his name Jhone Neper, and he used the Latin form
Ioanne Napero in his writings." AHD "Scottish mathematician who invented logarithms and introduced the use of the decimal point in writing numbers."
• The value, in nepers, for the level difference of two values (F[1] and F[2]) of a field quantity is obtained by taking the natural logarithm of the ratio of the two values, ΔL[N] = lnF[1]/F[2].
For so-called power quantities (see below), a factor 0.5 is included in the definition of the level difference, ΔL[N] = 0.5lnP[1]/P[2]. Two field quantity levels differ by 1 Np when the values
of the quantity differ by a factor e (the base of natural logarithms). (The levels of two power quantities differ by 1 Np if the quantities differ by a factor e^2.) Since the ratio of values of
any kind of quantity (or the logarithm of such ratios) are pure numbers, the neper is dimensionless and can be represented by "one." One cannot infer from this measure what kind of quantity is
being considered so that the kind of quantity has to be specified clearly in all cases.
Intensity level of selected sounds in air. Source: League for the Hard of Hearing and Physics of the Body
level (dB) source
−∞ absolute silence
−24 sounds quieter than this are not possible due to the random motion of air molecules at room temperature (∆P = 1.27 μPa)
−20.6 current world's quietest room (Microsoft Building 87, Redmond, Washington)
−9.4 former world's quietest room (Orfield Laboratories, Minneapolis, Minnesota)
0 threshold of hearing, reference value for sound pressure (∆P[0] = 20 μPa)
10–20 normal breathing, rustling leaves
20–30 whispering at 5 feet
40–50 coffee maker, library, quiet office, quiet residential area
50–60 dishwasher, electric shaver, electric toothbrush, large office, rainfall, refrigerator
60–70 air conditioner, automobile interior, alarm clock, background music, normal conversation, television, vacuum cleaner, washing machine
70–80 coffee grinder, flush toilet, freeway traffic, hair dryer
80–90 blender, doorbell, bus interior, food processor, garbage disposal, heavy traffic, hand saw, lawn mower, machine tools, noisy restaurant, toaster, ringing telephone, whistling kettle
>85 OSHA 1910.95(i)(1): Employers shall make hearing protectors available to all employees exposed to an 8-hour time-weighted average of 85 decibels or greater at no cost to the employees.
90–100 electric drill, shouted conversation, tractor, truck
100–110 baby crying, boom box, factory machinery, motorcycle, school dance, snow blower, snowmobile, squeaky toy held close to the ear, subway train, woodworking class
110–120 ambulance siren, car horn, chain saw, disco, football game, jet plane at ramp, leaf blower, personal music player on high, power saw, rock concert, shouting in ear, symphony concert, video
113 loudest clap (Alastair Galpin, New Zealand, 2008)
120–130 threshold of pain (∆P = 20 Pa), auto stereo, band concert, chain saw, hammer on nail, heavy machinery, pneumatic drills, stock car races, thunder, power drill, percussion section at
125 loudest bird (white bellbird, Procnias albus)
130–140 air raid siren, jet airplane taking off, jackhammer
150–160 artillery fire at 500 feet, balloon pop, cap gun, firecracker, jet engine taking off
160–170 fireworks, handgun, rifle
170–180 shotgun
180–190 rocket launch, 1883 Krakatau volcanic eruption, 1908 Tunguska meteor
194 loudest sound possible in Earth's atmosphere
+∞ infinitely loud
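The decibel values in the table follow from the 20 μPa reference pressure via L = 20 log₁₀(∆P/∆P₀); a quick sketch of that arithmetic (the pressure values are the ones given in the table):

```javascript
// Sound pressure level: L = 20 · log10(ΔP / ΔP0), with ΔP0 = 20 μPa
const REFERENCE_PRESSURE = 20e-6; // Pa

const level = (deltaP) => 20 * Math.log10(deltaP / REFERENCE_PRESSURE);

const hearing = level(20e-6);  // 0 dB — threshold of hearing
const pain = level(20);        // 120 dB — threshold of pain
const floor = level(1.27e-6);  // ≈ −24 dB — the thermal-motion limit
```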
• loudness
□ Loudness is a perceptual response to the physical property of intensity.
□ A 10 dB increase in level is perceived by most listeners as a doubling in loudness
□ A 1 dB change in level is just barely perceptible by most listeners
□ Since loudness varies with frequency as well as intensity, a special unit has been designed for loudness — the phon. One phon is the loudness of a 1 dB, 1,000 Hz sound; 10 phon is the
loudness of a 10 dB, 1,000 Hz sound; and so on.
□ Cupping one's hand behind one's ear will result in an intensity increase of 6 to 8 dB.
□ Asking someone to speak up usually results in an increase of about 10 dB on the part of the speaker.
• locating the source of sound
□ Sound has a finite propagation speed. Sounds from sources not directly in front of or directly behind an observer will reach one ear before the other due to a difference in distance. This
results in an interaural time difference (ITD) that the brain can use to determine the direction to a source of sound.
□ Phase differences are another way we localize sounds. The difference in location of our two ears results in an interaural phase difference (IPD), but it is only effective for wavelengths
longer than 2 head diameters (f ≲ 1,000 Hz).
□ Sounds in one ear will be louder than the other. Sound waves diffract easily at wavelengths larger than the diameter of the human head. At higher frequencies (f ≳ 1,000 Hz), the head casts an
auditory "shadow". Sounds will be louder in one ear than the other for sources that are not directly in front of or behind the listener because the head is partially blocking the sound waves.
This results in an interaural level difference (ILD).
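The interaural figures above can be estimated from the speed of sound and a head diameter; a sketch, where the 0.17 m diameter is an assumed illustrative value, not taken from the text:

```javascript
const v = 343;  // speed of sound in air at room temperature, m/s
const d = 0.17; // assumed head diameter, m (illustrative value)

// Largest interaural time difference, for a source directly to one side
const maxITD = d / v; // ≈ 0.0005 s — about half a millisecond

// Phase cues fail once the wavelength drops below two head diameters,
// i.e. above roughly f = v / (2d)
const phaseCutoff = v / (2 * d); // ≈ 1000 Hz
```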
• The human ear can distinguish some…
□ 280 different intensity levels (seems unlikely)
□ 1,400 different pitches
□ three (four?) vocal registers
☆ (whistle register?)
☆ falsetto
☆ modal — the usual speaking register
☆ vocal fry — the lowest of the three vocal registers
• fish
□ Unlike our ears and hydrophones, fish ears don't detect sound pressure, which is the compression of molecules. Instead, they perceive particle motion, the tiny back-and-forth movements of
particles in response to sound waves (source needed).
seismic waves
Extended quote that needs to be paraphrased.
Magnitude scales are quantitative. With these scales, one measures the size of the earthquake as expressed by the seismic wave amplitude (amount of shaking at a point distant from the earthquake)
rather than the intensity or degree of destructiveness. Most magnitude scales have a logarithmic basis, so that an increase in one whole number corresponds to an earthquake 10 times stronger than
one indicated by the next lower number. This translates into an approximate 30-fold increase in the amount of energy released. Thus magnitude 5 represents ground motion about 10 times that of
magnitude 4, and about 30 times as much energy released. A magnitude 5 earthquake represents 100 times the ground motion and 900 times the energy released of a magnitude 3 earthquake.
The Richter scale was created by Charles Richter in 1935 at the California Institute of Technology. It was created to compare the size of earthquakes. One of Dr. Charles F. Richter's most
valuable contributions was to recognize that the seismic waves radiated by all earthquakes can provide good estimates of their magnitudes. He collected the recordings of seismic waves from a
large number of earthquakes, and developed a calibrated system of measuring them for magnitude. He calibrated his scale of magnitudes using measured maximum amplitudes of shear waves on
seismometers particularly sensitive to shear waves with periods of about one second. The records had to be obtained from a specific kind of instrument, called a Wood-Anderson seismograph.
Although his work was originally calibrated only for these specific seismometers, and only for earthquakes in southern California, seismologists have developed scale factors to extend Richter's
magnitude scale to many other types of measurements on all types of seismometers, all over the world. In fact, magnitude estimates have been made for thousands of moonquakes and for two quakes on
Most estimates of energy have historically relied on the empirical relationship developed by Beno Gutenberg and Charles Richter.
log[10] E[s] = 4.8 + 1.5 M[s]
where energy, E[s], is expressed in joules. The drawback of this method is that M[s] is computed from a bandwidth between approximately 18 to 22 s. It is now known that the energy radiated by an
earthquake is concentrated over a different bandwidth and at higher frequencies. Note that this is not the total "intrinsic" energy of the earthquake, transferred from sources such as
gravitational energy or to sinks such as heat energy. It is only the amount radiated from the earthquake as seismic waves, which ought to be a small fraction of the total energy transferred
during the earthquake process.
With the worldwide deployment of modern digitally recording seismographs with broad-bandwidth response, computerized methods are now able to make accurate and explicit estimates of energy on a
routine basis for all major earthquakes. A magnitude based on energy radiated by an earthquake, M[e], can now be defined. These energy magnitudes are computed from the radiated energy using the
Choy and Boatwright (1995) formula
M[e] = ⅔ log[10] E[s] − 2.9
where E[s] is the radiated seismic energy in joules. M[e], computed from high frequency seismic data, is a measure of the seismic potential for damage.
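The two formulas above can be checked numerically; a short sketch (the magnitude values are illustrative):

```javascript
// Gutenberg–Richter: log10(Es) = 4.8 + 1.5·Ms, Es in joules
const energyFromMs = (Ms) => Math.pow(10, 4.8 + 1.5 * Ms);

// Choy–Boatwright: Me = (2/3)·log10(Es) − 2.9
const energyMagnitude = (Es) => (2 / 3) * Math.log10(Es) - 2.9;

// One whole magnitude step is a 10^1.5 ≈ 32-fold jump in radiated energy —
// the "approximate 30-fold increase" described above
const ratio = energyFromMs(5) / energyFromMs(4); // ≈ 31.6

// Feeding Gutenberg–Richter energy back through the Me formula
// gives Me = Ms + 0.3 under these two relations
const Me6 = energyMagnitude(energyFromMs(6)); // ≈ 6.3
```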
Well, it is a linear equation system with five variables and three linearly independent equations, so the solutions form a two-dimensional vector space that can be described as the set of linear
combinations of two vectors. These vectors are not unique, and can be easily found as usual in linear algebra:
2a-s = 0 ==> a = s/2
6b-5s-4t = 0 ==> b = (5s+4t)/6
2c+4t =0 ==> c = -2t
Hence, the solutions are
(a,b,c,s,t) = (1/2,5/6,0,1,0)*s + (0,4/6,-2,0,1)*t
which could alternatively also represented as follows
(a,b,c,s,t) = (3,5,0,6,0)*s' + (0,2,-6,0,3)*t'
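A quick numerical check that the integer basis vectors (and, by linearity, any combination of them) satisfy the three equations:

```javascript
// The three equations: 2a − s = 0, 6b − 5s − 4t = 0, 2c + 4t = 0
const satisfies = ([a, b, c, s, t]) =>
  2 * a - s === 0 &&
  6 * b - 5 * s - 4 * t === 0 &&
  2 * c + 4 * t === 0;

// Both integer basis vectors solve the system...
const ok1 = satisfies([3, 5, 0, 6, 0]);
const ok2 = satisfies([0, 2, -6, 0, 3]);

// ...and so does any linear combination, e.g. their sum
const ok3 = satisfies([3, 7, -6, 6, 3]);

console.log(ok1, ok2, ok3); //=> true true true
```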
Blog - CryptoKiwi
• By using cryptography, you can create a key pair consisting of a public key and a private key.
• The public key is used to receive funds, similar to your bank account number. It can be publicly shared without risking losing your funds.
• The private key is used to spend the funds, similar to your bank account password. You should never disclose it.
• A wallet helps you to generate and manage such kind of key pairs because they are long alphanumeric strings and not easy to memorize.
• To make bitcoin transactions easier in daily life, the public key is converted to a much shorter bitcoin address (33 or 34 characters) through a series of mathematical functions.
• On most occasions, we use “public key” and “address” interchangeably.
• We present our bitcoin address instead of the public key when we want to receive funds. We rarely see our actual public keys in bitcoin daily usage.
• You can also think of your phone number as your bitcoin address, and it allows others to call you easily. Your SIM card is your private key and if you lose it, you lose control of your identity.
The IMEI identifier of your phone is similar to the public key, which can be used to uniquely identify you, but it is too long and used internally and rarely shown to users.
• From your private key, we can calculate your public key; and through your public key, we can calculate your bitcoin address. But it cannot be done in the opposite direction. Therefore, as long as
you keep your private key in a safe place, you are in full control of your fund.
• In Bitcoin, we use elliptic curve multiplication as the basis for its cryptography. The curve determines the mathematical relationship between the public key and the private key.
• The private key is not used to encrypt your fund, instead, to sign a message, usually a transaction. It is similar to your signature on the cheque, which enables you to prove your ownership and
spend the fund in the bank.
• By applying a signing algorithm (e.g. DSA) between your private key and the transaction message, a unique digital signature can be generated. Anyone with your public key can verify the signature
was generated by you and the message was not altered in transit.
Alice signs the message with her private key and Bob verifies the message with Alice’s public key
• This useful property allows the private key owner to prove its ownership of the source of the information without revealing the private key, and anyone with the public key to verify the signature
without knowing the private key.
• The relationship between the private key and public key is fixed, but can only be calculated in one direction, from private to public. We call such kind of relationship as “asymmetric”. It is
guaranteed by math.
• Such kind of asymmetric cryptography has been widely used on the internet to prove the authenticity of a digital identity, such as a website, an email message, an electronic document, a
bank transaction and so on.
• ECDSA or Elliptic Curve Digital Signature Algorithm is the core of bitcoin cryptographic algorithm which defines how bitcoin private key and public key are generated.
• From the public key, we can derive the “raw” bitcoin address using SHA256 and RIPEMD160 one-way functions.
• To help human readability, avoid ambiguity and protect against errors, we use “Base58Check” to encode the “raw” bitcoin address, which is the ultimate address (33 or 34 characters) we see in
daily transactions.
Full Path From Private Key to Bitcoin Address
• Base58 uses all letters (lowercase & uppercase) and numbers except (0, O, l, I) to represent the 160-bit “raw” bitcoin address.
• Base58Check adds version number and error-checking code at the beginning and end of the address, which boosts its flexibility and security.
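A sketch of plain Base58 encoding using BigInt arithmetic — without the version byte and checksum that Base58Check adds on top; note the alphabet below omits 0, O, l and I as described above:

```javascript
const ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

const base58Encode = (bytes) => {
  // Treat the byte string as one big base-256 integer...
  let n = bytes.reduce((acc, byte) => acc * 256n + BigInt(byte), 0n);

  // ...and re-express it in base 58
  let encoded = '';
  while (n > 0n) {
    encoded = ALPHABET[Number(n % 58n)] + encoded;
    n /= 58n;
  }

  // Leading zero bytes carry no numeric value, so each one is
  // written explicitly as the zero digit, '1'
  for (const byte of bytes) {
    if (byte !== 0) break;
    encoded = '1' + encoded;
  }
  return encoded;
};

const encoded = base58Encode([...'abc'].map((c) => c.charCodeAt(0))); //=> 'ZiCa'
```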
Q. In an examination, the maximum marks for each of the four papers namely P, Q, R and S are 100. Marks scored by the students are in integers. A student can score 99% in n different ways. What is the value of n? - Sociology OWL
Q. In an examination, the maximum marks for each of the four papers namely P, Q, R and S are 100. Marks scored by the students are in integers. A student can score 99% in n different ways. What is
the value of n?
(a) 16
(b) 17
(c) 23
(d) 35
Correct Answer: (d) 35
Question from UPSC Prelims 2023 CSAT
Explanation :
The total marks for the four papers is 400 (100 marks each for P, Q, R and S).
If a student scores 99% in the examination, then the total marks scored by the student is 99% of 400 = 396 marks.
Now, we need to find out the number of ways in which 396 can be expressed as a sum of four integers, each ranging from 0 to 100.
The problem can be solved by considering all possible combinations and permutations of the scores in the four papers.
1. All four papers scored 99 marks each. This is one way.
2. Three papers scored 100 marks each and one paper scored 96 marks. This can happen in 4 ways (^4C[3]) · (^1C[1]).
3. Two papers scored 100 marks each and the remaining two papers scored 98 marks each. This can happen in 6 ways (^4C[2] · ^2C[2]).
4. Two papers scored 100 marks each, one paper scored 99 marks and one paper scored 97 marks. This can happen in 12 ways (^4C[2] · ^2C[1] · ^1C[1]).
5. One paper scored 100 marks, two papers scored 99 marks each and one paper scored 98 marks. This can happen in 12 ways (^4C[1] · ^3C[2] · ^1C[1]).
So, the total number of ways is 1 + 4 + 6 + 12 + 12 = 35.
Hence, the correct answer is (d) 35.
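The case-by-case count can be confirmed by brute force over all admissible score quadruples:

```javascript
// Count integer quadruples (p, q, r, s), each between 0 and 100,
// that sum to 396 (i.e. 99% of 400)
let n = 0;
for (let p = 0; p <= 100; p++) {
  for (let q = 0; q <= 100; q++) {
    for (let r = 0; r <= 100; r++) {
      const s = 396 - p - q - r;
      if (s >= 0 && s <= 100) n += 1;
    }
  }
}

console.log(n); //=> 35
```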
Recursive Data Structures
Site: Saylor Academy
Course: CS102: Introduction to Computer Science II
Book: Recursive Data Structures
Read this page. In the previous unit of our course we studied recursive algorithms. Recursion is a concept that also applies to data. Here we look at recursive data structures - lists, trees, and
sets. A list is a structure that consists of elements linked together. If an element is linked to more than one element, the structure is a tree. If each element is linked to two (sub) elements, it
is called a binary tree. Trees can be implemented using lists, as shown in the resource for this unit. Several examples of the wide applicability of lists are presented. A link points to all the
remaining links, i.e. the rest of the list or the rest of the tree; thus, a link points to a list or to a tree - this is data recursion.
The efficiency of the programming process includes both running time and size of data. This page discusses the latter for recursive lists and trees.
Lastly, why read the last section on sets? Sets are another recursive data structure and the last section 2.7.6, indicates their connection with trees, namely, a set data type can be implemented in
several different ways using a list or a tree data type. Thus, the programming process includes implementation decisions, in addition to design or algorithm decisions. Each of these types of
satisfaction of the program's requirements.
Note: You will notice an unusual use of C++ here. What the author is doing is showing how to pass a fixed-value data-structure as a calling argument.
1. Why Recursive Data Structures?
In this essay, we are going to look at recursive algorithms, and how sometimes, we can organize an algorithm so that it resembles the data structure it manipulates, and organize a data structure so
that it resembles the algorithms that manipulate it.
When algorithms and the data structures they manipulate are isomorphic, the code we write is easier to understand for exactly the same reason that code like template strings and regular expressions
are easy to understand: The code resembles the data it consumes or produces.
We'll finish up by observing that we also can employ optimizations that are only possible when algorithms and the data structures they manipulate are isomorphic.
Here we go.
Source: Reg Braithwaite, https://raganwald.com/2016/12/27/recursive-data-structures.html
This work is licensed under a Creative Commons Attribution-ShareAlike 2.0 License.
2. An exercise: rotating a square
Here is a square composed of elements, perhaps pixels or cells that are on or off. We could write them out like this:
Consider the problem of rotating our square. There is an uncommon, but particularly delightful way to do this. First, we cut the square into four smaller squares:
Now, we rotate each of the four smaller squares 90 degrees clockwise:
Finally, we move the squares as a whole, 90 degrees clockwise:
Then reassemble:
How do we rotate each of the four smaller squares? Exactly the same way. For example,
By rotating each smaller square, it becomes:
And we rotate all four squares to finish with:
Reassembled, it becomes this:
How would we rotate the next size down?
Rotating an individual dot is a NOOP, so all we have to do is rotate the four dots around, just like we do above:
Reassembled, it becomes this:
Voila! Rotating a square consists of dividing it into four "region" squares, rotating each one clockwise, then moving the regions one position clockwise. It brings whirling dervishes to mind.
Here's the algorithm in action:
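As a baseline for the recursive approach developed below, the same clockwise rotation can also be written with plain index arithmetic (a sketch, not part of the original essay):

```javascript
// Element (r, c) of the rotated square comes from (n − 1 − c, r) of the original
const rotateByIndexing = (square) => {
  const n = square.length;
  return square.map((row, r) => row.map((_, c) => square[n - 1 - c][r]));
};

const rotated = rotateByIndexing([
  ['⚪️', '⚫️'],
  ['⚪️', '⚪️']
]);
//=> [['⚪️', '⚪️'], ['⚪️', '⚫️']] — the ⚫️ moves from upper right to lower right
```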
3. Recursion, see recursion
In From Higher-Order Functions to Libraries And Frameworks, we had a look at multirec, a recursive combinator.
function mapWith (fn) {
  return function * (iterable) {
    for (const element of iterable) {
      yield fn(element);
    }
  };
}

function multirec({ indivisible, value, divide, combine }) {
  return function myself (input) {
    if (indivisible(input)) {
      return value(input);
    } else {
      const parts = divide(input);
      const solutions = mapWith(myself)(parts);

      return combine(solutions);
    }
  };
}
With multirec, we can write functions that perform computation using divide-and-conquer algorithms. multirec handles the structure of divide-and-conquer, we just have to write four smaller functions
that implement the parts specific to the problem we are solving.
In computer science, divide and conquer (D&C) is an algorithm design paradigm based on multi-branched recursion. A divide and conquer algorithm works by recursively breaking down a problem into
two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the
original problem.
We'll implement rotating a square using multirec. Let's begin with a naïve representation for squares, a two-dimensional array. For example, we would represent the square:
With this array:
[['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚪️', '⚪️', '⚫️'],
['⚪️', '⚪️', '⚪️', '⚫️']]
To use multirec, we need four pieces:
1. An indivisible predicate function. It should report whether an array is too small to be divided up. It's simplicity itself: (square) => square.length === 1.
2. A value function that determines what to do with a value that is indivisible. For rotation, we simply return what we are given: (something) => something
3. A divide function that breaks a divisible problem into smaller pieces. Our function will break a square into four regions. We'll see how that works below.
4. A combine function that puts the result of rotating the smaller pieces back together. Our function will take four region squares and put them back together into a big square.
As noted, indivisible and value are trivial. We'll call our functions hasLengthOne, and, itself:
const hasLengthOne = (square) => square.length === 1;
const itself = (something) => something;
divide involves no more than breaking arrays into halves, and then those halves again. We'll write a divideSquareIntoRegions function for this:
const firstHalf = (array) => array.slice(0, array.length / 2);
const secondHalf = (array) => array.slice(array.length / 2);
const divideSquareIntoRegions = (square) => {
  const upperHalf = firstHalf(square);
  const lowerHalf = secondHalf(square);

  const upperLeft = upperHalf.map(firstHalf);
  const upperRight = upperHalf.map(secondHalf);
  const lowerRight = lowerHalf.map(secondHalf);
  const lowerLeft = lowerHalf.map(firstHalf);

  return [upperLeft, upperRight, lowerRight, lowerLeft];
};
Our combine function, rotateAndCombineArrays, makes use of a little help from some functions we saw in an essay about generators:
function split (iterable) {
  const iterator = iterable[Symbol.iterator]();
  const { done, value: first } = iterator.next();

  if (done) {
    return { rest: [] };
  } else {
    return { first, rest: iterator };
  }
}

function * join (first, rest) {
  yield first;
  yield * rest;
}

function * zipWith (fn, ...iterables) {
  const asSplits = iterables.map(split);

  if (asSplits.every((asSplit) => asSplit.hasOwnProperty('first'))) {
    const firsts = asSplits.map((asSplit) => asSplit.first);
    const rests = asSplits.map((asSplit) => asSplit.rest);

    yield * join(fn(...firsts), zipWith(fn, ...rests));
  }
}
const concat = (...arrays) => arrays.reduce((acc, a) => acc.concat(a));
const rotateAndCombineArrays = ([upperLeft, upperRight, lowerRight, lowerLeft]) => {
  // rotate
  [upperLeft, upperRight, lowerRight, lowerLeft] =
    [lowerLeft, upperLeft, upperRight, lowerRight];

  // recombine
  const upperHalf = [...zipWith(concat, upperLeft, upperRight)];
  const lowerHalf = [...zipWith(concat, lowerLeft, lowerRight)];

  return concat(upperHalf, lowerHalf);
};
Armed with hasLengthOne, itself, divideSquareIntoRegions, and rotateAndCombineArrays, we can use multirec to write rotate:
const rotate = multirec({
  indivisible : hasLengthOne,
  value : itself,
  divide: divideSquareIntoRegions,
  combine: rotateAndCombineArrays
});
rotate(
  [['⚪️', '⚪️', '⚪️', '⚪️'],
   ['⚪️', '⚪️', '⚪️', '⚪️'],
   ['⚪️', '⚪️', '⚪️', '⚫️'],
   ['⚪️', '⚪️', '⚪️', '⚫️']]
)
//=>
[['⚪️', '⚪️', '⚪️', '⚪️'],
 ['⚪️', '⚪️', '⚪️', '⚪️'],
 ['⚪️', '⚪️', '⚪️', '⚪️'],
 ['⚫️', '⚫️', '⚪️', '⚪️']]
4. Accidental complexity
Rotating a square in this recursive manner is intellectually stimulating, but our code is encumbered with some accidental complexity. Here's a flashing strobe-and-neon hint of what it is:
const firstHalf = (array) => array.slice(0, array.length / 2);
const secondHalf = (array) => array.slice(array.length / 2);
const divideSquareIntoRegions = (square) => {
  const upperHalf = firstHalf(square);
  const lowerHalf = secondHalf(square);

  const upperLeft = upperHalf.map(firstHalf);
  const upperRight = upperHalf.map(secondHalf);
  const lowerRight = lowerHalf.map(secondHalf);
  const lowerLeft = lowerHalf.map(firstHalf);

  return [upperLeft, upperRight, lowerRight, lowerLeft];
};
divideSquareIntoRegions is all about extracting region squares from a bigger square, and while we've done our best to make it readable, it is rather busy. Likewise, here's the same thing in rotateAndCombineArrays:
const rotateAndCombineArrays = ([upperLeft, upperRight, lowerRight, lowerLeft]) => {
  // rotate
  [upperLeft, upperRight, lowerRight, lowerLeft] =
    [lowerLeft, upperLeft, upperRight, lowerRight];

  // recombine
  const upperHalf = [...zipWith(concat, upperLeft, upperRight)];
  const lowerHalf = [...zipWith(concat, lowerLeft, lowerRight)];

  return concat(upperHalf, lowerHalf);
};
rotateAndCombineArrays is a very busy function. The core thing we want to talk about is actually the rotation: Having divided things up into four regions, we want to rotate the regions. The zipping
and concatenating is all about the implementation of regions as arrays.
We can argue that this is necessary complexity, because squares are arrays, and that's just what we programmers do for a living, write code that manipulates basic data structures to do our bidding.
But what if our implementation wasn't an array of arrays? Maybe divide and combine could be simpler? Maybe that complexity would turn out to be unnecessary after all?
5. Isomorphic data structures
When we have what ought to be an elegant algorithm, but the interface between the algorithm and the data structure ends up being as complicated as the rest of the algorithm put together, we can
always ask ourselves, "What data structure would make this algorithm stupidly simple?"
The answer can often be found by imagining a data structure that looks like the algorithm's basic form. If we follow that heuristic, our data structure would be recursive, rather than ‘flat.' Since
we do all kinds of work sorting out which squares form the four regions of a bigger square, our data structure would describe a square as being composed of four region squares.
Such a data structure already exists, it's called a quadtree. Squares are represented as four regions, each of which is a smaller square or a cell. A simple implementation is a "Plain Old JavaScript
Object" (or "POJO") with properties for each of the regions. If the property contains a string, it's cell. If it contains another POJO, it's a quadtree.
A square that looks like this:
Is composed of four regions, the ul ("upper left"), ur ("upper right"), lr ("lower right"), and ll ("lower left"), something like this:
Thus, for example, the ul is:
And the ur is:
And so forth. Each of those regions is itself composed of four regions. Thus, the ul of the ul is ⚪️, and the ur of the ul is ⚫️.
The quadtree could be expressed in JavaScript like this:
const quadTree = {
  ul: { ul: '⚪️', ur: '⚫️', lr: '⚪️', ll: '⚪️' },
  ur: { ul: '⚪️', ur: '⚪️', lr: '⚪️', ll: '⚫️' },
  lr: { ul: '⚫️', ur: '⚪️', lr: '⚪️', ll: '⚪️' },
  ll: { ul: '⚫️', ur: '⚫️', lr: '⚪️', ll: '⚪️' }
};
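With this representation, "the ul of the ul" and "the ur of the ul" from above become plain property access — restating the same POJO so the snippet runs on its own:

```javascript
const quadTree = {
  ul: { ul: '⚪️', ur: '⚫️', lr: '⚪️', ll: '⚪️' },
  ur: { ul: '⚪️', ur: '⚪️', lr: '⚪️', ll: '⚫️' },
  lr: { ul: '⚫️', ur: '⚪️', lr: '⚪️', ll: '⚪️' },
  ll: { ul: '⚫️', ur: '⚫️', lr: '⚪️', ll: '⚪️' }
};

const ulOfUl = quadTree.ul.ul; //=> '⚪️'
const urOfUl = quadTree.ul.ur; //=> '⚫️'
```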
Now to our algorithm. Rotating a quadtree is simpler than rotating an array of arrays. First, our test for indivisibility is now whether something is a string or not:
const isString = (something) => typeof something === 'string';
The value of an indivisible cell remains the same: itself.
Our divide function is simple: quadtrees are already divided in the manner we require, we just have to turn them into an array of regions:
const quadTreeToRegions = (qt) =>
[qt.ul, qt.ur, qt.lr, qt.ll];
And finally, our combine function reassembles the rotated regions into a POJO, rotating them in the process:
const regionsToRotatedQuadTree = ([ur, lr, ll, ul]) =>
({ ul, ur, lr, ll });
And here's our function for rotating a quadtree:
const rotateQuadTree = multirec({
  indivisible : isString,
  value : itself,
  divide: quadTreeToRegions,
  combine: regionsToRotatedQuadTree
});
Let's put it to the test:
rotateQuadTree(quadTree)
//=>
{
  ul: { ll: "⚪️", lr: "⚫️", ul: "⚪️", ur: "⚫️" },
  ur: { ll: "⚪️", lr: "⚫️", ul: "⚪️", ur: "⚪️" },
  lr: { ll: "⚪️", lr: "⚪️", ul: "⚫️", ur: "⚪️" },
  ll: { ll: "⚪️", lr: "⚪️", ul: "⚪️", ur: "⚫️" }
}
If we reassemble the square by hand, it's what we expect:
Now we can be serious about the word "Isomorphic". Isomorphic means, fundamentally, "having the same shape". Obviously, a quadtree doesn't look anything like the code in rotateQuadTree or
multirec. So how can a quadtree "look like" an algorithm? The answer is that the quadtree's data structure looks very much like the way rotateQuadTree behaves at run time.
More precisely, the elements of the quadtree and the relationships between them can be put into a one-to-one correspondence with the call graph of rotateQuadTree when acting on that quadtree.
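That correspondence can be made concrete by counting invocations — a self-contained sketch with a simplified, eagerly-mapping multirec, showing one call per quadtree node (1 root + 4 regions + 16 cells = 21):

```javascript
// A simplified multirec (restated so this sketch runs on its own),
// instrumented via `indivisible` to count entries into the recursive function
function multirec({ indivisible, value, divide, combine }) {
  return function myself (input) {
    if (indivisible(input)) {
      return value(input);
    } else {
      return combine(divide(input).map(myself));
    }
  };
}

let calls = 0;

const countingRotate = multirec({
  indivisible: (x) => { calls += 1; return typeof x === 'string'; },
  value: (x) => x,
  divide: (qt) => [qt.ul, qt.ur, qt.lr, qt.ll],
  combine: ([ur, lr, ll, ul]) => ({ ul, ur, lr, ll })
});

countingRotate({
  ul: { ul: '⚪️', ur: '⚫️', lr: '⚪️', ll: '⚪️' },
  ur: { ul: '⚪️', ur: '⚪️', lr: '⚪️', ll: '⚫️' },
  lr: { ul: '⚫️', ur: '⚪️', lr: '⚪️', ll: '⚪️' },
  ll: { ul: '⚫️', ur: '⚫️', lr: '⚪️', ll: '⚪️' }
});

// One invocation per node of the tree
console.log(calls); //=> 21
```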
6. Separation of concerns
But back to our code. All we've done so far is moved the "faffing about" out of our code and we're doing it by hand. That's bad: we don't want to retrain our eyes to read quadtrees instead of flat
arrays, and we don't want to sit at a computer all day manually translating quadtrees to flat arrays and back.
If only we could write some code to do it for us… Some recursive code…
Here's a function that recursively turns a two-dimensional array into a quadtree:
const isOneByOneArray = (something) =>
Array.isArray(something) && something.length === 1 &&
Array.isArray(something[0]) && something[0].length === 1;
const contentsOfOneByOneArray = (array) => array[0][0];
const regionsToQuadTree = ([ul, ur, lr, ll]) =>
({ ul, ur, lr, ll });
const arrayToQuadTree = multirec({
  indivisible: isOneByOneArray,
  value: contentsOfOneByOneArray,
  divide: divideSquareIntoRegions,
  combine: regionsToQuadTree
});
arrayToQuadTree(
  [['⚪️', '⚪️', '⚪️', '⚪️'],
   ['⚪️', '⚫️', '⚪️', '⚪️'],
   ['⚫️', '⚪️', '⚪️', '⚪️'],
   ['⚫️', '⚫️', '⚫️', '⚪️']]
)
//=>
{
  ul: { ul: "⚪️", ur: "⚪️", lr: "⚫️", ll: "⚪️" },
  ur: { ul: "⚪️", ur: "⚪️", lr: "⚪️", ll: "⚪️" },
  lr: { ul: "⚪️", ur: "⚪️", lr: "⚪️", ll: "⚫️" },
  ll: { ul: "⚫️", ur: "⚪️", lr: "⚫️", ll: "⚫️" }
}
Naturally, we can also write a function to convert quadtrees back into two-dimensional arrays again:
const isSmallestActualSquare = (square) => isString(square.ul);
const asTwoDimensionalArray = ({ ul, ur, lr, ll }) =>
[[ul, ur], [ll, lr]];
const regions = ({ ul, ur, lr, ll }) =>
[ul, ur, lr, ll];
const combineFlatArrays = ([upperLeft, upperRight, lowerRight, lowerLeft]) => {
  const upperHalf = [...zipWith(concat, upperLeft, upperRight)];
  const lowerHalf = [...zipWith(concat, lowerLeft, lowerRight)];

  return concat(upperHalf, lowerHalf);
};
const quadTreeToArray = multirec({
  indivisible: isSmallestActualSquare,
  value: asTwoDimensionalArray,
  divide: regions,
  combine: combineFlatArrays
});
quadTreeToArray(
  arrayToQuadTree(
    [['⚪️', '⚪️', '⚪️', '⚪️'],
     ['⚪️', '⚫️', '⚪️', '⚪️'],
     ['⚫️', '⚪️', '⚪️', '⚪️'],
     ['⚫️', '⚫️', '⚫️', '⚪️']]
  )
)
//=>
[["⚪️", "⚪️", "⚪️", "⚪️"],
 ["⚪️", "⚫️", "⚪️", "⚪️"],
 ["⚫️", "⚪️", "⚪️", "⚪️"],
 ["⚫️", "⚫️", "⚫️", "⚪️"]]
And thus, we can take a two-dimensional array, turn it into a quadtree, rotate the quadtree, and convert it back to a two-dimensional array again:
quadTreeToArray(
  rotateQuadTree(
    arrayToQuadTree(
      [['⚪️', '⚪️', '⚪️', '⚪️'],
       ['⚪️', '⚫️', '⚪️', '⚪️'],
       ['⚫️', '⚪️', '⚪️', '⚪️'],
       ['⚫️', '⚫️', '⚫️', '⚪️']]
    )
  )
)
//=>
[["⚫️", "⚫️", "⚪️", "⚪️"],
 ["⚫️", "⚪️", "⚫️", "⚪️"],
 ["⚫️", "⚪️", "⚪️", "⚪️"],
 ["⚪️", "⚪️", "⚪️", "⚪️"]]
7. But why?
Now, we argued above that we've neatly separated the concerns by making three separate functions, instead of interleaving dividing two-dimensional squares into regions, rotating regions, and then
reassembling two-dimensional squares.
But the converse side of this is that what we're doing is now a lot less efficient: We're recursing through our data structures three separate times, instead of once. And frankly, multirec was
designed such that the divide function breaks things up, and the combine function puts them back together, so these concerns are already mostly separate once we use multirec instead of a bespoke
recursive function.
One reason to break the logic up into three separate functions would be if we want to do lots of different kinds of things with quadtrees. Besides rotating quadtrees, what else might we do?
Well, we might want to superimpose one image on top of another. This could be part of an image editing application, where we have layers of images and want to superimpose all the layers to derive the
finished image for the screen. Or we might be implementing Conway's Game of Life, and might want to 'paste' a pattern like a glider onto a larger universe.
Let's go with a very simple implementation: We're only editing black-and-white images, and each 'pixel' is either a ⚪️ or ⚫️. If we use two-dimensional arrays to represent our images, we need to
iterate over every 'pixel' to perform the superimposition:
const superimposeCell = (left, right) =>
(left === '⚫️' || right === '⚫️') ? '⚫️' : '⚪️';
const superimposeRow = (left, right) =>
[...zipWith(superimposeCell, left, right)];
const superimposeArray = (left, right) =>
[...zipWith(superimposeRow, left, right)];
const canvas =
[ ['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚪️', '⚪️', '⚫️'],
['⚪️', '⚪️', '⚪️', '⚫️']];
const glider =
[ ['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚫️', '⚪️', '⚪️'],
['⚫️', '⚪️', '⚪️', '⚪️'],
['⚫️', '⚫️', '⚫️', '⚪️']];
superimposeArray(canvas, glider)
//=>
[['⚪️', '⚪️', '⚪️', '⚪️'],
 ['⚪️', '⚫️', '⚪️', '⚪️'],
 ['⚫️', '⚪️', '⚪️', '⚫️'],
 ['⚫️', '⚫️', '⚫️', '⚫️']]
Seems simple enough. How about superimposing a quadtree on a quadtree?
8. Recursive operations on pairs of quadtrees
We can use multirec to superimpose one quadtree on top of another: Our function will take a pair of quadtrees, using destructuring to extract one called left and the other called right:
const superimposeQuadTrees = multirec({
  indivisible: ({ left, right }) => isString(left),
  value: ({ left, right }) => right === '⚫️'
                              ? right
                              : left,
  divide: ({ left, right }) => [
    { left: left.ul, right: right.ul },
    { left: left.ur, right: right.ur },
    { left: left.lr, right: right.lr },
    { left: left.ll, right: right.ll }
  ],
  combine: ([ul, ur, lr, ll]) => ({ ul, ur, lr, ll })
});
left: arrayToQuadTree(canvas),
right: arrayToQuadTree(glider)
['⚪️', '⚪️', '⚪️', '⚪️'],
['⚪️', '⚫️', '⚪️', '⚪️'],
['⚫️', '⚪️', '⚪️', '⚫️'],
['⚫️', '⚫️', '⚫️', '⚫️']
Again, this feels like faffing about just so we can be recursive. But we are in position to do something interesting!
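The `multirec` combinator, along with small helpers like `isString` and `itself`, is defined earlier in the book and doesn't appear in this excerpt. A minimal reconstruction consistent with how they're used here — a sketch, not necessarily the book's exact code — might be:

```javascript
// Sketch of a generic divide-and-conquer combinator in the spirit of
// the book's `multirec`: recurse until `indivisible`, then `combine`.
function multirec ({ indivisible, value, divide, combine }) {
  return function myself (input) {
    if (indivisible(input)) {
      return value(input);
    } else {
      return combine(divide(input).map(myself));
    }
  };
}

// Small helpers the surrounding code assumes:
const isString = (something) => typeof something === 'string';
const itself = (something) => something;
```

Any divide-and-conquer algorithm fits this shape; for example, summing an array by splitting it in half and adding the two partial sums.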
9. Optimizing recursive algorithms with isomorphic data structures
Many images have large regions that are entirely white or black. When superimposing one region on another, if either region is entirely white, we know the result must be the same as the other region.
When superimposing one region on another, if either region is entirely black, the result must be entirely black.
We can use the quadtree's hierarchical representation to exploit this. We'll store some extra information in each quadtree: its colour. If it is entirely white, its colour will be white. If it is entirely black, its colour will be black. And if it contains a mix of white and black cells, its colour will be a question mark.
const isOneByOneArray = (something) =>
Array.isArray(something) && something.length === 1 &&
Array.isArray(something[0]) && something[0].length === 1;
const contentsOfOneByOneArray = (array) => array[0][0];
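`divideSquareIntoRegions` below relies on `firstHalf` and `secondHalf`, which come from earlier in the book. Assuming square arrays whose sides are even (powers of two, as throughout this chapter), minimal versions might be:

```javascript
// Hypothetical reconstructions: split an even-length array in half.
const firstHalf = (array) => array.slice(0, array.length / 2);
const secondHalf = (array) => array.slice(array.length / 2);
```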
const divideSquareIntoRegions = (square) => {
  const upperHalf = firstHalf(square);
  const lowerHalf = secondHalf(square);

  const upperLeft = upperHalf.map(firstHalf);
  const upperRight = upperHalf.map(secondHalf);
  const lowerRight = lowerHalf.map(secondHalf);
  const lowerLeft = lowerHalf.map(firstHalf);

  return [upperLeft, upperRight, lowerRight, lowerLeft];
};

const colour = (something) => {
  if (something.colour != null) {
    return something.colour;
  } else if (something === '⚪️') {
    return '⚪️';
  } else if (something === '⚫️') {
    return '⚫️';
  } else {
    throw "Can't get the colour of this thing";
  }
};

const combinedColour = (...elements) =>
  // compare colours, not the elements themselves:
  // leaves are strings, while subtrees carry a .colour property
  elements.map(colour).reduce((acc, c) => acc === c ? acc : '❓');

const regionsToQuadTree = ([ul, ur, lr, ll]) => ({
  ul, ur, lr, ll, colour: combinedColour(ul, ur, lr, ll)
});

const arrayToQuadTree = multirec({
  indivisible: isOneByOneArray,
  value: contentsOfOneByOneArray,
  divide: divideSquareIntoRegions,
  combine: regionsToQuadTree
});
colour(
  arrayToQuadTree([
    ['⚪️', '⚪️'],
    ['⚪️', '⚪️']
  ])
)
  //=> "⚪️"

colour(
  arrayToQuadTree([
    ['⚪️', '⚪️'],
    ['⚪️', '⚫️']
  ])
)
  //=> "❓"

colour(
  arrayToQuadTree([
    ['⚫️', '⚫️'],
    ['⚫️', '⚫️']
  ])
)
  //=> "⚫️"

colour(
  arrayToQuadTree([
    ['⚪️', '⚪️', '⚪️', '⚪️'],
    ['⚪️', '⚫️', '⚪️', '⚪️'],
    ['⚫️', '⚪️', '⚪️', '⚪️'],
    ['⚫️', '⚫️', '⚫️', '⚪️']
  ])
)
  //=> "❓"
Now, we can take advantage of every region's computed colour when we superimpose "coloured" quadtrees:
const eitherAreEntirelyColoured = ({ left, right }) =>
  colour(left) !== '❓' || colour(right) !== '❓';

const superimposeColoured = ({ left, right }) => {
  if (colour(left) === '⚪️' || colour(right) === '⚫️') {
    return right;
  } else if (colour(left) === '⚫️' || colour(right) === '⚪️') {
    return left;
  } else {
    throw "Can't superimpose these things";
  }
};

const divideTwoQuadTrees = ({ left, right }) => [
  { left: left.ul, right: right.ul },
  { left: left.ur, right: right.ur },
  { left: left.lr, right: right.lr },
  { left: left.ll, right: right.ll }
];

const combineColouredRegions = ([ul, ur, lr, ll]) => ({
  ul, ur, lr, ll, colour: combinedColour(ul, ur, lr, ll)
});

const superimposeColouredQuadTrees = multirec({
  indivisible: eitherAreEntirelyColoured,
  value: superimposeColoured,
  divide: divideTwoQuadTrees,
  combine: combineColouredRegions
});
We get the same output, but now instead of comparing every cell whenever we superimpose quadtrees, we compare entire regions at a time. If either is "entirely coloured," we can return the other one
without recursively drilling down to the level of individual pixels.
There is no saving if both quadtrees are composed of a fairly evenly spread mix of black and white pixels (e.g. a checkerboard pattern), but in cases where there are large expanses of white or black, the difference is substantial.
In the case of comparing the 4x4 canvas and glider images above, the superimposeArray function requires sixteen comparisons. The superimposeQuadTrees function requires twenty comparisons. But the
superimposeColouredQuadTrees function requires just seven comparisons.
If we were writing an image manipulation application, we'd provide much snappier behaviour using coloured quadtrees to represent images on screen.
The interesting thing about this optimization is that it is tuned to the characteristics of both the data structure and the algorithm: It is not something that is easy to perform in the algorithm
without the data structure, or in the data structure without the algorithm.
And it's not the only optimization. Remember our 'whirling regions' implementation of rotateQuadTree? Here's rotateColouredQuadTree:
const isEntirelyColoured = (something) =>
  colour(something) !== '❓';

const rotateColouredQuadTree = multirec({
  indivisible: isEntirelyColoured,
  value: itself,
  divide: quadTreeToRegions,
  combine: regionsToRotatedQuadTree
});
Any region that is entirely white or entirely black is its own rotation, so no further dividing and conquering need be done. For images that have large areas of blank space, the "whirling regions"
algorithm is not just aesthetically delightful, it's faster than a brute-force transposition of array elements.
Optimizations like this can only be implemented when the algorithm and the data structure are isomorphic to each other.
10. Why!
So back to the question: "Why convert data into a structure that is isomorphic to our algorithm?"

The first reason to do so is that the code is clearer and easier to read if we convert, then perform operations on the data structure, and then convert it back (if need be).

The second reason to do so is that if we want to perform lots of different operations on the data structure, it is much more efficient to keep it in the form that is isomorphic to the operations we are going to perform on it.
The example we saw was that if we were building a hypothetical image processing application, we could convert an image into quad trees, then rotate or superimpose images at will. We would only need
to convert our quadtrees when we need to save or display the image in a rasterized (i.e. array-like) format.
And third, we saw that once we embraced a data structure that was isomorphic to the form of the algorithm, we could employ elegant optimizations that are impossible (or ridiculously inconvenient)
when the algorithm and data structure do not match.
Separating conversion from operation allows us to benefit from all three reasons for ensuring that our algorithms and data structures are isomorphic to each other.
Worst-Case Complexity of Insertion Sort

Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. The array is divided into two subarrays, a sorted one and an unsorted one; values from the unsorted part are picked and placed at the correct position in the sorted part. The algorithm rests on one observation: a single element is always sorted, so the one-element prefix A[0:1] is trivially sorted and the outer loop only needs to insert the elements at indices 1, 2, 3, …, n − 1. The inner loop compares the current element to its predecessors, moving each greater element one position up to make space, so that after the loop the first i + 1 elements are sorted. If the adjacent value is less than or equal to the current value, no modification is made.

For example, sorting arr = {7, 9, 4, 2, 1} proceeds:

7 9 4 2 1
4 7 9 2 1
2 4 7 9 1
1 2 4 7 9

Best case. The best-case input is an array that is already sorted. Only one comparison is needed per outer iteration, because the inner loop's test fails immediately: each element is compared once against the largest value in the sorted prefix (which happens to sit right next to it) and no shifts occur. That is n − 1 comparisons in total, so the best-case running time is linear, Θ(n) (equivalently, insertion sort is Ω(n)).

Worst case. The worst case occurs when the array is sorted in reverse order. Each element must then be compared against, and shifted past, every element already in the sorted prefix, so the number of comparisons is 1 + 2 + ⋯ + (n − 1) = n(n − 1)/2, which is Θ(n²). Moving an element a long way really does require that many swaps. In this worst case, insertion sort performs just as many comparisons as selection sort.

Average case. On average, each insertion traverses half of the currently sorted prefix while making one comparison per step, so inserting the i-th element takes O(i/2) steps. The average case is therefore also quadratic, which makes insertion sort impractical for sorting large arrays.

Inversions. An inversion is a pair of positions whose elements are out of order; for example, the array {1, 3, 2, 5} has one inversion (3, 2) and the array {5, 4, 3} has inversions (5, 4), (5, 3), and (4, 3). Every iteration of insertion sort's inner while loop removes exactly one inversion, so the overall time complexity is O(n + f(n)), where f(n) is the inversion count. If the inversion count is O(n), insertion sort runs in linear time.

Binary insertion sort. A binary search could be used to locate, within the first i − 1 elements, the position into which element i should be inserted. Binary search takes only O(log n) comparisons per insertion, or O(n log n) comparisons in total, but in an array the shifting still takes O(i) time per insertion, so the worst-case complexity remains O(n²). Binary search can reduce the clock time (because there are fewer comparisons), but it does not reduce the asymptotic running time.

Properties and uses. Insertion sort is an in-place, stable sorting algorithm: it iterates up the array, growing the sorted list behind it, and needs only O(1) extra space. It is frequently used to arrange small lists and is efficient on inputs that are already or nearly sorted. It is also well suited to linked lists: each element is removed from the head of the input and spliced into its proper position in a growing sorted result list, typically with a trailing pointer for an efficient splice. It is, however, much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.

Comparison with other sorts. Selection sort must always scan all remaining elements to find the smallest one, while insertion sort requires only a single comparison when the (k + 1)-st element is greater than the k-th; when that is frequently true (as with sorted or partially sorted input), insertion sort is distinctly more efficient. The fundamental difference is that insertion sort scans backwards from the current key, while selection sort scans forwards. Merge sort runs in O(n log n) in the best, average, and worst cases, whereas quicksort's worst case is O(n²). Shell sort, a generalization of insertion sort, has distinctly improved running times in practical work, with two simple variants requiring O(n^(3/2)) and O(n^(4/3)) time. Bucket sort's worst-case scenario occurs when all the elements land in a single bucket; if insertion sort is used to sort each bucket, the overall best case is linear, O(n + k), where O(n) covers building the buckets and O(k) covers sorting their contents.
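The best- and worst-case comparison counts discussed above are easy to verify with an instrumented implementation. The following sketch (illustrative, not taken from any of the sources quoted here) counts comparisons while sorting:

```javascript
// Insertion sort instrumented with a comparison counter, to make the
// best-case O(n) vs worst-case O(n^2) behaviour concrete.
function insertionSort (array) {
  const a = array.slice(); // work on a copy; the algorithm itself is in-place
  let comparisons = 0;

  for (let i = 1; i < a.length; i++) {
    const value = a[i];
    let j = i - 1;

    // Scan backwards through the sorted prefix a[0..i-1], shifting
    // every element greater than `value` one position to the right.
    while (j >= 0) {
      comparisons += 1;
      if (a[j] > value) {
        a[j + 1] = a[j];
        j -= 1;
      } else {
        break;
      }
    }
    a[j + 1] = value;
  }

  return { sorted: a, comparisons };
}

insertionSort([1, 2, 3, 4, 5]).comparisons
  //=> 4, i.e. n − 1: the best case is linear

insertionSort([5, 4, 3, 2, 1]).comparisons
  //=> 10, i.e. n(n − 1)/2: the worst case is quadratic
```

On an already-sorted array of length n the inner loop fails its test immediately each time, giving n − 1 comparisons; on a reverse-sorted array every element is compared against and shifted past the whole sorted prefix, giving n(n − 1)/2.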
Fundamentals of Electromagnetic Fields and Waves
You must have patience and an interest in learning the subject.
A little mathematics, such as differentiation and integration, is required.
This course is for those who are pursuing a bachelor’s degree in electronics and communications engineering and is an advantage for them to get good knowledge and score well in the examinations.
Nowadays, electromagnetics plays a vital role because of the advancements in technology. Electronic circuits and network circuits have the limitation that they only describe the voltage, current
resistance, etc., but they cannot give the electric field intensity, attenuation constant, phase constant, lambda, or wavelength types of parameters. Therefore, network theory fails to give the above
parameters. So electromagnetic field theory is the advancement of network theory.
In this subject, you may know some fundamental concepts .
Section 1: Deals with the the field, vector and scalar fields, unit vectors, position vectors ,distance vectors etc.,
Section 2: Deals about vector algebra, which includes the dot product rules, some basic formulas of vector algebra and vector calculus, the cork screw rule, and the vector scalar triple product and
scalar triple product discussed elaborately, which are required for solving the problems in electromagnetic fields theory.
Section 3 : This is all about the review of coordinate systems, which include the cartesian coordinate system or rectangular coordinate system, cylindrical coordinate system, and spherical coordinate
system. Next, we discussed point transformations like rectangular to cylindrical or vice versa as well as cylindrical to spherical or vice versa. We also discussed what the DEL operator is, why it
was used, and how these del operators are used for divergence, curl, and gradient operations with some sort of rule. Next we also discussed the statements and mathematical approach of divergence
theorem, curl for stokes theorem etc.,
Section 4: Discussed Coulomb’s law and vector form. Coulomb’s law for n number of charges, electric field intensity, and finding the electric field intensity for different charge distributions like
point charge, infinite line and sheet charge and volume charge, etc., also find the electric flux density from the electric field intensity formulas. next we discussed the gauss law and its
Section 5: This section is all about the magnetostatics related to the H component. he first concept is the introduction of magnetostatics and Biot-Savarts law for finding H and magnetic field
intensity H for circular loop. The next one is Amperes circuit law and its applications like infinite line elements, circular disc, and infinite sheet of charge etc., next is magnetic flux and
magnetic flux density, magnetic scalar and magnetic vector potentials, etc. later force due to different magnetic fields like .Amperes force law , Lorentz force etc.,
Section 6: Covers Maxwell’s equations for time-varying fields: Faraday’s law, transformer emf, and induced emf, combined with the inconsistency of Ampère’s law (the modified Ampère’s law) and the displacement current density. Finally, we cover boundary conditions for different media, such as dielectric–dielectric and conductor interfaces.
Section 7: Deals with what a wave and an electromagnetic wave are, then the wave equations for dielectrics and conductors. Later we find E/H, the intrinsic impedance, etc.
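For a lossless medium, the intrinsic impedance (the E/H ratio of a plane wave) is η = sqrt(μ/ε); for free space this comes out to about 377 Ω. A small Python check of that formula (my own illustration, not from the course):

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
EPS0 = 8.854187817e-12      # vacuum permittivity, F/m

def intrinsic_impedance(eps_r=1.0, mu_r=1.0):
    """eta = sqrt(mu/eps) for a lossless medium, in ohms."""
    return math.sqrt((mu_r * MU0) / (eps_r * EPS0))

print(round(intrinsic_impedance(), 2))         # free space: ~ 376.73 ohms
print(intrinsic_impedance(eps_r=4.0))          # eps_r = 4: halved vs free space
```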
Feel free to ask any questions while taking the course.
Happy learning!
Skill Gems Education
PUDI V V S NARAYANA
Who this course is for:
This course is very helpful for undergraduate and postgraduate students, such as engineering students.
Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1710-1723, 2015.
We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time
algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely
the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.
Smith College | Mathematical Sciences
Mathematics is one of the oldest disciplines of study. For all its antiquity, however, it is a modern, rapidly growing field. Only 70 years ago, mathematics might have been said to consist of
algebra, analysis, number theory and geometry. Today, so many new areas have sprouted that the term “mathematics” seems almost inadequate. A new phrase, “the mathematical sciences,” has come into fashion to describe a broad discipline that includes the blossoming fields of statistics, operations research, biomathematics and information science, as well as the traditional branches of pure and applied mathematics.
Requirements & Courses
Goals for Majors in Mathematics
• Given a problem, to recognize its mathematical aspects and to produce an abstract mathematical model for the problem.
• Basic mathematical skills (through discrete math, the calculus course, and linear algebra).
• To write mathematics effectively.
• To understand and write mathematical proofs.
• To communicate mathematical and statistical ideas effectively in oral presentations.
• To use technology appropriately to learn and understand mathematics.
Goals for Majors in MST
• Solid preparation for success in graduate school in statistics, biostatistics or other quantitative fields.
• Given a problem, be able to recognize its mathematical aspects and produce an abstract mathematical model for the problem.
• Mastery of foundational mathematical skills (through discrete math, the calculus courses, linear algebra and analysis), including the ability to read and write proofs.
• Fit and interpret statistical models, including but not limited to linear regression models. Use models to make predictions and evaluate the efficacy of those models and the accuracy of those predictions.
• Understand the strengths and limitations of different research methods for the collection, analysis and interpretation of data. Be able to design studies for various purposes.
• Attend to and explain the role of uncertainty in inferential statistical procedures.
• Write a professional-level statistical report.
• Understand the mathematical foundations of statistical inference.
• Compute with data in at least one high-level programming language, as evidenced by the ability to analyze a complex data set.
Mathematics Major
The mathematics major has a foundation requirement, a core requirement, a depth requirement and a total credit requirement.
1. Foundation: MTH 111, MTH 112, MTH 153, MTH 211 and MTH 212
2. Core
1. One course in algebra: MTH 233 or MTH 238
2. One course in analysis MTH 280 or MTH 281
3. Depth: One advanced course, (a MTH course with a course number between 310 and 390)
4. Electives to reach 36 credits at or above MTH 153
• With the approval of the department, up to 8 of the credits may be satisfied by courses taken outside the department of Mathematical Sciences. Courses taken outside the department must contain
either substantial mathematical content at a level more advanced than MTH 211 and MTH 212 or statistical content at a level more advanced than SDS 220. Generally, such a 4-credit course will be
given 2 credits toward the mathematics major.
• Crosslisted courses (see the Courses tab) are counted as mathematics courses and given full credit toward the mathematics major, as does ECO 220.
• The following courses meet the criteria for 2 credits toward mathematics major: AST 337, CHM 331, CHM 332, CSC 240, CSC 252, CSC 274, a topic in CSC 334 , ECO 240, ECO 255, EGR 220, EGR 315, EGR
320, EGR 326, EGR 374, EGR 389, PHI 102, PHY 210, PHY 317, PHY 318, PHY 319, PHY 327, and SDS 293. A student may petition the department if they wish credit for any course not on this list.
Mathematical Statistics Major
The major in mathematical statistics (MST) is designed to prepare students for graduate study in statistics and closely-related disciplines (e.g., biostatistics). The mathematical statistics major
overlaps with the major in statistical & data sciences (SDS), but places a heavier emphasis on the theoretical development of statistics. Mathematical statistics majors will develop sophisticated
mathematical skills to prepare for rigorous future study. The major also overlaps with the major in mathematical sciences (MTH), but focuses on statistics and replaces the algebra requirement with a
computing requirement.
A student majoring in MST cannot have a second major in either SDS or MTH. Students contemplating a double major in MTH and SDS should choose to major in MST.
Ten courses
1. Mathematical foundations (3 courses): MTH 153, MTH 211 and MTH 212
2. Statistical foundations (2 courses)
1. SDS 201 or SDS 220
2. SDS 291
3. Statistics depth (1 course): SDS 290, SDS 293/ CSC 293 or a topic in SDS 390
4. Mathematics depth (1 course): MTH 280 or MTH 281
5. Programming (1 course): SDS 192, CSC 110 or CSC 120
6. Theoretical statistics (2 courses): MTH 246 and MTH 320/ SDS 320
7. Electives to complete ten courses. Five College course in statistics, mathematics and computer science may be taken as electives. Students should consult with their adviser to determine
appropriate electives.
Major Requirement Details
• SDS 220 or SDS 201 may be replaced by a 4 or 5 on the AP statistics exam. Replacement by AP scores does not diminish the total of 10 courses required for the major (see Electives above).
• A student may replace MTH 153, MTH 211 and MTH 212 with equivalent courses as approved by the MTH department.
• Any one of ECO 220, GOV 203, PSY 201 or SOC 204 may directly substitute for SDS 220 or SDS 201 without the need to take another course. Note that SDS 220 and ECO 220 require Calculus.
• Normally, all courses that are counted towards either the major or minor must be taken for a letter grade.
A student majoring in mathematics may apply for the departmental honors program. An honors project consists of directed reading, investigation and a thesis. This is an opportunity to engage in
scholarship at a high level. A student at any level considering an honors project is encouraged to consult with the director of honors and any member of the department to obtain advice and further information.
Normally, a student who applies to do honors work must have an overall 3.0 GPA for courses through their junior year, and a 3.3 GPA for courses in their major. A student may apply either in the
second semester of their junior year or by the second week of the first semester of their senior year; the former is strongly recommended.
1. Credits required for the major
2. MTH 430D or MTH 432D (for either eight or twelve credits) or (in unusual circumstances) MTH 431. The length of the thesis depends upon the topic and the nature of the investigation and is
determined by the student, their adviser and the department.
3. An oral presentation of the thesis
The department recommends the designation of Highest Honors, High Honors, Honors, Pass or Fail based on the following three criteria at the given percentages:
60 percent thesis
20 percent oral presentation
20 percent grades in the major
Specific guidelines and deadlines for completion of the various stages of an honors project are set by the department as well as by the college. The student should obtain the department’s
requirements and deadlines from the director of honors.
Mathematics Minor
Twenty credits
1. MTH 211
2. Two courses (minimum of 8 credits) from MTH 153, MTH 205/ CSC 205 or courses above MTH 211.
3. Two courses (minimum of 8 credits) above MTH 212.
Up to four credits may be replaced by eight credits from the list of courses outside the department in the description of major requirements found on the Major tab.
Applied Statistics Minor
Information on the interdepartmental minor in applied statistics can be found on the Statistical and Data Sciences page of this catalog.
Course Information
A student with three or four years of high school mathematics (the final year may be called precalculus, trigonometry, functions, or analysis), but no calculus, normally enrolls in MTH 111. A student
with a year of AB calculus, A levels or IB math SL normally enrolls in MTH 153 and/or MTH 112 during the first year. Placement in MTH 112 is determined not only by the amount of previous calculus but
also by the strength of the student’s preparation. If a student has a year of BC calculus or IB math HL, they may omit MTH 112.
A student with two years of high school mathematics, but no calculus or precalculus, should enroll in MTH 102.
Topics offered in MTH 105 are intended for students not expecting to major in mathematics or the sciences.
A student who receives credit for taking MTH 111 may not have AP calculus credits applied toward their degree. A student with 8 AP Calculus credits (available to students with a 4 or 5 on the AP exam
for BC Calculus) may apply only 4 of them if they also receive credit for MTH 112. A student who has a score of 4 or 5 on the AP Statistics examination may receive 4 credits. They may not however,
use them toward their degree requirements if they also receive credit for SDS 201, SDS 220, PSY 201 or ECO 220. (AP credits can be used to meet degree requirements only under circumstances specified
by the college.)
MTH 101/ IDP 101 Math Skills Studio (4 Credits)
Offered as MTH 101 and IDP 101. This course is for students who need additional preparation to succeed in courses containing quantitative material. It provides a supportive environment for learning
or reviewing, as well as applying, arithmetic, algebra and mathematical skills. Students develop their numerical and algebraic skills by working with numbers drawn from a variety of sources. This
course does not carry a Latin Honors designation. Enrollment limited to 20. Instructor permission required.
Fall, Interterm
MTH 102 Elementary Functions (4 Credits)
Linear, polynomial, exponential, logarithmic and trigonometric functions: graphs, verbal descriptions, tables and mathematical formulae. For students who intend to take calculus or quantitative
courses in scientific fields, economics, government and sociology. Also recommended for prospective teachers preparing for certification. Enrollment limited to 25. {M}
MTH 103/ IDP 103 Precalculus and Calculus Bootcamp (2 Credits)
Offered as IDP 103 and MTH 103. This course provides a fast-paced review of and intense practice of computational skills, graphing skills, algebra, trigonometry, elementary functions (pre-calculus)
and computations used in calculus. Featuring a daily review followed by problem-solving drills and exercises stressing technique and application, this course provides concentrated practice in the
skills needed to succeed in courses that apply elementary functions and calculus. Students gain credit by completing all course assignments. This course does not count towards the Mathematics or
Mathematical Statistics majors. S/U only. Enrollment limited to 20. Instructor permission required.
Fall, Interterm, Spring, Variable
MTH 104/ CHM 104 We Are All Scientists: The Impact of Racism on Science (4 Credits)
Offered as CHM 104 and MTH 104. "Do I belong here?" is the question that underrepresented individuals in STEM constantly ask themselves, especially at the undergraduate level where students select
majors that often define their careers. The definition and specialization of science emerged in later centuries and were defined by European standards in a way that excluded the underrepresented
groups that the science community struggles to include today. The interpretation of history is firmly linked to how one perceives themself. This course aims to re-examine scientific discovery with a
focus on anti-blackness using inclusive historical examples. We are all scientists, and it’s time to celebrate all of our stories. Enrollment limited to 15. Instructor permission required. (E)
Spring, Alternate Years
MTH 105ar Topics in Discovering Mathematics-MathStudio: Making, Art + Math (4 Credits)
The course has geometrical, mathematical and studio art components. Students draw and build 3D objects with simple tools and study their geometric and mathematical properties. Introduction to
elements of geometry, algebra and symmetry in connection to what is built. Enrollment limited to 25. {M}
Fall, Spring, Variable
MTH 105we Topics in Discovering Mathematics-The Mathematics of Wealth (4 Credits)
This course looks at the intersection of mathematics and social justice thru the lens of wealth in America. Social justice topics include wealth distribution, taxes, the Gini index and the poverty
cycle. Mathematical topics include mathematical modeling, logic, set theory, statistics and probability. Enrollment limited to 25. (E)
Fall, Spring, Variable
MTH 111 Calculus I (4 Credits)
Discussions include rates of change, differentiation, applications of derivatives including differential equations and the fundamental theorem of calculus. Written communication and applications to
other sciences and social sciences motivate course content. Enrollment limited to 25. {M}
Fall, Spring
MTH 112 Calculus II (4 Credits)
Techniques of integration, geometric applications of the integral, differential equations and modeling, infinite series, and approximation of functions. Written communication and applications to
other sciences and social sciences motivate course content. Prerequisite: MTH 111 or equivalent. Enrollment limited to 25. {M}
Fall, Spring
MTH 153 Introduction to Discrete Mathematics (4 Credits)
An introduction to discrete (finite) mathematics with emphasis on the study of algorithms and on applications to mathematical modeling and computer science. Topics include sets, logic, graph theory,
induction, recursion, counting and combinatorics. Enrollment limited to 25. {M}
Fall, Spring
MTH 205/ CSC 205 Modeling in the Sciences (4 Credits)
Offered as CSC 205 and MTH 205. This course integrates the use of mathematics and computers for modeling various phenomena drawn from the natural and social sciences. Scientific case studies span a
wide range of systems at all scales, with special emphasis on the life sciences. Mathematical tools include data analysis, discrete and continuous dynamical systems, and discrete geometry. This is a
project-based course and provides elementary training in programming using Mathematica. Designations: Theory, Programming. Prerequisites: MTH 112. CSC 110 recommended. Enrollment limited to 20. {M}
Fall, Spring, Annually
MTH 211 Linear Algebra (4 Credits)
Systems of linear equations, matrices, linear transformations and vector spaces. Applications to be selected from differential equations, foundations of physics, geometry and other topics.
Prerequisite: MTH 112 or equivalent, or MTH 111 and MTH 153; MTH 153 is suggested. Enrollment limited to 30. {M}
Fall, Spring
MTH 212 Multivariable Calculus (4 Credits)
Theory and applications of limits, derivatives and integrals of functions of one, two and three variables. Curves in two-and three-dimensional space, vector functions, double and triple integrals,
polar, cylindrical and spherical coordinates. Path integration and Green’s Theorem. Prerequisites: MTH 112. MTH 211 suggested (may be concurrent). Enrollment limited to 30. {M}
Fall, Spring
MTH 233 An Introduction to Abstract Algebra (4 Credits)
An introduction to the concepts of abstract algebra, including groups, quotient groups and, if time allows, rings and fields. Prerequisites: MTH 153 and MTH 211 or equivalent. {M}
MTH 238 Number Theory (4 Credits)
Topics to be covered include properties of the integers, prime numbers, congruences, various Diophantine problems, arithmetical functions and cryptography. Prerequisite: MTH 153 and MTH 211, or
equivalent. {M}
MTH 246 Probability (4 Credits)
An introduction to probability, including combinatorial probability, random variables, discrete and continuous distributions. Prerequisites: MTH 153 and MTH 212 (may be taken concurrently), or
equivalent. {M}
MTH 254 Combinatorics (4 Credits)
Enumeration, including recurrence relations and generating functions. Special attention paid to binomial coefficients, Fibonacci numbers, Catalan numbers and Stirling numbers. Combinatorial designs,
including Latin squares, finite projective planes, Hadamard matrices and block designs. Necessary conditions and constructions. Error correcting codes. Applications. Prerequisites: MTH 153 and MTH
211 or equivalent. {M}
Spring, Alternate Years
MTH 255 Graph Theory (4 Credits)
The course begins with the basic structure of graphs including connectivity, paths, cycles and planarity and proceeds to independence, stability, matchings and colorings. Directed graphs and networks
are considered. In particular, some optimization problems including maximum flow are covered. The material includes theory and mathematical proofs as well as algorithms and applications.
Prerequisites: MTH 153 and MTH 211 or equivalent. {M}
Spring, Alternate Years
MTH 261 Computational Linear Algebra (4 Credits)
Linear algebra has become one of the most widely applied areas of mathematics. Fast matrix computation allows for the manipulation and analysis of large complex data sets which has enabled major
advances in computation. Discussions include solving linear systems, matrices, determinants, matrix factorizations such as LU, and QR decompositions and singular value decomposition (SVD). Students
will learn to use software to analyze large data sets, with applications in computer science, chemistry, engineering, and others. This course will be taught using the software MATLAB, but no
knowledge of MATLAB is assumed. Enrollment limited to 25. (E) {M}
Fall, Spring, Variable
MTH 264de Topics in Applied Math-Differential Equations (4 Credits)
This course gives an introduction to the theory and applications of ordinary differential equations. The course explores different applications in physics, chemistry, biology, engineering and social
sciences. Students learn to predict the behavior of a particular system described by differential equations by finding exact solutions, making numerical approximations, and performing qualitative and
geometric analysis. Specific topics include solutions to first order equations and linear systems, existence and uniqueness of solutions, nonlinear systems and linear stability analysis, forcing and
resonance, Laplace transforms. Prerequisites: MTH 112, MTH 212 and MTH 211 (recommended) or PHY 210, or equivalent. {M}
Fall, Spring
MTH 270ss Topics in Geometry-The Shape of Space (4 Credits)
This is a course in intuitive geometry and topology, with an emphasis on hands-on exploration and developing the visual imagination. Discussions may include knots, geometry and topology of surfaces
and the Gauss-Bonnet Theorem, symmetries, wallpaper patterns in Euclidean, spherical and hyperbolic geometries, and an introduction to 3-dimensional manifolds. Prerequisites: MTH 211 and MTH 212 or
equivalent. {M}
Fall, Spring, Variable
MTH 280 Advanced Calculus (4 Credits)
Functions of several variables, vector fields, divergence and curl, critical point theory, transformations and their Jacobians, implicit functions, manifolds, theory and applications of multiple
integration, and the theorems of Green, Gauss and Stokes. Prerequisites: MTH 211 and MTH 212, or equivalent. MTH 153 is encouraged. {M}
MTH 281 Introduction to Analysis (4 Credits)
The topological structure of the real line, compactness, connectedness, functions, continuity, uniform continuity, differentiability, sequences and series of functions, uniform convergence,
introduction to Lebesgue measure and integration. Prerequisites: MTH 211 and MTH 212, or equivalent. MTH 153 is strongly encouraged. Enrollment limited to 20. {M}
MTH 300 Dialogues in Mathematics and Statistics (1 Credit)
In this class students don’t do math as much as they talk about doing math and the culture of mathematics. The class includes lectures by students, faculty and visitors on a wide variety of topics,
and opportunities to talk with mathematicians about their lives. This course is especially helpful for those considering graduate school in the mathematical sciences. Prerequisites: MTH 211, MTH 212
and two additional mathematics courses at the 200-level, or equivalent. May be repeated once for credit. S/U only. {M}
Fall, Spring
MTH 301rs Topics in Advanced Mathematics-Research (3 Credits)
In this course students work in small groups on original research projects. Students are expected to attend a brief presentation of projects at the start of the semester. Recent topics include
interactions between algebra and graph theory, plant patterns, knot theory and mathematical modeling. This course is open to all students interested in gaining research experience in mathematics.
Prerequisites vary depending on the project, but normally MTH 153 and MTH 211 are required. Restrictions: MTH 301rs may be repeated once. {M}
Fall, Spring, Variable
MTH 320/ SDS 320 Mathematical Statistics (4 Credits)
Offered as MTH 320 and SDS 320. An introduction to the mathematical theory of statistics and to the application of that theory to the real world. Discussions include functions of random variables,
estimation, likelihood and Bayesian methods, hypothesis testing and linear models. Prerequisites: a course in introductory statistics, MTH 212 and MTH 246, or equivalent. Enrollment limited to 20.
MTH 333ct Topics in Abstract Algebra-Coding Theory (4 Credits)
An overview of noiseless and noisy coding. Covers both theory and applications of coding theory. Topics include linear codes, Hamming codes, Reed-Muller codes, cyclic redundancy checks, entropy, and
other topics as time permits. Prerequisites: MTH 153 and MTH 211. One of MTH 233 or MTH 238 is highly recommended. {M}
Fall, Spring, Variable
MTH 333la Topics in Abstract Algebra-Advanced Linear Algebra (4 Credits)
This is a second course in linear algebra that explores the structure of matrices. Topics may include characteristic and minimal polynomials, diagonalization and canonical forms of matrices, the
spectral theorem, the singular value decomposition theorem, an introduction to modules, and applications to problems in optimization, Markov chains, and others. {M}
Fall, Spring, Variable
MTH 333rt Topics in Abstract Algebra-Representation Theory (4 Credits)
Representation theory is used everywhere, from number theory, combinatorics, and topology, to chemistry, physics, coding theory, and computer graphics. The core question of representation theory is:
what are the fundamentally different ways to describe symmetries as groups of matrices acting on an underlying vector space? This course will explain each part of that question and key approaches to
answering it. Topics may include irreducible representations, Schur’s Lemma, Maschke’s Theorem, character tables, orthogonality of characters, and representations of specific finite groups. MTH 233
is helpful but not required. Prerequisite: MTH 211. {M}
Fall, Spring, Variable
MTH 353ac Seminar: Advanced Topics in Discrete Applied Mathematics-Calderwood Seminar on Applied Algebraic Combinatorics and Mathematical Biology (4 Credits)
Calderwood Seminar. Combinatorial ideas permeate biology at all scales, from the combinatorial properties of the sequences of letters (nucleotides) representing DNA and RNA, to the symmetries often
observed in cell divisions, to the graphs that can be used to represent evolutionary trees. This course focuses on key combinatorial ideas that arise on multiple scales in biology, including
molecular, cellular and organism, especially: counting and classification, symmetries and combinatorial graphs. The class interviews mathematicians and biologists about their current research and
prepares multiple reports and presentations for different kinds of popular audiences (for example: kids, biologists and newspapers). No particular biological background is expected. MTH 153 and an
additional proof-based course are required, or equivalent. MTH 233 and MTH 254 or their equivalents are useful but not required. Restrictions: Juniors and seniors only. Enrollment limited to 12.
Instructor permission required. {M}
Fall, Spring, Variable
MTH 354 Mathematics of Deep Learning (4 Credits)
The developments of Artificial Intelligence (AI) are tied to an unprecedented reshaping of the human experience throughout society, impacting the arts, literature, science, politics, commerce, law,
education, etc. Despite these consequential effects, understanding of AI is mostly empirical. The state of knowledge of deep learning has been recently likened to a pseudo-science like alchemy.
Progress in this direction rests on truly interdisciplinary approaches that are equally informed from mathematics, computer science, statistics and data science. The course goals are: (1) Understand
the mathematical foundations of deep learning, (2) Develop proficiency in using mathematical tools to analyze deep learning algorithms, (3) Apply mathematical concepts to implement real-world
applications of deep learning. Not recommended for first-years. Prerequisites: MTH 211 and MTH 212. Enrollment limited to 12. {M}
Fall, Spring, Variable
MTH 364ds Advanced Topics in Continuous Applied Mathematics-Dynamical Systems, Chaos and Applications (4 Credits)
An introduction to the theory of Dynamical Systems with applications. A dynamical system is a system that evolves with time under certain rules. The class looks at both continuous and discrete
dynamical systems when the rules are given by differential equations or iteration of transformations. Students study the stability of equilibria or periodic orbits, bifurcations, chaos and strange
attractors. Applications are often biological, but the final project is on a scientific application of the student's choice. Prerequisites: MTH 211 and MTH 212 or equivalent. {M}
Fall, Spring, Variable
MTH 364pd Advanced Topics in Continuous Applied Mathematics-Partial Differential Equations (4 Credits)
Partial differential equations allow the ability to track how quantities change when they depend on multiple variables, e.g. space and time. This course provides an introduction to techniques for
analyzing and solving partial differential equations and surveys applications from the sciences and engineering. Specific topics include Fourier series; separation of variables; heat, wave and
Laplace’s equations; finite difference numerical methods; and introduction to pattern formations. Prerequisite: MTH 211 and MTH 212, or MTH 280/MTH 281, or equivalent. MTH 264 is strongly
recommended. Prior exposure to computing (using Matlab, Mathematica, Python, etc.) is helpful. {M}
Fall, Spring, Variable
MTH 370gw Topics in Topology and Geometry-The Geometry of the Physical World (4 Credits)
The course covers the mathematics needed to describe our physical universe, focusing on concepts from the field of Differential Geometry that are needed to understand the Theories of Special and
General Relativity. The course covers the differential geometry of surfaces in 3-dimensional space, with a particular focus on the difference between intrinsic and extrinsic geometry. The course also
covers the Postulates of Special Relativity and an introduction to General Relativity, motivating the study of higher dimensional manifolds, Lorentzian Geometry and the mathematics behind
coordinate-independent Physical theories. (E) {M}{N}
Fall, Spring, Variable
MTH 370tp Topics in Topology and Geometry-Topology (4 Credits)
Topology is a kind of geometry in which important properties of a shape are preserved under continuous motions (homeomorphisms)—for instance, properties like whether one object can be transformed
into another by stretching and squishing but not tearing. This course gives students an introduction to some of the classical topics in the area: the basic notions of point set topology (including
connectedness and compactness) and the definition and use of the fundamental group. Prerequisites: MTH 280 or MTH 281, or equivalent. {M}
Fall, Spring, Variable
MTH 381fw Topics in Mathematical Analysis- Fourier Analysis and Wavelets (4 Credits)
The mathematics of how it is possible to simultaneously stream videos while using the same cable to call on the phone. Hilbert spaces, Fourier series, Fourier transform, discrete Fourier transforms,
wavelets, multiresolution analysis, applications. Prerequisite: MTH 280 or MTH 281. {M}
Fall, Spring, Variable
MTH 381gm Topics in Mathematical Analysis-Geometry and Mechanics (4 Credits)
Introduction to modern geometric approaches to classical physics. The essential idea is that the notion of symmetry can be used to simplify the analysis of physical systems. Topics may include
Lagrangian and Hamiltonian mechanics, Noether’s Theorem and conservation laws, quantization, and special relativity. MTH 233 is suggested (possibly concurrently). No prior exposure to physics is
necessary. Prerequisite: MTH 280 or MTH 281. {M}
Fall, Spring, Variable
MTH 382 Complex Analysis (4 Credits)
Complex numbers, functions of a complex variable, algebra and geometry of the complex plane. Differentiation, integration, Cauchy integral formula, calculus of residues, applications. Prerequisite:
MTH 211 and MTH 212, or equivalent.
Fall, Spring, Variable
MTH 400 Special Studies (1-4 Credits)
Normally for majors who have had at least four semester courses at the intermediate level. Instructor permission required.
Fall, Spring
MTH 430D Honors Project (4 Credits)
Department permission required.
Fall, Spring
MTH 431 Honors Project (8 Credits)
Department permission required.
Fall, Spring
MTH 432D Honors Project (6 Credits)
Department permission required.
Fall, Spring
MTH 580 Graduate Special Studies (4 Credits)
Instructor permission required.
Fall, Spring
Crosslisted Courses
CHM 104/ MTH 104 We Are All Scientists: The Impact of Racism on Science (4 Credits)
Offered as CHM 104 and MTH 104. "Do I belong here?" is the question that underrepresented individuals in STEM constantly ask themselves, especially at the undergraduate level where students select
majors that often define their careers. The definition and specialization of science emerged in later centuries and were defined by European standards in a way that excluded the underrepresented
groups that the science community struggles to include today. The interpretation of history is firmly linked to how one perceives themself. This course aims to re-examine scientific discovery with a
focus on anti-blackness using inclusive historical examples. We are all scientists, and it’s time to celebrate all of our stories. Enrollment limited to 15. Instructor permission required. (E)
Spring, Alternate Years
CSC 109/ SDS 109 Communicating with Data (4 Credits)
Offered as SDS 109 and CSC 109. The world is growing increasingly reliant on collecting and analyzing information to help people make decisions. Because of this, the ability to communicate
effectively about data is an important component of future job prospects across nearly all disciplines. In this course, students learn the foundations of information visualization and sharpen their
skills in communicating using data. This course explores concepts in decision-making, human perception, color theory and storytelling as they apply to data-driven communication. This course helps
students build a strong foundation in how to talk to people about data, for both aspiring data scientists and students who want to learn new ways of presenting information. Enrollment limited to 40.
Fall, Spring
CSC 205/ MTH 205 Modeling in the Sciences (4 Credits)
Offered as CSC 205 and MTH 205. This course integrates the use of mathematics and computers for modeling various phenomena drawn from the natural and social sciences. Scientific case studies span a
wide range of systems at all scales, with special emphasis on the life sciences. Mathematical tools include data analysis, discrete and continuous dynamical systems, and discrete geometry. This is a
project-based course and provides elementary training in programming using Mathematica. Designations: Theory, Programming. Prerequisites: MTH 112. CSC 110 recommended. Enrollment limited to 20. {M}
Fall, Spring, Annually
CSC 270 Digital Circuits and Computer Systems (5 Credits)
This class introduces the operation of logic and sequential circuits. Students explore basic logic gates (AND, OR, NAND, NOR), counters, flip-flops, decoders, microprocessor systems. Students have
the opportunity to design and implement digital circuits during a weekly lab. Designation: Systems. Prerequisite: CSC 231. Enrollment limited to 12.
Fall, Spring, Variable
CSC 290 Introduction to Artificial Intelligence (4 Credits)
An introduction to artificial intelligence including an introduction to artificial intelligence programming. Discussions include: game playing and search strategies, machine learning, natural
language understanding, neural networks, genetic algorithms, evolutionary programming and philosophical issues. Designations: Theory, Programming. Prerequisite: CSC 210 and MTH 111, or equivalent.
Enrollment limited to 30.
Fall, Spring, Variable
IDP 101/ MTH 101 Math Skills Studio (4 Credits)
Offered as MTH 101 and IDP 101. This course is for students who need additional preparation to succeed in courses containing quantitative material. It provides a supportive environment for learning
or reviewing, as well as applying, arithmetic, algebra and mathematical skills. Students develop their numerical and algebraic skills by working with numbers drawn from a variety of sources. This
course does not carry a Latin Honors designation. Enrollment limited to 20. Instructor permission required.
Fall, Interterm
IDP 105 Quantitative Skills in Practice (4 Credits)
A course continuing the development of quantitative skills and quantitative literacy begun in MTH/ IDP 101. Students continue to exercise and review basic mathematical skills, to reason with
quantitative information, to explore the use and power of quantitative reasoning in rhetorical argument, and to cultivate the habit of mind to use quantitative skills as part of critical thinking.
Attention is given to visual literacy in reading graphs, tables and other displays of quantitative information and to cultural attitudes surrounding mathematics. Prerequisites: MTH 101/ IDP 101.
Enrollment limited to 18. {M}
IDP 325 Art/Math Studio (4 Credits)
This course is a combination of two distinct but related areas of study: studio art and mathematics. Students are actively engaged in the design and fabrication of three-dimensional models that deal
directly with aspects of mathematics. The class includes an introduction to basic building techniques with a variety of tools and media. At the same time each student pursues an intensive examination
of a particular, individual theme within studio art practice. The mathematical projects are pursued in small groups. The studio artwork is done individually. Group discussions of reading, oral
presentations and critiques, as well as several small written assignments, are a major aspect of the class. Limited to juniors and seniors. Instructor permission required. Enrollment limited to 15.
MTH 320/ SDS 320 Mathematical Statistics (4 Credits)
Offered as MTH 320 and SDS 320. An introduction to the mathematical theory of statistics and to the application of that theory to the real world. Discussions include functions of random variables,
estimation, likelihood and Bayesian methods, hypothesis testing and linear models. Prerequisites: a course in introductory statistics, MTH 212 and MTH 246, or equivalent. Enrollment limited to 20.
PSY 201 Statistical Methods for Undergraduate Research (5 Credits)
An overview of the statistical methods needed for undergraduate research emphasizing methods for data collection, data description and statistical inference including an introduction to study design,
confidence intervals, testing hypotheses, analysis of variance and regression analysis. Techniques for analyzing both quantitative and categorical data are discussed. Applications are emphasized, and
students use R and other statistical software for data analysis. This course satisfies the basis requirement for the psychology major. Students who have taken MTH 111 or the equivalent or who have
taken AP STAT should take SDS 220, which also satisfies the major requirement. Restrictions: Students do not normally earn credit for more than one course on this list: ECO 220, GOV 203, MTH 220, PSY
201, SDS 201, SDS 220 or SOC 204. Enrollment limited to 40. {M}
Fall, Spring, Variable
SDS 220 Introduction to Probability and Statistics (4 Credits)
(Formerly MTH 220/SDS 220). An application-oriented introduction to modern statistical inference: study design, descriptive statistics, random variables, probability and sampling distributions, point
and interval estimates, hypothesis tests, resampling procedures, and multiple regression. A wide variety of applications from the natural and social sciences are used. This course satisfies the basic
requirement for biological science, engineering, environmental science, neuroscience, and psychology. Prerequisite: MTH 111, or equivalent; SDS 100 must be taken concurrently for students who have
not completed SDS 192, SDS 201, SDS 290 or SDS 291. Restrictions: Students do not normally earn credit for more than one course on this list: ECO 220, GOV 203, MTH 220, PSY 201, SDS 201, SDS 220 or
SOC 204. Enrollment limited to 40. {M}
Fall, Spring
SDS 290 Research Design and Analysis (4 Credits)
(Formerly MTH/SDS 290). A survey of statistical methods needed for scientific research, including planning data collection and data analyses that provide evidence about a research hypothesis. The
course can include coverage of analyses of variance, interactions, contrasts, multiple comparisons, multiple regression, factor analysis, causal inference for observational and randomized studies and
graphical methods for displaying data. Special attention is given to analysis of data from student projects such as theses and special studies. Statistical software is used for data analysis.
Prerequisites: One of the following: PSY 201, SDS 201, GOV 203, ECO 220, SDS 220 or a score of 4 or 5 on the AP Statistics examination or the equivalent; concurrent registration in SDS 100 required
for students who have not completed SDS 192, SDS 201, SDS 220 or SDS 291. Enrollment limited to 40. {M}
Fall, Spring
SDS 291 Multiple Regression (4 Credits)
(Formerly MTH 291/ SDS 291). Theory and applications of regression techniques: linear and nonlinear multiple regression models, residual and influence analysis, correlation, covariance analysis,
indicator variables and time series analysis. This course includes methods for choosing, fitting, evaluating and comparing statistical models and analyzes data sets taken from the natural, physical
and social sciences. Prerequisite: SDS 201, PSY 201, GOV 203, SDS 220, ECO 220 or equivalent or a score of 4 or 5 on the AP Statistics examination; concurrent registration in SDS 100 required for
students who have not completed SDS 192, 201, 220 or 290. Enrollment limited to 40. {M}{N}
Fall, Spring
The Center for Women in Mathematics is a place for women to get intensive training at the advanced level and an opportunity to study in a community that is fun, friendly and serious about math. The
students build the skills and confidence needed to continue on to graduate school. For details see the Postbaccalaureate Program Website.
Additional Programmatic Information
Advisers: Pau Atela, Benjamin Baumer, Jennifer Beichman, Patricia Cahn, Luca Capogna, Jacob Garcia, Christophe Golé, Rajan Mehta, Geremias Polanco, Candice Price, Ileana Streinu, Becca Thomases,
Julianna Tymoczko, Ileana Vasu
If you'd like to declare a math major or minor, the first step is to fill out an advisor preference form here.
Postbaccalaureate Program
Sponsored by the Center for Women in Mathematics, the Postbaccalaureate Program is for women with bachelor's degrees who did not major in mathematics or whose mathematics major was light. This
program is open to all women who have graduated college with some course work in mathematics above the level of calculus, and a serious interest in further pursuing mathematics. More information
about the program is provided by the Center for Women in Mathematics.
Masters of Arts in Teaching
The Department of Mathematical Sciences cooperates with the Department of Education and Child Study to offer a one–year Master of Arts in Teaching (MAT) program.
During one summer and two semesters, MAT candidates take three courses in mathematics and all the course work required for secondary teacher certification in Massachusetts. The program includes a
semester–long internship in a local school. Applicants for the MAT program in mathematics should have an undergraduate degree in mathematics. College graduates with a different major will be
considered if their undergraduate education included a strong foundation in mathematics.
Fifth-Year Master of Science in Statistics
Qualified graduates of the Department of Mathematical Sciences can apply to the University of Massachusetts Amherst to earn a master's degree in statistics in a fifth year. Learn more about the program.
Honors
An honors project consists of directed reading, investigation and a thesis. This is an opportunity to engage in scholarship at a high level. A student at any level considering an honors project is
encouraged to consult with the director of honors and any member of the department to obtain advice and further information.
Honors projects in the Department of Mathematical Sciences are worth 8–12 credits. Ideally, your program should be approved by the department in the spring before your senior year. (You might also
consider applying for a summer research grant from Smith so you can spend the summer before your senior year in Northampton beginning the work on your project.)
Normally, a student who applies to do honors work must have an overall 3.0 GPA for courses through her junior year, and a 3.3 GPA for courses in her major. A student may apply either in the second
semester of her junior year or by the second week of the first semester of her senior year; we strongly recommend the former.
Financial Assistance
The Tomlinson Memorial fund provides financial assistance for honors thesis projects. If you're interested in obtaining funds you must complete the application form "Financial Assistance for
Departmental Honors" and submit it with your honors application. This application form can be obtained from the director of honors or the class deans office.
Typically, you meet with your project adviser several times a week. Usually the project focuses on one area and involves reading mathematics papers and books at an advanced level. The honors paper
you write will be an assimilation and exposition of the area. Occasionally, a project will include new contributions by the student. By early spring, most of your research should be complete and you
will begin writing. The paper is due in the middle of April. It is read by a panel of faculty members, and in early May you present a talk to the department on your work.
Presentation of Thesis
Smith College rules stipulate that the final draft of your thesis must be submitted to your faculty adviser (first reader) and second reader by April 15*. This final draft will be the one subject to
evaluation by the first and second readers. Honors candidates give a 45-minute oral presentation of their honors research for the mathematics faculty, which will be open to all interested members of
the Smith College community and others by invitation.
You should expect to take questions from the audience during and after the presentation. Following the open presentation there will be an additional question period for the mathematics faculty only.
This presentation will be scheduled during the last week of classes, or reading period, but no later than the last day of the pre-examination study period.
Your honors grade is weighted as follows:
• 60% thesis
• 20% oral presentation
• 20% grades in the major
Your grade for the project (pass, distinction, high distinction, highest distinction) is determined by a combination of your grades on the paper, the presentation and your mathematics courses. The
presentation has the least weight in your grade, but it gives us all a chance to hear about what you have done. We also invite you to give a talk to your fellow majors, though this is not part of the
official process.
*Timeline is for May graduates. Consult your adviser about dates if you plan to graduate in January.
The Math department does not have a designated placement test for math and statistics courses. Descriptions of common starting courses, advice on how to decide between them and information about AP
and IB credit are available on the Introductory Mathematics Courses website and in the document below.
Additional Course Information
Whatever your reasons to study math or statistics, we, or our colleagues in Statistics & Data Science, have something for you! And by the time you take a course with us, we hope that you will have
enjoyed it so much that you will take another one just because it’s cool…
Why do I need math at all?
If you haven’t enjoyed your mathematics courses or have found them frustrating, the need to take more math in college for your major can be irritating. Or maybe you are delighted that you’ll be
taking more math! Students have completely different experiences of mathematics courses, and here we would like to lay out some reasons that all students should be excited about taking math.
You have probably heard the refrain “Math is everywhere!” many times before, and it’s true: math IS everywhere. From computing your GPA (a weighted average) to understanding how debt works
(compounding interest), math runs through most facets of our lives, and increasingly in data-driven industry. For instance, how should a large trucking company allocate its empty
trailers around the country to minimize the number of miles empty trailers travel? This is a difficult math problem that generates jobs and saves the environment!
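The first two examples above, a GPA as a weighted average and debt under compound interest, can be made concrete in a few lines of Python. The grades, credits, balance and interest rate below are made-up illustrative numbers, not data from any real course or loan:

```python
# Weighted average: a GPA weights each course grade by its credit count.
grades = [4.0, 3.3, 3.7]   # hypothetical grade points for three courses
credits = [4, 4, 2]        # hypothetical credit counts for the same courses
gpa = sum(g * c for g, c in zip(grades, credits)) / sum(credits)
print(f"GPA: {gpa:.2f}")   # 3.66

# Compound interest: debt grows multiplicatively each period.
balance = 1000.0           # hypothetical starting debt
rate = 0.18                # hypothetical 18% annual interest rate
years = 5
for _ in range(years):
    balance *= 1 + rate
print(f"Debt after {years} years: ${balance:.2f}")  # $2287.76
```

Note that the 4-credit courses pull the GPA toward their grades more strongly than the 2-credit course does, and that the debt more than doubles in five years even though the rate never changes.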
In practical terms, even if you do not choose to do a math major, a number of other majors - and professions! - require math. The most common ones are:
• MTH 111 Calculus I (Economics, Engineering, Physics, Statistics & Data Science, Pre-health)
• MTH 112 Calculus II (Engineering, Physics)
• MTH 153 Discrete Math (Computer Science)
• MTH 211 Linear Algebra (Statistics & Data Science)
• MTH 212 Calc III (Engineering, Physics)
• SDS/MTH 220 Intro to Probability and Stats (Statistics & Data Science, Biology, Economics; recommended for Engineering and Pre-health)
What courses am I prepared to take?
A student who wishes to study mathematics may place herself according to the following guidelines.
• Any student who is curious about mathematics outside of the standard fields seen in high school may consider Discovering Mathematics (MTH105). Some incarnations of the course have explored arts
and math, the role of chance in our lives, and measuring social inequalities.
• A student with three years of high school math (typically one year of geometry and two years of algebra) is ready for Elementary Functions (MTH102), which can prepare them to take Calculus I.
• A student with four years of high school math (but little or no calculus) can take Calculus I (MTH111).
• A student with a year of high school calculus is ready to take Discrete Mathematics (MTH153) or Calculus II (MTH112).
• Well-prepared students might start at Smith with Linear Algebra (MTH211) or Calculus III (MTH212).
Below is a more detailed document matching your preparation with possible courses.
Mathematics, Statistics, and You
For statistics courses you are prepared to take, consult this Statistics and Data Science page.
For detailed information about the introductory calculus courses at Smith, including how they work and how they help you do the things you want to do with your time at Smith, visit the Introductory
Mathematics Courses at Smith website.
The introductory calculus courses (MTH111: Calculus 1 and MTH112: Calculus 2) at Smith are offered in small sections of 20-28 students, taught by different professors. The sections of each
introductory course are closely coordinated to maximize the resources available to students and make it easy for students to work together during the semester. Those resources include department peer
tutors, quantitative skills tutors through the Spinelli Center for Quantitative Learning, and the department Calculus Training Group program, profiled in Grecourt Gate in November 2017.
For those who either do not intend to take Calculus or who have already taken enough of it, there is math besides Calculus!
MTH153: Introduction to Discrete Mathematics
Description: An introduction to discrete (finite) mathematics with emphasis on the study of algorithms and on applications to mathematical modeling and computer science. Topics include sets, logic,
graph theory, induction, recursion, counting and combinatorics.
Offered: Every semester
Prerequisite: None, but MTH111 and familiarity with summation notation are recommended
Great for: Computer Science, Mathematics & Statistics, Statistics & Data Science – the study of logic and algorithms is necessary for good coding. In addition, you learn a variety of proof
techniques, which are key for going deeper in mathematics as a whole.
MTH211: Linear Algebra
Description: Systems of linear equations, matrices, linear transformations, vector spaces. Applications to be selected from differential equations, foundations of physics, geometry and other topics.
Prerequisite: MTH 112 or the equivalent, or MTH 111 and MTH 153; MTH 153 is suggested
Offered: Every semester
Great for: Almost everyone, but specifically Computer Science, Mathematics & Statistics, Statistics & Data Science, Economics. Linear algebra is the workhorse subject of modern mathematics. Any work
with data relies on an understanding of matrices. Linear algebra is even used to help identify exoplanets in astronomy! It turns up pretty much everywhere.
Required for: MTH and SDS majors.
MTH212: Calculus III
Description: Theory and applications of limits, derivatives and integrals of functions of one, two and three variables. Curves in two- and three-dimensional space, vector functions, double and triple
integrals, polar, cylindrical, spherical coordinates. Path integration and Green’s Theorem.
Prerequisites: MTH 112. It is suggested that MTH 211 be taken before or concurrently with MTH 212.
Offered: Every semester
Great for: Physics, Engineering, Mathematics & Statistics, Economics. Calculus III takes everything from calculus and moves into multiple dimensions. For physics and engineering, understanding of
more than one dimension is essential for modeling how objects move through our multi-dimensional space. In economics, you often need to optimize quantities with many different inputs (and sometimes
outputs!) which Calculus III can do.
Required for: EGR and MTH majors.
MTH/SDS220: Introduction to Probability and Statistics
Description: An application-oriented introduction to modern statistical inference: study design, descriptive statistics; random variables; probability and sampling distributions; point and interval
estimates; hypothesis tests, resampling procedures and multiple regression. A wide variety of applications from the natural and social sciences are used. Classes meet for lecture/discussion and for a
required laboratory that emphasizes analysis of real data.
Prerequisite: MTH 111 or the equivalent, or permission of the instructor. Lab sections limited to 20
Offered: Every semester
Great for: Everybody. Data analysis is a growing field and understanding how to work with data is useful in many fields. MTH 220 satisfies the basis requirement for biological science, engineering,
environmental science, neuroscience and psychology.
Required for: BIO, EGR, ESP, NSC, PSY, SDS.
Note: Other departments offer statistics courses with different prerequisites, and for which SDS 220 may be substituted (e.g. PSY/SDS 201, ECO 220)
Consult the Smith College Course Catalog for information on the current courses available in mathematics and statistics.
There are also several courses that are available for credit from other departments, including art, psychology and more. Consult the catalog.
What classes you should take depends a great deal on what you find most interesting and on what your goals are. Discuss your options with your adviser and also talk to the instructors of particular
courses that interest you.
If you are interested in the sciences:
The department offers a variety of courses to give you a solid mathematical experience. Calculus III and Linear Algebra are fundamental courses. You may also want to consider taking one or more of
the following: Intro to Probability and Statistics, Differential Equations, Differential Equations and Numerical Methods, Discrete Mathematics, Advanced Topics in Continuous Applied Mathematics.
If you are interested in computer science:
Consider taking some of these: Calculus III, Linear Algebra, Modern Algebra, Discrete Mathematics. Many of our students are double–majoring in mathematics and computer science.
If you are interested in economics:
Calculus will give you a good, basic experience. You may consider other courses as well, so be sure to discuss your options with your adviser. If you are contemplating graduate school in economics,
the economics department recommends you take MTH 211, 212, 280 and 281. Taking a solid course in statistics is also a good idea (any of MTH 220, 246, 290, 291 and 320 would do). Many economics
majors want to take MTH 264 as well. Double–majoring in mathematics and economics is a good choice.
If you are interested in applied mathematics:
The following courses work specifically with applications: MTH 205, 264, 353 and 364. Other courses that contain many applications and are important for anyone considering graduate school in applied
mathematics are: MTH 220, 246, 254, 255, 280, 290, 291, and 320.
If you are interested in theoretical mathematics:
The following courses work with abstract structures: MTH 233, 238, 246, 254, 255, 280, 281, 333, 370, 381, and 382.
If you liked calculus:
There are many reasons for liking calculus. If you delighted in the geometry, for example, you should consider MTH 270, 280, 370 and 382. If you enjoyed the power of calculus to describe and
understand the world, you will want to take MTH 264. If you are fascinated with the ideas of limit and infinity and want to get to the bottom of them, you should take MTH 281.
If you liked linear algebra:
You will like MTH 233 very much, and you will also like MTH 238 and 333.
If you liked discrete mathematics:
The natural sequel to Discrete Mathematics is MTH 254 or 255 and then 353. In addition, you may be interested in MTH 246 and in CSC 252 (counts 2 credits toward the mathematics major).
If you are interested in graduate school in mathematics:
Take a lot of courses, but be sure to take MTH 233, 254, and 281 and as many of MTH 264, 333, 370, 381, and 382 as possible. You should also consider taking a graduate course at the University of Massachusetts Amherst.
If you are interested in graduate school in statistics:
The MST (Mathematical Statistics) joint major between MTH and SDS is explicitly designed as a preparation for graduate school in statistics.
If you are interested in graduate school in operations research:
Operations research is a relatively new subarea of mathematics, bringing together mathematical ideas and techniques that are applied to large organizations such as businesses, computers, and
governments. You should take MTH 211 and at least some of the courses listed for statistics above, some combinatorics (MTH 254) and some computer science. Consider also Topics in Applied Mathematics
and Numerical Analysis.
If you want to be a teacher:
Certification requirements vary widely from state–to–state. If you are interested in teaching in secondary school, a mathematics major plus practice teaching may be enough to get started. In
Massachusetts, the major should include either MTH 233 or 238 and one of MTH 220 or 246. A course involving geometry, such as MTH 270 or MTH 370 is also helpful. You should also have some
introduction to computers. For guidelines, look at the list of courses listed in the MAT program. Finally, while MTH 307 Topics in Mathematics Education is rarely offered, something equivalent is
taught as a special studies whenever there are MAT students.
If you are interested in teaching elementary school, most of your required courses will be in the education department. In the mathematics department, our concern would be that you are comfortable
with mathematics, have seen its variety, and most important, that you enjoy it. For all that, you should take the mathematics courses which appeal to you most. For education courses, the latest
information is that you should take EDC 235, 238, 346, 347, 404 (practice teaching), and one elective to be certified. Note that during the semester when you take practice teaching EDC 404, you will
likely be unable to take a math course. Plan ahead and consult the education department.
If you want to be a doctor:
You are doing fine by majoring in mathematics. A course in statistics would be a very good idea. Other areas of mathematics that would be useful are differential equations and combinatorics.
If you want to be an actuary:
Take MTH 246, 290, 291 and 320 and the actuarial exams that are offered periodically. Advancement as an actuary is achieved by passing of a series of examinations. Informal student study groups often
form (ask around!).
If you want to get a good job when you graduate:
A major in mathematics prepares you well, regardless of which courses you choose. Math majors learn to think on their feet; they aren't frightened of numbers and they're at home with abstract ideas.
Often, this alone is what employers are looking for. That said, we should add that knowledge of computer programming is very useful, as is some familiarity with statistics.
If you want something Smith does not offer:
If you are interested in a subject we do not offer, you should talk to professors whose fields of interest are closest to the subject about arranging a special studies. The arrangement must be approved by the
department, but reasonable requests are not refused. If your interest is particularly strong, you might consider an honors project, or summer research work. You should also consider taking a course
(or courses) at one of the consortium schools.
Nuclear Fusion
Lesson Video: Nuclear Fusion Physics
In this lesson, we will learn how to describe the process of nuclear fusion and the advantages and disadvantages of nuclear fusion reactors.
Video Transcript
In this video, our topic is nuclear fusion. This is a process that takes place all throughout the universe, in the core of active stars. Nuclear fusion is behind all the light and heat that the Earth
receives from the Sun in our solar system. From this fact, we get a sense for just how much energy is available through this phenomenon.
To talk about this topic, let's start out with a definition. Nuclear fusion is the joining of two or more atomic nuclei to create a single nucleus. The basic idea, then, is that if we have two
separate atomic nuclei and they come together and form a third, combined nucleus, then that's fusion. Let's look at an example of this.
Say that we have a hydrogen atom nucleus. That’s right here, where the blue dot represents a proton and the green dot represents a neutron. So this is our nucleus. And if we were to write this down
as an atomic isotope using symbols, then we would write it as capital H, since this is hydrogen which has an atomic number of one. And it has a mass number of two, since there are a total of one plus
one protons and neutrons. So that’s our first nucleus. And say that the second nucleus that will come together with this one to fuse is identical to it. It’s also hydrogen two. In other words, it’s a
hydrogen nucleus that has one neutron in it. So its mass number is again two.
Now let’s say it happens that these two hydrogen nuclei collide with one another. This is actually harder than it may at first seem. After all, these two nuclei both have a net positive charge. So
they’ll resist being pushed together. But if we’re able to overcome that repulsion and actually get the two nuclei to collide and fuse, then here’s what can happen. The two hydrogen nuclei come
together, join up, and create a third fused nucleus, now with two protons and one neutron in it. And then along with this fused nucleus, there’s a free neutron that’s released.
Considering this fused nucleus, since it has two protons in it, that must mean that it has an atomic number of two. And as we look that value up on the periodic table, we see that it corresponds to
helium. So these two hydrogen nuclei have come together to form a totally new element, helium. And then in addition to that fused nucleus, there’s a neutron that’s released.
Now that we have this fusion reaction, let’s consider the atomic numbers as well as the mass numbers on either side of it. We can see that the atomic number of each one of these hydrogen nuclei is
one. So if we total them together, we get one plus one, two. Then looking on the other side of the equation, we have the two protons in the helium nucleus and then no protons in the neutron. This
tells us that, in this reaction, atomic number is conserved from the beginning to the end.
Now what about the mass number, the number of protons plus neutrons in each of these constituents? The two hydrogen nuclei both have mass numbers of two. So that gives us a total of two plus two, or
four. And then on the right-hand side, the helium nucleus has a mass number of three. And if we add that to the mass number of the neutron, we once again get four. So mass number as well as atomic
number is conserved across this reaction. Okay, so that’s true. But here’s where things get interesting.
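As an aside, this conservation bookkeeping can be written as a short check. The sketch below is Python with a made-up helper name; each nucleus is represented as an (atomic number, mass number) pair.

```python
# Each nucleus is an (atomic_number, mass_number) pair.
def is_balanced(reactants, products):
    """Return True if total atomic number and total mass number
    are each conserved across the reaction."""
    z_in = sum(z for z, a in reactants)
    a_in = sum(a for z, a in reactants)
    z_out = sum(z for z, a in products)
    a_out = sum(a for z, a in products)
    return z_in == z_out and a_in == a_out

hydrogen_2 = (1, 2)  # 1 proton, 1 neutron
helium_3 = (2, 3)    # 2 protons, 1 neutron
neutron = (0, 1)     # no protons, 1 neutron

# Hydrogen-2 + hydrogen-2 -> helium-3 + a free neutron
print(is_balanced([hydrogen_2, hydrogen_2], [helium_3, neutron]))  # True
```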
If we were able to measure the total mass, not mass number, but mass on either side of this equation — say we were to put the two constituents on either side onto a scale — then we would find that
the total mass of the two hydrogen nuclei before the fusion occurred is greater than the mass of the products of that fusion, the helium nucleus and the neutron. But wait! How could that be? Because
we just counted up the number of protons and neutrons and found they’re consistent on either side.
Well, it turns out that some of the mass in an atomic nucleus, like the nuclei of our two hydrogen atoms or the nuclei of this helium atom, is used up — we could say — as a glue that holds the
nucleus together. For example, say that you had a whole lot of small wooden balls and you wanted to find a way to keep them all attached together. One great way to do that would be to glue all these
wooden balls together. Now of course the glue itself has some mass. So if we were to calculate the total mass of this collection of wooden balls, we would include the mass of the glue along with the mass of the balls themselves.
It’s a similar idea over here with our hydrogen nuclei. When they fuse together into one combined nucleus, the helium nucleus, some of the glue — we could call it — that kept these two hydrogen
nuclei together is not needed to glue together this resulting helium nucleus. To make this fused helium nucleus, we need less glue than we needed for the two hydrogen nuclei. And that’s why if we
were to weigh out the reactants of this process against the products of the process, we would find the total reactant mass is greater.
By the way, the technical term for this glue that holds nuclei together is binding energy. So interestingly, just for nucleons (protons and neutrons) to be able to stick together takes a little bit of energy in and of itself. When fusion takes place, the binding energy that previously went into holding these two hydrogen nuclei together, but that isn’t needed to hold the helium nucleus together, is released.
Just to show that, we could add on an energy term on the product side of this reaction. This energy is the binding energy that’s no longer needed to fuse together this resulting nucleus, in this case
our helium three nucleus. It’s because of this energy released that the process of fusion is so useful at generating energy. Fusion is the core process that takes place in our Sun. It’s the reason
behind all the light and heat we receive from the Sun.
Now at this point, it’s worth saying a word about what nuclear fusion is not, because there’s actually a nuclear process which sounds similar but is quite the opposite. As we’ve seen, nuclear fusion
involves the joining of more than one atomic nucleus to create a single resulting one. This is in contrast to the process known as nuclear fission, which involves the splitting of a single nucleus
into multiple smaller ones. So if we take a large atomic nucleus and break it up into smaller pieces, that’s fission. But if we take small atomic nuclei and fuse or join them together to make a
larger one, that’s fusion.
Since both these processes involve atomic nuclei and they’re both used to generate energy, it can be confusing to keep the two separate. One way to do this is to realize that the word “fusion” means
to fuse together or join separate parts and that this is the opposite of fission, which involves splitting apart.
Now if we look up this process of nuclear fusion online, one of the things we’ll find is that, even though this process occurs regularly in the cores of stars, finding a way to bring the process down
to Earth, so to speak, has been quite a technological challenge. Say that we wanted to build a facility where we could have nuclear fusion going on for the purpose of generating power. For a few
reasons, this seems like a really great idea. First, this process obviously works. Consider all the energy created by our Sun, for example. And also the ingredients — we could call them — the
elements involved in this process, hydrogen and helium, are very common on Earth. This means that it shouldn’t be hard to find fuel for a fusion reaction. And it also means that the products of that
reaction will be easy to work with. They won’t be dangerous or radioactive or need very special handling.
All in all, there are a lot of great advantages to the process of nuclear fusion as an energy supply source. But to get a sense for the challenges involved in making this process work on Earth,
consider where fusion happens now. It happens in the core of stars, specifically where temperatures are in the tens of millions of degrees Celsius. This high-temperature, high-energy environment is
no accident.
Remember, we said that, in order for fusion to occur, say for our two hydrogen nuclei to come together and to fuse into one nucleus, it’s necessary to overcome their mutual repulsion, since after all
these two nuclei have an overall positive charge and therefore push one another apart. From that perspective, we could say that fusion doesn’t want to happen. Electrically, these nuclei want to
repel each other. In order to make fusion happen, we need to put so much energy in the environment of these nuclei that that energy is able to overcome this repulsion. And that’s why fusion only
happens in places where the temperature and therefore the energy is very high.
So in order to make fusion work on Earth, we need to somehow create an environment that’s able to handle temperatures in the tens of millions of degrees. Various ideas for how to do that exist. And
it’s an ongoing process. We’re still figuring it out. For our purposes though, we want to focus on what fusion is and how it works. To better understand that, let’s consider this example.
Say that we, once again, have two hydrogen nuclei. And as before, the blue dots represent protons and the green dots represent neutrons. In our earlier example, both our hydrogen nuclei had one
neutron. But now one of them has a single neutron and the other has two of them. If we write out the symbols representing these hydrogen isotopes, one would be hydrogen two and the other would be
hydrogen three.
Now before we go further with this fusion reaction, it’s helpful to realize that these particular isotopes of hydrogen come with special names. We can call them according to their mass number,
hydrogen two and hydrogen three, respectively. But it turns out that these particular isotopes of this particular element have the names deuterium and tritium, respectively. There’s nothing wrong
with calling them hydrogen two and hydrogen three instead. But if you come across these names, just know that they refer to the same things. And a helpful way to remember which name goes with which
isotope is to know that tritium has this prefix “tri” meaning three and deuterium has the prefix “deu” meaning two. So anyway, those are names for these hydrogen isotopes we may sometimes encounter.
So let’s say we take these two nuclei and we fuse them together. In other words, we put them in an environment where, instead of repelling one another, they actually join to create a new combined
nucleus. Now in this reaction, one free neutron is released, like we saw in the previous fusion reaction. But in addition to that, there’s also the main fused nucleus that results as a product. The
question in this example is, “What is that main fused nucleus? How do we represent it as a symbol?”
To figure this out, to see what atomic isotope is formed in this fusion reaction, we can use the fact that atomic number is conserved on either side of the reaction, as is mass number. In other
words, the total atomic number on the left side of the reaction equals the total atomic number on the right side, and the same thing for the mass number. This is an equivalence between the product
side and the reactant side of a nuclear reaction that we can generally assume.
So if we start with atomic number, on the left-hand side of the reaction, we have a total atomic number of one plus one, two. Now on the product side, our neutron has an atomic number of zero, which
means that, whatever our fused nucleus is, it must have an atomic number of two. That’s to make the total atomic number on this side of the equation agree with the total on the other.
Now if we look up on the periodic table of elements what element is number two, that is, has two protons in its nucleus, we see that the answer is helium, symbolized He. So our fused product nucleus
is a helium atom. And we now just wanna figure out how many neutrons are in the nucleus of that atom. To solve for that, we’ll balance the mass number on either side of this equation. On the
left-hand side, our total mass number is two plus three, or five. And on the right-hand side, our total mass number is one plus the mass number of this helium atom. The number we need to add to one
in order to raise it to five is four. Therefore, that’s the mass number of this helium nucleus.
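This balancing can be automated: subtracting the totals for the known products from the totals for the reactants gives the atomic and mass numbers of the one unknown product. A minimal Python sketch (the function name is invented for illustration):

```python
def missing_nucleus(reactants, known_products):
    """Infer the (atomic_number, mass_number) of the single unknown
    product from conservation of atomic number and mass number."""
    z = sum(z for z, a in reactants) - sum(z for z, a in known_products)
    a = sum(a for z, a in reactants) - sum(a for z, a in known_products)
    return (z, a)

deuterium = (1, 2)  # hydrogen-2
tritium = (1, 3)    # hydrogen-3
neutron = (0, 1)

# Deuterium + tritium -> ? + a free neutron
print(missing_nucleus([deuterium, tritium], [neutron]))  # (2, 4): helium-4
```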
So we’ve answered the question of what atomic element and what isotope of that element is formed in this fusion reaction. Like the reaction we saw earlier, we took hydrogen and fused hydrogen nuclei
together to create helium plus a free neutron. This reaction form, adding hydrogen to hydrogen to create helium, is very common in fusion processes. The reason for this is that, by doing the fusion
reaction this way, combining hydrogen to make helium, we get the largest energy yield from the fusion that goes on. So whenever we see a nuclear reaction where hydrogen is used to create helium, it’s
a good guess that this is a fusion process we’re seeing. To get just a bit more practice with these ideas, let’s try another example.
The following nuclear equation shows two hydrogen nuclei fusing to form a helium nucleus. What is the value of 𝑚 in this equation? What is the value of 𝑛 in this equation?
Taking a look at the nuclear equation, we see these two hydrogen nuclei, which are fusing, we’re told, to create helium plus the release of energy. We also see that the atomic numbers as well as the
mass numbers of these hydrogen nuclei are shown, whereas the atomic number of helium and its mass number are not shown. It’s those values we want to solve for. And we’ll do it by using the fact that
atomic number and mass number are conserved in this reaction. That means that if we add together all the atomic numbers on the left side of the equation, that sum will equal the sum of the atomic
numbers on the right side, and same thing with mass number.
Summing the values on the left side will equal the sum of the values on the right. Now on the right-hand side, since our only products are a helium nucleus plus energy, we know that only the helium
nucleus will contribute in terms of mass number and atomic number. The energy that’s released in this fusion reaction has no charge and it has no mass. This means that when it comes to answering our
first question, what is the value of 𝑚, the mass number of helium, we can write that the sum of the mass numbers on the left-hand side of our equation one plus two is equal to 𝑚, the mass number of
the helium atom. And this tells us that 𝑚, the mass number of that atom, is three.
Then moving on to solve for the value of 𝑛, the atomic number of helium, there are a couple of ways we could do this. One is to look up helium on the periodic table of elements and see what element
number it is. Another way to solve for 𝑛 is to realize that it must equal the sum of the total atomic numbers on the left-hand side or the reactant side of this equation. So the atomic number of our
first hydrogen atom plus the atomic number of our second is equal to 𝑛, the atomic number of helium. And we find that 𝑛 is equal to two, a result we could find using either one of these two methods,
either using the periodic table or the fact that atomic number is conserved in this equation.
Let’s summarize now what we’ve learned in this lesson about nuclear fusion. We’ve seen that nuclear fusion is the joining of more than one atomic nucleus together to create a single resulting
nucleus. This is the opposite, we noted, of the similarly named process of nuclear fission, where atomic nuclei are split apart. The energy generated by fusion comes from what’s called the binding
energy that exists between protons and neutrons in an atomic nucleus. As we studied fusion reactions, we learned that the most common form is where hydrogen nuclei fuse together — they join — to
create helium. Along with all this, we saw that while nuclear fusion is an ongoing constant process in the core of our Sun, we haven’t yet found a practical way of reproducing this process on Earth
in a nuclear reactor. Nonetheless, nuclear fusion shows great promise as an energy supply source.
Demonstrating multi-country calibration of a tuberculosis model using new history matching and emulation package - hmer
Infectious disease models are widely used by epidemiologists to improve the understanding of transmission dynamics and disease natural history, and to predict the possible effects of interventions.
As the complexity of such models increases, however, it becomes increasingly challenging to robustly calibrate them to empirical data. History matching with emulation is a calibration method that has
been successfully applied to such models, but has not been widely used in epidemiology partly due to the lack of available software. To address this issue, we developed a new, user-friendly R package
hmer to simply and efficiently perform history matching with emulation. In this paper, we demonstrate the first use of hmer for calibrating a complex deterministic model for the country-level
implementation of tuberculosis vaccines to 115 low-and middle-income countries. The model was fit to 9–13 target measures, by varying 19–22 input parameters. Overall, 105 countries were successfully
calibrated. Among the remaining countries, hmer visualisation tools, combined with derivative emulation methods, provided strong evidence that the models were misspecified and could not be calibrated
to the target ranges. This work shows that hmer can be used to simply and rapidly calibrate a complex model to data from over 100 countries, making it a useful addition to the epidemiologist’s
calibration tool-kit.
1. Introduction
Complex mathematical models, also known as simulators, are widely used in many fields in science and technology, including infectious disease epidemiology. Their utility and reliability in predicting
the behaviour of real-world systems strongly depends on our ability to calibrate them to the empirical data. Despite the wide variety of calibration methods available to date, the application of
calibration methods to the analysis of complex models is often lacking. A key reason that comprehensive complex model calibration is uncommon is that most formal methods require a vast number of
model evaluations, which is exacerbated for high-dimensional input spaces (Kennedy & O’Hagan, 2001). This is often infeasible when dealing with computationally expensive models. The lack of
methodologies and tools to robustly calibrate and analyse complex models hinders the credibility and validity of subsequent inferences drawn from model predictions. This can lead to overconfident or
misleading recommendations being made to policy makers, potentially costing lives.
History matching with emulation is a robust calibration method that has been applied successfully to complex models in a variety of fields, e.g. cosmology (Vernon et al., 2010), climate science (
Williamson et al., 2013), geology (Craig et al., 1996, 1997), and infectious disease epidemiology (Andrianakis et al., 2015; Andrianakis et al., 2017; McKinley et al., 2018). Despite this, its
routine use by infectious disease modellers has been hindered by the lack of accessible computer software to implement the various components of the method. Our new R package hmer addresses this
problem, allowing the user to more easily and efficiently perform history matching with emulation.
We define calibration as identifying all parameter sets that match the observed data, given the included sources of uncertainties. It is important to appreciate that this is slightly different from
the technical definition of calibration used in the Uncertainty Quantification literature, where the existence of a single best-fitting parameter set is assumed and calibration is the process of
estimating the posterior distribution of the (unknown) best-fitting parameter set.
In this article, we show how we performed history matching with emulation on a complex deterministic model for the implementation of tuberculosis vaccines at country-level, calibrating the model to a
total of 105 low-and middle-income countries. Such a large-scale task was made straightforward thanks to our package hmer, which allowed us to automate the various steps of the history matching and
emulation process. This was possible thanks to inbuilt checks available in hmer, which monitored the performance of the process wave by wave.
2. The model and the calibration task
Worldwide, tuberculosis was the leading cause of death from a single infectious agent in 2019, killing an estimated 1.4 million people (World Health Organization, 2020). The only licensed vaccine
against tuberculosis, Bacillle Calmette-Guerin, is effective at preventing disseminated disease in infants, but confers highly variable efficacy against pulmonary tuberculosis in adults. The
development of a new tuberculosis vaccine that protects several categories of people, including adults, the elderly, and immunosuppressed patients, is critical for rapidly reducing disease incidence
and mortality.
Contracted by the World Health Organization, the London School of Hygiene and Tropical Medicine Tuberculosis Vaccine Modelling Group developed a model to evaluate the potential impact of
country-level implementation of novel tuberculosis vaccines, in order to estimate the broader health and economic impacts (Clark et al., 2022). Full details of the model are provided in Clark et al. (2022), and the model is described briefly here.
A compartmental deterministic dynamic model of Mycobacterium tuberculosis (Mtb) transmission and progression was developed, structured by the following core dimensions: 82 age classes, 8 tuberculosis
natural history classes, two socio-economic status (SES) classes defined by access-to-care, and, for countries with a moderate or larger contribution of HIV to the tuberculosis epidemic, three HIV
and antiretroviral therapy (ART) status classes.
In countries where HIV was not simulated, we had 19 input parameters that were varied during calibration, and 9 calibration targets. Among the 19 parameters, 12 were related to tuberculosis natural
history, one to tuberculosis mortality, two to background mortality, three to tuberculosis treatment and one to SES. The 9 calibration targets were all for 2019, and can be divided into the following
categories: tuberculosis incidence (all ages, age 0–14, and age 15+), tuberculosis notifications (all ages, age 0–14, and age 15+), tuberculosis mortality (all ages), tuberculosis subclinical
proportion (the proportion of prevalent tuberculosis cases that are asymptomatic) and tuberculosis prevalence ratio by socio economic status. For most targets, estimates from the 2020 WHO Global TB
Report were used (World Health Organization, 2020).
In countries including the HIV structure, we had an additional 3 input parameters, related to HIV incidence and ART initiation, and an additional four targets: HIV prevalence, ART coverage, and
tuberculosis incidence and mortality in people living with HIV.
The model running time was 10 seconds on average, with runs in countries including the HIV structure usually taking longer than in countries without the HIV structure.
For each target, a lower and an upper bound were specified. Parameter sets were considered to match the targets if all the model outcomes were between the bounds. For more details on model structure,
parameters and targets, see Supporting Materials, section A, and (Clark et al., 2022).
Model calibration was attempted for 92 countries without HIV structure and 23 countries including HIV structure, where all the data required to calibrate the model were available.
3. Calibration Method
In this section, we briefly describe how history matching with emulation works. We then show in detail how such a method was implemented in the case of our model.
History matching with emulation
History matching concerns the problem of exploring the input space (i.e. all possible parameter sets) and identifying parameter sets that may give rise to acceptable matches between the model output
and the empirical data. This part of the input space is referred to as non-implausible, while its complement is referred to as implausible. History matching proceeds as a series of iterations, called
waves, where implausible areas of input space are identified and discarded. To identify those areas, we use emulators: statistical approximations of the model outputs which are built using a
manageable number of model evaluations, and are typically several orders of magnitude faster than the model. If D is a set of model runs, we can use D to build an emulator for model output f. The
emulator then provides us with a prediction E_D[f(x)] for the value of f at parameter set x, and quantifies the uncertainty associated with that prediction, Var_D[f(x)] (Andrianakis et al., 2017; Vernon et al., 2010). This information is then used to calculate the implausibility measure at the parameter set x for the model output f:

I_f(x) = |E_D[f(x)] - z| / sqrt(Var_D[f(x)] + V_obs + V_str),

where z is the target for the model output f, and the variance in the denominator includes the uncertainty Var_D[f(x)] in the emulator prediction and any other relevant form of uncertainty, e.g. the observation uncertainty V_obs or any structural discrepancy V_str (accounting for the fact that no model perfectly represents reality). A high value of I_f(x) suggests that, even while factoring in all sources of uncertainty, the emulator prediction E_D[f(x)] and the target z are too distant for f(x) to plausibly match the empirical data (Vernon et al., 2010).
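As an illustration only (hmer itself is an R package; the Python sketch below simply mirrors the implausibility formula, with invented argument names), the implausibility of a parameter set for one output can be computed as:

```python
import math

def implausibility(emulator_mean, emulator_var, target,
                   obs_var=0.0, discrepancy_var=0.0):
    """Standardised distance between the emulator prediction and the
    target z, with all relevant variances in the denominator."""
    total_var = emulator_var + obs_var + discrepancy_var
    return abs(emulator_mean - target) / math.sqrt(total_var)

# A common choice is to deem a point implausible when I_f(x) > 3.
print(implausibility(12.0, 4.0, 10.0))  # 1.0: not ruled out at cut-off 3
```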
For more details about emulators and the implausibility measure see Supporting Materials C. Figure 1 shows the various steps of the process.
Step 1: We first run the model on the initial design points, a manageable number of parameter sets that are spread out uniformly across the input space, in accordance with established design
principles (Vernon et al., 2010).
Step 2: The obtained model outputs provide us with training data to build emulators. The hmer package follows a Bayes Linear approach (Goldstein, 2012) for the construction of emulators.
Step 3: We test the emulators to check whether they are good approximations of the model outputs. Emulators which do not perform well enough are either modified and made more conservative, or omitted
from the current wave.
Step 4: We generate a set of non-implausible parameter sets. To do this, we evaluate the emulators on a large number of parameter sets and we discard the implausible ones, i.e., those with
implausibility measure above a chosen threshold.
Step 5: As this is an iterative procedure, each time we complete Step 4 we need to decide whether to perform a new wave or stop. In the former case, the model is run on the non-implausible parameter
sets found in Step 4 and the process returns to Step 2. In the latter case, we progress to Step 6. One possible stopping criterion is if the uncertainty of the emulators is smaller than the
uncertainty in the targets, indicating that further improving the performance of the emulators will make little difference to the rate of points generation. We may also choose to stop the iterations
when we get emulators that provide us with points matching all targets at a sufficiently high rate. If our aim is to do forecasting, we would usually be happy to stop if we have found good matches
and all of the areas of the non-implausible space produce roughly similar forecasts for key future outcomes of interest. Conversely, if there is substantial variation for key forecasts in the
non-implausible space, then we would want to continue, even if we have found plenty of matches already. Finally, we might end up with all the input space deemed implausible at the end of a wave: this
would raise doubts about the adequacy of the chosen model, or input and/or output ranges.
Step 6: We use the emulators to generate non-implausible points, we run the model on them and we retain only those points that match all targets.
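The six steps can be condensed into a skeleton of the wave loop. The Python sketch below is a heavy simplification, not the hmer implementation: the model runner, emulator training, and implausibility scoring are assumed to be supplied by the caller, and point proposal is plain uniform sampling rather than the designs hmer uses.

```python
import random

def history_match(run_model, train_emulator, implausibility,
                  n_points, bounds, n_waves, cutoff=3.0, seed=1):
    """Skeleton of the wave loop (Steps 1-6). `run_model` evaluates the
    simulator at a point, `train_emulator` fits a surrogate to a list of
    (point, output) pairs, and `implausibility(emulator, point)` scores
    a candidate; points scoring above `cutoff` are discarded."""
    rng = random.Random(seed)

    def sample(n):  # stand-in for a space-filling design (Step 1)
        return [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(n)]

    points = sample(n_points)
    for _ in range(n_waves):
        runs = [(x, run_model(x)) for x in points]  # run the model (Steps 1/5)
        emulator = train_emulator(runs)             # train emulators (Step 2)
        candidates = sample(50 * n_points)          # propose points (Step 4)
        points = [x for x in candidates
                  if implausibility(emulator, x) < cutoff][:n_points]
    return points  # surviving non-implausible points
```

In the real procedure the emulators get more accurate each wave as the non-implausible region shrinks, which is what lets later waves rule out space that earlier waves could not.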
For a more detailed introduction to history matching with emulation we refer the reader to Andrianakis et al., 2015; Vernon et al., 2010, 2018. For a direct comparison with ABC see McKinley et al., 2018.
Performing history matching with emulation on the tuberculosis model
The hmer package, released on CRAN on April 14th 2022, contains a range of options at each step of the process, allowing the calibration process to be customised at each wave. As we were calibrating
the model to a large number of countries, we used an automated approach, which did not require any customisation choices across different countries or waves. The calibration of each country was set
to run for at most a week, with the aim of finding as many full fitting points as possible. We describe below how each step of the process was implemented.
Step 1: The model was first run at 400 input points, which were chosen through Latin Hypercube sampling. Of these, half were used to build the emulators (training points) and half to validate them
(validation points).
Step 2: The model outputs at training points were then used to train the emulators.
Step 3: The model outputs at validation points were then used by the hmer package to conduct two diagnostic tests on emulators. The first assessed how accurately emulators reflected model outputs.
In Fig. 2, the emulator expectation E_D[f(x)] is plotted against the model output f(x) for each validation point, providing the dots in the graph, with 95% credible intervals. Points for which
the model output is outside the interval are in red. The exception to this is in places where the model output is far away from the target we want to match, as it is not important that the emulators
are accurate in such places. An ‘ideal’ emulator would exactly reproduce the model results: this behaviour is represented by the green line. This diagnostic was used as an initial filter, to identify
emulators that performed very poorly: if more than 25% of all validation points were in red, the emulator was discarded, and the corresponding output was not used to rule out implausible points for
that wave. In the example shown above, the test was clearly successful (only one validation point in red), and the emulator proceeded to the second diagnostic.
Fig. 3 compares the emulator implausibility to the equivalent simulator implausibility (i.e. the implausibility calculated by replacing the emulator output with the model output). There are three
cases to consider:
• The emulator and model both classify a set as implausible (bottom-left quadrant) or non-implausible (top-right quadrant). This is fine. Both are giving the same classification for the parameter set.
• The emulator classifies a set as non-implausible, while the model rules it out (top-left quadrant). This is also fine. The emulator should not be expected to shrink the input space as much as the
simulator does, at least not on a single wave. Parameter sets classified in this way will survive this wave, but may be removed on subsequent waves as the emulators grow more accurate on a
reduced input space.
• The emulator rules out a set, but the model does not (bottom-right quadrant): these points are highlighted in red and are the problematic sets, suggesting that the emulator is ruling out parts of
the input space that it should not be ruling out.
To ensure that no part of the input space was unduly ruled out, we required that no point was in red. We achieved this by iteratively increasing the emulator uncertainty by 10% (i.e. the length of
the blue vertical bars in the first diagnostic), resulting in an emulator that was more cautious when ruling points out. In Fig. 3, increasing the uncertainty by 10% once removed one of the two red
points (middle plot), and a further increase resulted in no red points (bottom plot).
Step 4: A new set of 400 non-implausible points was generated. Using the hmer package, new points were generated according to the following strategy. An initial set of points was generated using a
Latin Hypercube Design, rejecting implausible parameter sets (Fig. 4, top-left). Pairs of non-implausible parameter sets were then selected at random and more sets were sampled from lines going
through them, to identify those that are close to the boundary (Fig. 4. top-right). Finally, using all non-implausible points and boundary points found so far as seeding points, more parameter sets
were generated using importance sampling to attempt to fully cover the non-implausible region (Fig. 4 bottom-left).
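A Latin hypercube spreads points so that each dimension is cut into n equal strata with exactly one point per stratum. The Python sketch below illustrates this first stage together with the rejection step; it is only a sketch of the idea, and omits the line and importance sampling stages described above.

```python
import random

def latin_hypercube(n, bounds, rng=None):
    """n points over a hyper-rectangle: each dimension is cut into n
    equal strata, and each stratum is used exactly once per dimension."""
    rng = rng or random.Random(0)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)  # pair strata across dimensions at random
        columns.append([lo + (s + rng.random()) * (hi - lo) / n
                        for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n)]

def non_implausible(points, implausibility_fn, cutoff=3.0):
    """Rejection step: keep only points the emulators do not rule out."""
    return [x for x in points if implausibility_fn(x) < cutoff]
```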
Step 5: If the emulator uncertainty was larger than the target uncertainty, another wave was performed: the 400 points generated in Step 4 were split into training and validation sets, the model was run on them, and the process returned to Step 2. Once the uncertainty of the emulators was smaller than the uncertainty in the targets, the process continued to Step 6.
Step 6: Rejection sampling was performed at this stage: a set of points was uniformly sampled from the non-implausible region using all available emulators. The model was then run on such points, and
those matching all targets were accepted. We repeated this process, stopping when seven full days had passed since the model calibration process started.
Derivative emulation
The calibration process described above could fail to find fully fitting points for one of two reasons: because no such points existed, or because history matching had not found them. Working with
tools from the hmer package, we analysed countries where the fitting process failed, to identify the reason(s) why calibration could not be completed. We describe one of these tools, derivative
emulation, here.
The term derivative emulation refers to the process of estimating the derivative of model outputs using the corresponding emulators. More precisely, once an emulator is trained for a model output f,
we can use it to get a prediction of the partial derivative of f with respect to the i-th parameter at any given parameter set x, together with an associated uncertainty. This is a low cost and potentially
powerful approach, since it does not require any additional model evaluations. The hmer package has a function that, given a set of emulators and a parameter set x not fitting all targets, estimates
the partial derivatives of all model outputs at x. It then uses the gradient descent method to find the best direction to move along in order to improve (or at least not worsen) all emulator
evaluations simultaneously, and then proposes a new parameter set. This function allows us to explore parameter sets near x, to determine if any of those are a better fit to the data.
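The gradient-based move described above can be sketched as follows. This is a toy illustration rather than hmer's derivative emulation: it uses finite differences on an invented emulator mean function instead of analytic emulator derivatives, and a single output instead of all outputs simultaneously.

```python
import numpy as np

# Invented emulator mean: a cheap stand-in for the emulated output.
def em_mean(x):
    return x[0] ** 2 + x[1] ** 2

def numeric_grad(f, x, h=1e-6):
    # Finite-difference stand-in for the emulated partial derivatives.
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2 * h)
    return g

def nudge_towards_target(x, target, steps=50, lr=0.05):
    # Gradient descent on the squared miss between emulator and target.
    x = x.astype(float).copy()
    for _ in range(steps):
        miss = em_mean(x) - target
        x -= lr * 2 * miss * numeric_grad(em_mean, x)
    return x

x0 = np.array([0.9, 0.8])
target = 0.5                       # invented target for the emulated output
x1 = nudge_towards_target(x0, target)
print(abs(em_mean(x0) - target), "->", abs(em_mean(x1) - target))
```

The same idea, applied with emulator-derived gradients across all outputs at once, is what lets the package propose a nearby parameter set that matches more targets.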
4. Results
Of the 115 countries that we analysed, 105 were fully calibrated to all targets. In the time available, the number of parameter sets hitting all targets found by history matching with emulation
varied by country, from a minimum of 232 to a maximum of 8857. For 101 of the 105 calibrated countries, at least 1000 full-fitting points were found.
We now show some visualisations produced using the hmer package for one of the calibrated countries. Fig. 5 shows (log-scaled) output values for non-implausible parameter sets at each wave. The plot
clearly shows how the performance of non-implausible parameter sets improves each wave, with an increasing proportion of output values falling with the target ranges. In Fig. 6, output values for
non-implausible parameter sets at each wave are shown for each combination of two outputs. For wave 7, we show only the fully fitting points. The main diagonal shows the distribution of each output
at the end of each wave, with the vertical red lines indicating the lower and upper bounds of the target. Above and below the main diagonal are plots for each pair of targets, with rectangles
indicating the target area where full fitting points should lie (the ranges are normalised in the figures above the diagonals). These graphs can provide additional information on output
distributions, such as correlations between them.
The third visualisation (Fig. 7) shows the distribution of the non-implausible input parameter space each wave. The plots in the main diagonal show the distribution of each parameter, which tends to
narrow each wave. The off-diagonal boxes show plots for all possible pairs of parameters. These visualisations give us information about the extent to which each of the parameters has been
constrained, and about correlations between parameters. For example in Fig. 7 the probability of transmission and the relapse rate have an extremely narrow posterior, while the background mortality
rate has a much wider posterior. The plot also shows the presence of a negative correlation between the TB disease treatment rate and the Mtb infection self-clearance rate. Overall, for each country,
history matching with emulation greatly reduced the input space to be searched for full-fitting points, with an estimated median reduction factor of 6 × 10^8 (see supporting materials, section D for
more details). Fig 8 shows a histogram of the log10-transformed reduction factors for all countries.
Computational cost of the calibration process
Each country was set to run on a single computer node (typically a 64bit Intel Xeon processor at 2.6 GHz) for 7 full days, with the aim of finding as many full-fitting points as possible. For many
countries, hundreds of full-fitting points were identified well before the end of the 7 allocated days. On average across all countries, model running time accounted for 25% of the total
computational cost, emulator training/diagnostic time for 5%, and points proposal time for 70%. (Note that this study was conducted using a developmental version of the package and point generation
is now around 5 times faster in Version 1.0.0 of the package. Future improvements are planned, to further optimise the generation of non-implausible points.) The points proposal time increased
linearly with waves, as all previous emulators needed to be evaluated each wave. On average, 5,000–10,000 points were investigated at each wave to find 400 non-implausible ones. Note that such a large number of evaluations was only possible thanks to the high speed of emulators: while a model run took 10 seconds on average, evaluating an emulator took just a few milliseconds.
Analysis of countries that could not be fitted to data
In 9 uncalibrated countries (7 countries without HIV structure and 2 countries with HIV structure), we used hmer package visualisation tools to understand why the calibration process could not be
completed. Let us consider country A, for which history matching found points matching 7 out of 9 targets at most. Figure 9 (left plot) shows tuberculosis notifications in adults (ages 15–99) in 2019
and tuberculosis notifications in young individuals (ages 0–14) in 2019, with the red rectangle indicating the target area.
We see that the two outputs are highly correlated and that, in all waves, all points lie far from the red rectangle. This suggests that our model is not able to match the two targets simultaneously.
Four more countries showed a similar clear incompatibility between two (or more) targets (see supporting material section B).
The remaining four countries (referred to as countries F, G, H, I) that could not be fitted to data also showed some evidence of an incompatibility between certain pairs of targets, although in a
weaker form. For example, in country F, our model seemed incapable of matching overall tuberculosis notifications and tuberculosis disease incidence in young individuals (Fig. 9, right plot, where we
see a “wall” in output space that prevents the model outputs from reaching the red target square. Such a wall is likely due to inherent and possibly non-obvious constraints in the model). The bounds
for these two targets were almost identical. As our model did not include false positive notifications, this meant that nearly all TB in children would need to be diagnosed and notified for the model
to fit the targets, which was incompatible with the plausible input ranges and other output target ranges used in the calibration.
Similar plots for countries G, H and I can be found in Supporting Material, Section B. Since the incompatibility between outputs was present but less pronounced in countries F, G, H and I, one could
wonder if by moving around the last-wave points - which were often close to matching the potentially incompatible outputs - one could find full fitting points. We explored this using derivative emulation.
For countries G, H and I, the maximum number of targets met by a point in the last wave was 8 (out of 9). For these countries, even though derivative emulation increased the number of targets that
some parameter sets matched, it could not find any parameter sets hitting all 9 targets. For country F, neither last wave points nor points found by derivative emulation matched more than 7 (out of
9) targets.
Finally, one country had inconsistencies in the data that made calibration unachievable.
5. Discussion
In this paper, we have demonstrated how the hmer R package allowed us to perform history matching with emulation on a complex deterministic tuberculosis model to a total of 115 low-and middle-income
countries. For each country, the model was fit to 9-13 target measures, by varying 19-22 input parameters. Overall, 105 countries were successfully calibrated, while for ten countries history
matching with emulation could not identify any full-fitting parameter sets. Such countries were analysed further, through derivative emulation and visualisation tools offered by the hmer package.
Calibrating 105 countries in just a few days was possible thanks to hmer, which allowed us to implement history matching with emulation in a straightforward and effective way. Using this method, we
were able to fully explore the input space, and to identify a set of points that represent all possible conditions under which our model could match the data. In nine countries that could not be
fitted to data, visualisation tools, paired with derivative emulation methods where necessary, provided us with evidence that calibration of the model was unlikely to be possible with the given input
and output ranges.
Unlike most other full calibration methods, which attempt to make probabilistic statements about the posterior distribution over the input space, history matching can work with expectation and
variances only, and aims to discard implausible areas of the input space. This, and the use of emulators, mean that the calculations involved in history matching tend to be more tractable and
straightforward to implement. Furthermore, since the process is iterative, it is not necessary to work with all inputs and outputs simultaneously: for example, if an output cannot be emulated
accurately in initial waves, it can be put aside and emulated in later waves, as emulation usually becomes easier as the waves progress (Vernon et al., 2010). All these characteristics make history
matching a useful method for calibrating complex models. In addition to being a useful calibration method in its own right, when we are interested in making probabilistic statements about the
posterior of the model’s parameters, history matching can be used as a pre-calibration procedure, in conjunction with probabilistic calibration techniques (Vernon et al., 2018, Vernon et al., 2010).
While methods such as Approximate Bayesian Computation or Bayesian model calibration may struggle to adequately explore the large input space of a multi-output model, their application on the greatly
reduced space that is the output of history matching can produce hybrid methods that are more tractable and combine the strengths of both approaches.
Multi-output models, such as the one in this article, very often exhibit correlation between their outputs. This aspect was partially incorporated by building accurate 1-dimensional emulators that
together captured the joint behaviour of the model outputs. However, full multivariate emulation would bring further benefits. Taking output correlation fully into account would improve the emulation
process and subsequently the performance of history matching, but requires more sophisticated emulators, as well as a more detailed structure for model and observation uncertainty which takes into
account these correlations. These features, together with variance emulation, were not available at the time of this work, but are now part of the standard functionalities of version 1.0.0 of the
hmer package. In particular, these features make it possible to use hmer to calibrate stochastic models, without having to average over multiple model runs for each parameter set. Also note that
point generation is around 5 times faster in version 1.0.0 than it was in the developmental version of the package that was used in this work. Future improvements are planned, to further optimise the
generation of non-implausible points. Features to be included in subsequent versions of the package will allow the user to address data quality problems, compare different models, and use emulators
to make forecasts.
In conclusion, this work shows that the hmer package can play a key role in making history matching with emulation accessible to the community of epidemiologists, facilitating fast and efficient
model calibration. By addressing the current lack of methodologies to robustly calibrate complex models and perform uncertainty analysis on them, hmer constitutes an invaluable addition to the
epidemiologist’s calibration tool-kit.
Data Availability
All data produced in the present study are available upon reasonable request to the authors.
This work was funded by Wellcome Trust (218261/Z/19/Z) and WHO (2020/985800-0).
RGW is additionally funded by NIH (1R01AI147321-01), EDTCP (RIA208D-2505B), UK MRC (CCF17-7779 via SET Bloomsbury), ESRC (ES/P008011/1), BMGF (OPP1084276, OPP1135288 & INV-001754).
TJM is supported by an Expanding Excellence in England (E3) award from Research England.
RC is additionally funded by BMGF (INV-001754).
IV is additionally funded by EPSRC funding (EP W011956).
Declaration of interests
All authors declare no conflicts of interest.
Supporting Materials
A. Description of the calibration task
We describe the parameters varying in the calibration process and the set of targets we fitted our model to. In order to do so, we need to briefly describe the compartments of our model. For full
details see Clark et al., 2022.
TB Natural History Dimension
In our model, the population was assigned to the compartments described in Table A1 below based on their TB status. The TB natural history model is specified in Figure A1 below.
Input parameters and target outputs
For countries without HIV structure we had 19 varying parameters and 9 targets (all for the year 2019), as shown in the tables below.
Parameters for countries including the HIV structure include all parameters in Table A2 plus the following three in Table A4.
Targets for HIV countries include all targets in the non-HIV table plus the following four.
B. Analysis of countries that could not be fitted to data
Apart from countries A and F, discussed in the main paper, seven more countries showed an incompatibility between two or more of the targets.
For countries B, C, D, E, we identified pairs of outputs that seemed strongly incompatible.
In countries G, H and I the incompatibility was less pronounced.
C. Bayes Linear Emulation
In the hmer package we adopt a Bayes Linear approach to build emulators. While a full Bayesian analysis requires specification of a full joint prior probability distribution to reflect beliefs about
uncertain quantities, in the Bayes linear approach expectations are taken as a primitive and only first and second order specifications are needed when defining the prior. Operationally, this means
that one just sets prior mean vectors and covariance matrices for the uncertain quantities, without having to decide exactly which distribution is responsible for the chosen mean and covariance. A
Bayes Linear analysis may therefore be viewed as a pragmatic approach to a full Bayesian analysis, where the task of specifying beliefs has been simplified. As in any Bayesian approach, our priors
(mean vectors and covariance matrices) are then adjusted by the observed data.
The general structure of a univariate emulator is as follows:

f(x) = Σ[j] β[j] h[j](x[A]) + u(x[A])

where Σ[j] β[j] h[j](x[A]) is a regression term and u(x[A]) is a weakly stationary process with mean zero. In the regression term, the sum is taken over the collection of regression functions h[j], which can vary for different outputs. The argument of the h[j] and of the process u is x[A]: this indicates the set of active variables for output f, i.e. the variables that contribute the most to the value of f. The use of x[A] instead of x as the argument of the h[j] and of u plays a key role in reducing the dimensionality of the problem, which is key when dealing with high-dimensional input spaces. The role of the regression term is to mimic the global behaviour of the model output, while the weakly stationary process represents localised deviations of the output from this global behaviour near x. In the regression term, the functions h[j] determine the shape and complexity of the regression hypersurface we fit to the training data and the β[j] are the regression coefficients. To fully describe the weakly stationary process u(x), we need to define the covariance structure, i.e. we need to say how correlated the local deviations at x and x′ are, for any pair (x, x′). The default option in the hmer package is to assume that u(x) is a Gaussian process with covariance structure given by

Cov[u(x), u(x′)] = σ^2 c(x[A], x′[A])

where c is the square-exponential correlation function

c(x, x′) = exp(−|x − x′|^2 / θ^2)

The multiplicative factor σ^2 allows us to set a prior for the emulator variance, while θ is the correlation length, representing our belief about the smoothness of the emulated output (which can be generalised to an individual θ[i] along each active input direction). The numerator |x − x′|^2 in the exponent is the square of the distance between x and x′: Σ[i] (x[i] − x′[i])^2.
Once the structure of the emulator is chosen, prior mean vectors and covariance matrices are set, and a set of model runs D is available, we can train the emulator using the Bayes Linear update equations:

E_D[f(x)] = E[f(x)] + Cov[f(x), D] Var[D]^{-1} (D − E[D])

Var_D[f(x)] = Var[f(x)] − Cov[f(x), D] Var[D]^{-1} Cov[D, f(x)]

In these formulae, the expectations and variances on the right hand side refer to the set priors, while the left hand sides refer to the expectation and variance updated based on the knowledge of the runs in D. Once an emulator has been built, it can be used to calculate the implausibility measure of any parameter set of interest. For a given model output f and a given target z, the implausibility is defined as the difference between the emulator output and the target, taking into account all sources of uncertainty:

I(x) = |E_D[f(x)] − z| / sqrt( Var_D[f(x)] + σ_obs^2 )
In this work, the variance in the denominator accounted for the emulator uncertainty Var_D[f(x)] and the observation uncertainty. Other forms of uncertainty can be included if relevant: for
example, if dealing with a stochastic model, it is appropriate to factor in the uncertainty due to the ensemble variability of the model output.
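In matrix form, the Bayes Linear adjustment of this section can be written in a few lines of numpy. The sketch below is illustrative and is not hmer's internal code: it uses a zero prior mean (the regression term is omitted), a square-exponential prior covariance, and invented training runs; all variable names are ours.

```python
import numpy as np

def sq_exp_cov(xs, ys, sigma2=1.0, theta=0.5):
    # Square-exponential covariance: sigma^2 * exp(-|x - x'|^2 / theta^2)
    d2 = (xs[:, None] - ys[None, :]) ** 2
    return sigma2 * np.exp(-d2 / theta ** 2)

# Invented training data: parameter sets already run and their outputs D.
x_train = np.array([0.1, 0.5, 0.9])
d = np.array([1.0, 0.2, -0.4])

def adjusted(x_new):
    # Bayes Linear update with prior mean 0:
    #   E_D[f(x)]   = Cov[f(x), D] Var[D]^{-1} D
    #   Var_D[f(x)] = Var[f(x)] - Cov[f(x), D] Var[D]^{-1} Cov[D, f(x)]
    x_new = np.atleast_1d(x_new)
    K = sq_exp_cov(x_train, x_train)   # Var[D]
    k = sq_exp_cov(x_new, x_train)     # Cov[f(x), D]
    Kinv = np.linalg.inv(K)
    mean = k @ Kinv @ d
    var = sq_exp_cov(x_new, x_new) - k @ Kinv @ k.T
    return mean, np.diag(var)

m, v = adjusted(0.5)
print(m, v)   # at a training point the adjusted emulator reproduces the run
```

With a noise-free covariance, the adjusted mean interpolates the training runs exactly and the adjusted variance there collapses to (numerically) zero, which is why emulator uncertainty grows only away from evaluated parameter sets.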
D. Reduction factor calculation
To quantify how much history matching with emulation reduced the input space, we first calculated the volume of the smallest hyper-rectangle containing all non-implausible points from the last wave
of history matching for each of the countries and compared it to the volume of the original input space. Note that, especially when parameters are highly correlated, such a hyper-rectangle largely
overestimates the size of the remaining space. To correct for this, we then accounted for the proportion of all points proposed at the end of the last wave that were non-implausible. This method
estimated a median reduction factor of 6 × 10^8, with 95% of all reduction factors lying in the interval [2 × 10^5, 2.8 × 10^19]. The median reduction factor per wave was 2.2 × 10^7.
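The estimate amounts to a few lines of arithmetic. The sketch below uses invented numbers (a unit hypercube input space, synthetic last-wave points, and an assumed acceptance rate) purely to show the calculation: bounding-box volume of the surviving points, corrected by the fraction of proposed points that were non-implausible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Original input space: a hyper-rectangle given by per-parameter ranges.
lower = np.zeros(5)
upper = np.ones(5)
orig_volume = np.prod(upper - lower)

# Synthetic last-wave non-implausible points, clustered in one corner.
points = rng.uniform(0.0, 0.1, size=(400, 5))

# Smallest hyper-rectangle containing them (an overestimate when
# parameters are correlated, hence the correction below).
box_volume = np.prod(points.max(axis=0) - points.min(axis=0))

# Correction: proportion of points proposed at the end of the last wave
# that were non-implausible (invented here: 400 accepted of 8000 proposed).
acceptance = 400 / 8000
remaining_volume = box_volume * acceptance

reduction_factor = orig_volume / remaining_volume
print(f"reduction factor ~ {reduction_factor:.2e}")
```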
|
{"url":"https://www.medrxiv.org/content/10.1101/2022.05.13.22275052v2.full","timestamp":"2024-11-12T06:00:50Z","content_type":"application/xhtml+xml","content_length":"248651","record_id":"<urn:uuid:e2cd1645-33a5-4261-a533-8a1e7e08c0fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00331.warc.gz"}
|
Verkle tree structure | Ethereum Foundation Blog
A Verkle tree is a commitment scheme that works similar to a Merkle tree, but has much smaller witnesses. It works by replacing the hashes in a Merkle tree with a vector commitment, which makes wider
branching factors more efficient.
Thanks to Kevaundray Wedderburn for feedback on the post.
For details on how verkle trees work, see:
The aim of this post is to explain the concrete layout of the draft verkle tree EIP. It is aimed at client developers who want to implement verkle trees and are looking for an introduction before
delving deeper into the EIP.
Verkle trees introduce a number of changes to the tree structure. The most significant changes are:
• a switch from 20 byte keys to 32 byte keys (not to be confused with 32 byte addresses, which is a separate change);
• the merge of the account and storage tries; and finally
• The introduction of the verkle trie itself, which uses vector commitments instead of hashes.
As the vector commitment scheme for the verkle tree, we use Pedersen commitments. Pedersen commitments are based on elliptic curves. For an introduction to Pedersen commitments and how to use them as
polynomial or vector commitments using Inner Product Arguments, see here.
The curve we are using is Bandersnatch. This curve was chosen because it is performant, and also because it will allow efficient SNARKs in BLS12_381 to reason about the verkle tree in the future.
This can be useful for rollups as well as allowing an upgrade where all witnesses can be compressed into one SNARK once that becomes practical, without needing a further commitment update.
The curve order/scalar field size of bandersnatch is p = 13108968793781547619861935127046491459309155893440570251786403306729687672801, which is a 253 bit prime. As a result of this, we can only
safely commit to bit strings of at most 252 bits, otherwise the field overflows. We chose a branching factor (width) of 256 for the verkle tree, which means each commitment can commit to up to 256
values of 252 bits each (or to be precise, integers up to p - 1). We write this as Commit(v₀, v₁, ..., v₂₅₅) to commit to the list v of length 256.
Layout of the verkle tree
One of the design goals with the verkle tree EIP is to make accesses to neighbouring positions (e.g. storage with almost the same address or neighbouring code chunks) cheap to access. In order to do
this, a key consists of a stem of 31 bytes and a suffix of one byte for a total of 32 bytes. The key scheme is designed so that "close" storage locations are mapped to the same stem and a different
suffix. For details please look at the EIP draft.
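In code, the split is plain byte slicing. The helper below is a sketch of ours, not code from the EIP; the example key reuses the pattern from Figure 1.

```python
def split_key(key: bytes) -> tuple[bytes, int]:
    """Split a 32-byte tree key into a 31-byte stem and a 1-byte suffix."""
    assert len(key) == 32
    return key[:31], key[31]

key = bytes.fromhex("fe0002" + "ab" * 28 + "04")
stem, suffix = split_key(key)
print(stem.hex(), suffix)  # suffix 0x04 selects one of 256 leaves under the stem
```

Two keys that differ only in the last byte share a stem, so they land in the same extension-and-suffix node, which is exactly what makes neighbouring accesses cheap.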
The verkle tree itself is then composed of two types of nodes:
• Extension nodes, that represent 256 values with the same stem but different suffixes
• Inner nodes, that have up to 256 children, which can be either other inner nodes or extension nodes.
The commitment to an extension node is a commitment to a 4 element vector; the remaining positions will be 0. It is:

C = Commit(1, stem, C₁, C₂)
C₁ and C₂ are two further commitments that commit to all the values with stem equal to stem. The reason we need two commitments is that values have 32 bytes, but we can only store 252 bits per field
element. A single commitment would thus not be enough to store 256 values. So instead C₁ stores the values for suffix 0 to 127, and C₂ stores 128 to 255, where the values are split in two in order to
fit into the field size (we'll come to that later.)
The extension together with the commitments C₁ and C₂ are referred to as "extension-and-suffix tree" (EaS for short).
Figure 1 Representation of a walk through a verkle tree for the key 0xfe0002abcd..ff04: the path goes through 3 internal nodes with 256 children each (254, 0, 2), one extension node representing
abcd..ff and the two suffix tree commitments, including the value for 04, v₄. Note that stem is actually the first 31 bytes of the key, including the path through the internal nodes.
Commitment to the values leaf nodes
Each extension and suffix tree node contains 256 values. Because a value is 256 bits wide, and we can only store 252 bits safely in one field element, four bits would be lost if we simply tried to store one value in one field element.
To circumvent this problem, we chose to partition the group of 256 values into two groups of 128 values each. Each 32-byte value in a group is split into two 16-byte values. So a value vᵢ∈ 𝔹₃₂ is
turned into v⁽ˡᵒʷᵉʳ⁾ᵢ ∈ 𝔹₁₆ and v⁽ᵘᵖᵖᵉʳ⁾ᵢ∈ 𝔹₁₆ such that v⁽ˡᵒʷᵉʳ⁾ᵢ ++ v⁽ᵘᵖᵖᵉʳ⁾ᵢ= vᵢ.
A "leaf marker" is added to the v⁽ˡᵒʷᵉʳ⁾ᵢ, to differentiate between a leaf that has never been accessed and a leaf that has been overwritten with 0s. No value ever gets deleted from a verkle tree.
This is needed for upcoming state expiry schemes. That marker is set at the 129th bit, i.e. v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾ᵢ = v⁽ˡᵒʷᵉʳ⁾ᵢ + 2¹²⁸ if vᵢ has been accessed before, and v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾ᵢ = 0 if vᵢ has
never been accessed.
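The split-and-mark step can be sketched as follows. The little-endian byte order is an assumption of this sketch (as are the helper names), so consult the EIP draft for the exact encoding; the 129th-bit marker matches the description above.

```python
MARKER = 1 << 128  # "leaf accessed" marker on the 129th bit

def split_value(v: bytes, accessed: bool = True) -> tuple[int, int]:
    """Split a 32-byte value into two integers of at most 129 bits each."""
    assert len(v) == 32
    lower = int.from_bytes(v[:16], "little")   # byte order: assumption
    upper = int.from_bytes(v[16:], "little")
    if accessed:
        lower += MARKER   # distinguishes a stored 0 from "never written"
    return lower, upper

def join_value(lower: int, upper: int) -> bytes:
    lower &= MARKER - 1   # strip the leaf marker
    return lower.to_bytes(16, "little") + upper.to_bytes(16, "little")

v = bytes(range(32))
lo, up = split_value(v)
assert lo >> 128 == 1 and join_value(lo, up) == v
```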
The two commitments C₁ and C₂ are then defined as

C₁ = Commit(v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾₀, v⁽ᵘᵖᵖᵉʳ⁾₀, ..., v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾₁₂₇, v⁽ᵘᵖᵖᵉʳ⁾₁₂₇)
C₂ = Commit(v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾₁₂₈, v⁽ᵘᵖᵖᵉʳ⁾₁₂₈, ..., v⁽ˡᵒʷᵉʳ ᵐᵒᵈⁱᶠⁱᵉᵈ⁾₂₅₅, v⁽ᵘᵖᵖᵉʳ⁾₂₅₅)
Commitment of extension nodes
The commitment to an extension node is composed of an "extension marker", which is just the number 1, the two subtree commitments C₁ and C₂, and the stem of the key leading to this extension node.
Unlike extension nodes in the Merkle-Patricia tree, which only contain the section of the key that bridges the parent internal node to the child internal node, the stem covers the whole key up to
that point. This is because verkle trees are designed with stateless proofs in mind: if a new key is inserted that "splits" the extension in two, the older sibling need not be updated, which allows
for a smaller proof.
Commitment of Internal nodes
Internal nodes have the simpler calculation method for their commitments: the node is seen as a vector of 256 values, that are the (field representation of the) root commitment of each of their 256
subtrees. The commitment for an empty subtree is 0. If the subtree is not empty, then the commitment for the internal node is

C = Commit(C₀, C₁, ..., C₂₅₅)
where the Cᵢ are the children of the internal node, and 0 if a child is empty.
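To illustrate the interface, here is a deliberately insecure toy vector commitment over integers mod a prime. It is not the Pedersen/Bandersnatch construction (real commitments use elliptic curve points, and these integer "generators" provide no binding), but the linearity that makes single-child updates cheap carries over.

```python
import random

P = 2**255 - 19                 # toy modulus, not the Bandersnatch order
random.seed(42)
G = [random.randrange(1, P) for _ in range(256)]   # toy "generators"

def commit(values):
    """Toy vector commitment: C = sum(v_i * G_i) mod P (NOT secure)."""
    assert len(values) <= 256
    return sum(v * g for v, g in zip(values, G)) % P

children = [0] * 256
children[4] = 123               # one non-empty subtree commitment
c_before = commit(children)

# Linearity: updating one child only needs the delta at that position,
# rather than recomputing over all 256 children.
children[4] = 456
c_after = commit(children)
assert c_after == (c_before + (456 - 123) * G[4]) % P
```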
Insertion into the tree
Figure 2 is an illustration of the process of inserting a new value into the tree, which gets interesting when the stems collide on several initial bytes.
Figure 2 Value v₁₉₂ is inserted at location 0000010000...0000 in a verkle tree containing only value v₁₂₇ at location 0000000000...0000. Because the stems differ at the third byte, two internal nodes
are added until the differing byte. Then another "extension-and-suffix" tree is inserted, with a full 31-byte stem. The initial node is untouched, and C²₀ has the same value as C⁰₀ before the insertion.
Shallower trees, smaller proofs
The verkle tree structure makes for shallower trees, which reduces the amount of stored data. Its real power, however, comes from the ability to produce smaller proofs, i.e. witnesses. This will be
explained in the next article.
|
{"url":"https://blog.ethereum.org/en/2021/12/02/verkle-tree-structure","timestamp":"2024-11-07T20:28:49Z","content_type":"text/html","content_length":"174201","record_id":"<urn:uuid:ac464e00-aae5-4300-b7e6-cf30749eedf0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00439.warc.gz"}
|
Finding The Missing Angle Of A Right Triangle Worksheet - Angleworksheets.com
Finding Angles Of A Right Triangle Worksheet – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help you
understand the different concepts and build your understanding of these angles. Students will be able to identify unknown angles using the vertex, arms and … Read more
Finding The Unknown Angle Of A Triangle Worksheet
Finding The Unknown Angle Of A Triangle Worksheet – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help you
understand the different concepts and build your understanding of these angles. Using the vertex, arms, arcs, and complementary angles postulates, students will learn … Read more
|
{"url":"https://www.angleworksheets.com/tag/finding-the-missing-angle-of-a-right-triangle-worksheet/","timestamp":"2024-11-08T21:01:22Z","content_type":"text/html","content_length":"55020","record_id":"<urn:uuid:0d9d1a2f-f820-4f9b-8bbd-01f3074afee5>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00632.warc.gz"}
|
[ Originally published in Datafile, Vol 11 No 7/8, December 1992, page
55. ]
A Phase-of-the-Moon Program for the 32SII
Craig A. Finseth
member# ??
As I was pulling together the information for the HPDATAbase (by the
time you see this article, you should already know about *that*), I
needed a manual for the 32SII. So I did the obvious thing and called
EduCALC and ordered one.
When the manual showed up, it turned out to be a copy of the manual
for the previous machine, the 32S. Inside was a note from HP saying
that they had run out of 32S manuals and, since the machine was no
longer in production, they were shipping these copies instead.
I called EduCALC and it turned out that their entire stock of 32SII
manuals was just like this. Apparently, someone at HP had gotten
confused when shipping their last order.
EduCALC, of course, offered me a full refund for the wrong manual. (I
decided to keep it as a curiosity.) However, they didn't have any
32SII manuals in stock (as HP had shipped the wrong ones...this all
happened in one telephone call). So, the person offered to ship me a
32SII saying that I could just return it after gleaning the
information from the manual.
The person there clearly knows his market. After receiving the
machine and playing with it for a few days, I decided to keep it.
So, I now have this machine and want to do something interesting with
it. And one thing that I like to see for all machines is the ability
to calculate the phase of the moon. This is so that bugs that depend
on the phase of the moon can be properly excercised. It would not do
to have a bug that is supposed to appear during a full moon show up
early in the first quarter.
The algorithm that I use in all of my phase of the moon programs is,
of course, wrong (poetic justice.) The actual expression for the
phase of the moon has something like 30 sine terms. I use a
single-term approximation. This can produce results that vary from
the real phase by up to two days.
The start date and time (new moon) is 12 Jan 1975 10.21 GMT
The length of a cycle is 42532 minutes
The algorithm is
fractional part ( (now - start) / cycle length )
The resulting fraction is interpreted as
new moon 0
first quarter .25
full moon .50
last quarter .75
This same algorithm is used in all versions of my program (for the
41CX, 48SX, and in C for Unix and MSDOS computers).
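For reference, here is the same single-term approximation in Python. It follows the 32SII date-only variant below, with the constants taken from the program listing (the quarter names and the final 7.384 scaling match steps M16-M41); the function name is ours.

```python
def moon_phase(y: int, m: int, d: int) -> tuple[str, float]:
    """Approximate phase fraction: 0=new, .25=first qtr, .5=full, .75=last."""
    # Day count since 12 Jan 1975, as in program steps M05-M15.
    days = (y - 1975) * 365.2422 + m * 30.5 + d - 43
    frac = (days / 29.5361111) % 1.0       # fraction of a 29.536-day cycle
    name = ("NM", "FQ", "FM", "LQ")[int(frac * 4)]
    return name, frac

name, frac = moon_phase(1991, 12, 18)
print(name, frac * 7.384)   # matches the article's example: FQ, ~2.957
```

Like the calculator version, this is a one-term approximation and can be off by up to two days.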
THE 32SII VERSION
This was tricky, as the calculator has no clock or date arithmetic and
has very little memory. I decided to only use the date and leave the
time out of it. For the date arithmetic, which needs the number of
days between now and the start date, I used this formula:
(y - 1975)*365.2422 + (m - 1)*30.5 + d - 12
which can be rewritten (with a trivial modification) to
(y - 1975)*365.2422 + m*30.5 + d - 12 - 31
and, to take advantage of recall arithmetic to save space
-( (1975 - y)*365.2422 - m*30.5 - d + 43 )
and the program itself is:
M01 LBL M
M02 INPUT Y
M03 INPUT M
M04 INPUT D
M05 1,975
M06 RCL- Y
M07 365.2422
M08 x
M09 30.5
M10 RCLx M
M11 -
M12 RCL- D
M13 43
M14 +
M15 +/-
M16 29.5361111 displays as 2.95361e1
M17 ÷
M18 FP
M19 SF 10
M20 ENTER
M21 ENTER
M22 4
M23 x
M24 IP
M25 x=0?
M26 NM equation
M27 1
M28 -
M29 x=0?
M30 FQ equation
M31 1
M32 -
M33 x=0?
M34 FM equation
M35 1
M36 -
M37 x=0?
M38 LQ equation
M39 Rv
M40 7.384
M41 x
M42 RTN
115.0 bytes, checksum 9B48
The program uses three variables (Y, M, and D) and all of the stack
levels. It sets flag 10.
Here is an example of its use:
display key in
XEQ M
Y? 1991 R/S
M? 12 R/S
D? 18 R/S
FQ R/S
2.9569839026 (i.e., not quite 3 days)
|
{"url":"https://finseth.com/hpdata/craig-moon.php","timestamp":"2024-11-04T03:50:49Z","content_type":"text/html","content_length":"16415","record_id":"<urn:uuid:81298219-6476-4de5-b4cb-4b7deb362ab9>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00438.warc.gz"}
|
Quadratic Functions and Their Graphs Q&As - Algebra | HIX Tutor
Quadratic Functions and Their Graphs
Quadratic functions play a pivotal role in algebra and mathematics, offering a powerful tool to analyze and understand various real-world phenomena. Defined by the standard form ax^2 + bx + c, these
functions generate parabolic graphs that depict a wide range of scenarios, from projectile motion to economic modeling. The study of quadratic functions and their graphs encompasses fundamental
concepts such as vertex analysis, axis of symmetry, and solutions to quadratic equations. This exploration not only aids in problem-solving but also provides a foundational understanding of the
intricate relationships between variables in mathematical expressions.
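For instance, the vertex, the axis of symmetry, and the real roots of f(x) = ax^2 + bx + c can all be computed directly from the coefficients; a short Python illustration:

```python
import math

def quadratic_features(a: float, b: float, c: float):
    """Vertex, axis of symmetry, and real roots of f(x) = ax^2 + bx + c."""
    assert a != 0, "a = 0 would make the function linear, not quadratic"
    h = -b / (2 * a)                 # axis of symmetry: x = h
    k = a * h * h + b * h + c        # vertex of the parabola is (h, k)
    disc = b * b - 4 * a * c         # discriminant decides the root count
    roots = []
    if disc >= 0:
        roots = sorted({(-b - math.sqrt(disc)) / (2 * a),
                        (-b + math.sqrt(disc)) / (2 * a)})
    return (h, k), roots

vertex, roots = quadratic_features(1, -4, 3)
print(vertex, roots)   # (2.0, -1.0) [1.0, 3.0]
```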
|
{"url":"https://tutor.hix.ai/subject/algebra/quadratic-functions-and-their-graphs","timestamp":"2024-11-12T21:38:24Z","content_type":"text/html","content_length":"562258","record_id":"<urn:uuid:7268a92e-ea89-448b-b1f6-5bee72376db9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00763.warc.gz"}
|
Modeling Methodology for Physics Teachers
Scientific practice involves the construction, validation and application of scientific models, so science instruction should be designed to engage students in making and using models. This pdf
article discusses the benefits of using model construction and development in an Interactive Engagement learning framework. The methodologies and ideas discussed in this article have been
incorporated in physics and teacher training courses. The article can be found in E. Redish & J. Rigden (Eds.) The changing role of the physics department in modern universities, American Institute
of Physics Part II. p. 935-957.
This description of a site outside SERC has not been vetted by SERC staff and may be incomplete or incorrect. If you have information we can use to flesh out or correct this record let us know.
Part of the Starting Point collection. The Starting Point collection includes resources addressing the needs of faculty and graduate students designing, developing, and delivering entry-level
undergraduate courses in geoscience.
|
{"url":"https://serc.carleton.edu/resources/22656.html","timestamp":"2024-11-01T22:13:30Z","content_type":"text/html","content_length":"42952","record_id":"<urn:uuid:6f24f440-ff39-4c93-bdff-bf09fa997a33>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00810.warc.gz"}
|
Maximum Adverse Excursion
What is the MAE
Maximum Adverse Excursion (MAE) is a method of analysis for automatic or discretionary trading systems that allow us to objectively improve the overall operating result by positioning stops based on
the statistical analysis of the development of operations from its inception to its closure.
When we study a trading system, we often find losses. Sometimes these losses are recurrent and lead us to reject a system or some of its rules. Here, the proposal is that instead of doing this, we
approach the problem in another way.
As a method of improving results, John Sweeney proposes a statistical approach rather than a technical one. We will not rely on indicators or on complicated algorithms, but on a statistical study that separates winning operations from losing ones. If our entry method is good, the course of the price in winning trades differs from the behavior of the price in losing trades.
Let’s analyze the winning trades and, above all, those that ended in losses. Are there any common features in them? Can we detect any pattern that makes us think we are in front of something usable?
There is a truth that we must inescapably accept in any trading system: at some point, we have to cut our losses. Of the many methods used for this purpose, the most common of all is acting on price action when it moves against the direction of our trade.
We will track the price path during positive trades and along those that end in losses. The idea is to check the typical route of each of them and in this way, find the best way to place the system
stop to achieve a better risk to reward ratio.
We will call “excursion” to the price range traveled by the price from our entrance to its end. We will distinguish the two possible directions:
Maximum Favorable Excursion: (MFE) It is the biggest advance of the price from our entrance to the exit.
Maximum Adverse Excursion: (MAE) It is the maximum retreat of the price from our entrance until the closing.
1. Define our input and output rules.
2. Record how much the price has moved from our entry to our departure both for and against the trade.
3. Separate the winning trades' data from the losers' into a table.
4. Order the losers by loss size.
5. Check which patterns follow the price on losing trades and learn to recognize it.
6. Set the stop according to the recognized pattern. If it behaves like a loser trade, acknowledge that we have been wrong and assume the losses.
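The bookkeeping in steps 2 and 3 can be sketched in Python (my own illustration; the trade data layout is an assumption, not from the article):

```python
def excursions(entry_price, path, direction=+1):
    """MFE/MAE of one trade: max favorable and adverse move from entry.
    direction: +1 for a long trade, -1 for a short trade."""
    moves = [direction * (p - entry_price) for p in path]
    mfe = max(max(moves), 0.0)   # furthest the trade went in our favor
    mae = max(-min(moves), 0.0)  # furthest it went against us
    return mfe, mae

def split_by_outcome(trades):
    """trades: list of (entry, path, direction, pnl). Returns winners'/losers' MAEs."""
    winners, losers = [], []
    for entry, path, direction, pnl in trades:
        _, mae = excursions(entry, path, direction)
        (winners if pnl > 0 else losers).append(mae)
    return winners, losers
```

With the two MAE lists in hand, one can compare their distributions and look for the separation the article describes.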
Let’s see an example of how this methodology works.
Graph of a system without stops that operates on the DAX
In the vertical axis, we can see the maximum gap of each trade before its closure (MFE). The horizontal displacement represents the maximum adverse excursion (MAE) produced before its closing.
Given the graph, the level of 0.15%-0.20% appears relevant as a limit: trades that retrace beyond 0.15%-0.20% tend to end in losses. We can then cut losses at 0.28% (X-axis). It is also apparent that the vast majority of winning trades retraced less than 0.10%.
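One simple way to turn this kind of observation into a concrete stop level (my own sketch, not from the article) is to take a high quantile of the winners' MAE distribution, placing the stop just beyond what most winning trades retraced:

```python
def stop_from_winner_mae(winner_maes, quantile=0.95):
    """Hypothetical stop level: just beyond what most winning trades retraced."""
    if not winner_maes:
        raise ValueError("need at least one winning trade")
    s = sorted(winner_maes)
    idx = min(len(s) - 1, int(quantile * len(s)))  # crude empirical quantile
    return s[idx]
```

The quantile is a tunable trade-off: higher values give the trade more room but allow larger losses when it is wrong.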
The statistics of this system before making changes are as follows.
Statistics have improved. First, the maximum loss has been reduced from -1425 € to -237 € as well as the average loss from -195.47 to -167.24, and the profit-loss ratio has improved from 1.81 to
By applying a trailing stop to the system, we can improve some of the statistics that contribute to the overall result. This type of stop is delicate to use because it is very easy for a momentary price retreat to touch it. However, setting it at a sufficiently loose distance can prevent a trade that is well advanced from turning into a loss. In our case, the average loss on the losing trades improves from 167 to 158, and the percentage of winning trades increases slightly on both the long and short sides. It would read as follows:
We see the effect of the trailing stop: trades advance far beyond the point at which they finally close, which is normal with this type of stop. Tightening it further usually worsens the final result of the set.
In the following chart, we see the effect of setting the Trailing Stop 50% closer.
Although it seems better, the total gain is worse, and the drawdown increases.
You can then compare the two options statistics, which leave no doubt.
Statistics with optimal Trailing stop:
Statistics with Trailing stop a 50% more adjusted
Maximum Favorable Excursion
The MFE is found by monitoring the maximum reached during positive operations. Often, we find that our trades end very far from this point. Obviously, our goal must be to get them to stop as close as
possible to the MFE. In each particular case, we cannot expect to always close at the absolute maximum. In this case, the methods of traditional technical analysis based on indicators often betray us
by getting us out of the market ahead of time or, on the contrary, keeping us while the point of maximum profit goes away.
By searching the MFE, we can detect the most likely area that the winning trades will reach. In this way, we should place our profit target at a point that’s the most probable, statistically
speaking. This won’t make our system jump to an absolute possible maximum profit. However, it simplifies the system and makes it more robust by shortening the exposure time to the market.
Finally, a sample of the effect of adding a profit target, taking the MFE as reference. In this case, we separate the behavior of the MFE in bullish and bearish trades since, for indexes and stocks, the market does not usually show symmetry. Experience tells us that bull markets and bear markets have different characteristics, although some authors question this.
We put a TP (Take Profit) of 27 points on bullish trades and 33 points on bearish ones.
What you see is a performance improvement. Although the improvement in net profit does not seem too important, the maximum streak of losses decreases a lot. This procedure reduces the capital
necessary for its implementation, and therefore, there is a substantial improvement in percentage profitability. It also increases the net profit and especially the Profit Factor.
Minimum Favorable Excursion
Another concept to evaluate is the Minimum Favorable Excursion. The idea is to detect the minimum point of advance beyond which the price is unlikely to return to our entry. This will allow us to move our stop to break-even upon reaching this area and prevent the trade from ending up in losses.
The statistical method proposed by the MAE and MFE study is revealed as a fully valid system for the study and improvement of any trading system. The use of Maximum Favorable Excursion charts gives
us a way to distinguish winning trades from losing trades since they behave differently.
We can see very quickly whether there are exploitable behaviors. Losing trades have their own pattern: the price advances only a tiny amount in favor of the entry and then moves against it. This allows us to act accordingly, cutting losses at the spot that winning trades statistically did not reach.
On the other hand, the traditional use of pivots as a reference to locate stops usually leads to losses, since it is the first place where the market seeks liquidity as it minimally weakens a trend.
Finally, the MFE allows us to put the trade into break-even at the right time or even add positions in a favorable environment. It also facilitates the tracking of profit targets. In any case, it is
a highly recommended study method to improve automatic or manual systems.
|
{"url":"https://www.forex.academy/maximum-adverse-excursion/","timestamp":"2024-11-04T01:21:29Z","content_type":"text/html","content_length":"94872","record_id":"<urn:uuid:d400e356-fe9e-4759-b7d9-bbfc02b143a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00081.warc.gz"}
|
Explain why a yield curve is commonly upward sloping
A yield curve plots yield (in percent) against time to maturity (in years). In the commonly observed case, yield increases as time to maturity increases.
This gives us the typical upward-sloping yield curve. The reason for this upward slope is the risk associated with a longer time to maturity.
As the time to maturity increases, so does the risk: there is more time for default on the debt and a higher likelihood of inflation. This risk must be compensated for by a higher yield on the debt issued, leading to the upward-sloping yield curve.
|
{"url":"https://justaaa.com/economics/278847-explain-why-a-yield-curve-is-commonly-upward","timestamp":"2024-11-05T06:20:10Z","content_type":"text/html","content_length":"38069","record_id":"<urn:uuid:5bd1d5d7-6011-4a2c-ba60-1a84bb982e9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00303.warc.gz"}
|
Paper ID D6-S1-T4.3
Paper Limitations of Mean-Based Algorithms for Trace Reconstruction at Small Distance
Authors Elena Grigorescu, Purdue University, United States; Madhu Sudan, Harvard University, United States; Minshen Zhu, Purdue University, United States
Session D6-S1-T4: Trace Reconstruction
Chaired Monday, 19 July, 22:00 - 22:20
Engagement Monday, 19 July, 22:20 - 22:40
Abstract Trace reconstruction considers the task of recovering an unknown string $x\in\{0,1\}^n$ given a number of independent ``traces'', i.e., subsequences of $x$ obtained by randomly and
independently deleting every symbol of $x$ with some probability $p$. The information-theoretic limit on the number of traces needed to recover a string of length $n$ is still unknown.
This limit is essentially the same as the number of traces needed to determine, given strings $x$ and $y$ and traces of one of them, which string is the source. The most studied class of
algorithms for the worst-case version of the problem are ``mean-based'' algorithms. These are a restricted class of distinguishers that only use the mean value of each coordinate on the
given samples. In this work we study limitations of mean-based algorithms on strings at small Hamming or edit distance. We show on the one hand that distinguishing strings that are nearby
in Hamming distance is ``easy'' for such distinguishers. On the other hand, we show that distinguishing strings that are nearby in edit distance is ``hard'' for mean-based algorithms.
Along the way we also describe a connection to the famous Prouhet-Tarry-Escott (PTE) problem, which shows a barrier to finding explicit hard-to-distinguish strings: namely such strings
would imply explicit short solutions to the PTE problem, a well-known difficult problem in number theory. Our techniques rely on complex analysis arguments that involve careful
trigonometric estimates, and algebraic techniques that include applications of Descartes' rule of signs for polynomials over the reals.
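To make the notion of a "mean-based" statistic concrete, here is a small simulation sketch of the deletion channel and the per-coordinate trace means that such algorithms use (my own illustration; all names are assumptions, not from the paper):

```python
import random

def trace(x, p, rng):
    """One pass through the deletion channel: drop each bit with probability p."""
    return [b for b in x if rng.random() >= p]

def mean_profile(x, p, n_samples, rng):
    """Per-position empirical means of the traces (the mean-based statistic)."""
    sums = [0.0] * len(x)
    for _ in range(n_samples):
        for i, b in enumerate(trace(x, p, rng)):
            sums[i] += b
    return [s / n_samples for s in sums]
```

A mean-based distinguisher decides between two candidate strings only by comparing such profiles; with p = 0 the profile recovers the string exactly, and the paper studies how informative it remains for nearby strings when p > 0.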
|
{"url":"https://2021.ieee-isit.org/TempDev/Papers/ViewPaper.asp?PaperNum=1074","timestamp":"2024-11-13T04:35:34Z","content_type":"text/html","content_length":"12279","record_id":"<urn:uuid:7c282a65-e249-4c04-bfcd-328ef47467a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00517.warc.gz"}
|
Affine Coordinate Changes - MIT Mathlets
Affine Coordinate Changes
The graph of the function f(mx+b) is related to the graph of f(x) in interesting ways.
2 Responses to “Affine Coordinate Changes”
1. Raza on September 3rd, 2021 @ 12:29 pm
It's worth stating the impact of the two parameters m, b.
The graph of f(mx+b) is scaled parallel to the x-axis by a factor of 1/m and shifted parallel to the x-axis to the left by b/m.
To see why this is true, it's easiest to consider the cases when m=1 and when b=0 separately.
When m=1, then f(x'+b) = f(x) when x'=x-b, i.e., b units to the left.
When b=0, then f(x'm) = f(x) when x'=x/m, so if m is less than 1 this is a stretch. Switching the sign of m reflects in the y-axis.
2. hrm on September 23rd, 2021 @ 10:53 pm
Thank you for spelling this out, Raza! I think a good use of this Mathlet would be to ask students to provide descriptions like this.
|
{"url":"https://mathlets.org/mathlets/affine-coordinate-changes/","timestamp":"2024-11-11T19:59:59Z","content_type":"text/html","content_length":"39418","record_id":"<urn:uuid:523141c2-2d26-4e5b-8791-f0636ab0e296>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00733.warc.gz"}
|
Desvl's blog
Study Vector Bundle in a Relatively Harder Way - Tangent Bundle
Tangent line and tangent surface as vector spaces
We begin our study with some elementary Calculus. Take the function $f(x)=x^2+\frac{e^x}{x^2+1}$ as our example. It should not be a problem to find its tangent line at the point $(0,1)$: since $f'(0)=1$, we have $l:x-y+1=0$ as the tangent line.
$l$ is not a vector space since, in general, it does not pass through the origin. But $l-\overrightarrow{OA}$, where $A=(0,1)$, is a vector space. In general, suppose $P(x,y)$ is a point on the curve determined by $f$, i.e. $y=f(x)$; then we obtain a vector space $l_P-\overrightarrow{OP} \simeq \mathbb{R}$. But the action of moving the tangent line to the origin is superfluous, so naturally we consider the tangent line at $P$ as a vector space determined by $P$. In this case, the induced vector space (the tangent line) is always of dimension $1$.
Now we move to two-variable functions. We have a function $a(x,y)=x^2+y^2-x-y+xy$ as our example. Some elementary Calculus work gives us the tangent surface of $z=a(x,y)$ at $A(1,1,1)$, which can be
identified by $S:2x+2y-z=3\simeq\mathbb{R}^2$. Again, this can be considered as a vector space determined by $A$; roughly speaking, it is one if we take $A$ as the origin. Further, picking points $B$ and $C$ on $S$ such that $\overrightarrow{AB}$ and $\overrightarrow{AC}$ are linearly independent, we have a basis $(\overrightarrow{AB},\overrightarrow{AC})$. Other vectors on $S$, for example $\overrightarrow{AD}$ for another point $D$ on $S$, can be written as a linear combination of $\overrightarrow{AB}$ and $\overrightarrow{AC}$. In other words, $S$ is “spanned” by $(\overrightarrow{AB},\overrightarrow{AC})$.
Tangent line and tangent surface play an important role in differentiation. But sometimes we do not have a chance to use it with ease, for example $S^1:x^2+y^2=1$ cannot be represented by a
single-variable function. However the implicit function theorem, which you have already learned in Calculus, gives us a chance to find a satisfying function locally. Here in this post we will try to
generalize this concept, trying to find the tangent space at some point of a manifold. (The two examples above have already determined two manifolds and two tangent spaces.)
Definition of tangent vectors
We will introduce the abstract definition of a tangent vector at beginning. You may think it is way too abstract but actually it is not. Surprisingly, the following definition can simplify our work
in the future. But before we go, make sure that you have learned about Fréchet derivative (along with some functional analysis knowledge).
Let $M$ be a manifold of class $C^p$ with $p \geq 1$ and let $x$ be a point of $M$. Let $(U,\varphi)$ be a chart at $x$ and $v$ an element of the vector space $\mathbf{E}$ in which $\varphi(U)$ lies (for example, if $M$ is a $d$-dimensional manifold, then $v \in \mathbb{R}^d$). Next we consider the triple $(U,\varphi,v)$. Suppose $(U,\varphi,v)$ and $(V,\psi,w)$ are two such triples (with $x \in U \cap V$). We say these two triples are equivalent if the following identity holds:
$$\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=w.$$
This identity looks messy, so we need to explain how to read it. First we consider the derivative of $\psi\circ\varphi^{-1}$. The derivative of $\psi\circ\varphi^{-1}$ at the point $\varphi(x)$ is a linear transform, enclosed by the square brackets. Finally, this linear transform maps $v$ to $w$. In short we read: the derivative of $\psi\circ\varphi^{-1}$ at $\varphi(x)$ maps $v$ to $w$. You may recall that you have met something like $\psi\circ\varphi^{-1}$ in the definition of a manifold. It is not obvious that these ‘triples’ should be associated with tangent vectors. But before we explain it, we need to make sure that we have indeed defined an equivalence relation.
(Theorem 1) The relation
$$(U,\varphi,v)\sim(V,\psi,w) \iff \left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=w$$
is an equivalence relation.
Proof. This will not go further than elementary Calculus, in fact, chain rule:
(Chain rule) If $f:U \to V$ is differentiable at $x_0 \in U$ and $g: V \to W$ is differentiable at $f(x_0)$, then $g \circ f$ is differentiable at $x_0$, and
$$(g\circ f)'(x_0)=g'(f(x_0))\circ f'(x_0).$$
1. $(U,\varphi,v)\sim(U,\varphi,v)$.
Since $\varphi\circ\varphi^{-1}=\operatorname{id}$, whose derivative is the identity everywhere, we have
$$\left[(\varphi\circ\varphi^{-1})'(\varphi(x))\right](v)=\operatorname{id}(v)=v.$$
2. If $(U,\varphi,v) \sim (V,\psi,w)$, then $(V,\psi,w)\sim(U,\varphi,v)$.
So now we have
$$\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=w.$$
To prove that $[(\varphi\circ\psi^{-1})'(\psi(x))](w)=v$, we need some implementation of the chain rule.
Note first
$$(\varphi\circ\psi^{-1})\circ(\psi\circ\varphi^{-1})=\operatorname{id}.$$
But also by the chain rule, if $f$ is a diffeomorphism, we have
$$\operatorname{id}=(f\circ f^{-1})'(f(x))=f'(x)\circ (f^{-1})'(f(x)),$$
or equivalently
$$(f^{-1})'(f(x))=\left[f'(x)\right]^{-1},$$
which implies
$$\left[(\varphi\circ\psi^{-1})'(\psi(x))\right](w)=\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right]^{-1}(w)=v.$$
3. If $(U,\varphi,v)\sim(V,\psi,w)$ and $(V,\psi,w)\sim(W,\lambda,z)$, then $(U,\varphi,v)\sim(W,\lambda,z)$.
We are given identities
$$\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=w,\qquad \left[(\lambda\circ\psi^{-1})'(\psi(x))\right](w)=z.$$
By canceling $w$, we get
$$\left[(\lambda\circ\psi^{-1})'(\psi(x))\right]\circ\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=z.$$
On the other hand, by the chain rule,
$$(\lambda\circ\varphi^{-1})'(\varphi(x))=\left[(\lambda\circ\psi^{-1})'(\psi(x))\right]\circ\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right],$$
which is what we needed. $\square$
An equivalence class of such triples $(U,\varphi,v)$ is called a tangent vector of $M$ at $x$. The set of such tangent vectors is called the tangent space to $M$ at $x$, which is denoted by $T_x(M)$.
But it seems that we have gone too far. Is the triple even a ‘vector’? To get a clear view let’s see Euclidean submanifolds first.
Definition of tangent vectors of Euclidean submanifolds
Suppose $M$ is a submanifold of $\mathbb{R}^n$. We say $z$ is a tangent vector of $M$ at a point $x$ if there exists a curve $\alpha$ of class $C^1$, defined on some open interval $I \subset \mathbb{R}$ with $\alpha(I) \subset M$, such that $\alpha(t_0)=x$ and $\alpha'(t_0)=z$. (For convenience we often take $t_0=0$.)
This definition is immediate if we check some examples. For the curve $M: x^2+\frac{e^x}{x^2+1}-y=0$, we can show that $(1,1)^T$ is a tangent vector of $M$ at $(0,1)$, which is identical to our first example. Taking
$$\alpha(t)=\left(t,\;t^2+\frac{e^t}{t^2+1}\right),$$
we get $\alpha(0)=(0,1)$ and
$$\alpha'(t)=\left(1,\;2t+\frac{e^t(t^2+1)-2te^t}{(t^2+1)^2}\right).$$
Therefore $\alpha'(0)=(1,1)^T$. $\square$
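As a quick numeric sanity check (my own addition, not from the original post), we can approximate $\alpha'(0)$ with a finite difference along the curve $\alpha(t)=(t,f(t))$ and confirm it is close to $(1,1)^T$:

```python
import math

def f(x):
    # the function from the first example
    return x**2 + math.exp(x) / (x**2 + 1)

def alpha(t):
    # the curve (t, f(t)) lying on M, with alpha(0) = (0, 1)
    return (t, f(t))

h = 1e-6  # finite-difference step
a0, a1 = alpha(0.0), alpha(h)
z = ((a1[0] - a0[0]) / h, (a1[1] - a0[1]) / h)  # approximates alpha'(0)
```

Both components come out within about $10^{-6}$ of $1$, as expected from the derivative computed above.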
Coordinate system and tangent vector
Let $\mathbf{E}$ and $\mathbf{F}$ be two Banach spaces and $U$ an open subset of $\mathbf{E}$. A $C^p$ map $f: U \to \mathbf{F}$ is called an immersion at $x$ if $f’(x)$ is injective.
For example, if we take $\mathbf{E}=\mathbf{F}=\mathbb{R}=U$ and $f(x)=x^2$, then $f$ is an immersion at every point of $\mathbb{R}$ except $0$, since $f'(0)=0$ is not injective. This may lead you to Sard's theorem.
(Theorem 2) Let $M$ be a subset of $\mathbb{R}^n$. Then $M$ is a $d$-dimensional $C^p$ submanifold of $\mathbb{R}^n$ if and only if for every $x \in M$ there exist an open neighborhood $U \subset \mathbb{R}^n$ of $x$, an open neighborhood $\Omega \subset \mathbb{R}^d$ of $0$, and a $C^p$ map $g: \Omega \to \mathbb{R}^n$ such that $g$ is an immersion at $0$ with $g(0)=x$, and $g$ is a homeomorphism between $\Omega$ and $M \cap U$ with the topology induced from $\mathbb{R}^n$.
This follows from the definition of manifold and should not be difficult to prove. But it is not what this blog post should cover. For a proof you can check Differential Geometry: Manifolds, Curves,
and Surfaces by Marcel Berger and Bernard Gostiaux. The proof is located in section 2.1.
A coordinate system on a $d$-dimensional $C^p$ submanifold $M$ of $\mathbb{R}^n$ is a pair $(\Omega,g)$ consisting of an open set $\Omega \subset \mathbb{R}^d$ and a $C^p$ function $g:\Omega \to \mathbb{R}^n$ such that $g(\Omega)$ is open in $M$ and $g$ induces a homeomorphism between $\Omega$ and $g(\Omega)$.
For convenience, we say $(\Omega,g)$ is centered at $x$ if $g(0)=x$ and $g$ is an immersion at $0$. By Theorem 2, it is always possible to find such a coordinate system centered at a given point $x \in M$. The following theorem will give us an easier approach to tangent vectors.
(Theorem 3) Let $\mathbf{E}$ and $\mathbf{F}$ be two finite-dimensional vector spaces, $U \subset \mathbf{E}$ an open set, $f:U \to \mathbf{F}$ a $C^1$ map, $M$ a submanifold of $\mathbf{E}$
contained in $U$, and $W$ a submanifold of $\mathbf{F}$ such that $f(M) \subset W$. Take $x \in M$ and set $y=f(x)$. If $z$ is a tangent vector to $M$ at $x$, then the image $f'(x)(z)$ is a tangent vector to $W$ at $y=f(x)$.
Proof. Since $z$ is a tangent vector, there exists a curve $\alpha: J \to M$ such that $\alpha(0)=x$ and $\alpha'(0)=z$, where $J$ is an open interval containing $0$. The function $\beta = f \circ \alpha: J \to W$ is also a curve, satisfying $\beta(0)=f(\alpha(0))=f(x)$ and
$$\beta'(0)=f'(\alpha(0))(\alpha'(0))=f'(x)(z),$$
which is our desired curve. $\square$
Why we use ‘equivalence relation’
We shall show that the equivalence relation makes sense. Suppose $M$ is a $d$-dimensional submanifold of $\mathbb{R}^n$, $x \in M$, and $z$ is a tangent vector to $M$ at $x$. Let $(\Omega,g)$ be a coordinate system
centered at $x$. Since $g \in C^p(\mathbb{R}^d;\mathbb{R}^n)$, we see $g’(0)$ is a $n \times d$ matrix, and injectivity ensures that $\operatorname{rank}(g’(0))=d$.
Every open set $\Omega \subset \mathbb{R}^d$ is a $d$-dimensional submanifold of $\mathbb{R}^d$ (of $C^p$). Suppose now $v \in \mathbb{R}^d$ is a tangent vector to $\Omega$ at $0$ (determined by a
curve $\alpha$), then by Theorem 3, $g \circ \alpha$ determines a tangent vector to $M$ at $x$, which is $z_x=g’(0)(v)$. Suppose $(\Lambda,h)$ is another coordinate system centered at $x$. If we want
to obtain $z_x$ as well, we must have
$$h'(0)(w)=g'(0)(v)=z_x,$$
which is equivalent to
$$w=\left[h'(0)\right]^{-1}\left(g'(0)(v)\right)$$
for some $w \in \mathbb{R}^d$ which is the tangent vector to $\Lambda$ at $0 \in \Lambda$. (The inverse makes sense since we implicitly restrict ourselves to $\mathbb{R}^d$.)
However, we also have two charts given by $(U,\varphi)=(g(\Omega),g^{-1})$ and $(V,\psi) = (h(\Lambda),h^{-1})$, which gives
$$\left[(\psi\circ\varphi^{-1})'(\varphi(x))\right](v)=\left[(h^{-1}\circ g)'(0)\right](v)=\left[h'(0)\right]^{-1}\left(g'(0)(v)\right)=w,$$
and this is just our equivalence relation (don't forget that $g(0)=x$, hence $g^{-1}(x)=\varphi(x)=0$!). There we have our reason for the equivalence relation: if $(U,\varphi,v) \sim (V,\psi,w)$, then $(U,\varphi,v)$ and $(V,\psi,w)$ determine the same tangent vector, but we do not have to evaluate it manually. In general, all elements in an equivalence class represent a single vector, so the vector is (algebraically) an equivalence class. This still holds when talking about Banach manifolds, since topological properties of Euclidean spaces do not play a role here. The generalized proof can be implemented with little difficulty.
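To make this chart-independence concrete, here is a small numeric sketch (my own addition, not from the post) on the circle $S^1$ with two familiar charts, an angle chart and stereographic projection from the north pole. At the point $x=(1,0)$, the tangent vector produced from $(U,\varphi,v)$ agrees with the one produced from $(V,\psi,w)$ when $w$ is the image of $v$ under the transition derivative:

```python
import math

h = 1e-6  # finite-difference step

def phi_inv(t):
    # angle chart: phi^{-1}(t) = (cos t, sin t); phi(x) = 0 at x = (1, 0)
    return (math.cos(t), math.sin(t))

def psi_inv(u):
    # stereographic chart from the north pole; psi(x) = 1 at x = (1, 0)
    return (2*u / (1 + u*u), (u*u - 1) / (1 + u*u))

def transition(t):
    # psi o phi^{-1}
    return math.cos(t) / (1 - math.sin(t))

def d(curve, t0):
    # forward-difference derivative of a plane curve
    p0, p1 = curve(t0), curve(t0 + h)
    return ((p1[0] - p0[0]) / h, (p1[1] - p0[1]) / h)

v = 1.0
w = (transition(h) - transition(0.0)) / h * v   # w = [(psi o phi^{-1})'(phi(x))](v)
z1 = tuple(v * c for c in d(phi_inv, 0.0))      # tangent vector from chart (U, phi)
z2 = tuple(w * c for c in d(psi_inv, 1.0))      # tangent vector from chart (V, psi)
```

Here $w \approx 1$ and both $z_1$ and $z_2$ approximate $(0,1)$, the tangent direction to the circle at $(1,0)$, illustrating that equivalent triples determine the same tangent vector.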
Tangent space
The tangent vectors at $x \in M$ should span a vector space (based at $x$). We certainly hope so, because otherwise our definition of tangent vector would be incomplete and could not even handle the trivial examples mentioned at the beginning. We shall show, satisfyingly, that the set of tangent vectors to $M$ at $x$ (which we write $T_xM$) forms a vector space that is toplinearly isomorphic to $\mathbf{E}$, on which $M$ is modeled.
(Theorem 4) $T_xM \simeq \mathbf{E}$. In other words, $T_xM$ can be given the structure of a topological vector space, induced by the chart.
Proof. Let $(U,\varphi)$ be a chart at $x$. For $v \in \mathbf{E}$, the triple $(U,\varphi,v)$ represents a tangent vector at $x$. On the other hand, pick $\mathbf{w} \in T_xM$, which can be represented by $(V,\psi,w)$. Then
$$v=\left[(\varphi\circ\psi^{-1})'(\psi(x))\right](w)$$
makes $(U,\varphi,v) \sim (V,\psi,w)$ uniquely, and therefore we get some $v \in \mathbf{E}$. To conclude,
$$\mathbf{E} \ni v \longleftrightarrow [(U,\varphi,v)] \in T_xM$$
is a bijection, which proves our theorem. Note that this does not depend on the choice of charts. $\square$
For many reasons it is not a good idea to identify $T_xM$ with $\mathbf{E}$ without mentioning the point $x$. For example, we shouldn't identify the tangent line of a curve with the $x$-axis. Instead, it would be better to identify or visualize $T_xM$ as $(x,\mathbf{E})$, that is, a linear space with origin at $x$.
Tangent bundle
Now we treat all tangent spaces as a vector bundle. Let $M$ be a manifold of class $C^p$ with $p \geq 1$; define the tangent bundle by the disjoint union
$$T(M)=\bigsqcup_{x \in M}T_xM.$$
This is a vector bundle if we define the projection by
$$\pi:T(M)\to M,\qquad \pi(z_x)=x \quad\text{for } z_x \in T_xM,$$
and we will verify it soon. First let’s see an example. Below is a visualization of the tangent bundle of $\frac{x^2}{4}+\frac{y^2}{3}=1$, denoted by red lines:
Also we can see $\pi$ maps points on the blue line to a point on the curve, which is $B$.
To show that the tangent bundle of a manifold is a vector bundle, we need to verify that it satisfies the three conditions we mentioned in the previous post. Let $(U,\varphi)$ be a chart of $M$ such that $\varphi(U)$ is open in $\mathbf{E}$; then tangent vectors can be represented by triples $(U,\varphi,v)$. We get a bijection
$$\tau_U:\pi^{-1}(U)\to U\times\mathbf{E}$$
by the definition of tangent vectors as equivalence classes. Let $z_x$ be a tangent vector to $U$ at $x$; then there exists some $v \in \mathbf{E}$ such that $(U,\varphi,v)$ represents $z_x$. On the other hand, for any $v \in \mathbf{E}$ and $x \in U$, $(U,\varphi,v)$ represents some tangent vector at $x$. Explicitly,
$$\tau_U(z_x)=(x,v).$$
Further, we get the following commutative relation (which establishes VB 1):
$$pr\circ\tau_U=\pi \quad\text{on } \pi^{-1}(U).$$
For VB 2 and VB 3 we need to check different charts. Let $(U_i,\varphi_i)$ and $(U_j,\varphi_j)$ be two charts. Define $\varphi_{ji}=\varphi_j \circ \varphi_i^{-1}$ on $\varphi_i(U_i \cap U_j)$, and write $\tau_{U_i}=\tau_i$ and $\tau_{U_j}=\tau_j$ respectively. Then we get a transition mapping
$$\tau_{ji}=\tau_j\circ\tau_i^{-1}:(U_i\cap U_j)\times\mathbf{E}\to(U_i\cap U_j)\times\mathbf{E}.$$
One can verify that
$$\tau_{ji}(x,v)=\left(x,\;D\varphi_{ji}(\varphi_i(x))(v)\right)$$
for $x \in U_i \cap U_j$ and $v \in \mathbf{E}$. Since $D\varphi_{ji}$ is of class $C^{p-1}$ and each $D\varphi_{ji}(\varphi_i(x))$ is a toplinear isomorphism, we see that $\tau_{ji}$ is a morphism, which goes for VB 3. It remains to verify VB 2. To do this we need a fact from Banach space theory:
If $f:U \to L(\mathbf{E},\mathbf{F})$ is a $C^k$-morphism, then the map of $U \times \mathbf{E}$ into $\mathbf{F}$ given by
$$(x,v)\mapsto f(x)(v)$$
is a $C^k$-morphism.
Here, taking $f(x)=D\varphi_{ji}(\varphi_i(x))$, we conclude that $\tau_{ji}$ is a $C^{p-1}$-morphism. It is also an isomorphism, since it has the inverse $\tau_{ij}$. Following the definition of manifold, we can conclude that $T(M)$ has a unique manifold structure such that the $\tau_i$ are morphisms (there will be a formal proof in the next post about the total space of a vector bundle). By VB 1, we also have $\pi=pr\circ\tau_i$ on each $\pi^{-1}(U_i)$, which makes $\pi$ a morphism as well. On each fiber $\pi^{-1}(x)$, we can freely transport the topological vector space structure of any $\mathbf{E}$ such that $x$ lies in $U_i$, by means of $\tau_{ix}$. Since each $f(x)$ is a toplinear isomorphism, the result is independent of the choice of $U_i$. VB 2 is therefore established.
Using fancier words, we can also say that $T:M \mapsto T(M)$ is a functor from the category of $C^p$-manifolds to the category of vector bundles of class $C^{p-1}$.
|
{"url":"https://desvl.xyz/categories/Geometry/Differential-Geometry/","timestamp":"2024-11-13T12:52:46Z","content_type":"text/html","content_length":"49036","record_id":"<urn:uuid:5f2b869d-fdf9-4ea3-8b80-3fc610e8323a>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00513.warc.gz"}
|
Welcome to Math Bytes: Where Math and Computer Science Intersect
"Mathematics is the most beautiful and most powerful creation of the human spirit."
- Stefan Banach
About Me
My name is Dallin Stewart, and I’m a senior at Brigham Young University studying applied math (Applied and Computational Math Emphasis or ACME for short). It is a relatively new program in my college
that focuses on making technology work in the 21st century with algorithms for machine learning and techniques for finding insights from data.
I grew up in Alameda, California as the oldest of four. Since middle school, I have had a passion for Ultimate frisbee, playing all kinds of board games and card games late into the night, and
learning about the world and how it works. I started college at BYU studying Electrical Engineering where I loved spending time with friends and watching movies. After my freshman year I took a two
year break to volunteer in Zacatecas, Mexico and Tempe, Arizona as a missionary for the Church of Jesus Christ of Latter-day Saints. By the time I started school again I had developed an enthusiasm
for innovation and using technology to help people, so naturally I switched my major to Applied Math. I’m nearing the end of my time in college right now, and I thought it would be fun to share a few
of the cool things I’ve learned!
About The Blog
"Go down deep enough into anything and you will find mathematics."
- Dean Schlicter
One of the motivations behind creating this blog is the common misconception, held even by those who don't immediately say "I hate math," that a degree in mathematics only leads to careers in research or education. I understand this perspective because I've been there myself. Even many companies I've spoken to are uncertain about the possibilities that come with an applied math
major, unless I provide familiar points of reference such as statistics or computer science. I want to help dispel the notion that math exists solely for itself. In its place, I hope to inspire and
broaden the perspectives of those who might have overlooked the immense potential mathematics offers through real-world applications.
Every article here is designed to showcase the diverse and remarkably powerful ways math can be applied to the real world. I should start by saying that while I came up with the name Math Bytes
myself, it is also the name of a published book by Tim Chartier about Google bombs, chocolate-covered pi, and other cool bits in computing. I haven't read it, but he gets the credit for coming up with
the name first. I chose the name because it reflects the intersection of math and computer science that embodies the ACME program and many of the projects I’ll discuss in my blog. I also liked the
pun on bite-sized discussions about algorithms and computers.
My goal is to publish a new blog every Tuesday, at least over the summer of 2023. It might slow down after I start my senior year again. Each post is about a different project I have worked on for
fun or for one of my math classes over the past year or so. Some are longer projects with fewer algorithms but more code explanation, while others are much shorter and focus on a specific example or
implementation of a mathematical concept.
I’ll do my best to leave out enough jargon so that anyone can read and learn about the project, but enough mathematical formulas and coding snippets so that anyone interested can follow and replicate
what I have developed. Even if you aren’t familiar with Python or you come across an equation that doesn’t look familiar, I explain heuristically what is going on, so you should be able to safely
skip it :)
What is Applied Math?
"Mathematics consists of proving the most obvious thing in the least obvious way."
- George Pólya
Applied mathematics is a discipline that focuses on the practical application of mathematical principles and techniques to solve real-world problems in the 21st century. Applied mathematicians use
modeling, analysis, and optimization techniques to study phenomena in industries such as economics, finance, science, and engineering.
Applied mathematics covers a wide range of areas, including calculus, linear algebra, differential equations, numerical analysis, probability theory, statistics, and optimization. It is used to
design and analyze experiments, develop and optimize processes, simulate complex systems, and make data-driven decisions.
Some examples of applied mathematics in action include using mathematical models to predict weather patterns, optimizing traffic flow in cities, designing algorithms for search engines, and analyzing
financial data to forecast market trends. It also empowers technologies such as machine learning, self-driving cars, space travel, GPS, and financial services.
ACME Curriculum
"Dear Math, please grow up and solve your own problems. I’m tired of solving them for you."
- Anonymous
In the ACME program, we take two year-long classes our junior and senior years after taking courses equivalent to a minor in math. In Mathematical Analysis 1 and 2, we develop our theoretical skills
by proving most of the fundamental ideas from linear algebra and calculus, and spend time with measure theory and complex analysis. In Algorithms and Optimization 1 we cover complexity analysis,
graph theory, fundamentals of probability and statistics, and Fourier transforms. In part two we learn about various types of optimization and interpolation, such as unconstrained, linear,
nonlinear, convex, dynamic, and stochastic optimization.
Seniors take these skills and tools and apply them for applications in modeling. One class covers modeling with data and uncertainty and developing various types of machine learning algorithms. The
other class reviews modeling with dynamics and control theory by understanding the constraints and conditions of differential equations.
Math is Cool! I Promise!
"Many who have had an opportunity of knowing any more about mathematics confuse it with arithmetic, and consider it an arid science. In reality, however, it is a science which requires a great
amount of imagination."
- Sofia Kovalevskaya
I had two realizations while attending high school and college that eventually convinced me that applied math wasn’t just for researchers and professors. The first is that mathematics is the best
tool we have for making sense of the world and predicting the phenomena that are important to us. The second is that mathematics is what allows us to teach a manufactured chunk of silicon rock to
learn about the world, make intelligent decisions, and solve complex problems.
daisyowl on Twitter
We live in a world where the amount of data collected nearly doubles every two years. That data represents everything from how well a patient is recovering in a hospital to how many minutes of The
Office you’ve watched in the last week. We rely on mathematics to make sense of all that information and gain insights that will help improve people’s lives. It is the power that gives data meaning
and value. For example, expected value formulas and dynamic optimization techniques help researchers conduct clinical trials for life-saving cancer treatment more effectively and efficiently while
reducing risks. Furthermore, recommender systems use machine learning and matrices to help you find your new favorite show, the perfect gift for your pets’ birthday, or the most meaningful posts your
friends are sharing on Instagram.
By encoding the forces that affect a system into mathematical formulas and models, we can predict how that system will behave in the future within acceptable error. For example, we can model physical
systems like a ball being thrown by encoding factors and forces like the law of gravity. This model can then tell us exactly where that ball will be at any given moment. A more complex model can
apply differential equations to predict how people will interact in a social environment or how a pandemic spreads through a community.
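As a small illustration of the ball example above, here is a sketch in Python. It assumes a launch from the origin, constant gravity, and no air resistance (the function name and parameters are my own, for illustration):

```python
import math

G = 9.81  # m/s^2, constant downward acceleration due to gravity

def ball_position(v0, angle_deg, t):
    """Position (x, y) in meters of a ball thrown from the origin
    with speed v0 (m/s) at angle_deg above the horizontal,
    ignoring air resistance."""
    theta = math.radians(angle_deg)
    x = v0 * math.cos(theta) * t
    y = v0 * math.sin(theta) * t - 0.5 * G * t * t
    return x, y

# Where is a ball thrown at 20 m/s and 45 degrees after 1 second?
x, y = ball_position(20.0, 45.0, 1.0)
```

More realistic models add forces like drag or wind as extra terms, which is where differential-equation solvers come in.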
Data is super cool
When you’re trying to teach a rock how to learn, there’s a lot more involved than just the algorithm or the model. But once you have a manufactured chip, a neural network that uses basic principles
from linear algebra and calculus allows you to set up a nonlinear system of equations that can make and improve its predictions about anything from the type of car featured in an image in one of my
own projects to deciding on the best subsequent character like Chat-GPT does.
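To make that idea concrete, here is a toy sketch in plain Python (not how production networks are implemented): a single layer of a neural network is just a matrix-vector product from linear algebra followed by a simple nonlinearity.

```python
def forward(W, b, x):
    """One layer of a tiny neural network: compute Wx + b, then ReLU.
    W is a list of weight rows, b a list of biases, x the input vector."""
    z = [sum(w * xi for w, xi in zip(row, x)) + bias
         for row, bias in zip(W, b)]
    return [max(0.0, v) for v in z]  # ReLU: keep positives, zero out negatives

# Two inputs, two neurons:
out = forward([[1.0, -1.0], [0.0, 2.0]], [0.0, -1.0], [3.0, 1.0])  # [2.0, 1.0]
```

Stacking layers like this gives the nonlinear system of equations mentioned above; calculus supplies the gradients that tune W and b during training.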
The knowledge about the world, the predictions about the future we can make with measurable confidence, and the ability to manipulate both the abstract and physical forces that govern our lives are
ultimately what make mathematics so important and powerful. The beauty, clarity, and purity found uniquely in math are amazing, but the tools for crafting a better future through technology and
innovation attracted me to this discipline more than anything else.
Math is Relevant for Everyone
"Without mathematics, there’s nothing you can do. Everything around you is mathematics. Everything around you is numbers."
- Shakuntala Devi
Mathematics is a fundamental language that underlies our entire world. It may not always be obvious, but mathematics plays a crucial role in our daily lives, shaping everything around us. It provides
a framework for understanding and solving problems, making informed decisions, and unlocking new possibilities. In the simplest of activities, such as cooking or managing personal finances,
mathematics is at work. Beyond practical applications, practicing math enhances critical thinking, logical reasoning, and problem-solving skills. It helps us train our minds to analyze, evaluate, and
find creative solutions to challenges in various areas of life.
Math is even relevant in cooking!
Mathematics also plays a central role in fields like art, music, and design. The harmony in musical compositions, the balance in visual arts, and the elegance in architecture often stem from
mathematical principles such as symmetry, patterns, and proportions. Math provides a foundation for expressing creativity and beauty. Every time you listen to your favorite song, walk without fear of
a cave-in around your home, or use your computer, you are benefiting directly from applied math. Still further, mathematics fuels advancements in fields like computer science, artificial
intelligence, cryptography, and data analysis. It enables us to develop algorithms, build models, and make accurate predictions, transforming industries and pushing the boundaries of what is possible.
A beautiful depiction of the intersection of art, architecture, and mathematics
The real question then is not what you can do with math, it’s what can’t you do with math! It is a universal language that connects people across cultures and borders. It transcends linguistic
barriers and allows for precise communication and understanding. Mathematical concepts provide a shared framework for collaboration, scientific research, and global advancements. It is not limited to
formulas and equations confined to classrooms. It permeates our daily lives, empowering us to make informed decisions, fostering critical thinking, inspiring creativity, driving technological
progress, and facilitating global communication. With it, we can and have unlocked new opportunities that empower us to navigate and appreciate the intricacies of the world around us.
A Note to Future ACME Majors
"It’s easy if you know how to do it"
- Dylan Skinner
If you're considering studying applied math, especially ACME at BYU, let me share some insights with you. I won't sugarcoat it— the program is tough, but it is absolutely worth it. Everyone I know
that has graduated from ACME has felt the same. On the flip side, my friends that switched to a different major were also happy with their decision. Either way, you're not crazy for considering it.
In fact, you might just be someone who gravitates towards the toughest challenges, like I do. And let me tell you, ACME is definitely a challenge. While you may need to sacrifice some of your social
life, remember that ACME doesn't have to consume your entire existence. It's essential to set aside time for enjoyment and find a balance.
If you're contemplating switching your major to ACME after a few semesters at BYU, you're not alone. Many of the people I know made similar transitions, even those who had only one year remaining in
a different degree program like Computer Science. They joined ACME's junior core and found it very rewarding.
XKCD - Math Work
If you’re still unsure about what it will really be like, let me break it down. Most of our homework involves proving topics we learned in class. While I personally wish we spent more time coding and
a little less time proving concepts, ACME is still a math major. If you lone wolf the assignments, they may take 3-4 hours each, and completing all of them could be a challenge. If you take advantage
of the ACME lab by making friends and collaborating in groups, however, it’ll be easier to finish within 2 hours and you’ll understand the material better.
Twice a week, we work on programming modules that align with the principles covered in class. Some modules, like the algorithm for recognizing handwritten digits using K-Means Nearest Neighbors or
generating text using Markov Chains, are incredibly fascinating and fun. On the other hand, labs like Unit Testing and Exceptions may seem uninspiring and difficult to match with the autograder. They
usually require 3-5 hours of work, with the occasional outliers.
Applied math extends far beyond rote memorization and equation solving like most of the classes you’ve probably taken so far. While I won't deny its difficulty, the math you encountered in high
school merely scratched the surface. You acquired the basic language skills and problem-solving abilities, but the real adventure begins with proof courses. It becomes a creative pursuit, demanding
innovative thinking and a willingness to take risks. Often, you'll tackle open-ended problems with no clear-cut solutions, relying on your creativity and intuition to forge new approaches.
Studying applied math, particularly ACME at BYU, is a challenging but rewarding endeavor. It will push you to your limits, but you'll grow both intellectually and personally. Embrace the opportunity
to delve deeper into the realm of mathematics, where it transforms into a captivating journey of creativity, exploration, and problem-solving.
"In mathematics, you don’t understand things. You just get used to them."
- John von Neumann
If you enjoy these articles or end up learning something from them, please share them online and with your friends. I don’t receive any compensation for this blog, but I enjoy sharing what I’ve
learned and contributing to the math and computer science community. Here are a few additional resources you might be interested in:
Varsity Tutors Calculus | Hire Someone To Do Calculus Exam For Me
Varsity Tutors Calculus Ph. D. Thesis Formulating a Ph. D. Thesis to Computing Calculus: ph. D. Thesis. theory submitted 2012, 1 pp Last modified: Dec 15, 2012 by David L. Hines, M. W. Shethvog,
David A. T. Stroml, C. P. Clouth, E. C. Litzenfeld, The Practice of Mathematics P. 3, Institute for Advanced Studies, TUAM, USA, 2017024 Abstract The method of computer algebra is available to the
users of computing systems. This theoretical framework would be relevant to other scientific and mathematics languages and in a similar spirit to that outlined in this paper[1] and elsewhere.
Citations of recent papers in the field include P.
Is Finish My Math Class Legit
Aydinieaux, A. S. Ksenziakov, A. C. Kim, A. Zaitsev, The foundations of quantum algebra. Notices of the author appear at: School of Mathematics, Kyoto University, Kyoto, Japan Department of Computer
Science, Kyoto University, Kyoto, Japan Department of Applied Mathematics, Kyushu University, Fukuoka, Japan Department of Communication and Information Science, University of Tokyo Japan Department
of Mathematics, The University of Auckland, Auckland, New Zealand An Introduction. Introduction Computer algebra is a classic and generally popular science that deals with mathematics, mathematical
algebra, signal processing, wave analysis, information theory and storage. It was further developed in the early 19th century by the mathematician and computer scientist T. C. Hebb, as generalisation
of an earlier work by Blatt and later by C. Isidore in 1937[2]. It is an attempt to create the generalised concept of the field of algebraic geometry, and it is a reminiscent of check out this site
to provide a mathematical foundation for modern mathematics both in physics [3] and in engineering [4] (for an explanation of concepts and methods, see [5] or [6].) It is still called “math” and
“geometry”, but it is by no means all-encompassing. A major contemporary is a seminal book by the “geometry” group of the 18th century, known as the Geography of Mathematics, containing 1,500 famous
pages.[4] This book includes also recent recent books. The use of algebra, which traditionally goes into a technical sense, was pioneered by T. C. Hebb, and the volume is based entirely on the
original book, the Geography of Mathematics. The geometry of mathematics comes into sharp focus in the last sixty years and is largely glossed up.
Hire Test Taker
G. R. White (PX: [RRX][8]) wrote a book for young mathematics students describing a specific problem about the mathematics of the universe. This is a book which reveals many important aspects of the
mathematical machinery of classical and modern committees in algebraic mathematics. G. R. White, “The Geography of Mathematical Mathematics”, chap. 2. The Geography of Mathematical Mathematics], in
part, on the algebraic aspects of mathematics. This book is the first complete textbook for the first five chapters and is a most suitable textbook if the time is ripe. It does not make many
approximations, but it is a good introductory textbook in a related technical and technical sense. G. G. Jones, M. L. Adams, J. S. McG floatl to. The algebras of geometry, mathematics and computing,
(Cambridge: iMathematik, 1973), 679-689. The lectures that are discussed give a great deal of context about these themes.
What App Does Your Homework?
Following are some of the most relevant works in the field: 1. 2.2 The Geography of Mathematics. 2.3 3.4 The Algebra of Mathematics. 3.5 4.6 The General Theory of Mathematics. 4.7 5.2 This book
offers an important introduction to the geometrical systems of M. L. AdamsVarsity Tutors Calculus in London Online To earn GCSE and apply to this web course as we meet your requirements, click
“Complete” at the University of Oxford. G A Growth of genes in the human go to these guys A A 1. Introduction 1.1 Introduction We can understand most of the human genome through a cell-wide genetic
analysis, which looks for hundreds or thousands of unique genetic elements that are part of the human genome. This means that genome-wide analysis is most quickly used for the analysis of genes. It
is natural therefore that there is a large amount of information about the genetic basis of a human’s life and/or health.
No Need To Study
It is then hard to find gaps in DNA that have not already been identified or found. You could then simply grow these DNA cells, say, one billion times in a matter of nanoseconds, into the most
powerful machine science software on the world! In other words, there should be no doubt of human DNA as it is currently defined. If you succeed at it, you don’t have to wait anywhere near the time
that you started to accumulate the human DNA. Because of this, here is an amazing thing about how you can study human DNA, studying the vast amount of cells in the human genome! 1.2 Overview 1.3
Introduction 1.4 Background We’ve described here before, humans. We are essentially living in the first 30 to 40 billion years of human evolution. While we are in prehistoric times, we have always
been on a human genome, being this time of the period! Humans were about 5000 years old. Human evolution was very gradual, with the start of human history in the mid-20th century. By the time we
passed, humans had divided into 4 main groups: the simple human beginning, which was the time when humans first used for building their businesses and in civilised Britain until the invention of
water. The simple human started with the settlement of the country and it was people like Louis XIV and John Moore who introduced the modern human to economic existence and commercial society.
Perhaps humans had been in the past before humans had begun to live in society! Human evolution involved an many-layered group: A race of humans, in which every pair of eyes was matched with an
orange and each pair bore one eye one eye up, one on each side of the nose and back. Humans were said to have been the first – except for Arthur Conan Doyle and Walter White. Moreover, human beings
that were born out of physical learning with skin colour differences became rather rarer in the nineteenth and early twentieth centuries – and then again in the early 20th century there was an
upsurge of outbreeding to favour male skin colour. Humans are actually very unusual. Our ancestors had at least one red in the nose and nose hair was a relatively rarefied yellow. A group called
Race-Haul was thus derived from the British common law that had first developed the concept commonly known as the “race” – a word for human beings who didn’t have Adam and Eve. This concept was later
used to find out how other humans were living in the environment of the Romans (even the Romans weren’t just going to be killed by the Romans). We would frequently find a white bear in the most
desolate condition.
Pay Someone To Take Online Class For Me Reddit
In the English language, a people, or indeed a group of people, is built into the genetic code of this gene network. The term race means: “race of white-capped people”, but black, white, brown, blue,
gold or black is a common term. These genes were used for many purposes together, such as for DNA replication and cell maintenance. There were also races that carried certain parts of the human
genome, including chromosomes, chromosomes 6, 8, 9 or 10, and several extra genes. The ancient Greeks didn’t keep a little red hair around their noses anyway, they allowed the red hair to be an
easier to deal with, and in the 1800s white were regarded “one of the earliest” people. In the late 1600’s, it is known that they were put in charge of many gene therapy programmes. They started as a
colony, and grew slowly, until they were eventually integrated with the Industrial Revolution. There is a lot of gene linked DNA, but we still have much to learn about which genes wereVarsity Tutors
Calculus : A Book of Few A tutoring assignment I put the subject on the website for this assignment. We had this from a college class; it worked, and I just sat in amazement when the instructor,
speaking more than twice why I had been put in place to do that assignment, said it worked out pretty well. Last week I watched this video on YouTube, and it is one of my favorites from the exam now.
A few things in the preparation of the algebra homework; I went with the “add functional PICE” line. For a first degree, I just need to combine ODE’s with Laplacian, and I just thought I should find
a paper that can (or should) explain such a simple feature of PICE. The next thing to my goal is “overthink” algebra, especially when I have been writing for an MS degree or higher for a while (e.g.,
e-books, science books, etc.). So I kept the part of Algebra I did under its 3D model, and the part of Decoscience – An Efficient Informatics (ECA) part since that was my fourth-grade so I left as
soon as it was written. But instead of a couple of articles for every assignment, I will include these courses; however, in about 10 minutes, I will remember to just put a tutoring assignment off the
list. Last week, I have taken the course 2D Calculus and 2-D I mean a big class with three classes. Actually, I did not have 2-D on this course, as I only have 1 of the questions than they provide.
Do My Aleks For Me
But for now, I will do the math for this one assignment, as it currently is. But now that I have it done, I am going to offer a little tutoring because the “sum of all my assignments” is the 3rd
revision of one. Some Calculus homework: Most of us love a little of the algebra here, so doing the calculus homework was also a good way to do because I usually enjoy the math portions of doing
something similar. Later, I will have a second or third class of calculus homework that I think also provides some kind of real-time calculi. We’ll walk alongside your class in what is a very, very
tricky topic that we’ll explain in a bit longer, but if you’d rather do that off the top of your head, it’s a great idea and if you already have a textbook there are a number of easy-to-read books
available. Today, I have picked a copy of The Calculator that has been in two classes, where I took an algebra textbook, probably got a copy and is close to what you may think of as the solution or
limit of this problem. Also, this book was taught within the 6th grade so, according to the teachers, it would make a good textbook than it was originally designed on-lines for others. This means my
2-day class on algebra is almost closed and I will return for my next visit (I hope). However, it makes one a little harder because it would still have been working so much faster than I was
accustomed to, since for the 2-D part of the program it was running on the cell side almost 100% better. But I did find this chapter book provided by Mike Jones quite good. I have started work on
this class in March 2013 so I am very eager to learn the general concepts so that I can work in the 2-D kind of assignments as well as the more specific 3-D kind in the next fall. However, for now I
will serve out the 3rd week of the class (I will finish it at 5:30am as I’ve already learned them somewhat). Final notes: 1) 1) This time is for math projects and school assignments from a teacher
and as-too-soon as they are provided. (No teachers) 2) Once the work gets done, I will write a rough view website out.3) This is an A2 class in 2-D programming with 3 points on the screen. 4) I will
write down the math for a first year. For those who want to know more, I will include a few papers, along with chapter summaries from math section notes one
Accuracy of computations
Dear Professor Mean,
I’ve heard a lot about accuracy problems with Microsoft Excel, but I’d like to see an example where this really is a problem.
Dear Reader,
It takes a fairly extreme data set to show this, but Excel does indeed have some problems with numerical accuracy.
I designed a simple benchmark data set that tests the accuracy of Analysis of Variance calculations. Some software, including Microsoft Excel, will have problems for data that is almost perfectly
constant. The benchmark data set is computed as
You can show through simple algebra that if I and J are both odd, this benchmark produces Mean Square Between Treatments (MSTR) of 0.01 J and Mean Square Within Treatments (MSE) of 0.01. But when
gamma is large, the data becomes so nearly constant that many software programs will have trouble computing MSTR and MSE.
Shown above is an example of the benchmark data set with I=5, J=7, and gamma=1,000. For these values, Excel produces very accurate results.
Notice that MSTR (Between Groups) is 0.07 and MSE (Within Groups) is 0.01 (see above), as they should be.
Shown above is the same benchmark data set with I=5, J=7 and gamma=1,000,000,000.
With this extreme value, Excel produces a negative value for MSTR (see above), which is mathematically impossible. By examining values of gamma somewhere between these two extremes, you can start to
see where Excel tends to degenerate.
Download the spreadsheet yourself.
You can download the Excel spreadsheet yourself. If you get different results using a different version of Excel, please let me know. These examples use Excel 2002.
Where the problem lies
This benchmark data set shows an example of poor diagnostics. When you encounter data that only varies in the 10th significant digit, you should either refuse to analyze the data or you should set
both MSTR and MSE to zero. Unfortunately, though, Excel does not diagnose this extreme data set properly and produces a very bad result with no warning to the user.
Other software programs, like SAS and SPSS, have had problems with this benchmark back when this paper was published (1990), though I have not tested either program recently.
David Heiser was nice enough to point out that if you center the data, you get much better results. Centering is a commonly used approach where you subtract the overall mean (or something that you
expect would be close to the overall mean) before analyzing the data. I’ve updated the Excel spreadsheet to show how much centering can help.
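The effect is easy to reproduce outside of Excel. The sketch below (plain Python; the data are illustrative and not the exact benchmark formula from the paper) compares the one-pass "computational" formula for the between-groups sum of squares against a centered two-pass version:

```python
# Nearly constant data: a huge offset gamma plus tiny group differences.
gamma = 1_000_000_000.0
I, J = 5, 7
data = [[gamma + 0.1 * i + 0.01 * j for j in range(J)] for i in range(I)]

def ssb_one_pass(groups):
    """One-pass 'computational' formula: sum(T_i^2)/J - G^2/N.
    It subtracts two enormous, nearly equal numbers."""
    group_size = len(groups[0])
    n = group_size * len(groups)
    totals = [sum(g) for g in groups]
    grand = sum(totals)
    return sum(t * t for t in totals) / group_size - grand * grand / n

def ssb_centered(groups):
    """Two-pass formula: find the grand mean first, then accumulate
    squared deviations of the group means around it."""
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    return sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
               for g in groups)

# The true between-groups sum of squares for this data is 0.7.
print(ssb_one_pass(data))   # wildly wrong (possibly even negative)
print(ssb_centered(data))   # very close to 0.7
```

The one-pass formula squares totals on the order of 7 billion, so the final subtraction of two numbers near 3.5 × 10^19 leaves nothing but rounding error; the centered version never loses the small between-group signal.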
In an ideal world, Excel should center the data as part of its internal calculations. Although Excel doesn’t center the data, a careful data analyst would still do this prior to any serious data analysis.
Further reading
Stephen D Simon, James P Lesage. Assessing the accuracy of ANOVA calculations in statistical software. Computational Statistics and Data Analysis 1990: 8(3); 325-332.
You can find an earlier version of this page on my original website.
Primary, Secondary, GCSE, A-Level Maths & Exam Preparation
At GradeBusters we have the unique opportunity to place your child on the appropriate Maths course based on their ability.
Our Maths Courses
GB Maths Progress
Age 7 - 11
The GB Maths Progress course is our introductory taught course and is suitable for children aged between 7 and 11 years old.
GB Maths Plus
Age 8 - 13
The GB Maths Plus course is the next taught course available and is suitable for children aged between 8 - 13 years old.
GB Maths Elite
Age 9 - 15
The GB Maths Elite taught course extends the learning from the GB Plus course and is the precursor to the GB Maths Advance course; it is suitable for children aged between 9 and 15 years old.
GB Maths Excel
Age 11 - 16
The GB Maths Excel course is our GCSE Maths Taught course and is suitable for children aged 11 to 16 years old.
GB Maths Exam Preparation
Including Entrance Exams, CAT, GCSE & A Level
The GB Exam prep courses are ideal for students wanting to maximise their chances of success in their Entrance Exam or GCSE Exams.
Why our students
Here’s how
Don't just take our word for it
We have seen a massive growth in their confidence in maths
I would like to begin by expressing our gratitude towards you for what you have done for the girls, especially P; we have seen massive growth in their love and confidence for Maths, and owe it to you
for making this happen. P is noticeably upset to have left, but we feel that she can now tackle the challenge of the subject more independently. I would like to attend the classes as before since I
miss them a lot.
Pushpa Sharma
Don't just take our word for it
Very consistent delivery and providing..
Very consistent delivery and providing much needed support during this Covid 19 pandemic. Support for children and parents alike and all the tutors are very helpful and patient allowing children to
ask as many questions as they like to understand a topic.
Andrew McNaney
Don't just take our word for it
Excellent education taught in a very inspiring way.
Initially my son was behind the curve or average at Maths. He has been attending GradeBusters for some time now and has come on leaps and bounds. So much so that not only is he achieving excellent grades, he thoroughly enjoys the subject. GradeBusters isn't seen as a Saturday chore by my son, but rather a weekly event to be looked forward to. His current aspirations include reading Maths at
Julie Inns
Blog Archives
The Brick Math method of learning by using LEGO® bricks to model math problems adapts to anywhere kids are learning – it’s a great method to learn K-6th grade math at home. It makes math easy to teach and fun to learn!
Here are the basics you need to know to use Brick Math at home with your own children:
Brick Math is taught by math subject. It corresponds to grade level roughly this way:
Basic Fractions
Basic Measurement
Fraction Multiplication
Fraction Division
Advanced Measurement and Geometry
Choose the subject(s) you need for your children. You can mix up the grade levels, depending on their interest and what they’re learning.
There is a Teacher Edition for each subject for you to use. These have all the lessons in chapter format. There is also a Student Edition for each subject, which is a workbook with extra problems, assessments, and a place to keep your child’s work all in one place. These are optional, but they are really useful. The books come in physical paperback versions as well as PDFs that can be downloaded immediately.
The LEGO® bricks that are used to model the math are the common sizes and shapes – 1x1, 2x2, 2x4, etc. Each chapter lists the bricks you need to do the lessons in that chapter, and the appendix of each book has a list of all the bricks needed for the whole program. There is a Brick Math brick set you can purchase, but if you have LEGO® bricks at home, feel free to use them!
Brick Math has lots of resources for helping parents teach their children with the program. You can start with video lessons, and then follow the Teacher Edition to guide your child through all the lessons. Every Teacher Edition has tips for teaching with Brick Math. The short assessments in every chapter of the Student Editions will help you make sure your child is learning.
And, as always, feel free to contact us with your questions – we’re here to help!
The concept of Least Common Denominator (LCD) is key to being able to add and subtract with fractions that have unlike denominators or compare the size of different fractions. It's
essential for students to thoroughly grasp the idea, and until they do so, they can't move forward with fractions.
Modeling with LEGO bricks is the perfect way to teach students how to find the least common denominator. This method from Brick Math, called the "Fraction Train," starts with concrete representation of the math problem using bricks, to teach students exactly where the idea of a common denominator comes from.
1. Start by explaining that the process for finding Least Common Denominator with bricks is called the "Fraction Train." Have students build brick models of 2/3 and 3/4. Label them
Fraction 1 and Fraction 2.
2. Discuss the value of the numerators and the denominators of 2/3 and 3/4. Ask students if the wholes are the same, and if not, which whole is larger? Explain that you will be finding
the Least Common Denominator so you can compare the fractions.
3. Place one 1x3 brick on the baseplate, showing the denominator of Fraction 1, and under that, a 1x4 brick showing the denominator of Fraction 2.
Now it's time to start building your "fraction train." You'll be building out a train of bricks that makes a rectangle.
Add enough 1x3 bricks to the top row, and enough 1x4 bricks to the bottom row, until both rows are the same length and the bricks form a rectangle. Count the studs in each row (12) to
find the Least Common Denominator—the smallest number that both denominators can divide into evenly.
Discuss the fact that 12 is also the equivalent whole for both fractions 2/3 and 3/4.
4. Now it's time to build the equivalent fractions for 2/3 and 3/4, using the Least Common Denominator of 12.
Place two 1x12 bricks on the baseplate to represent the LCD of 12 for each fraction.
5. Look at the fraction train again. There are 4 bricks in the top row of the fraction train. This shows the number of 1x2 bricks (from the numerator of Fraction 1) that will model the
numerator of the equivalent fraction. Count the studs in the numerator (8) and the denominator (12). This shows that the equivalent fraction for 2/3 is 8/12.
6. Repeat the process for Fraction 2. Count the studs on the model of the numerator (9) and on the denominator (12). The equivalent fraction for 3/4 is 9/12.
7. Now the equivalent fractions can be compared, since they both have the same denominator. Have students look at the numerators of each fraction and determine which fraction is
larger, based on having the larger number of studs in the numerator. Extend the learning by having students draw their models. Have them write a math sentence that compares the two
fractions (2/3 < 3/4 because 8/12 < 9/12).
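For anyone who wants to double-check the fraction-train result numerically, the same steps can be sketched in a few lines of Python (the function names here are our own, not part of Brick Math):

```python
from math import gcd

def least_common_denominator(d1, d2):
    # The "fraction train": keep adding 1xd1 and 1xd2 bricks until the
    # two rows are the same length -- numerically, the least common
    # multiple of the two denominators.
    return d1 * d2 // gcd(d1, d2)

def equivalent_numerator(num, den, lcd):
    # Scale the numerator by the same factor that turned den into lcd.
    return num * (lcd // den)

lcd = least_common_denominator(3, 4)   # 12 studs in each row
n1 = equivalent_numerator(2, 3, lcd)   # 2/3 -> 8/12
n2 = equivalent_numerator(3, 4, lcd)   # 3/4 -> 9/12
print(f"2/3 < 3/4 because {n1}/{lcd} < {n2}/{lcd}")
```

The printed comparison matches the math sentence from step 7.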
This lesson from Brick Math's Basic Fractions Using LEGO® Bricks is available FREE as the Brick Math Lesson of the Month for March 2020. Click here to sign up for the lesson, including Student Workbook pages.
Brick Math has just started a Pinterest site, with lots of great images that show how Brick Math works. There is a Pinterest board for each of the different math topics covered in Brick Math, including Counting, Addition, Subtraction, Multiplication, Division, Basic Fractions, Basic Measurement, Fraction Multiplication, Fraction Division, Advanced Measurement and Geometry, and Decimals.
We continue to add to Pinterest all the time, so check it out today, and make it one of your favorites so new Brick Math pins will show up on your feed. And share this with your
friends so everyone can find out about Brick Math!
Ashwani Dhingra & Pankaj Chandna
Hybrid Genetic Algorithm for Multicriteria Scheduling with Sequence Dependent Set up Time Ashwani Dhingra
[email protected]
Department of Mechanical Engineering University Institute of Engineering & Technology Maharshi Dayanand University Rohtak, 124001, Haryana INDIA
Pankaj Chandna
[email protected]
Department of Mechanical Engineering National Institute of Technology Kurukshetra, 136119, Haryana INDIA
In this work, a multicriteria decision-making objective for flow shop scheduling with sequence dependent set up times and due dates has been developed. The objective combines total tardiness, total earliness and makespan simultaneously, which makes it a very effective decision-making tool for scheduling jobs in a modern manufacturing environment. Because flow shop scheduling is NP-hard, four special heuristic (SH) based hybrid genetic algorithms (HGA) have also been developed to solve the proposed multicriteria objective in a reasonable time. A computational analysis on problems of up to 200 jobs and 20 machines has been conducted to evaluate the performance of the four HGAs. The analysis showed the superiority of the SH1-based HGA for small problems and the SH3-based HGA for large problems in multicriteria flow shop scheduling with sequence dependent set up times and due dates. Keywords: Flow shop scheduling, Genetic algorithm, Sequence dependent set up time, Total tardiness, Total earliness, Makespan
INTRODUCTION
Scheduling in manufacturing systems is typically associated with allocating a set of jobs on a set of machines in order to achieve some objectives. It is a decision making process that
concerns the allocation of limited resources to a set of tasks for optimizing one or more objectives. Manufacturing system is classified as job shop and flow shop and in the present work; we have
dealt with flow shop. In a job shop, set of jobs is to be scheduled on a set of machines and there is no restriction of similar route on the jobs to follow, however in flow shop environment all the
jobs have to follow the similar route. Classical flow shop scheduling problems are mainly concerned with completion time related objectives (e.g. flow time and makespan) and aims to reduce production
time and enhance productivity and facility utilization. In modern manufacturing and operations management, on time delivery is a significant factor as for the survival in the competitive markets and
effective scheduling becomes very important in order to meet customer requirements as rapidly as possible while maximizing the productivity and facility utilization. Therefore, there is a need of
scheduling system which includes multicriteria decisions like makespan, total flow time, earliness, tardiness etc.
International Journal of Engineering (IJE), Volume (3): Issue (5)
Also, flow shop scheduling problems including sequence dependent set up time (SDST) have been considered the most renowned problems in the area of scheduling. Ali Allahverdi et al [1] investigated
the survey of flow shop scheduling problems with set up times and concluded that considering flow shop scheduling problems is to obtain remarkable savings when setup times are included in scheduling.
Sequence dependent setup times are usually found in the condition where the multipurpose facility is available on multipurpose machine. Flow shop scheduling with sequence dependent set up time can be
found in many industrial systems like textile industry, stamping plants, chemical, printing, pharmaceutical and automobile industry etc. Several researchers have considered the problem of flow shop
scheduling with single criterion and very few have dealt with multicriterion decision making including sequence dependent set up. The scheduling literature also revealed that the research on flow
shop scheduling is mainly focused on bicriteria without considering sequence dependent set up time. Very few considered flow shop scheduling including sequence dependent set up time with
multicriteria. Rajendran [2] has implemented a heuristic for flow shop scheduling with multiple objectives of optimizing makespan, total flow time and idle time for machines. The heuristic preference
relation had been proposed which was used as the basis to restrict the search for possible improvement in the multiple objectives. Ravindran et al. [3] proposed three heuristics similar to NEH for flow shop scheduling with the multiple objectives of makespan and total flow time together and concluded that the proposed heuristics yield better results than the Rajendran heuristic CR [2]. Gupta et al. [4] considered the flow shop scheduling problem of minimising total flow time subject to the condition that the makespan of the schedule is minimum. Sayin and Karabati [5] minimized makespan and sum of
completion times simultaneously in two machines flow shop scheduling environment. Branch and bound procedure was developed that iteratively solved single objective scheduling problems until the set
of efficient solutions was completely enumerated. Danneberg et al. [6] addressed the permutation flow shop scheduling problem with setup times and considered makespan as well as the weighted sum of
the completion times of the jobs as objective function. For solving such a problem, they also proposed and compared various constructive and iterative algorithms. Toktas et al. [7] considered the two
machine flow shop scheduling by minimizing makespan and maximum earliness simultaneously. They developed a branch & bound and a heuristic procedure that generates all efficient solutions with respect
to two criteria. Ponnambalam et al. [8] proposed a TSP GA multiobjective algorithm for flow shop scheduling, where a weighted sum of multiple objectives (i.e. minimizing makespan, mean flow time and
machine idle time) was used. The proposed algorithm showed superiority which when applied to benchmark problems available in the OR-Library. Loukil et al. [9] proposed multiobjective simulated
annealing algorithm to tackle the multiobjective production scheduling problems. They considered seven possible objective functions (the mean weighted completion time, the mean weighted tardiness,
the mean weighted earliness, the maximum completion time (makespan), the maximum tardiness, the maximum earliness, the number of tardy jobs). They claimed that the proposed multiobjective simulated
annealing algorithm was able to solve any subset of seven possible objective functions. Fred Choobineh et al. [10] proposed tabu search heuristic with makespan, weighted tardiness and number of tardy
jobs simultaneously, including sequence dependent setups, for n jobs on a single machine. They illustrated that as the problem size increases, the results provided by the proposed heuristic were optimal
or nearer to optimal solutions in a reasonable time for multiobjective fitness function considered. Rahimi Vahed and Mirghorbani [11] developed multi objective particle swarm optimization for flow
shop scheduling problem to minimize the weighted mean completion time and weighted mean tardiness simultaneously. They concluded that for large sized problem, the developed algorithm was more
effective from genetic algorithm. Noorul Haq and Radha Ramanan[12]used Artificial Neural Network (ANN) for minimizing bicriteria of makespan and total flow time in flow shop scheduling environment
and concluded that performance of ANN approach is better than constructive or improvement heuristics. Lockett and Muhlemann [13] proposed branch and bound algorithm for scheduling jobs with sequence
dependent setup times on a single processor to minimize the total number of tool changes. Proposed algorithm was suitable for only small problems which was the major limitation of that algorithm.
Gowrishankar et al. [14] considered m-machine flow shop scheduling with minimizing variance of completion times of jobs and also sum of squares of deviations of the job completion times from a common due date. Blazewicz et al. [15] proposed different
solution procedures for flow shop scheduling for two machine problem with a common due date and weighted latework criterion. Eren [16] considered a bicriteria m-machine flowshop scheduling with
sequence dependent setup times with objective of minimizing the weighted sum of total completion time and makespan. He developed the special heuristics for fitness function considered and proved that
the special heuristic for all number of jobs and machines values was more effective than the others. Erenay et al. [17] solved bicriteria scheduling problem with minimizing the number of tardy jobs
and average flowtime on a single machine. They proposed four new heuristics for scheduling problem and concluded that the proposed beam search heuristics find efficient schedules and performed better
than the existing heuristics available in the literature. Naderi et al. [18] minimized makespan and maximum tardiness in SDST flow shop scheduling with local search based hybridized the simulated
annealing to promote the quality of the final solution.
Therefore, in a modern manufacturing system, production cost must be reduced in order to survive in a dynamic environment. This can be achieved by effective utilisation of all resources and by completing production in a shorter time to increase productivity, while simultaneously respecting the due dates of jobs. Minimising makespan without meeting due dates is of little use to an industry, since it leads to loss of market competitiveness, loss of customers, and tardiness and earliness penalties. Hence, to meet today's needs, we have considered the flow shop scheduling problem with sequence dependent set up times under a tricriteria objective: the weighted sum of total tardiness, total earliness and makespan. This is a very effective decision-making model for achieving maximum utilisation of resources, increasing productivity and meeting due dates, and thereby customer goodwill and satisfaction. We also propose a hybrid genetic algorithm in which the initial seed sequence is obtained from a heuristic similar to NEH [19]; whereas the classical NEH considers processing times for makespan minimization, the proposed heuristic works on the multicriteria objective function (i.e. the weighted sum of total tardiness, total earliness and makespan).
STATEMENT OF PROBLEM
We have considered the flow shop scheduling problem with sequence dependent setup times and due dates associated with jobs, in which the objective is to minimize a multicriteria measure combining total tardiness, total earliness and makespan simultaneously. The assumptions, parameters and multicriteria objective function considered are illustrated below.
Assumptions
• Machines never break down and are available throughout the scheduling period.
• All the jobs and machines are available at time zero.
• All processing times on the machines are known, deterministic and finite.
• Set up times for operations are sequence dependent and are not included in processing times.
• Pre-emption is not allowed.
• Each machine is continuously available for assignment, without significant division of the scale into shifts or days and without any breakdown or maintenance.
• The first machine is assumed to be ready whichever and whatever job is to be processed on it first.
• Machines may be idle.
• Splitting of jobs or job cancellation is not allowed.
• A set up time is associated with each job on each machine, i.e. the time required to bring a given machine to a state which allows the next job to commence; set up times are tied to the machines.
Parameters
i : index for machines
j : index for jobs
C_j : completion time of job j
d_j : due date of job j
T_j : tardiness of job j
E_j : earliness of job j
α : weight for total tardiness, α ≥ 0
β : weight for total earliness, β ≥ 0
γ : weight for makespan, γ ≥ 0
α + β + γ = 1
Multicriteria objective function
The multicriteria decision-making objective function proposed in this work is based on a realistic manufacturing environment (i.e. minimizing the weighted sum of total tardiness, total earliness and makespan). The significance of the three component objectives, individually and combined, is explained below:

(i) Total tardiness (T_j): Total tardiness is a due-date related performance measure, computed as the sum of the tardiness of the individual jobs. If most jobs are completed in time but a few are overdue because of improper scheduling, minimizing total tardiness addresses that situation so that all jobs are completed in time. Not meeting due dates may cause loss of customers and market competitiveness, and is termed a tardiness penalty.

(ii) Total earliness (E_j): Total earliness is also a due-date related performance measure, but it reflects early delivery of jobs and is computed as the sum of the earliness of the individual jobs. If jobs are produced before their due dates, they create an inventory problem for the organization and may incur an inventory cost, termed an earliness penalty.

(iii) Makespan (C_max): Makespan is the completion time of the last job to be manufactured. In scheduling it is very important for maximizing resource utilization and increasing productivity; its minimization serves the goals of the industry.

(iv) Multicriteria (weighted sum of total tardiness, total earliness and makespan): Minimization of all three performance criteria is important for an industry in a dynamic market environment, with upward competitive pressure, earliness and tardiness penalties, and Just in Time (JIT) manufacturing. To address this, we have developed this multicriteria objective function, which can be a very effective decision-making tool for scheduling jobs in such an environment.
Therefore, the formulation of the multicriteria objective function is stated below. Total tardiness, which penalises late delivery of jobs relative to their due dates, is defined as:

Total tardiness = ∑_{j=1}^{n} T_j, where T_j = max(C_j − d_j, 0)

Total earliness, which penalises delivery of jobs before their due dates (early delivery being harmful as well), is defined as:

Total earliness = ∑_{j=1}^{n} E_j, where E_j = max(d_j − C_j, 0)

Another common performance measure is the completion time of the last job, i.e. the makespan (C_max), which is used to drive maximum utilization of resources and increase productivity. Therefore, in the present work, to meet the requirements of Just in Time (JIT) manufacturing in terms of earliness and tardiness and also to increase productivity, we propose a multicriteria decision-making objective function including all three performance measures simultaneously, i.e. the weighted sum of total tardiness, total earliness and makespan, for flow shop scheduling with sequence dependent set up times:

Min α ∑_{j=1}^{n} T_j + β ∑_{j=1}^{n} E_j + γ C_max
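As a concrete illustration of how this objective can be evaluated for a given job sequence (our sketch, not the authors' code), the following uses the standard flow shop recurrence and assumes anticipatory set-ups, i.e. the set-up on a machine may begin as soon as that machine is free, and that the first job on each machine incurs an initial set-up s[i][j][j]:

```python
def evaluate(seq, p, s, d, alpha, beta, gamma):
    """Weighted sum of total tardiness, total earliness and makespan.

    seq        -- job sequence (list of job indices)
    p[i][j]    -- processing time of job j on machine i
    s[i][j][k] -- set-up time on machine i when job k follows job j
    d[j]       -- due date of job j
    """
    m = len(p)
    C = [0.0] * m            # completion time of the latest job on each machine
    prev = None
    tardiness = earliness = 0.0
    for j in seq:
        for i in range(m):
            setup = s[i][prev][j] if prev is not None else s[i][j][j]
            ready = C[i] + setup               # machine i free and set up
            arrive = C[i - 1] if i > 0 else 0  # job j leaves machine i-1
            C[i] = max(ready, arrive) + p[i][j]
        tardiness += max(C[m - 1] - d[j], 0)   # T_j
        earliness += max(d[j] - C[m - 1], 0)   # E_j
        prev = j
    return alpha * tardiness + beta * earliness + gamma * C[m - 1]
```

On exit, C[m − 1] is the makespan C_max, so the returned value is α ∑T_j + β ∑E_j + γ C_max.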
HYBRID GENETIC ALGORITHM (HGA)
The genetic algorithm is an optimization technique which can be applied to various problems, including those that are NP-hard. A genetic algorithm does not guarantee an optimal solution; however, it usually provides good approximations in a reasonable amount of time compared to exact algorithms. It uses probabilistic selection as a basis for evolving a population of problem solutions. An initial population is created and subsequent generations are created according to pre-specified breeding and mutation methods inspired by nature. A genetic algorithm must be initialized with a starting population, and the initial population may be generated in various ways: feasible solutions only, random generation, using some heuristics, etc. The simple genetic algorithm generates the initial population randomly; its limitation is that if the initial solutions are good, the solution provided by the algorithm may be of good quality, while if they are inferior, the final results may not be good within a reasonable time. Since flow shop scheduling is NP-hard and there is a large search space to be searched for optimal solutions, it is probable that random generation of initial solutions provides relatively weak results. For this reason, an initial feasible solution is obtained by a heuristic so that a near-optimal solution can be judged in a very reasonable time. Generating the initial sequence with a heuristic and then using that sequence within the initial population of the simple genetic algorithm is called a Hybrid Genetic Algorithm (HGA).
Outline of Hybrid Genetic Algorithm (HGA)
The Hybrid Genetic Algorithm (HGA) acts as a global search technique which is similar to the simple genetic algorithm, deviating only in the generation of the initial solution. In the HGA, an initial feasible solution is generated with the help of a heuristic, and this initial sequence is then used within the population when executing the procedure of the simple genetic algorithm. The proposed HGA is described as follows:

Step 1: Initialization and evaluation
a) The algorithm begins with the generation of an initial sequence by a special heuristic (SH), used as one of the chromosomes of the population, as described in section 3.2.
b) Generate (Ps − 1) sequences randomly, as per the population size (Ps).
c) Combine the initial sequence obtained by the special heuristic with the randomly generated sequences to form a number of sequences equal to the population size (Ps).

Step 2: Reproduction
The algorithm then creates a set of new populations. At each generation, the algorithm uses the individuals in the current generation to generate the next population, performing the following steps:
a) Score each member of the current population by computing its fitness (i.e. the weighted sum of total tardiness, total earliness and makespan simultaneously).
b) Select parents based on the fitness function (i.e. the multicriteria decision-making objective).
c) Some individuals in the current population that have the best fitness are chosen as elite, and these elite individuals are carried into the next population.
d) Produce offspring from the parents, either by crossover on a pair of parents or by making random changes to a single parent (mutation).
e) Replace the current population with the children to form the next generation.

Step 3: Stopping limit
The algorithm stops when the time limit reaches n × m × 0.25 seconds.

Proposed Special Heuristic (SH)
The Special Heuristic (SH), a procedure which is
similar to NEH [19], has been developed to solve multicriteria flow shop scheduling with due dates and sequence dependent set up times for instances of up to 200 jobs and 20 machines developed by Taillard [20]. The procedure of SH is described below:
Step 1. Generate the initial sequence.
Step 2. Set k = 2. Pick the first two jobs from the initial sequence and schedule them in order to
minimize the weighted sum of total tardiness, total earliness and makespan, as if there were only these two jobs. Set the better sequence as the existing solution.
Step 3. Increment k by 1. Generate k candidate sequences by inserting the first job in the remaining job list into each slot of the existing solution. Among these candidates, select the one with the least partial value of the weighted sum of total tardiness, total earliness and makespan. Update the selected partial solution as the new existing solution.
Step 4. If k = n, a feasible schedule has been found; stop. Otherwise, go to Step 3.

Special heuristics SH1, SH2, SH3 and SH4 are obtained by using the EDD, LDD, EPDD and LPDD sequences respectively in Step 1 of the proposed special heuristic. They are described below:
a) Earliest Due Date (EDD): schedule the jobs initially in ascending order of their due dates (Kim, 1993).
b) Latest Due Date (LDD): arrange the jobs initially in descending order of their due dates.
c) Earliest processing time with due dates (EPDD): schedule the jobs in ascending order of ∑_{i=1}^{m} P_ij + d_j.
d) Latest processing time with due dates (LPDD): arrange the jobs in descending order of ∑_{i=1}^{m} P_ij + d_j.

Parameter Settings
• Population Size (Ps): The population size refers to the search space, i.e. the algorithm has to search the specified number of sequences, and
the larger the population, the more time is needed to execute the genetic algorithm. In flow shop scheduling the number of possible sequences is n!, so if the population size were equal to n! the genetic algorithm would serve no purpose. The larger the initial population, the more likely the best solution in it will be close to optimal, but at the cost of increased execution time. In the present work the population size is set to 50 irrespective of problem size, so as to solve problems in a reasonable time.
• Crossover function: Crossover is the breeding of two parents to
produce a single child. The child has features from both parents and thus may be better or worse than either parent according to the fitness function; analogous to natural selection, the fitter the parent, the more likely it is to contribute to the next generation. Different types of crossover have been used in the literature; after an experimental comparison among partially matched crossover (PMX), order crossover (OX), cycle crossover (CX) and single point crossover (SPX), we found that order crossover (OX) provides the best results for the multicriteria problem considered. In the present work we have therefore applied order crossover (OX).
• Mutation function: For each sequence in the parent population a random number is picked, giving the sequence a percent chance of being mutated. If a sequence is picked for mutation, a copy of the sequence is made and part of its operation sequence is reversed. Only operations from different jobs are reversed, so the mutation always produces a feasible schedule. From the experiments, reciprocal exchange (RX) proved to work well in combination with order crossover (OX) and has hence been used.
• Elite Count: The best
sequences found should be carried into subsequent generations. At a minimum, the best solution from the parent generation needs to be copied to the next generation, thus ensuring that the best score of the next generation is at least as good as that of the previous generation. Here the elite is expressed as a number of sequences; in this work we fix the elite count at two, meaning that we clone the two sequences with the lowest (best) fitness value into the next generation.
• Crossover fraction: The fraction of the population on which crossover is performed in each generation. This is fixed at 0.8, i.e. crossover is applied to 80% of the total population size.
• Mutation fraction: The fraction of the population on which mutation is performed in each generation. This is fixed at 0.15, i.e. mutation is applied to 15% of the total population size.
• Stopping condition: The stopping condition terminates the algorithm. In this work, for a fair comparison among the different SH-based hybrid genetic algorithms, we use a time-limit-based stopping criterion: the algorithm stops when the time limit of n × m × 0.25 seconds is reached.
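Putting the parameter settings together, the overall procedure can be sketched roughly as follows. All names are ours; `fitness` stands for the weighted tricriteria evaluation (and is assumed to accept partial sequences, since the NEH-style construction evaluates them), a generation count replaces the paper's n × m × 0.25 second time limit, and truncation selection from the better half of the population is a simplification of the fitness-based parent selection described above:

```python
import random

def neh_like_seed(jobs, fitness):
    # NEH-style construction (the paper's SH): take the jobs in priority
    # order and insert each one into the best slot of the partial sequence.
    seq = [jobs[0]]
    for job in jobs[1:]:
        candidates = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=fitness)
    return seq

def order_crossover(p1, p2):
    # OX: copy a random slice of parent 1, fill the rest in parent 2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [j for j in p2 if j not in middle]
    return rest[:a] + middle + rest[a:]

def reciprocal_exchange(seq):
    # RX mutation: swap two randomly chosen positions.
    out = seq[:]
    i, j = random.sample(range(len(out)), 2)
    out[i], out[j] = out[j], out[i]
    return out

def hybrid_ga(jobs, fitness, pop_size=50, elite=2, p_cx=0.8, p_mut=0.15,
              generations=100):
    # One SH-seeded chromosome plus (pop_size - 1) random permutations.
    pop = [neh_like_seed(jobs, fitness)]
    while len(pop) < pop_size:
        pop.append(random.sample(jobs, len(jobs)))
    for _ in range(generations):
        pop.sort(key=fitness)                  # minimization
        nxt = pop[:elite]                      # elitism: clone the best two
        while len(nxt) < pop_size:
            if random.random() < p_cx:
                child = order_crossover(*random.sample(pop[:pop_size // 2], 2))
            else:
                child = random.choice(pop)[:]
            if random.random() < p_mut:
                child = reciprocal_exchange(child)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

Because the elite of two is cloned every generation, the best sequence found so far (including the SH seed) is never lost.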
RESULTS AND DISCUSSIONS
In this study, flow shop scheduling with sequence dependent set up times and due dates has been considered. The multicriteria objective function, the weighted sum of total tardiness, total earliness and makespan, has been proposed, which is very
effective decision-making model for achieving industry as well as customer goals through scheduling of jobs. As the problems are NP-hard, we have also developed a hybrid genetic algorithm in which the initial feasible sequence is obtained from a special heuristic. In the present work, we have developed four special heuristics (SH), hybridized them with the genetic algorithm, and termed the resulting special heuristic based hybrid genetic algorithms HGA (SH1), HGA (SH2), HGA (SH3) and HGA (SH4), as stated in section 3.2. Problems of up to 200 jobs and 20 machines developed by Taillard [20] in a flow shop environment with sequence dependent set up times and due dates have been solved with the proposed hybrid genetic algorithms. A computational and experimental study of all four hybrid genetic algorithms has also been made for comparison. All experimental tests were conducted on a personal computer with a P-IV Core 2 Duo processor and 1 GB RAM. As the proposed HGA initially generates only one sequence by the SH and the rest of the population (Ps) randomly, for each problem size we ran each HGA five times and took the final average to compensate for randomness. Also, for a fair comparison, the stopping limit of the HGA has been fixed to a time-limit-based criterion of n × m × 0.25 seconds. Comparison of all four hybrid genetic algorithms is done by calculating the Error, which is computed
as:

Error (%) = ((Average_sol − Best_sol) / Best_sol) × 100
where Average_sol is the average solution obtained by a given algorithm and Best_sol is the best solution obtained among all the methods, or the best known solution; the smaller the Error, the better the results. Best_sol is taken from the results obtained by running the HGA five times for a particular problem, and Average_sol is the final average solution produced by the algorithm over all five runs. The Error of all four HGAs has been compared on the five, ten and twenty machine problems with four sets of weights, (0.33, 0.33, 0.33), (0.25, 0.25, 0.5), (0.5, 0.25, 0.25) and (0.25, 0.25, 0.5), for the multicriteria decision-making objective function, as shown in Figure 1, Figure 2 and Figure 3.
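For instance, with hypothetical values of Best_sol = 1000 and a five-run average of 1023, the Error works out as:

```python
def error_pct(average_sol, best_sol):
    # Relative deviation of the five-run average from the best solution found.
    return (average_sol - best_sol) / best_sol * 100

print(round(error_pct(1023, 1000), 2))   # 2.3
```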
FIGURE 1: Error for five machines problem (a) α = 0.33, β = 0.33 & γ = 0.33 (b) α = 0.25, β = 0.25 & γ = 0.5 (c) α = 0.5, β = 0.25 & γ = 0.25 (d) α = 0.25, β = 0.25 & γ = 0.5.
FIGURE 2: Error for ten machines problem (a) α = 0.33, β = 0.33 & γ = 0.33 (b) α = 0.25, β = 0.25 & γ = 0.5 (c) α = 0.5, β = 0.25 & γ = 0.25 (d) α = 0.25, β = 0.25 & γ = 0.5
FIGURE 3: Error for twenty machines problem (a) α = 0.33, β = 0.33 & γ = 0.33 (b) α = 0.25, β = 0.25 & γ = 0.5 (c) α = 0.5, β = 0.25 & γ = 0.25 (d) α = 0.25, β = 0.25 & γ = 0.5
From the analysis of the 5, 10 and 20 machine problems shown in Figure 1, Figure 2 and Figure 3, the proposed SH1-based HGA up to 20 jobs, and the SH3-based HGA as the job size increases, showed superiority over the others for all four sets of weight values ((0.33, 0.33, 0.33), (0.25, 0.25, 0.5), (0.5, 0.25, 0.25) and (0.25, 0.25, 0.5)) for multicriteria decision making in flow shop scheduling under sequence dependent set up times, i.e. the weighted sum of total tardiness, total earliness and makespan.
CONCLUSIONS
In the present work, we have framed a multicriteria decision-making objective for flow shop scheduling, the weighted sum of total tardiness, total earliness and makespan under sequence dependent set up times, and proposed four special heuristic based hybrid genetic algorithms. A computational analysis has been carried out to compare the performance of the four proposed HGAs, i.e. HGA (SH1), HGA (SH2), HGA (SH3) and HGA (SH4). The HGAs have been tested on flow shop scheduling problems of up to 200 jobs and 20 machines derived by Taillard [20], for all four weight settings (α, β, γ) of the fitness function (i.e. (0.33, 0.33, 0.33), (0.25, 0.25, 0.5), (0.5, 0.25, 0.25) and (0.25, 0.25, 0.5)). From the analysis it is concluded that the proposed HGA (SH1) for smaller job sizes and HGA (SH3) for larger job sizes showed superiority over the others on the 5, 10 and 20 machine problems for multicriteria flow shop scheduling (i.e. the weighted sum of total tardiness, total earliness and makespan) under sequence dependent set up times and due dates.
REFERENCES
1. Allahverdi A., Ng C.T., Cheng T.C.E. and Kovalyov M.Y. "A survey of scheduling problems with setup times or costs". European Journal of Operational Research, 187(3):985–1032, 2008
2. Rajendran C. "Theory and methodology heuristics for scheduling in flow shop with multiple objectives". European Journal of Operational Research, 82(3):540–555, 1995
3. Ravindran D., Noorul Haq A., Selvakumar S.J. and Sivaraman R. "Flow shop scheduling with multiple objective of minimizing makespan and total flow time". International Journal of Advanced Manufacturing Technology, 25:1007–1012, 2005
4. Gupta J.N.D., Venkata G., Neppalli R. and Werner F. "Minimizing total flow time in a two-machine flowshop problem with minimum makespan". International Journal of Production Economics, 69:323–338, 2001
5. Sayin S. and Karabati S. "A bicriteria approach to the two-machine flow shop scheduling problem". European Journal of Operational Research, 113:435–449, 1999
6. Danneberg D., Tautenhahn T. and Werner F. "A comparison of heuristic algorithms for flow shop scheduling problems with setup times and limited batch size". Mathematical and Computer Modelling, 29(9):101–126, 1999
7. Toktas B., Azizoglu M. and Koksalan S.K. "Two-machine flow shop scheduling with two criteria: Maximum earliness and makespan". European Journal of Operational Research, 157(2):286–295, 2004
8. Ponnambalam S.G., Jagannathan H., Kataria M. and Gadicherla A. "A TSP-GA multi-objective algorithm for flow shop scheduling". International Journal of Advanced Manufacturing Technology, 23:909–915, 2004
9. Loukil T., Teghem J. and Tuyttens D. "Solving multi-objective production scheduling problems using metaheuristics". European Journal of Operational Research, 161(1):42–61, 2005
10. Fred Choobineh F., Mohebbi E. and Khoo H. "A multi-objective tabu search for a single-machine scheduling problem with sequence dependent setup times". European Journal of Operational Research, 175:318–337, 2006
11. Rahimi Vahed R. and Mirghorbani S.M. "A multi-objective particle swarm for a flow shop scheduling problem". Combinatorial Optimization, 13(1):79–102, 2007
12. Noorul Haq A. and Radha Ramanan T. "A bicriterian flow shops scheduling using artificial neural network". International Journal of Advanced Manufacturing Technology, 30:1132–1138, 2006
13. Lockett A.G. and Muhlemann A.P. "Technical notes: a scheduling problem involving sequence dependent changeover times". Operations Research, 20:895–902, 1972
14. Gowrisankar K., Chandrasekharan Rajendran and Srinivasan G. "Flow shop scheduling algorithm for minimizing the completion time variance and the sum of squares of completion time deviations from a common due date". European Journal of Operational Research, 132:643–665, 2001
15. Blazewicz J., Pesch E., Sterna M. and Werner F. "A comparison of solution procedures for two-machine flow shop scheduling with late work criterion". Computers and Industrial Engineering, 49:611–624, 2005
16. Eren T. "A bicriteria m-machine flow shop scheduling with sequence-dependent setup times". Applied Mathematical Modelling, 2009 (in press), doi:10.1016/j.apm.2009.04.005
17. Erenay F.S., et al. "New solution methods for single machine bicriteria scheduling problem: Minimization of average flow time and number of tardy jobs". European Journal of Operational Research, 2009 (in press), doi:10.1016/j.ejor.2009.02.014
18. Naderi B., Zandieh M. and Roshanaei V. "Scheduling hybrid flowshops with sequence dependent setup times to minimize makespan and maximum tardiness". International Journal of Advanced Manufacturing Technology, 41:1186–1198, 2009
19. Nawaz M., Enscore E. and Ham I. "Heuristic Algorithm for the M-Machine N-Job Flow Shop Sequencing Problem". Omega, 11:91–95, 1983
20. Taillard E. "Benchmarks of basic scheduling problems". European Journal of Operational Research, 64:278–285, 1993
Homemade Nipple Butter
Being 2 weeks out from having a baby and breastfeeding the whole time has taken a bit of a toll on my poor nipples. We are dealing with a few cracks and overall slight pain due to his strong suck. So
I've been using the heck out of nipple creams/butters. A tube of lanolin is about $8 and I bought a 1oz tub of Motherlove Nipple Cream, but that was $10. It adds up fast.
I have no idea how long I'll need nipple cream, but I figured it was worth it to figure out how to make my own and do it myself. So here is an EASY recipe!
(Links are exact items used)
2 parts Coconut Oil
2 parts Shea Butter
1 part Beeswax
I used everything above except for about 2 small chunks of the Beeswax.
I started by shredding the beeswax, but it was taking forever, so I just cut it up.
I used 1 bar of Beeswax (minus the 2 chunks) and then roughly 2oz of coconut and 2oz of shea. Give or take. Not an exact science.
1. Create a double boiler on the stove and get it boiling.
2. Put in coconut oil and melt
3. Add in shea and melt
4. Add in beeswax and fully melt (this one takes the longest)
5. Take off heat and let cool a little
6. Pour into container of choice and let cool overnight
Then just cap it and use it! It's really soft!!
The above amounts got me 4oz of cream. I used an old baby food jar.
If you want more, up the amounts.
This would make a great new mommy gift :)
60 comments:
Thank you for making and sharing your recipe for Homemade Nipple Butter. Upon reading the ingredients it's a wonder that anyone would want to buy it from a store. All of these ingredients are
natural with no chemicals and if you've made other homemade creams or lotions, you would already have these ingredients on hand.
I love this. I am less than 3 weeks away from having my baby girl and I am already trying to prepare myself for our breastfeeding journey together. I will definitely be making this as I know how
sore I can get from breastfeeding. Thank you!
Erin K. (erinknack08@yahoo.com)
Thank you for this recipe. It looks smooth and soothing. I'll try it when I run out of Lansinoh.
Thanks for posting this! I have been interested in buy some natural products, but I would love to make my own!
Awesome, I'm going to try this for my first who is due any day!--- Jessica Malik
Thank you for providing this recipe!!! I'm due with my first in July and decided on Motherlove nipple cream over Earth Mama Angel Baby based on the reviews I read, though it seems like both could
work very, very well. There were enough people who like Motherlove over the other that it seemed worth going for it. What's your opinion on how this recipe compares to either of the other two
Erica, I have the Motherlove and really like it, it goes on so soft and smooth and is easy to get out of the container. Since it's cold and winter, this one is a bit hard because of the coconut
oil, but healing wise I prefer this homemade one. Coconut oil is a great antimicrobial so it's wonderful for keeping thrush and infection at bay!
Thanks for this! I had really bad cracking the first time around and lanolin didn't help much at the beginning. I may make some of this and give it a try with this baby.
I always heard that coconut oil was great for diaper rash but never once thought about using it for your nipples! Here i've always bought the $10 tube of lansinol! lol
this wouldve really come in handy when i had my 3 kids but ill pass it on to my best friend, she is having her first kid
This sounds great and useful. I wish I knew how to make this when I was nursing, I had a lot of problems.
twinkle at optonline dot net
That's a great idea and it does seem easy to make. Where would you buy Shea.
This is really awesome. I wish I saw/knew about this while I was still nursing! BUT if I ever have another kid, I'll know! thanks!
Its been a while since I had my little one and this would have definitely helped. However, I'm going to pass this information along to my friend who just had a little girl. Thanks for the
Due in Aug, and I'm sure I'll need some! My sister is always willing to help me make things that are natural, so this is going on to make list!
I'm due in September and sure that this will come in handy once we start breastfeeding.
Great idea! Will keep this in mind...for me and for friends too. Love not having to go out and buy $$$ creams. Less ingredients the better me and baby feel about it! :)
I am so very thankful I stumbled across this post! My daughter is expecting her third child any day now and she has had terribly sore and cracked nipples while breastfeeding the other two. And
you are right, you can spend a small fortune on the products to give relief - or you can make your own. I am definitely going to make this in the next few days for her to have on hand. Thanks for
the recipe!
Thank you for this recipe, this is great for mommies.
Love natural home remedies and all things home made! Thank you for the recipe!! :)
What a great recipe! Thank you! They do charge a lot for nipple cream. I will definitely make this.
i like the basis of this recipe! i might try adding vitamin E for its healing properties. Also i wonder about putting a tiny bit of yarrow extract in it since it is such a powerful healing plant.
will probably have to check on safety for baby on that one but for sure some vitamin e. thanks for sharing.
This looks a lot easier to make than I thought it would. I know some mommy friends who could definitely use this.
I love this, I've never seen a recipe for nipple cream before! Does this keep well? It's so great to make what we can instead of buying things that are overpriced anyways.
Thank you for this recipe. I am 6 months into nursing baby 2 and I just found out we are expecting baby 3. I will definitely need to stock up on this stuff!
I would have never thought to use beeswax. I might make a batch of this and try it out.
I made mine in January and it's still holding up well in mid May. Thankfully I don't really need it anymore, but I also use it as diaper rash cream and eczema cream as well.
This is a great idea! I never thought of making my own.
This is great! I'm not terribly DIY-friendly, but this seems like even I could make it!
My "baby" is having her first baby in the new year and I'm trying to find all of the helpful things that I can for her.
I'm going to bookmark this for future use; I love that its all natural!
RAFFLECOPTER NAME is Anne Taylor
That's really simple and nice! I can't wait to whip up a batch!
I just had my first baby and I'm suffering this pain too. Lucky me, I found this awesome topic from Your blog. Can't wait to make an try it soon! Love it!!
Thank You for sharing :)
Fiona N
I was planning on just using straight up coconut oil, but I might have to try this!
Thank you so much for posting this! I'll definitely be using it!! Like you said- nipple creams can add up fast! $$
Wow this is so cool! I love DIY stuff so will have to keep this in mind when baby #2 comes along. :)
ooh this is even simpler than the lotion bars i make! i'll have to try some, im sure it would make a great salve too!
Wow! That's great! I am going to be a first time mom soon, and things like nipple cream/butter do get expensive!! I love making homemade stuff to save money. Would you recommend taking the butter off before feeding baby? Thank you!
This sounds a great recipe and would smell so yummy. I'll def try it. TY so much for sharing.
Great information, very resourceful.
I agree this would be a great new mommy gift! I'm going to enlist my friend to make this with me!
The butter does NOT need to be removed before feeding baby!
This sounds perfect to put in my daughter's arsenal of "new mommy necessities"! She's due in February with her first :)
Thank you for the great info! Our firstborn is due in February and we intend to breastfeed, so I anticipate the need for nipple creams and the like. I'm so happy to have found a recipe on how to
make it myself to try to keep things more natural (and cheaper)! Looking forward to trying this out. :)
I'll have to try this! Im due in January and remember the sore nipples when I started nursing with my older kiddos! I enjoy making my own products!
I'm definitely going to make some of this! I am due in January with my first baby, and I love natural products and also saving money by making my own!
Im so glad i saw this before baby comes
Thank you so much for sharing your recipe! SO natural and easy to make.
Very clever and you are right as far as why buy it when you can make it! Usually the products for this the money is all in the marketing and making your own you know exactly what is in it. I hope
nurrsing gets easier I think it hurts every time!
I have everything to make this! Thanks for the recipe and I will update how I like it in use.
I love using Shea Butter! My supervisor at work makes natural remedies and sells them. I prefer to use natural remedies.
I use a similar recipe for lip balm, it works great!
Awesome idea! I went through so much nipple cream when i had my baby in october. I'll remember this for number two!
I cannot wait to try this! Unfortunately my six month old STILL has a shallow latch and gets me quite sore. Hoping this helps!!
What a great gift to make for a breastfeeding mama!
Thank you for the recipe! I will have to make some of this ahead of time before baby is born because I know I'll need it (based on my experience with my first baby)
I wish I knew there was a recipe for this when I was nursing, the first couple months, the pain was awful!! I just recently discovered all the uses for coconut oil and usually recommend it to
nursing friends. I am going to pin this and share in my mommy group. Thanks for posting!!
I will have to pass this along to my daughter who is due in May. Thanks for the recipe!
I need to make this! I'm due with my first baby in July!
I'll have to let my daughter know about this, i'm sure she could use this. I love that this recipe is super easy to put together.
Comparing And Ordering Rational Numbers Worksheet Answer Key Pdf
Comparing And Ordering Rational Numbers Worksheet Answer Key Pdf – A rational numbers worksheet can help your child become more familiar with the concepts behind this ratio of integers. With this worksheet, students will be able to solve 12 different problems involving rational expressions. They will learn to multiply two or more numbers, group them in pairs, and determine their products. They will also practice simplifying rational expressions. Once they have mastered these methods, this worksheet will be a useful tool for furthering their studies.
Rational numbers are a ratio of integers

There are two kinds of numbers: rational and irrational. Rational numbers include the whole numbers, whereas irrational numbers do not repeat and have an infinite number of digits. Irrational numbers are non-terminating, non-repeating decimals; square roots that are not perfect squares are examples. They are often used in math applications, even though these kinds of numbers are not used often in everyday life.

To define a rational number, you need to understand what a ratio is. An integer is a whole number, and a rational number is a ratio of two integers: the number on top divided by the nonzero number on the bottom. For example, if the two integers are two and five, the ratio is the rational number 2/5. There are also numbers, such as pi, which cannot be expressed as a ratio of integers at all.
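Since the worksheet's topic is comparing and ordering rational numbers, here is a small illustrative sketch (the function name and example values are mine) that compares two fractions exactly by cross-multiplication, with no floating-point error:

```python
def compare(p1, q1, p2, q2):
    """Compare p1/q1 with p2/q2 (q1, q2 > 0) by cross-multiplying.
    Returns -1, 0 or 1, like the sign of p1/q1 - p2/q2."""
    left, right = p1 * q2, p2 * q1
    return (left > right) - (left < right)

print(compare(2, 3, 3, 5))    # 2/3 > 3/5, so 1
print(compare(-1, 2, -2, 4))  # -1/2 == -2/4, so 0
```

Sorting a list of fractions with this comparison puts them in increasing order, which is exactly the "ordering" skill the worksheet practices.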
They can be made into a fraction

A rational number has a numerator and a denominator, and the denominator is not zero. This means it can be expressed as a fraction. Along with positive numerators and denominators, rational numbers can also have a negative value. The negative sign is placed to the left of the number, and the number's absolute value is its distance from zero. To illustrate, the repeating decimal 0.333333… is a rational number that can be written as 1/3.

Negative integers, too, can be made into fractions. Any fraction made up of integers is rational, as long as the denominator is not zero: an integer divided by 18,572 is a rational number, while -1/0 is not, because division by zero is undefined. Also, a terminating decimal is just another way of writing a rational number.
They can be ordered

Rational numbers can be put in order. In mathematics, each one occupies a unique position on the number line, so whenever we compare two rational numbers we can say which is larger. This holds true even though there are infinitely many rational numbers between any two distinct ones. In other words, comparisons between numbers make sense because the rationals are ordered.

In real life, measurements are usually expressed with rational numbers. To give the weight of a string of pearls, for instance, we can use a ratio: if one pearl weighs 1/5 of a gram, then fifteen pearls weigh 3 grams, and we can divide a total weight by 15 to find the weight of a single pearl, without ever needing an irrational value.
They can be expressed as a decimal

If you have ever tried to convert a rational number to its decimal form, you have probably met a repeating decimal. Every rational number has a decimal expansion that either terminates or repeats. The conversion also works in reverse: a repeating decimal can be turned back into a fraction by dividing the repeating block by 9, 99, 999, and so on. For example, 0.444… equals 4/9, and 0.272727… equals 27/99, which simplifies to 3/11.

A rational number can therefore be written in various forms, including as a fraction and as a decimal. To represent a rational number as a decimal, simply divide its numerator by its denominator; when the division ends, the result is a terminating decimal, and when the digits fall into a cycle, the result is a repeating decimal.
Advanced Physics Questions
This blog is dedicated to advanced physics questions which are culled from the total constellation of material generally presented at the undergraduate level in physics, and tested in the advanced
(physics) test of the GRE. This is to assist a number of interested readers, including some who plan to take the GRE. What I'll do is give the questions in this blog, and then follow it up with
answers in the next - doing a total of about three 'cycles' in all, to reflect all the aspects of the test. (All readers who've taken undergrad physics are invited to attempt the questions by the
1. The photo-electric threshold of tungsten is 2300A (where A denotes Angstrom). Estimate the energy E of the electrons ejected from the surface by ultraviolet light of wavelength 1800A.
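For readers who want to check their attempt at question 1 numerically, here is a short sketch (mine, not part of the original post) using Einstein's photoelectric equation, E = hc/λ − hc/λ₀:

```python
HC_EV_NM = 1239.84  # hc in eV·nm

def ejected_energy_ev(wavelength_nm, threshold_nm):
    """Kinetic energy (eV) of photoelectrons for light of the given
    wavelength, given the material's threshold wavelength."""
    return HC_EV_NM / wavelength_nm - HC_EV_NM / threshold_nm

E = ejected_energy_ev(180.0, 230.0)  # 1800 A and 2300 A, in nm
print(round(E, 2))  # about 1.5 eV
```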
2. Find the minimum energy in electron volts (eV) for an electron in a spark discharge to strip a sodium (Na) atom (Z = 11) of its last electron, assuming the other ten are already removed.
3. Let r^ be a position vector. Then find the divergence of r^.
4. Let r^ be a position vector and a be a constant vector. Then find the gradient of the scalar product: a*r^.
5. Consider the matrix:
(1.........0 .......0)
a) Find Tr, the trace of the matrix
b) Find the eigenvalues of the matrix.
6. Let U1 and U2 be orthonormal functions. Find the value of N which normalizes:
F = N(U1 + 2iU2)
7. Two men together support a uniform plank of wood. At the instant one of the men lets go of his end, what is the force the other man feels?
Questions 8-9:
Consider two coordinate systems whose origins are non-accelerating. Assume that one of these systems (denoted by primes) is rotating with constant angular velocity w with respect to the other which
is non-rotating. (Let i' be one of three orthogonal unit vectors in the rotating system)
8. Find the time derivative in the non-rotating system.
9. Find the second order time derivative in the non-rotating system.
10. If two strings, whose densities are 25 g/cm and 9 g/cm are joined together then find the reflection coefficient for the vibration waves.
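For question 10, assuming both strings are under the same tension so the wave impedance scales as the square root of the linear density, here is a small sketch (mine) of the power reflection coefficient:

```python
import math

def power_reflection(mu1, mu2):
    """Fraction of incident wave power reflected at a join between two
    strings under the same tension (impedance Z proportional to sqrt(mu))."""
    z1, z2 = math.sqrt(mu1), math.sqrt(mu2)
    return ((z1 - z2) / (z1 + z2)) ** 2

R = power_reflection(25.0, 9.0)  # densities in g/cm
print(R)  # 1/16 = 0.0625
```

Note that the power reflection coefficient is the same whichever side the wave comes from, since the expression is squared.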
11. A particle of mass m moves in a plane under the influence of a force F = -kr directed toward the origin. Set up a polar coordinate system (r, theta) to describe the motion of the particle, and thence or otherwise give the Lagrangian for the system.
12. According to relativistic mechanics, what is the actual velocity of electrons whose kinetic energy is 0.25 MeV?
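For question 12, one way to check a hand calculation numerically (my own sketch, using the standard relation gamma = 1 + KE/mc² with mc² = 0.511 MeV):

```python
import math

M_E_C2_MEV = 0.511  # electron rest energy in MeV

def beta_from_ke(ke_mev):
    """v/c for an electron of the given kinetic energy."""
    gamma = 1.0 + ke_mev / M_E_C2_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

beta = beta_from_ke(0.25)
print(round(beta, 3))  # about 0.741, i.e. v is roughly 0.74 c
```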
13. Compute the effective mass of a photon of wavelength 6000 A using the Einstein mass-energy equation.
Questions 14 and 15:
When the Sun is directly overhead, a given square meter surface of the Earth receives about 1300 Watts of radiant energy. Assume this energy is in the form of a plane-polarized monochromatic wave.
14. Find the rms (root mean square) magnitude of the electric field of the wave.
15. Find the rms magnitude of the magnetic field for the wave.
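For questions 14 and 15, a quick numerical check (my own sketch; it uses the time-averaged relation I = eps0 · c · E_rms² and B_rms = E_rms/c, with rounded constants):

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m
C = 2.998e8       # speed of light, m/s

def rms_fields(intensity):
    """(E_rms, B_rms) for a plane EM wave of the given intensity (W/m^2)."""
    e_rms = math.sqrt(intensity / (EPS0 * C))
    return e_rms, e_rms / C

E_rms, B_rms = rms_fields(1300.0)
print(round(E_rms))  # about 700 V/m
print(B_rms)         # about 2.3e-6 T
```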
(Answers to come in next instalment)
can't solve some of SC7 sample puzzles
Wed, 02/15/2012 - 11:00
i thought i had understood toroidal sudoku fully. i even solved the practice puzzle but when i tried to solve the competition puzzle i had a problem. i have a question first. in the description it
says "each of ten marked region contain the digits". i can't find ten. secondly, the three gray cells are the continuation of their adjacent 6 cells forming a 9 cell region, right?
tnx a lot for the reply. now i fully understand bent diagonal. i also solved the competition puzzle's non-consecutive
Yes, you're right. There are cells on the edge which belongs to both boxes (to the box on the left and right side or to the box on the top and bottom).
And when you try to count these boxes so there are 10 of them.
Just try to mark them with a cross (including the toroidal rule)...
The competition puzzle is very similar to the one in the booklet.
Sat, 01/28/2012 - 02:26
tnx for the complete and easy explanation!
Sat, 01/28/2012 - 01:12
i didn't understand how toroidal sudoku works.
Toroidal sudoku has numbers on the edge. These numbers are the same on the left side and on the right side, and on the top and on the bottom.
If we say that there are 11 columns and 11 rows then the number in cell (1,2) (row, column) is the same as in the cell (11,2).
And the same works for cell (2,1) which is the same like cell (2,11).
Is it clear now?
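In code terms, the wrap-around can be written with modular arithmetic; this is a small sketch of my own, for the 11-row presentation described above, where row 11 is the same as row 1:

```python
def wrap(index, size=11):
    """Map a 1-based row/column index on the toroidal grid back into
    1..size-1, so that index `size` refers to the same cell as index 1."""
    return (index - 1) % (size - 1) + 1

# cell (11, 2) is the same cell as (1, 2); (2, 11) the same as (2, 1)
print(wrap(11), wrap(2))  # 1 2
```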
i didn't understand BENT diagonal either. does it mean the numbers shouldn't be repeated along the four lines?
Bent diagonal sudoku contains 4 diagonals.
Example: c (1,1), c (2,2), c (3,3), c (4,4), c (5,5), c (6,4), c (7,3), c (8,2), c (9,1)
and there are three more of them
i don't know how to solve GT and Thermo- sudoku either.
For GT sudoku it is necessary first of all to try to find the numbers with the fewest conditions. These numbers are 1 and 9 (because no other number can be smaller than 1 and no other one can be greater than 9). When you have 1 and 9 you can try to find 2, 3... 8, 7...
Every puzzle is different, so it depends.
For thermo sudoku it is necessary to try to find the numbers within the thermometer (or to write pencilmarks there) and to decide what numbers this thermometer can contain.
Because there is the <> rule, it is necessary to use it.
i could find only 5 numbers in Non-consecutive.
In the practice puzzle? I can try to help you with the solution
Nam Nguyen Canh
In this paper, we consider a combinatorial optimization problem, the Maximum Weighted Clique Problem (MWCP), an NP-hard problem. The considered problem is first formulated as a binary constrained quadratic program and then reformulated as a Difference of Convex functions (DC) program. A global optimal solution is found by applying the DC Algorithm (DCA) in combination with …
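As a hedged illustration of the MWCP itself (not of the DCA method in the abstract; the function and the toy graph below are mine), a brute-force solver makes the problem statement concrete:

```python
from itertools import combinations

def max_weighted_clique(weights, edges):
    """Brute-force MWCP on a small graph. `weights` maps node -> weight,
    `edges` is a set of frozensets {u, v}. Exponential time; illustration only."""
    nodes = list(weights)
    best, best_w = [], 0.0
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            # a clique requires every pair in the subset to be adjacent
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                w = sum(weights[v] for v in subset)
                if w > best_w:
                    best, best_w = list(subset), w
    return best, best_w

w = {1: 2.0, 2: 3.0, 3: 1.0, 4: 4.0}
e = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3), (3, 4)]}
print(max_weighted_clique(w, e))  # the triangle {1, 2, 3} has weight 6.0
```

Methods like DCA exist precisely because this exhaustive search becomes infeasible beyond a few dozen nodes.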
An Outcome Space Algorithm for Minimizing the Product of Two Convex Functions over a Convex Set
This paper presents an outcome-space outer approximation algorithm for solving the problem of minimizing the product of two convex functions over a compact convex set in $\mathbb{R}^n$. The proposed algorithm is convergent, and computational experiences are reported.
LM 32.9 Summary Collection
32.9 Summary by Benjamin Crowell, from Light and Matter, licensed under the Creative Commons Attribution-ShareAlike license.
diffraction — the behavior of a wave when it encounters an obstacle or a nonuniformity in its medium; in general, diffraction causes a wave to bend around obstacles and make patterns of strong and
weak waves radiating out beyond the obstacle.
coherent — a light wave whose parts are all in phase with each other
Other Notation
wavelets — the ripples in Huygens' principle
Wave optics is a more general theory of light than ray optics. When light interacts with material objects that are much larger than one wavelength of the light, the ray model of light is
approximately correct, but in other cases the wave model is required.
Huygens' principle states that, given a wavefront at one moment in time, the future behavior of the wave can be found by breaking the wavefront up into a large number of small, side-by-side wave
peaks, each of which then creates a pattern of circular or spherical ripples. As these sets of ripples add together, the wave evolves and moves through space. Since Huygens' principle is a purely
geometrical construction, diffraction effects obey a simple scaling rule: the behavior is unchanged if the wavelength and the dimensions of the diffracting objects are both scaled up or down by the
same factor. If we wish to predict the angles at which various features of the diffraction pattern radiate out, scaling requires that these angles depend only on the unitless ratio `lambda/d`,
where `d` is the size of some feature of the diffracting object.
Double-slit diffraction is easily analyzed using Huygens' principle if the slits are narrower than one wavelength. We need only construct two sets of ripples, one spreading out from each slit. The
angles of the maxima (brightest points in the bright fringes) and minima (darkest points in the dark fringes) are given by the equation

`d sintheta = mlambda`,

where `d` is the center-to-center spacing of the slits, and `m` is an integer at a maximum or an integer plus `1/2` at a minimum.
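Taking the double-slit relation `d sintheta = mlambda`, the maxima angles can be listed numerically; this is an illustrative sketch of my own (the wavelength and spacing are made-up values), not part of the book:

```python
import math

def maxima_angles(wavelength, d):
    """Angles (degrees) of the double-slit maxima d*sin(theta) = m*lambda,
    for all orders m with m*lambda/d <= 1."""
    angles = []
    m = 0
    while m * wavelength / d <= 1:
        angles.append(math.degrees(math.asin(m * wavelength / d)))
        m += 1
    return angles

# e.g. 600 nm light through slits spaced 2000 nm apart: orders m = 0..3
print(maxima_angles(600e-9, 2000e-9))
```

The loop stops once `m*lambda/d` exceeds 1, reflecting the physical fact that `sintheta` cannot exceed 1: higher orders simply do not exist.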
If some feature of a diffracting object is repeated, the diffraction fringes remain in the same places, but become narrower with each repetition. By repeating a double-slit pattern hundreds or
thousands of times, we obtain a diffraction grating.
A single slit can produce diffraction fringes if it is larger than one wavelength. Many practical instances of diffraction can be interpreted as single-slit diffraction, e.g., diffraction in
telescopes. The main thing to realize about single-slit diffraction is that it exhibits the same kind of relationship between `lambda`, `d`, and angles of fringes as in any other type of diffraction.
Homework Problems
`sqrt` A computerized answer check is available online.
`int` A problem that requires calculus.
`***` A difficult problem
1. Why would blue or violet light be the best for microscopy?
2. Match gratings A-C with the diffraction patterns 1-3 that they produce. Explain.
3. The beam of a laser passes through a diffraction grating, fans out, and illuminates a wall that is perpendicular to the original beam, lying at a distance of `2.0` m from the grating. The beam is
produced by a helium-neon laser, and has a wavelength of `694.3` nm. The grating has `2000` lines per centimeter. (a) What is the distance on the wall between the central maximum and the maxima
immediately to its right and left? (b) How much does your answer change when you use the small-angle approximations `theta approx sintheta approx tantheta`? `sqrt`
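One way to check an answer to problem 3 numerically (a sketch of my own, not the book's solution):

```python
import math

d = 1e-2 / 2000   # slit spacing: 2000 lines per cm, in meters
lam = 694.3e-9    # wavelength in meters
L = 2.0           # distance from grating to wall, meters

sin_t = lam / d                           # first-order maximum, m = 1
x_exact = L * math.tan(math.asin(sin_t))  # exact geometry
x_small = L * sin_t                       # small-angle: tan(theta) ~ sin(theta)

print(round(x_exact * 100, 1))            # distance on the wall, in cm
print(round((x_exact - x_small) * 1000, 1))  # part (b): difference, in mm
```

The angle here (about 8 degrees) is small enough that the approximation changes the answer by only a few millimeters out of roughly 28 cm.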
4. When white light passes through a diffraction grating, what is the smallest value of `m` for which the visible spectrum of order `m` overlaps the next one, of order `m+1`? (The visible spectrum
runs from about `400` nm to about `700` nm.)
5. Ultrasound, i.e., sound waves with frequencies too high to be audible, can be used for imaging fetuses in the womb or for breaking up kidney stones so that they can be eliminated by the body.
Consider the latter application. Lenses can be built to focus sound waves, but because the wavelength of the sound is not all that small compared to the diameter of the lens, the sound will not be
concentrated exactly at the geometrical focal point. Instead, a diffraction pattern will be created with an intense central spot surrounded by fainter rings. About `85%` of the power is concentrated
within the central spot. The angle of the first minimum (surrounding the central spot) is given by `sintheta=1.22lambda"/"b`, where `b` is the diameter of the lens. This is similar to the corresponding
equation for a single slit, but with a factor of `1.22` in front which arises from the circular shape of the aperture. Let the distance from the lens to the patient's kidney stone be `L=20` cm. You
will want `f>20` kHz, so that the sound is inaudible. Find values of `b` and `f` that would result in a usable design, where the central spot is small enough to lie within a kidney stone 1 cm in diameter.
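As a rough way to explore this design space numerically, one can evaluate the central-spot radius at the stone. The speed of sound in soft tissue (about 1540 m/s) is an assumed value not given in the problem, and the trial lens diameter and frequency are arbitrary.

```python
# Hypothetical exploration of the ultrasound-focusing design space.
# The sound speed in soft tissue is an assumption, not stated in the
# problem; the lens diameter and frequency below are arbitrary trials.
v = 1540.0  # m/s, assumed speed of sound in tissue
L = 0.20    # m, lens-to-stone distance from the problem

def spot_radius(b, f):
    """Radius of the central diffraction spot at distance L, using the
    small-angle form of sin(theta) = 1.22 * lambda / b."""
    wavelength = v / f
    return L * 1.22 * wavelength / b

# A trial design: a 10 cm lens at 100 kHz (well above the audible limit).
r = spot_radius(b=0.10, f=100e3)  # spot radius in meters
```

Raising the frequency or widening the lens both shrink the spot, so one can iterate on `b` and `f` until the spot fits inside the stone.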
6. For star images such as the ones in figure y, estimate the angular width of the diffraction spot due to diffraction at the mouth of the telescope. Assume a telescope with a diameter of `10` meters
(the largest currently in existence), and light with a wavelength in the middle of the visible range. Compare with the actual angular size of a star of diameter `10^9` m seen from a distance of `10^
17` m. What does this tell you?
7. Under what circumstances could one get a mathematically undefined result by solving the double-slit diffraction equation for `theta`? Give a physical interpretation of what would actually be observed.
8. When ultrasound is used for medical imaging, the frequency may be as high as `5-20` MHz. Another medical application of ultrasound is for therapeutic heating of tissues inside the body; here, the
frequency is typically `1-3` MHz. What fundamental physical reasons could you suggest for the use of higher frequencies for imaging?
9. The figure below shows two diffraction patterns, both made with the same wavelength of red light. (a) What type of slits made the patterns? Is it a single slit, double slits, or something else?
Explain. (b) Compare the dimensions of the slits used to make the top and bottom pattern. Give a numerical ratio, and state which way the ratio is, i.e., which slit pattern was the larger one.
10. The figure below shows two diffraction patterns. The top one was made with yellow light, and the bottom one with red. Could the slits used to make the two patterns have been the same?
11. The figure below shows three diffraction patterns. All were made under identical conditions, except that a different set of double slits was used for each one. The slits used to make the top
pattern had a center-to-center separation `d=0.50` mm, and each slit was `w=0.04` mm wide. (a) Determine `d` and `w` for the slits used to make the pattern in the middle. (b) Do the same for the
slits used to make the bottom pattern.
12. The figure shows a diffraction pattern made by a double slit, along with an image of a meter stick to show the scale. The slits were `146` cm away from the screen on which the diffraction pattern
was projected. The spacing of the slits was `0.050` mm. What was the wavelength of the light? `sqrt`
13. The figure shows a diffraction pattern made by a double slit, along with an image of a meter stick to show the scale. Sketch the diffraction pattern from the figure on your paper. Now consider
the four variables in the equation `lambda"/"d=sintheta"/"m`. Which of these are the same for all five fringes, and which are different for each fringe? Which variable would you naturally use in
order to label which fringe was which? Label the fringes on your sketch using the values of that variable.
14. Figure s on p. 878 shows the anatomy of a jumping spider's principal eye. The smallest feature the spider can distinguish is limited by the size of the receptor cells in its retina. (a) By making
measurements on the diagram, estimate this limiting angular size in units of minutes of arc (`60` minutes = `1` degree). `sqrt` (b) Show that this is greater
than, but roughly in the same ballpark as, the limit imposed by diffraction for visible light. `sqrt`
Evolution is a scientific theory that makes testable predictions, and if observations contradict its predictions, the theory can be disproved. It would be maladaptive for the spider to have retinal
receptor cells with sizes much less than the limit imposed by diffraction, since it would increase complexity without giving any improvement in visual acuity. The results of this problem confirm
that, as predicted by Darwinian evolution, this is not the case. Work by M.F. Land in 1969 shows that in this spider's eye, aberration is a somewhat bigger effect than diffraction, so that the size
of the receptors is very nearly at an evolutionary optimum.
Exercise 32A: Double-source interference
1. Two sources separated by a distance `d=2 cm` make circular ripples with a wavelength of `lambda=1` cm. On a piece of paper, make a life-size drawing of the two sources in the default setup, and
locate the following points:
A. The point that is `10` wavelengths from source #1 and `10` wavelengths from source #2.
B. The point that is `10.5` wavelengths from #1 and `10.5` from #2.
C. The point that is `11` wavelengths from #1 and `11` from #2.
D. The point that is `10` wavelengths from #1 and `10.5` from #2.
E. The point that is `11` wavelengths from #1 and `11.5` from #2.
F. The point that is `10` wavelengths from #1 and `11` from #2.
G. The point that is `11` wavelengths from #1 and `12` from #2.
You can do this either using a compass or by putting the next page under your paper and tracing. It is not necessary to trace all the arcs completely, and doing so is unnecessarily time-consuming;
you can fairly easily estimate where these points would lie, and just trace arcs long enough to find the relevant intersections.
What do these points correspond to in the real wave pattern?
2. Make a fresh copy of your drawing, showing only point F and the two sources, which form a long, skinny triangle. Now suppose you were to change the setup by doubling `d`, while leaving `lambda`
the same. It's easiest to understand what's happening on the drawing if you move both sources outward, keeping the center fixed. Based on your drawing, what will happen to the position of point F
when you double `d`? Measure its angle with a protractor.
3. What would happen if you doubled both `lambda` and `d` compared to the standard setup?
4. Combining the ideas from parts 2 and 3, what do you think would happen to your angles if, starting from the standard setup, you doubled `lambda` while leaving `d` the same?
5. Suppose `lambda` was a millionth of a centimeter, while `d` was still as in the standard setup. What would happen to the angles? What does this tell you about observing diffraction of light?
Exercise 32B: Single-slit diffraction
• rulers
• computer with web browser
The following page is a diagram of a single slit and a screen onto which its diffraction pattern is projected. The class will make a numerical prediction of the intensity of the pattern at the
different points on the screen. Each group will be responsible for calculating the intensity at one of the points. (Either 11 groups or six will work nicely -- in the latter case, only points a, c,
e, g, i, and k are used.) The idea is to break up the wavefront in the mouth of the slit into nine parts, each of which is assumed to radiate semicircular ripples as in Huygens' principle. The
wavelength of the wave is 1 cm, and we assume for simplicity that each set of ripples has an amplitude of 1 unit when it reaches the screen.
1.For simplicity, let's imagine that we were only to use two sets of ripples rather than nine. You could measure the distance from each of the two points inside the slit to your point on the screen.
Suppose the distances were both `25.0` cm. What would be the amplitude of the superimposed waves at this point on the screen?
Suppose one distance was `24.0` cm and the other was `25.0` cm. What would happen?
What if one was `24.0` cm and the other was `26.0` cm?
What if one was `24.5` cm and the other was `25.0` cm?
In general, what combinations of distances will lead to completely destructive and completely constructive interference?
Can you estimate the answer in the case where the distances are `24.7` and `25.0` cm?
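These questions can be checked against the closed-form result for two superimposed unit-amplitude sinusoids. The formula `2|cos(delta/2)|` for the combined amplitude is standard trigonometry, not something given in the exercise.

```python
import math

WAVELENGTH = 1.0  # cm, as in the exercise

def combined_amplitude(d1, d2):
    """Amplitude of two superimposed unit-amplitude ripples that have
    traveled distances d1 and d2.  Their phase difference is
    delta = 2*pi*(d2 - d1)/lambda, and two unit sinusoids differing in
    phase by delta add to amplitude 2*|cos(delta/2)|."""
    delta = 2 * math.pi * (d2 - d1) / WAVELENGTH
    return abs(2 * math.cos(delta / 2))

combined_amplitude(25.0, 25.0)  # 2.0  (fully constructive)
combined_amplitude(24.0, 25.0)  # 2.0  (whole-wavelength difference)
combined_amplitude(24.5, 25.0)  # 0.0  (half-wavelength: destructive)
combined_amplitude(24.7, 25.0)  # ~1.18 (intermediate case)
```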
2. Although it is possible to calculate mathematically the amplitude of the sine wave that results from superimposing two sine waves with an arbitrary phase difference between them, the algebra is
rather laborious, and it becomes even more tedious when we have more than two waves to superimpose. Instead, one can simply use a computer spreadsheet or some other computer program to add up the sine
waves numerically at a series of points covering one complete cycle. This is what we will actually do. You just need to enter the relevant data into the computer, then examine the results and pick
off the amplitude from the resulting list of numbers. You can run the software through a web interface at http://lightandmatter.com/cgi-bin/diffraction1.cgi.
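The numerical summation described above can also be sketched as a phasor sum, which gives the same amplitude as adding the sine waves point by point over one cycle; the distances below are made up for illustration.

```python
import cmath
import math

WAVELENGTH = 1.0  # cm

def superposed_amplitude(distances):
    """Amplitude of N unit-amplitude ripples at a screen point, given
    each ripple's travel distance.  Summing the phasors
    exp(i * 2*pi * d / lambda) is equivalent to adding the sine waves
    numerically over one full cycle and reading off the amplitude."""
    total = sum(cmath.exp(2j * math.pi * d / WAVELENGTH) for d in distances)
    return abs(total)

# All nine path lengths equal: fully constructive, amplitude 9.
superposed_amplitude([25.0] * 9)

# Paths spread evenly over one wavelength: nearly complete cancellation.
superposed_amplitude([25.0 + i / 9 for i in range(9)])
```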
3. Measure all nine distances to your group's point on the screen, and write them on the board - that way everyone can see everyone else's data, and the class can try to make sense of why the results
came out the way they did. Determine the amplitude of the combined wave, and write it on the board as well.
The class will discuss why the results came out the way they did.
Exercise 32C: Diffraction of light
• slit patterns, lasers, straight-filament bulbs
station 1
You have a mask with a bunch of different double slits cut out of it. The values of w and d are as follows:
┃ pattern A │ w=`0.04` mm │ d=`0.250` mm ┃
┃ pattern B │ w=`0.04` mm │ d=`0.500` mm ┃
┃ pattern C │ w=`0.08` mm │ d=`0.250` mm ┃
┃ pattern D │ w=`0.08` mm │ d=`0.500` mm ┃
Predict how the patterns will look different, and test your prediction. The easiest way to get the laser to point at different sets of slits is to stick folded up pieces of paper in one side or the
other of the holders.
station 2
This is just like station 1, but with single slits:
┃ pattern A │ w=`0.02` mm ┃
┃ pattern B │ w=`0.04` mm ┃
┃ pattern C │ w=`0.08` mm ┃
┃ pattern D │ w=`0.16` mm ┃
Predict what will happen, and test your predictions. If you have time, check the actual numerical ratios of the w values against the ratios of the sizes of the diffraction patterns.
station 3
This is like station 1, but the only difference among the sets of slits is how many slits there are:
┃ pattern A │ double slit ┃
┃ pattern B │ `3` slits ┃
┃ pattern C │ `4` slits ┃
┃ pattern D │ `5` slits ┃
station 4
Hold the diffraction grating up to your eye, and look through it at the straight-filament light bulb. If you orient the grating correctly, you should be able to see the `m=1` and `m=-1` diffraction
patterns to the left and right. If you have it oriented the wrong way, they'll be above and below the bulb instead, which is inconvenient because the bulb's filament is vertical. Where is the `m=0`
fringe? Can you see `m=2`, etc.?
Station 5 has the same equipment as station 4. If you're assigned to station 5 first, you should actually do activity 4 first, because it's easier.
station 5
Use the transformer to increase and decrease the voltage across the bulb. This allows you to control the filament's temperature. Sketch graphs of intensity as a function of wavelength for various
temperatures. The inability of the wave model of light to explain the mathematical shapes of these curves was historically one of the reasons for creating a new model, in which light is both a
particle and a wave.
32.9 Summary by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license.
Innovation: How Deep Is That White Stuff? - GPS World
Using GPS Multipath for Snow-Depth Estimation
By Felipe G. Nievinski and Kristine M. Larson
FRINGES. No, I’m not talking about the latest celebrity hairstyles nor the canopy of an American doorless, four-wheeled carriage from yesteryear (think Oklahoma!). I’m talking about interference
fringes. But there is a connection to these other uses of the word fringe as we’ll see. You’ve all seen interference fringes at your local gas station, typically after it has just rained. They are
the alternating bands of color we perceive when looking at a gasoline or oil slick in a puddle of water. They are caused by the white light from the Sun or artificial lighting reflected from the top
surface of the slick and that from the bottom surface at the slick-water interface combining or interfering with each other at our eyeballs. The two sets of light waves arrive slightly out of phase
with each other, and depending on the wavelengths of the reflected light and our angle of view, produce the colorful fringes. If the incident light was monochromatic, consisting of a single frequency
or wavelength, then we would perceive just alternating bright and dark bands. The bright bands result from constructive interference when the phase difference is a near a multiple of 2π whereas the
dark bands result from destructive interference when the difference is near an odd multiple of π.
Interference fringes had been seen long before the invention of the automobile. They are clearly seen on soap bubbles and the iridescent colors of peacock feathers, Morpho butterflies, and jewel
beetles are also due to the interference phenomenon rather than pigmentation. Sir Isaac Newton did experiments on interference fringes (amongst other things) and tried to explain their existence —
wrongly, it turned out. But he did coin the term fringes since they resembled the decorative fringe sometimes used on clothing, drapery, and, yes, surrey canopies.
It was the English polymath, Thomas Young, who, in 1801, first demonstrated interference as a consequence of the wave-nature of light with his famous double-slit experiment. You may have replicated
his experiment in a high-school physics class. I did and I think I did it again as an undergraduate student taking a course in optics. Already by that point I was aiming for a career in physics or
space science but I didn’t know that as a graduate student I would do research involving interference fringes. But not using light waves.
My research involved the application of very long baseline interferometry or VLBI to geodesy. VLBI had been developed by radio astronomers to better understand the structure of quasars and other
esoteric celestial objects. At either ends of a baseline connecting large radio telescopes, perhaps stretching between continents, the quasar signals were recorded on magnetic tape and precisely
registered using atomic clocks. When the tapes were played back and the signals aligned, one obtained interference fringes as peaks and troughs in an analog or digital waveform. Computer analysis of
these fringes not only provided information on the structure of the observed radio source but also on the distance between the radio telescopes — eventually accurate enough to measure continental drift.
But what has all of this got to do with GPS? In this month’s column, we look at a technique that uses fringes generated by signals arriving at an antenna directly from GPS satellites and those
reflected by snow surrounding the antenna to measure its depth and how it varies over time. GPS for measuring snow depth; who would have thought?
“Innovation” is a regular feature that discusses advances in GPS technology and its applications as well as the fundamentals of GPS positioning. The column is coordinated by Richard Langley of the
Department of Geodesy and Geomatics Engineering, University of New Brunswick. He welcomes comments and topic ideas.
Snowpacks are a vital resource for human existence on our planet. They provide reservoirs of fresh water, storing solid precipitation and delaying runoff. One sixth of the world population depends on
this resource. Both scientists and water-supply managers need to know how much fresh water is stored in snowpack and how fast it is being released as a result of melting.
Snow monitoring from space is currently under investigation by both NASA and ESA. Greatly complementary to such spaceborne sensors are automated ground-based methods; the latter not only serve as
essential independent validation and calibration for the former, but are also valuable for climate studies and flood/drought monitoring on their own. It is desirable for such estimates to be provided
at an intermediary scale, between point-like in situ samples and wider area pixels.
In the last decade, GPS multipath reflectometry (GPS-MR), also known as GPS interferometric reflectometry and GPS interference-pattern technique, has been proposed for monitoring snow. This method
tracks direct GPS signals, those that travel directly to an antenna, that have interfered with a coherently reflected signal, turning the GPS unit into an interferometer (see FIGURE 1). Its main
variant is based on signal-to-noise ratio (SNR) measurements, although GPS-MR is also possible with carrier-phase and pseudorange observables. Data are collected at existing GPS base stations that
employ commercial-off-the-shelf receivers and antennas in a conventional, antenna-upright setup. Other researchers have used a custom antenna and/or a dedicated setup, with the antenna tipped for
enhanced multipath reception.
In this article, we summarize the SNR-based GPS-MR technique as applied to snow sensing using geodetic instruments. This forward/inverse approach for GPS-MR is new in that it capitalizes on known
information about the antenna response and the physics of surface scattering to aid in retrieving the unknown snow conditions in the site surroundings. It is a statistically rigorous retrieval
algorithm, agreeing to first order with the simpler original methodology, which is retained here for the inversion bootstrapping. The first part of the article describes the retrieval algorithm,
while the second part provides validation at a representative site over an extended period of time.
Physical Forward Model
SNR observations are formulated as SNR = P[s]/P[n]. In the denominator, we have the noise power, P[n], here taken as a constant, based on nominal values for the noise power spectral density and the
noise bandwidth. The numerator is composite signal power:
Its incoherent component is the sum of the respective direct and reflected powers (although direct incoherent power is negligible). In contrast, the coherent composite signal power follows from the
complex sum of direct and reflection average voltages (not to be confused with the electromagnetic propagating fields, which neglect the receiving antenna response and also the receiver tracking process).
It is expressed in terms of the coherent direct and reflected powers, P[d] and P[r], as well as the interferometric phase, φ[i]: P[s] = P[d] + P[r] + 2√(P[d]P[r]) cos φ[i]. The interferometric phase
amounts to the reflection excess phase with respect to the direct signal.
We decompose observations, SNR = tSNR + dSNR, into a trend, tSNR = (P[d] + P[r])/P[n], over which interference fringes are superimposed: dSNR = 2√(P[d]P[r]) cos(φ[i])/P[n].
The direct or line-of-sight power is formulated as
where is the direction-dependent right-hand circularly polarized (RHCP) power component incident on an isotropic antenna; the left-handed circularly polarized (LHCP) component is negligible. The
direct antenna gain, , is obtained evaluating the antenna pattern in the satellite direction and with RHCP polarization.
The reflection power,
is defined starting with the same incident isotropic power, , as in the direct power. It ends with a coherent power attenuation factor,
where θ is the angle of incidence (with respect to the surface normal), k = 2π/λ, is the wave number, and λ = 24.4 centimeters is the carrier wavelength for the civilian GPS signal on the L2
frequency (L2C). This polarization-independent factor accounts only for small-scale residual height above and below a large-scale trend surface. The former/latter results from high-/low-pass
filtering the actual surface heights using the first Fresnel zone as a convolution kernel, roughly speaking. Small-scale roughness is parameterized in terms of an effective surface standard deviation
s (in meters); its scattering response is modeled based on the theories of random surfaces, except that the theoretical ensemble average is replaced by a sensing spatial average. Large-scale
deterministic undulations could be modeled, but their impact on snow depth is canceled to first-order by removing bare-ground reflector heights.
At the core of , we have coupled surface/antenna reflection coefficients, , producing respectively RHCP and LHCP fields (under the assumption of a RHCP incident field). These terms include antenna
response power gain and phase patterns, evaluated in the reflection direction, and separately for each polarization. The surface response is represented by complex-valued Fresnel coefficients for
cross- and same-sense circular polarization, respectively. The medium is assumed to be homogeneous (that is, a semi-infinite half-space). Material models provide the complex permittivity, which
drives the Fresnel coefficients.
The interferometric phase reads:
The first term accounts for the surface and antenna properties of the reflection, as above. The last one is the direct phase contribution, which amounts to only the RHCP antenna phase-center
variation evaluated in the satellite direction. The majority of the components present in the direct RHCP phase (such as receiver and satellite clock states, the bulk of atmospheric propagation
delays, and so on) are also present in the reflection phase, so they cancel out in forming the difference.
At the core of the interferometric phase, we have the geometric component, φ[I] = kτ[i], the product of the wave number and the interferometric propagation delay. Assuming a locally horizontal
surface, the latter is simply τ[i] = 2H[A] sin e, in terms of the satellite elevation angle, e, and an a priori reflector height, H[A]. Snow depth will be measured in terms of changes in reflector height.
The physical forward model, based only on a priori information, can then be summarized as:
where interferometric power and phase are, respectively:
In all of these terms the pseudorandom-noise-code modulation impressed on the carrier wave can be safely neglected, given the small interferometric delay and Doppler shift at grazing incidence,
stationary surface/receiver conditions, and short antenna installations.
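To make the fringe geometry concrete, here is a minimal sketch of the dominant interference term for a horizontal reflector. The reflector height and amplitude are illustrative assumptions; the full forward model also includes the antenna gain, Fresnel coefficients, and roughness attenuation described above.

```python
import math

WAVELENGTH = 0.244  # m, GPS L2 carrier
H = 2.0             # m, assumed reflector height (antenna above the surface)

def dsnr_fringe(elev_deg, amplitude=1.0, phase_shift=0.0):
    """Detrended SNR fringe for a horizontal reflector: the geometric
    interferometric phase is k * tau_i = (4*pi*H/lambda) * sin(e)."""
    phi = 4 * math.pi * H / WAVELENGTH * math.sin(math.radians(elev_deg))
    return amplitude * math.cos(phi + phase_shift)

# Fringes oscillate in sin(e) with frequency 2*H/lambda; a taller
# antenna (or shallower snow) produces faster oscillations.
samples = [dsnr_fringe(e / 10) for e in range(50, 250)]  # 5 to 25 degrees
```

The fact that the fringe is sinusoidal in sin(e), with frequency proportional to the reflector height, is what makes snow depth recoverable from the SNR data.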
Parameterization of Unknowns
There are errors in the nominal values assumed for the physical parameters of the model (permittivity, surface roughness, reflector height, and so on). Ideally we would estimate separate corrections
for each one, but unfortunately many are linearly dependent or nearly so. Because of this dependency, we have kept physical parameters fixed to their optimal a priori values, and have estimated a few
biases. Each bias is an amalgamation of corrections for different physical effects. In a later stage, we rely on multiple independent bias estimates (such as for successive days) to try and separate
the physical sources.
Each satellite track is inverted independently. A track is defined by partitioning the data by individual satellite and then into ascending and descending portions, splitting the period between the
satellite’s rise and set at the near-zenith culmination. Each satellite track has a duration of ~1–2 hours. This configuration normally offers a sufficient range of elevation angles, unless the
satellite reaches culmination too low in the sky (less than about 20°), in which case the track is discarded. In seeking a balance between under- and over-fitting, between an insufficient and an
excessive number of parameters, we estimate the following vector of unknown parameters:
FIGURE 2 shows the effect of the constant and linear biases on the SNR observations. Reflector height bias, H[B] , changes the number of oscillations; phase shift, φ[B] , displaces the oscillations
along the horizontal axis; reflection power, , affects the depth of fades; zeroth-order noise power, , shifts the observations up or down as a whole; and first-order noise power, , tilts
the SNR curve. A good parameterization yields observation sensitivity curves as unique as possible for each parameter.
The forward model, now including the biases, can be summarized as follows:
where the modified interferometric power and phase are given by:
The total reflector height, H = H[A] – H[B] (a priori value minus unknown bias), is to be interpreted as an effective value that best fits measurements, which includes snow and other components.
Bootstrapping Parameter Priors. Biases and SNR observations are related non-linearly through the forward model. Therefore, a preliminary global optimization is needed, without which the
subsequent final local optimization will not necessarily converge to the optimal solution.
SNR observations would trace out a perfect sinusoid curve in the case of an antenna with isotropic gain and spherical phase pattern, surrounded by a smooth, horizontal, and infinite surface (free of
small-scale roughness, large-scale undulations, and edges), made of perfectly electrically conducting material, and illuminated by constant incident power. Thus, in such an idealized case, SNR could
be described exactly by constant reflector height, phase shift, amplitude, and mean values.
As the measurement conditions become more complicated, the SNR data start to deviate from a pure sinusoid. Yet a polynomial/spectral decomposition is often adequate for bootstrapping purposes.
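A minimal sketch of that kind of spectral bootstrapping, using synthetic noise-free fringes on a uniform grid of sin(e); the true height, search range, and grid spacing are all made-up values.

```python
import cmath
import math

WAVELENGTH = 0.244  # m, GPS L2
TRUE_H = 1.7        # m, synthetic "truth" for this sketch only

# Synthetic detrended SNR on a uniform grid of sin(elevation).
x = [0.1 + 0.3 * k / 199 for k in range(200)]            # sin(e) in [0.1, 0.4]
d = [math.cos(4 * math.pi * TRUE_H / WAVELENGTH * s) for s in x]

def bootstrap_height(x, d, h_lo=0.5, h_hi=5.0, steps=450):
    """Coarse spectral scan over candidate reflector heights: the
    candidate whose fringe frequency 2*H/lambda best matches the data
    (largest projection magnitude) is taken as the prior."""
    best_h, best_p = h_lo, -1.0
    for k in range(steps + 1):
        h = h_lo + (h_hi - h_lo) * k / steps
        p = abs(sum(dk * cmath.exp(-4j * math.pi * h / WAVELENGTH * s)
                    for s, dk in zip(x, d)))
        if p > best_p:
            best_h, best_p = h, p
    return best_h

h_est = bootstrap_height(x, d)  # lands near TRUE_H
```

In practice a Lomb-Scargle periodogram plays this role for unevenly sampled data, and a low-order polynomial removes the trend tSNR before the scan.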
Statistical Inverse Model Formulation
Based on the preliminary values for the unknown parameters vector and other known (or assumed) values, we run the forward model to obtain simulated observations. We form pre-fit residuals comparing
the model values to SNR measurements collected at varying satellite elevation angles (separately for each track). Residuals serve to retrieve parameter corrections, such that the sum of squared
post-fit residuals is minimized. This non-linear least squares problem is solved iteratively using both a functional model and a stochastic model. The functional modeling includes a Jacobian matrix
of partial derivatives, which represents the sensitivity of observations to parameter changes where the partial derivatives are defined element-wise. Instead of deriving analytical expressions, we
evaluate them numerically, via finite differencing. The stochastic model specifies the uncertainty and correlation expected in the residuals. Their a priori covariance matrix modifies the objective
function being minimized.
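The finite-difference evaluation of the Jacobian can be sketched generically. The toy linear forward model at the end is only there to check the result; it is not the GPS-MR forward model.

```python
def numerical_jacobian(forward, params, h=1e-6):
    """Jacobian of a forward model by central finite differences:
    J[i][j] = d(observation_i)/d(parameter_j).  `forward` maps a list of
    parameter values to a list of modeled observations."""
    base = forward(params)
    jac = [[0.0] * len(params) for _ in base]
    for j in range(len(params)):
        plus = list(params)
        plus[j] += h
        minus = list(params)
        minus[j] -= h
        fp, fm = forward(plus), forward(minus)
        for i in range(len(base)):
            jac[i][j] = (fp[i] - fm[i]) / (2 * h)
    return jac

# Toy linear forward model with a known Jacobian, for checking only:
model = lambda p: [p[0] * t + p[1] for t in (0.0, 1.0, 2.0)]
J = numerical_jacobian(model, [2.0, -1.0])
# J approximates [[0, 1], [1, 1], [2, 1]]
```

Each iteration of the non-linear least squares then solves the linearized system built from this Jacobian and the weighted residuals.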
Directional Dependence
It is important to know at which elevation angles the parameter estimates are best determined. Here, we focus on the phase parameters instead of reflection power or noise power parameters.
We can utilize the estimated reflector height and phase shift to evaluate the full phase bias function over varying elevation angles. Similarly, we can extract the corresponding 2-by-2 portion of the
parameters’ a posteriori covariance matrix, containing the uncertainty for reflector height and for phase shift, as well as their correlation, which is then propagated to obtain the full phase
uncertainty (see FIGURE 3).
The uncertainty attains a clear minimum versus elevation angle. The least-uncertainty elevation angle pinpoints the observation direction where reflector height and phase shift are best determined
(in combined form, not individually). The azimuth and epoch coinciding with the peak elevation angle act as track tags, later used for clustering similar tracks and analyzing their time series of
If we normalize phase uncertainty by its value at the peak elevation angle, then plot such sensing weights (between 0 and 1) versus the radial or horizontal distance to the center of the first
Fresnel zone at each elevation angle, we obtain FIGURE 4. It can be interpreted as the reflection footprint, indicating the importance of varying distances, with a longer far tail and a shorter near
tail (respectively regions beyond and closer than the peak distance). The implications for in situ data collection are clear: one should sample more intensely near the peak distance (about 15 meters)
and less so in the immediate vicinity of the GPS antenna, tapering it off gradually away from the antenna. As a caveat, these conclusions are not necessarily valid for antenna setups other than the
one considered here.
We now examine the snow-depth retrievals from the GPS multipath retrieval algorithm and assess both the precision and accuracy of the method. Multiple metrics have been developed to assess the
quality of the results. The accuracy of the method has been evaluated by comparing with in situ data over a multi-year period. Three field sites were chosen to highlight different limitations in the
method, both in terms of terrain and forest cover: grassland, alpine, and forested. We will look at the forested site in some detail.
Satellite Coverage and Track Clustering. All GPS-MR retrievals reported here are based on the newer GPS L2C signal. Of the approximately 30 GPS satellites in service, 8-10 L2C satellites were
available between 2009 and 2012 (8, 9, and 10 satellites at the end of 2009, 2010, and 2011, respectively). Satellite observations were partitioned into ascending and descending portions, yielding
approximately twenty unique tracks per day at a site with good sky visibility. GPS orbits are highly repeatable in azimuth, with deviations at the few-degree range over a year, translating into
~50-100-centimeter azimuthal displacement of the reflecting area (corresponding to the first Fresnel zone at 10°-15° elevation angle for a 2-meter high antenna). This repeatability permits clustering
daily retrievals by azimuth. It also allows the simplification that estimated snow-free reflector heights are fairly consistent from day to day, facilitating the isolation of the varying snow depth
during the snow-covered period.
For a given track, its revisit time is also repeatable, amounting to practically one sidereal day. The deficit in time relative to a calendar day results in the track time of the day receding ~4
minutes and 6 seconds every day. This slow but steady accumulation eventually makes the time of day return to its starting value after about one year. As all GPS satellites drift approximately at the
same rate, the time between successive tracks remains nearly repeatable. Its reciprocal, the sampling rate, has a median equal to approximately one track per hour, with a low value of one track
within two hours and a high of one track within 15 minutes; both extremes occur every day, with low-rate idle periods interspersed with high-rate bursts. The time of the day reduced to a fixed day
(such as January 1, 2000) could also be used to cluster tracks. Neighboring clusters, which are close in azimuth and/or in reduced time of the day, are expected to be more comparable, as they sample
similar conditions and are subject to similar errors.
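The orbit-repeat arithmetic above is easy to verify. A short calculation, using the 4-minute-6-second daily recession quoted in the text, recovers both the repeat period and the roughly one-year cycle:

```python
# Daily recession of a GPS track's time of day, from the text: 4 min 6 s.
SOLAR_DAY_S = 86400.0              # mean solar day, in seconds
DRIFT_S = 4 * 60 + 6               # 246 s of recession per day

repeat_period_s = SOLAR_DAY_S - DRIFT_S     # track repeat period, ~1 sidereal day
days_to_full_cycle = SOLAR_DAY_S / DRIFT_S  # days until the time of day recurs

print(f"repeat period: {repeat_period_s:.0f} s")               # 86154 s
print(f"full cycle: {days_to_full_cycle:.0f} days (~1 year)")  # 351 days
```

The repeat period comes out close to one sidereal day (86,164 s), and the time of day cycles back to its starting value in about 351 days, consistent with the "about one year" stated above.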
Observations. FIGURE 5 shows several representative examples of SNR observations. A typical good fit between measured and modeled values is shown in Figure 5(a), corresponding to the beginning of the
snow season. Generally the model/measurement fit is good when the scattering medium is homogeneous; it deteriorates as the medium becomes more heterogeneous, particularly with mixtures of soil, snow,
and vegetation. There are genuine physical effects as well as more mundane spurious instrumental issues that degrade the fit but do not necessarily cause a bias in snow-depth estimates. These include
secondary reflections, interferometric power effects, direct power effects, and instrument-related issues.
Secondary reflections originate from disjoint surface regions. Interference fringes become convoluted with multiple superimposed beats (see Figure 5(b)). As long as there is a unique dominating
reflection, the inversion will have no difficulty fitting it, as the extra reflections will remain approximately zero-mean.
Random deviations of the actual surface with respect to its undulated approximation, called roughness or residual surface height, will affect the interferometric power. SNR measurements will exhibit
a diminishing number of significant interference fringes, compared to the measurement noise level (see Figure 5(c)). This facilitates the model fit but the reflector height parameter may become
ill-determined: its estimates will be more uncertain. Changes in snow density also affect the fringe amplitude.
Snow precipitation attenuates the satellite-to-ground radio link, which affects SNR measurements through the direct power term. First, this shifts the SNR measurements up or down (in decibels);
second, it tilts the trend tSNR as attenuation is elevation-angle dependent; third, fringes in dSNR will change in amplitude because of the decrease in the coherent component of the direct power.
Partial obstructions can affect either or both direct and interferometric powers. In this case, SNR measurements, albeit corrupted, are still recorded. This situation is in contrast to complete
blockages as caused by topography. The deposition of snow and the formation of a winter rime on the antenna are a particularly insidious type of obstruction, as their presence in the near-field of
the antenna element can easily distort the gain pattern in a significant manner. In the far-field, trees are another important nuisance, so much so that their absence is held as a strong requirement
for the proper functioning of multipath reflectometry.
Satellite-specific direct power offsets and also long-term power drifts are to be expected as spacecraft age and modernized designs are launched. In addition, noise power depends on the state of
conservation of receiver cables and on their physical temperature. Less subtle incidents are sudden ~3-dB SNR steps, hypothesized to originate in the receiver switching between the L2C data and pilot
subcodes, CM and CL.
Quality Control. Anomalous conditions may result in measurement spikes, jumps, and short-lived rapidly-varying fluctuations. For snow-depth-sensing purposes, it is necessary and sufficient to either
neutralize such measurement outliers through a statistically robust fit or detect unreliable fits and discard the problematic ones that could not otherwise be salvaged.
The key to quality control (QC) is in grouping results into statistically homogeneous units, having measurements collected under comparable conditions. In our case, azimuth-clustered tracks are the
natural starting unit. Secondarily, we must account for genuine temporal variations in the tendency of results, from beginning to peak to the end of the snow season. The detection of anomalous
results further requires an estimate of the statistical dispersion to be expected. Considering that the sample is contaminated with outliers, robust estimators (running median instead of the running
mean, and median absolute deviation over the standard deviation) are called for, if the first- and second-order statistical moments are to be representative. Given estimates of the non-stationary
tendency and dispersion, a tolerance interval can then be constructed such that it bounds, say, a 99% proportion of the valid results with 95% confidence level. We also desire QC to be judicious, or
else too many valid estimates will be lost. Notice that in the present intra-cluster QC, we compare an individual estimate to the expected performance of the track cluster to which it belongs; later,
we complement QC with an inter-cluster comparison of each cluster’s own expected performance.
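As an illustration of the intra-cluster screening just described, the sketch below uses a running median for the non-stationary tendency and the scaled median absolute deviation (MAD) for dispersion. The window length and tolerance multiplier are illustrative assumptions, not values from the authors' processing:

```python
import numpy as np

def qc_flags(values, window=11, k=5.0):
    """Flag outliers within one azimuth-clustered track's time series.

    Tendency: running median; dispersion: scaled MAD (the factor 1.4826
    makes the MAD consistent with the standard deviation for Gaussian
    noise).  Points farther than k scaled-MADs from the tendency are
    flagged as outliers.
    """
    values = np.asarray(values, dtype=float)
    half = window // 2
    flags = np.zeros(values.size, dtype=bool)
    for i in range(values.size):
        lo, hi = max(0, i - half), min(values.size, i + half + 1)
        med = np.median(values[lo:hi])
        mad = 1.4826 * np.median(np.abs(values[lo:hi] - med))
        if mad > 0 and abs(values[i] - med) > k * mad:
            flags[i] = True
    return flags

# Example: a slowly accumulating snow-depth series with one spike.
series = np.linspace(0.0, 1.0, 30)
series[15] += 2.0                       # injected outlier
print(np.nonzero(qc_flags(series))[0])  # only the spike at index 15 is flagged
```

Because both the tendency and the dispersion estimators are robust, the injected spike does not inflate the tolerance band that is used to detect it.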
Based on our practical experience, no single statistic detects all the outliers. We use four particular statistics that we have found to be useful: 1) degrees of freedom, essentially the number of
observations per track (modulo a constant number of parameters); 2) the scaled root-mean-square error (RMSE), a goodness-of-fit measure of how well the measurements can be explained by adjusting the unknown values of the parameters postulated in the model; 3) reflector height uncertainty; and 4) peak elevation angle, which behaves much like a random variable, as it is determined by a multitude of factors.
Combinations. We combine multiple clusters to average out random noise. Noise mitigation aims at not only coping with measurement errors but also compensating for model deficiencies, to the extent
that they are not in common across different clusters. Before we combine different clusters, we have to address their long-term differences. The initial situation is that snow surface heights will be
greater downhill and smaller uphill; we take this into account on a cluster-by-cluster basis by subtracting ground heights from their respective snow surface heights, resulting in snow-thickness values, a physically unambiguous quantity. Snow thickness is more comparable than snow heights across varying-azimuth track clusters. Yet snow tends to fill in ground depressions,
so thickness exhibits variability caused by the underlying ground surface, even when the overlying snow surface is relatively uniform. Further cluster homogeneity can be achieved by accounting for
the temporally permanent though spatially non-uniform component of snow thickness.
The averaging of snow depths collected for different track clusters employs the inversion uncertainties to obtain a preliminary running weighted median, calculated for, say, daily postings, with
overlapping windows or not. The preliminary post-fit residuals then go through their own averaging, necessarily employing a wider averaging window (say, monthly), which produces scaling factors for
the original uncertainties. The running weighted median is then repeated, producing final averages. The variance factors reflect the fact that some clusters are better than others.
Thus, the final GPS estimates of snow depth follow from an averaging of all available tracks, whose individual snow depth values were previously estimated independently. A new average is produced
twice daily utilizing the surrounding 1–2 days of data (depending on the data density), that is, 12-hour posting spacing and 24-hour moving window width. The averaging interval must be an integer
number of days, so as to minimize the possibility of snow-depth artifacts caused by variations in the observation geometry, which repeats daily.
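The combination step can be sketched as a running weighted median over all track clusters, with inverse-variance weights derived from the inversion uncertainties. The function below is a simplified illustration (variable names are assumptions for the example; the defaults follow the 12-hour posting spacing and 24-hour window stated above):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: smallest value whose cumulative weight
    reaches half of the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def running_weighted_median(t_days, depth, sigma, post_dt=0.5, window=1.0):
    """Combine per-track snow depths into regularly posted averages.

    Postings every post_dt days; each average uses the tracks within a
    `window`-day interval, weighted by their inverse variances.
    """
    t, d, s = map(np.asarray, (t_days, depth, sigma))
    posts = np.arange(t.min(), t.max() + post_dt, post_dt)
    averages = []
    for p in posts:
        sel = np.abs(t - p) <= window / 2
        if sel.any():
            averages.append((p, weighted_median(d[sel], 1.0 / s[sel] ** 2)))
    return averages
```

The weighted median, unlike a weighted mean, remains resistant to the occasional cluster whose retrieval is biased, which is the motivation given in the text for preferring it.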
Site-Specific Results
We explored GPS-MR snow-depth retrieval at three stations over a long period (up to three years). Throughout, we assessed the performance of the GPS estimates against independent nearly co-located in
situ measurements. We also compared the GPS estimates to the nearest SNOTEL station. SNOTEL (from snowpack telemetry) is an automated system for collecting snowpack and related data in the western
U.S. operated by the U.S. Department of Agriculture. Although not co-located with GPS, SNOTEL data are important because they provide accurate information on the timing of snowfall events.
The three sites we used were 1) a site in the T.W. Daniel Experimental Forest within the Wasatch Cache National Forest in the Bear River Range of northeastern Utah, with an elevation of 2,600 meters;
2) one of the stations of the EarthScope Plate Boundary Observatory, a grassland site located near Island Park, Idaho; and 3) an alpine site in the Niwot Ridge Long-term Ecological Research Site near
Boulder, Colorado. While we have fully documented the results from each site, due to space limitations we will only discuss the results from the forested site (known as RN86) in this article. This is
a more challenging site than the other two, due to the presence of nearby trees. Furthermore, it was subject to denser in situ sampling of 20-150 measurements spatially replicated around the GPS
antenna, and repeated approximately every other week for about one year.
We show results for the 2012 water-year, the period starting October 1 through September 30 of the following year. Where GPS site RN86 was installed, topographical slopes range from 2.5° to 6.5° (at
the 2-meter spatial scale), with average of ~5° within a 50-meter radius around the GPS antenna. RN86 was specifically built to study the impact of trees on GPS snow depth retrievals (see FIGURE 6).
Ground crews manually collected in situ measurements around the GPS antenna approximately every other week starting in November 2011. Measurements were made every 1–2 meters from the antenna up to a
distance of 25-30 meters. In the second half of the year, the sampling protocol was changed to azimuths of 0° (N), 45° (NE), 135° (SE), 180° (S), 225° (SW), and 315° (NW). With these data it is
possible to obtain in situ average estimates, with their own uncertainties (based on the number of measurements), which allows a more meaningful comparison.
There is reduced visibility at the current site, compared to other sites. Track clusters are concentrated due south, with only two clusters located within ±90° of north. Therefore, the GPS average
snow depth is not necessarily representative of the azimuthally symmetric component of the snow depth. In the presence of an azimuthal asymmetry in the snow distribution around the antenna, the GPS
average would be expected to be biased towards the environmental conditions prevalent in the southern quadrant. To rule out the possibility of an azimuthal artifact in the comparisons, we have
utilized only the in situ data collected along the SE/S/SW quadrant.
The comparison shows generally excellent agreement between GPS and in situ data (see FIGURE 7). The first four and the last one in situ data points were collected with coarser spacing and/or smaller
azimuthal coverage, which may be partially responsible for different performance in the first and second halves of the snow season. The correlation between GPS and in situ snow depth at RN86 amounts
to 0.990, indicating a very strong linear relationship. Carrying out a regression between in situ and GPS values, the RMS of snow-depth residuals improves from 9.6 to 3.4 centimeters. The regression
intercept and slope (with corresponding 95% uncertainties) amount to 15.4 ± 9.11 centimeters and 0.858 ± 0.09 meters per meter, respectively. According to these statistics, the null hypotheses of
zero intercept and unity slope are rejected at the 95% confidence level. This implies that at this location GPS snow-depth estimates exhibit both additive and multiplicative biases. The latter is
proportional to snow depth itself, meaning that, compared to an ideal one-to-one relationship, GPS is found to under-estimate in situ snow depth at this site by 14 ± 9%, although the uncertainty is
somewhat large.
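The bias conclusions quoted above follow directly from the reported numbers. A quick check of the 95% confidence intervals (reproducing the arithmetic only, not the underlying regression):

```python
# 95% confidence intervals for the RN86 in situ vs. GPS regression.
intercept, intercept_u95 = 15.4, 9.11   # centimeters
slope, slope_u95 = 0.858, 0.09          # meters per meter

ci_intercept = (intercept - intercept_u95, intercept + intercept_u95)
ci_slope = (slope - slope_u95, slope + slope_u95)

# The interval excludes 0, so the additive bias is significant:
print(f"intercept CI: [{ci_intercept[0]:.2f}, {ci_intercept[1]:.2f}] cm")
# The interval excludes 1, so the multiplicative bias is significant:
print(f"slope CI: [{ci_slope[0]:.3f}, {ci_slope[1]:.3f}] m/m")
print(f"underestimation: {(1 - slope) * 100:.0f} +/- {slope_u95 * 100:.0f} %")
```

Both intervals exclude their null values, and the last line reproduces the 14 ± 9% underestimation quoted in the text.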
The SNOTEL sensors are exceptionally close to the GPS antenna at this site, about 350 meters horizontally distant with negligible vertical separation. Yet the former is located within trees, while
the latter is located at the periphery of the forest and senses the reflections scattered from an open field. Therefore, only the timing of snowfall events agrees well, not the amount of snow.
Although forest density is generally negatively correlated with snow depth, exceptions are not uncommon, especially in localized clearings exposed to intense solar radiation, where shading of the
snow by the trees reduces ablation.
Conclusion
In this article, we have discussed a physically based forward model and a statistical inverse model for estimating snow depth based on GPS multipath observed in SNR measurements. We assessed model
performance against independent in situ measurements and found they validated the GPS estimates to within the limitations of both GPS and in situ measurement errors after the characterization of
systematic errors. The assessment yielded a correlation of 0.98 and an RMS error of 6–8 centimeters for observed snow depths of up to 2.5 meters at three sites, with the GPS underestimating in situ
snow depth by ~5–15%. This latter finding highlights the necessity to assess effects currently neglected or requiring more precise modeling.
The research reported in this article was supported by grants from the U.S. National Science Foundation, NASA, and the University of Colorado. Nievinski has been supported by a Capes/Fulbright
Graduate Student Fellowship and a NASA Earth System Science Research Fellowship. The article is based, in part, on two papers published in the IEEE Transactions on Geoscience and Remote Sensing:
“Inverse Modeling of GPS Multipath for Snow Depth Estimation – Part I: Formulation and Simulations” and “Inverse Modeling of GPS Multipath for Snow Depth Estimation – Part II: Application and Validation.”
For the forested site (RN86), a Trimble NetR9 receiver was used with a Trimble TRM57971.00 (Zephyr Geodetic II) antenna with no external radome.
FELIPE G. NIEVINSKI is a faculty member at the Federal University of Santa Catarina, Florianópolis, Brazil. He has also been a post-doctoral researcher at São Paulo State University, Presidente
Prudente, Brazil. He earned a B.E. in geomatics from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2005; an M.Sc.E. in geodesy from the University of New Brunswick,
Fredericton, Canada, in 2009; and a Ph.D. in aerospace engineering sciences from the University of Colorado, Boulder, in 2013. His Ph.D. dissertation was awarded The Institute of Navigation Bradford
W. Parkinson Award in 2013.
KRISTINE M. LARSON received a B.A. degree in engineering sciences from Harvard University and a Ph.D. degree in geophysics from the Scripps Institution of Oceanography, University of California at
San Diego. She was a member of the technical staff at the Jet Propulsion Lab from 1988 to 1990. Since 1990, she has been a professor in the Department of Aerospace Engineering Sciences, University of
Colorado, Boulder.
• Authors’ Journal Papers
“Inverse Modeling of GPS Multipath for Snow Depth Estimation—Part I: Formulation and Simulations” by F.G. Nievinski and K.M. Larson in IEEE Transactions on Geoscience and Remote Sensing, Vol. 52, No.
10, 2014, pp. 6555–6563, doi: 10.1109/TGRS.2013.2297681.
“Inverse Modeling of GPS Multipath for Snow Depth Estimation—Part II: Application and Validation” by F.G. Nievinski and K.M. Larson in IEEE Transactions on Geoscience and Remote Sensing, Vol. 52, No.
10, 2014, pp. 6564–6573, doi: 10.1109/TGRS.2013.2297688.
• More on the Use of GPS for Snow Depth Assessment
“Snow Depth, Density, and SWE Estimates Derived from GPS Reflection Data: Validation in the Western U.S.” by J.L. McCreight, E.E. Small, and K.M. Larson in Water Resources Research, published first
on line, August 25, 2014, doi: 10.1002/2014WR015561.
“Environmental Sensing: A Revolution in GNSS Applications” by K.M. Larson, E.E. Small, J.J. Braun, and V.U. Zavorotny in Inside GNSS, Vol. 9, No. 4, July/August 2014, pp. 36–46.
“Snow Depth Sensing Using the GPS L2C Signal with a Dipole Antenna” by Q. Chen, D. Won, and D.M. Akos in EURASIP Journal on Advances in Signal Processing, Special Issue on GNSS Remote Sensing, Vol.
2014, Article No. 106, 2014, doi: 10.1186/1687-6180-2014-106.
“GPS Snow Sensing: Results from the EarthScope Plate Boundary Observatory” by K.M. Larson and F.G. Nievinski in GPS Solutions, Vol. 17, No. 1, 2013, pp. 41–52, doi: 10.1007/s10291-012-0259-7.
• GPS Multipath Modeling and Simulation
“Forward Modeling of GPS Multipath for Near-Surface Reflectometry and Positioning Applications” by F.G. Nievinski and K.M. Larson in GPS Solutions, Vol. 18, No. 2, 2014, pp. 309–322, doi: 10.1007/
“An Open Source GPS Multipath Simulator in Matlab/Octave” by F.G. Nievinski and K.M. Larson in GPS Solutions, Vol. 18, No. 3, 2014, pp. 473–481, doi: 10.1007/s10291-014-0370-z.
“Multipath Minimization Method: Mitigation Through Adaptive Filtering for Machine Automation Applications” by L. Serrano, D. Kim, and R.B. Langley in GPS World, Vol. 22, No. 7, July 2011, pp. 42–48.
“It’s Not All Bad: Understanding and Using GNSS Multipath” by A. Bilich and K.M. Larson in GPS World, Vol. 20, No. 10, October 2009, pp. 31–39.
“GPS Signal Multipath: A Software Simulator” by S.H. Byun, G.A. Hajj, and L.W. Young in GPS World, Vol. 13, No. 7, July 2002, pp. 40–49.
About the Author: Richard B. Langley
Richard B. Langley is a professor in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (UNB) in Fredericton, Canada, where he has been teaching and conducting
research since 1981. He has a B.Sc. in applied physics from the University of Waterloo and a Ph.D. in experimental space science from York University, Toronto. He spent two years at MIT as a
postdoctoral fellow, researching geodetic applications of lunar laser ranging and VLBI. For work in VLBI, he shared two NASA Group Achievement Awards. Professor Langley has worked extensively with
the Global Positioning System. He has been active in the development of GPS error models since the early 1980s and is a co-author of the venerable “Guide to GPS Positioning” and a columnist and
contributing editor of GPS World magazine. His research team is currently working on a number of GPS-related projects, including the study of atmospheric effects on wide-area augmentation systems,
the adaptation of techniques for spaceborne GPS, and the development of GPS-based systems for machine control and deformation monitoring. Professor Langley is a collaborator in UNB’s Canadian High
Arctic Ionospheric Network project and is the principal investigator for the GPS instrument on the Canadian CASSIOPE research satellite now in orbit. Professor Langley is a fellow of The Institute of
Navigation (ION), the Royal Institute of Navigation, and the International Association of Geodesy. He shared the ION 2003 Burka Award with Don Kim and received the ION’s Johannes Kepler Award in
|
|
7th Grade GMAS Math Worksheets: FREE & Printable
If you're seeking FREE and fresh math practice materials for the 7th-Grade GMAS Math test, don't miss our 7th-Grade GMAS Math Worksheets!
GMAS, the Georgia Milestones Assessment System, is a standardized assessment utilized to gauge the abilities of students in grades 3-8 in Georgia.
For 7th-grade students who feel they require additional practice in the math section of the GMAS test, our FREE 7th-Grade GMAS Math Worksheets are an excellent choice.
These 7th-Grade GMAS Math Worksheets aim to cover essential types of math questions, providing a sufficient number of exercises for each type.
Consequently, using these 7th-Grade GMAS Math Worksheets, it becomes easily attainable for 7th-grade students to be thoroughly prepared for the math section of the GMAS test.
IMPORTANT: COPYRIGHT TERMS: These worksheets are for personal use. Worksheets may not be uploaded to the internet in any form, including classroom/personal websites or network drives. You can
download the worksheets and print as many as you need. You have permission to distribute the printed copies to your students, teachers, tutors, and friends.
You Do NOT have permission to send these worksheets to anyone in any way (via email, text messages, or other ways). They MUST download the worksheets themselves. You can send the address of this page
to your students, tutors, friends, etc.
The Absolute Best Books to Ace the GMAS 7th Grade Math Test
7th Grade GMAS Mathematics Concepts
Fractions and Decimals
Real Numbers and Integers
Proportions, Ratios, and Percent
Algebraic Expressions
Equations and Inequalities
Linear Functions
Exponents and Radicals
Geometry and Solid Figures
Statistics and Probability
The Best Practice Book For GMAS Math Test-Takers!
Related to This Article
|
|
Chi-Square Test
Episode #9 of the course Introduction to statistics by Polina Durneva
Good morning!
In the previous lesson, we talked about t-test, which can be used to analyze the mean value of one group or several groups. Today, we will discuss another important statistical test, called the
chi-square test.
In statistics, there are two types of chi-square tests. The first, called the goodness of fit test, determines how well your sample represents the entire population. The second type, called the test
for independence, evaluates the relationship between two categorical variables. In this lesson, we will talk about the second type of chi-square test.
Conducting a Chi-Square Test
To calculate the chi-square, denoted as χ^2, we need to know the observed frequency (O) and the expected frequency (E) such that χ^2 = ∑((O – E)^2 / E).
To better understand how the chi-square test is conducted, let’s proceed to an example. Let’s say that you have a random sample and want to find out if there is any relationship between gender (male
and female) and eye color (green, blue, and brown). Our dataset consists of 200 observations. The table below summarizes our observations (or observed counts, O):
Gender/Eye color Blue Brown Green Total
Male 25 35 30 90
Female 50 40 20 110
Total 75 75 50 200
To find the expected counts (E), we need to calculate several probabilities. Let’s calculate the expected frequency for males with blue eyes. How can we do that? Well, the probability for a person to
be a male in our sample can be found by dividing the number of males by the total number of people, or 90 / 200 = 9 / 20. Then, we want to know how many people with blue eyes are males. To do so, we
need to multiply the probability of being a male by the total count of people with blue eyes, or (9 / 20) * 75 = 33.75. This is the expected frequency for males with blue eyes. The table below
illustrates the expected frequency for all categories (the calculations are performed in the similar manner):
Gender/Eye color Blue Brown Green Total
Male 33.75 33.75 22.5 90
Female 41.25 41.25 27.5 110
Total 75 75 50 200
Calculation of Chi-Square Test
Let’s use the values from the tables above to calculate the chi-square value: χ^2 = (25 – 33.75)^2 / 33.75 + (50 – 41.25)^2 / 41.25 + (35 – 33.75)^2 / 33.75 + (40 – 41.25)^2 / 41.25 + (30 – 22.5)^2 /
22.5 + (20 – 27.5)^2 / 27.5 = 8.754209.
To interpret this chi-square value, we need to consider the degrees of freedom and the significance level. For a test of independence, the degrees of freedom are (rows – 1) × (columns – 1); with two genders and three eye colors, that gives (2 – 1) × (3 – 1) = 2. To reach a decision, we compare the calculated statistic against a critical value from the chi-square table, which depends on the degrees of freedom and the chosen significance level. This table was composed by statisticians to indicate the minimum chi-square value required, for a given number of degrees of freedom, to reject the null hypothesis; it can be found in any statistics book and various online resources.
For our case, with df = 2 and a significance level of 0.05, the critical chi-square value is 5.99146. Since our calculated chi-square value of 8.754 exceeds this threshold, we can state that there is a relationship between gender and eye color in our sample. (Please note that all the values in our sample are made up and do not present any real significance, as they were simply used to demonstrate the concept.)
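The entire computation above can be reproduced in a few lines of Python (a sketch using NumPy, with SciPy's chi-squared survival function standing in for the printed table):

```python
import numpy as np
from scipy.stats import chi2

observed = np.array([[25, 35, 30],    # male:   blue, brown, green
                     [50, 40, 20]])   # female: blue, brown, green

row_totals = observed.sum(axis=1, keepdims=True)     # 90 and 110
col_totals = observed.sum(axis=0, keepdims=True)     # 75, 75, 50
expected = row_totals * col_totals / observed.sum()  # e.g. 90 * 75 / 200 = 33.75

stat = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)  # (2 - 1) * (3 - 1) = 2
p_value = chi2.sf(stat, dof)

print(f"chi-square = {stat:.6f}, df = {dof}, p = {p_value:.4f}")
# chi-square = 8.754209, df = 2, p = 0.0126: dependence at the 0.05 level
```

The statistic matches the hand calculation, and the p-value of about 0.013 confirms that the result is significant at the 0.05 level.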
That’s it for today. Tomorrow, we will discuss another statistical concept called linear regression and finish our course.
See you soon,
Recommended book
Statistics by David Freedman, Robert Pisani, Roger Purves
Share with friends
|
|
Estimators for the Common Principal Components Model Based on Reweighting: Influence Functions and Monte Carlo Study
Boente, Graciela; Pires, Ana M.; Rodrigues, Isabel M.
Metrika, 67(2) (2008), 189-218
The common principal components model for several groups of multivariate observations is a useful parsimonious model for the scatter structure which assumes equal principal axes but different
variances along those axes for each group. Due to the lack of resistance of the classical maximum likelihood estimators for the parameters of this model, several robust estimators have been proposed
in the literature: plug-in estimators and projection-pursuit (PP) type estimators. In this paper, we show that it is possible to improve the low efficiency of the projection-pursuit estimators by
applying a reweighting step. More precisely, we consider plug-in estimators obtained by plugging a reweighted estimator of the scatter matrices into the maximum likelihood equations defining the
principal axes. The weights considered penalize observations with large values of the influence measures defined by Boente et al. (2002). The new estimators are studied in terms of theoretical
properties (influence functions and asymptotic variances) and are compared with other existing estimators in a simulation study.
|
|
Printable Multiplication Table of 15 Chart | 15 Times Table Worksheet
Multiplication Table 15 Charts: There comes a point when a student reaches a higher grade having already gained the basics of every subject. It is very important that students keep those basics in mind, as many subjects rely on them for a lifetime. One of those subjects is maths; maths is a subject that requires time and patience, because some of its topics demand patience and calmness.
One of those topics is multiplication tables. It is an easy topic, but students should know the tricks and shortcuts so that they don't waste much time solving questions related to it. One of the various tables is the multiplication table of 15, a table that almost every student has to know in order to solve tricky questions quickly; otherwise, they will lose a lot of time.
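One such shortcut: since 15 × n = 10 × n + 5 × n, and 5 × n is simply half of 10 × n, the whole table can be built from easy steps. The short script below (our illustration, not part of the worksheets) prints the table this way:

```python
# Build the multiplication table of 15 using the 10n + 5n shortcut.
results = []
for n in range(1, 11):
    tens = 10 * n         # the easy part: just append a zero to n
    fives = tens // 2     # half of the tens part equals 5 * n
    results.append(tens + fives)
    print(f"15 x {n:2d} = {tens} + {fives} = {tens + fives}")
```

For example, 15 × 7 becomes 70 + 35 = 105, which is much quicker than multiplying 15 by 7 directly.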
Times Table 2 Charts
Times Table 3 Charts
Times Table 4 Charts
Time Table 5 Charts
Times Table 6 Charts
Times Table 7 Charts
Times Table 8 Charts
Times Table 9 Charts
Times Table 10 Charts
Times Table 11 Charts
Times Table 12 Charts
Times Table 13 Charts
Times Table 14 Charts
Times Table 15 Charts
Blank Times Table
15 Times Table Games
Today’s generation is so addicted to smartphones and games that many kids have completely neglected their studies. Many parents complain that their children are so busy with games that they don't feel like doing anything else. As a result, students may fall behind in their studies, which directly affects their results and leaves them unable to keep up with other students.
15 Multiplication Table Maths
So, to help parents, we are coming up with our new multiplication-table-of-15 game. We have designed the game in such a manner that kids and students will find it very interesting. The game can also be played in offline mode, which lets students practice at any time; for the kids it will feel like a game, but they will actually be learning the table of 15.
Multiplication Table 15 Worksheet
The students can also use our worksheet, which contains a full explanation of the table of 15; it can be downloaded and kept for future use. The worksheet is editable, so users can make changes to suit their preferences: the size (big or small), the color (available in multiple shades), and the design, which we have tailored to different age groups, with playful designs for kids and slightly more advanced ones for older students.
Students can carry the worksheet to school, as it is compact and fits in a small space. The worksheet includes the kinds of tricky questions students can practice when exams are near, helping them prepare for time-consuming problems.
Times Table 15
The sheet containing the table of 15 is also available in printable form, so users can download it, store it on devices such as smartphones and computers, and, when needed, have it printed at any nearby print shop. The advantage of our sheet is that users get the full table of 15 in notebook format, so they can bind it into a booklet and carry it like a normal book.
Our sheet is available in various formats, including Word, Excel, and PDF, and users who prefer practicing tables from slides can refer to our PPT version. All formats are available free of cost; users will not have to pay a penny. The sheet can be downloaded from the link provided.
Multiplication Table 15 Chart
We have also come up with a chart version of the table, which will be helpful for teachers, students, and tutors alike. The chart is designed to be easy to carry and store: students can hang the full-length table near their study desk, and teachers can carry the chart from class to class.
The beneficial part of the chart is that teachers will not have to write out the table of 15 in every class; they can simply carry the chart to any class they want. This saves time that can be spent on something more productive, helping students master the subject.
Times Table 1-10 Charts
Times Table 1-12 Charts
Times Table 1-15 Charts
Times Table 1-100 Charts
Times Table 1-20 Charts
Time Table 1-25 Charts
Times Table 1-30 Charts
Times Table 1-1000 Charts
Worksheet For Grade 2
Worksheet For Grade 3
|
|
How do you solve and graph 5z<-90? | HIX Tutor
How do you solve and graph #5z<-90#?
Answer 1
$z < -18$; graph the boundary on a number line with an open circle at $-18$ and shade to the left. (Equivalently, graph $y = 5z$ and $y = -90$ on the same axes; the solution is the set of $z$-values where the line $y = 5z$ lies below $y = -90$.)
Divide both sides by 5, giving
$z < - 18$
Answer 2
To solve the inequality (5z < -90), we need to isolate (z).
First, we divide both sides of the inequality by 5:
\[ \frac{5z}{5} < \frac{-90}{5} \] \[ z < -18 \]
So, the solution to the inequality is (z < -18).
To graph this inequality on a number line, we draw an open circle at -18 (since the inequality is strictly less than), and shade to the left of -18 to represent all the values of (z) that are less
than -18.
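As a quick numerical sanity check of the algebra (not part of either original answer), the solution set can be verified in a few lines of Python:

```python
def satisfies(z):
    """Return True when z satisfies the original inequality 5z < -90."""
    return 5 * z < -90

# The derived solution is z < -18: values just below the boundary satisfy
# the inequality; the boundary itself and values above it do not.
print(satisfies(-18.001))  # True
print(satisfies(-18))      # False (the inequality is strict)
print(satisfies(0))        # False
```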
|
{"url":"https://tutor.hix.ai/question/how-do-you-solve-and-graph-5z-90-8f9af935aa","timestamp":"2024-11-08T14:32:35Z","content_type":"text/html","content_length":"575017","record_id":"<urn:uuid:995918d2-ab75-44ba-a749-259ca9fb0e10>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00446.warc.gz"}
|
Strength of Material - Mech Content
Strength of Material
Get a grasp on Strength of material to know how materials and objects behave under different loading conditions. Learn about various concepts such as stress, strains, strength, inertia, and more.
The term radius of gyration is used in both rotational dynamics and static loading. In the case of a rotating …
|
{"url":"https://mechcontent.com/category/strength-of-material/page/2/","timestamp":"2024-11-05T01:01:50Z","content_type":"text/html","content_length":"63670","record_id":"<urn:uuid:17635afc-7b1f-4b43-98bc-2a7ff81b1310>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00852.warc.gz"}
|
Interactive Mathematics Miscellany and Puzzles
Coping With Math Anxiety: What Is Math Anxiety? A famous stage actress was once asked if she had ever suffered from stage-fright, and if so how she had gotten over it. She laughed at the interviewer's naive assumption that, since she was an accomplished actress now, she must not feel that kind of anxiety.

Fibonacci Numbers, the Golden section and the Golden String: This is the Home page for Dr Ron Knott's multimedia web site on the Fibonacci numbers, the Golden section and the Golden string, hosted by the Mathematics Department of the University of Surrey, UK. The Fibonacci numbers are
Waterman Polyhedra: Play with the controls! Use the "Sequence" slider to step through the series of polyhedra. Click the "Colors" button.

Transum: Maths Puzzles: There is a great amount of satisfaction that can be obtained from solving a mathematical puzzle. There is a range of puzzles on this page, all with a mathematical connection, that are just waiting to be solved. You can earn Transum Trophies for the puzzles you solve.

How Many Squares? 2: How many different sets of four dots can be joined to form a square?

Tower of Hanoi: Move the pieces of the tower from one place to another in the minimum number of moves.
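As an aside on the Tower of Hanoi entry above: the classic recursive solution uses exactly 2^n − 1 moves for n discs, which is provably minimal. A short illustrative sketch (not taken from any of the linked sites):

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    """Solve Tower of Hanoi for n discs, returning the list of moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # park n-1 discs on the spare peg
        moves.append((src, dst))             # move the largest disc
        hanoi(n - 1, via, dst, src, moves)   # restack n-1 discs on top of it
    return moves

# The minimum number of moves for n discs is 2**n - 1.
print(len(hanoi(3)))  # 7
```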
TEACHERS - Math Talks: Top menu: NT = Number Talks, PT = Pattern Talks (hover over NT to see). Hello there. My name is Fawn Nguyen; I'd spent 30 years in the classroom, and 2019 is my first year as a math TOSA (teacher on special assignment). The voices behind these number talks and pattern talks were from my 6th and 8th graders during the 2013-2014 school year. If you're not sure what math talks are, here are a few resources: Professor Jo Boaler refers to number talks regularly in her course How To Learn Math 2014. Brad Fulton presented this strategy at the 2013 CMC-South Conference; pages 5-9 are on Math Talks.

CME Project: About the Program. The CME Project, developed by EDC's Center for Mathematics Education and published by Pearson, is a coherent, four-year, NSF-funded high school mathematics program designed around how knowledge is organized and generated within mathematics: the themes of algebra, geometry, and analysis. The CME Project sees these branches of mathematics not only as compartments for certain kinds of results, but also as descriptors for methods and approaches: the habits of mind that determine how knowledge is organized and generated within mathematics itself. As such, they deserve to be centerpieces of a curriculum, not its by-products. The Program's Structure. The CME Project uses the traditional American structure of algebra, geometry, advanced algebra, and precalculus.
Welcome - OeisWiki: Welcome to The On-Line Encyclopedia of Integer Sequences® (OEIS®) Wiki. Some Famous Sequences: click on any of the following to see examples of famous sequences in the On-Line Encyclopedia of Integer Sequences (the OEIS): Recamán's sequence, A005132; the Busy Beaver problem, A060843; the Catalan numbers, A000108; the prime numbers, A000040; the Mersenne primes, A000043 and A000668; the Fibonacci numbers, A000045. For some other fascinating sequences see Pictures from the OEIS and The (Free) OEIS Store.

qbyte: Nick's Mathematical Puzzles: Welcome to my selection of mathematical puzzles. What's new? See puzzle 160.

Math Manipulatives: About Virtual Manipulatives: Math Manipulatives contains three pages of resources: About Virtual Manipulatives. What is a virtual manipulative? In What are Virtual Manipulatives?
|
{"url":"http://www.pearltrees.com/u/1043872-interactive-mathematics","timestamp":"2024-11-07T06:58:40Z","content_type":"text/html","content_length":"87331","record_id":"<urn:uuid:ad8030ac-213c-4811-8485-b4901ffb004d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00472.warc.gz"}
|
Understanding the Virtual Earth Approximation in Op-Amps
• Thread starter gabloammar
• Start date
In summary, the concept of the virtual earth approximation is crucial in understanding how the inverting amplifier works. It states that in this approximation the potential at the inverting input is very close to 0 V. This is due to the very high open-loop voltage gain of the op-amp: since the output Vout = A(V+ - V-) must remain finite, the potential difference between the inverting and non-inverting inputs must be extremely small. The closed-loop relation G = Vout/Vin describes the whole circuit, not this tiny differential input, so a gain of -5 requires a difference between V- and V+ of only about Vout divided by the open-loop gain. This approximation is important for analysis and design purposes.
Homework Statement
I'm studying op-amps at the moment, and I came across a statement and I don't understand it.
'To understand how the inverting amplifier works, you need to understand the concept of the virtual Earth approximation. In this approximation the potential at the inverting input (-) is very close
to 0 V. Why is this true? There are two steps in the argument.
1. The op-amp multiplies the difference in potential between the inverting and non-inverting inputs, V^- and V^+, to produce the output voltage V[out]. Because the open-loop voltage gain is very
high, the difference between V- and V+ must be almost zero.
2. The non-inverting input (+) is connected to the zero volt line so V^+ = 0. Thus V^- must be close to zero and the inverting input (-) is almost at Earth potential.'
I get the second point, and I even understand most of what the first point is trying to tell me. But, see, it says that for there to be a large gain, there has to be a small potential difference
between the inverting and non-inverting inputs. But isn't it correct that G=V^out/V^in?
Doesn't that equation mean that the smaller the difference in the two values the smaller the gain? [e.g. 100/10 is larger than 10/10.]
What am I not getting here? I'm sure the book's correct and I'm wrong but I'm not seeing what exactly I'm getting wrong. So can someone please help me a little?
Suppose that the intrinsic (open-loop) gain of a real op-amp happened to be 10^7, and that the amplifier circuit it is built into happens to yield a gain of G = -5 (don't ask how the gain is set
right now, you'll be finding that out soon enough).
At the input to this circuit a 1V source is connected so that we expect -5V at the output. What would the voltage difference between V^- and V^+ have to be in order to produce -5V at the output? Is
there a relevant (for analysis or design purposes) difference between this voltage and zero when compared to the voltages between other points in the circuit?
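To make the reply's argument concrete, here is a minimal numeric sketch (assuming an idealized op-amp with the open-loop gain of 10^7 used in the example):

```python
A_OL = 1e7    # assumed open-loop gain from the example
v_out = -5.0  # desired output voltage, volts

# The op-amp enforces v_out = A_OL * (v_plus - v_minus), so the
# differential input needed to sustain -5 V at the output is tiny:
v_diff = v_out / A_OL
print(v_diff)  # -5e-07, i.e. half a microvolt

# With v_plus tied to ground (0 V), v_minus = v_plus - v_diff, which is
# effectively zero compared with other circuit voltages: a "virtual earth".
v_minus = 0.0 - v_diff
print(v_minus)  # 5e-07
```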
FAQ: Understanding the Virtual Earth Approximation in Op-Amps
1. What is an op-amp?
An op-amp, short for operational amplifier, is an electronic device that amplifies the difference between two input voltages. It is commonly used in analog circuits for amplification, filtering, and
signal processing.
2. What are the basic components of an op-amp?
The three basic components of an op-amp are the inverting input, the non-inverting input, and the output. It also has a power supply connection and often includes additional pins for offset
adjustment, frequency compensation, and other features.
3. How does an op-amp work?
An op-amp works by taking the difference between the voltages at its inverting and non-inverting inputs and amplifying it by a very large factor. It then outputs this amplified signal. The output
voltage will continue to adjust until the two input voltages are equal, creating a negative feedback loop.
4. What is a basic op-amp problem?
A basic op-amp problem typically involves analyzing a circuit with an op-amp and determining the output voltage or other characteristics of the circuit. This can include calculating gain, input and
output impedance, and frequency response.
5. How can I solve a basic op-amp problem?
To solve a basic op-amp problem, you will need to have a good understanding of op-amp theory and properties, as well as knowledge of circuit analysis techniques. It is important to carefully analyze
the circuit and correctly apply the equations and formulas to find the desired output or characteristics. Practice and familiarity with op-amp circuits will also help in solving these problems.
|
{"url":"https://www.physicsforums.com/threads/understanding-the-virtual-earth-approximation-in-op-amps.589007/","timestamp":"2024-11-09T17:36:39Z","content_type":"text/html","content_length":"77317","record_id":"<urn:uuid:ff618dac-3f9f-4695-8e91-1e7f6fbdf472>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00815.warc.gz"}
|
User-defined restraint forms
To create a new restraint form, derive a new class from the base forms.restraint_form. You should then override the following functions: __init__, eval, vmin, rvmin, min_mean, vheavy, rvheavy,
heavy_mean, and get_range. Note that presently you can only derive from this base class, not from MODELLER built-in forms.
Restraint forms can act on one or more features (each of which has an accompanying integer modality, which you can use for any purpose), and can take any number of floating-point parameters as input.
The features and parameters are stored in self._features and self._parameters respectively, but for convenience the base constructor restraint_form.__init__ can set initial values for these.
The eval function is called from MODELLER with the current feature values, their types and modalities, and the parameter vector. You should return the objective function contribution and, if
requested, the derivatives with respect to each feature. The feature types are required by the deltaf function, which returns the difference between the current feature value and the mean (a simple
subtraction is not sufficient, as some feature types are periodic). Note that you must use the passed parameter vector, as the class is not persistent, and as such the self._parameters variable (or
any other object variable you may have set) is not available to this function.
The get_range function is used to define the feature range over which the form is clearly non-linear. It is simply passed a similar set of parameters to eval, and should return a 2-element tuple
containing the minimum and maximum feature values. It is only necessary to define this function if the form acts on only a single feature and you want to be able to convert it to a cubic spline using restraints.spline().
The other functions are used to return the minimal and heavy restraint violations (both absolute and relative; see Section 5.3.1) and the means. The heavy and minimal means correspond to the global
and local minima.
Example: examples/python/user_form.py
from modeller import *
from modeller.scripts import complete_pdb

env = environ()
env.io.atom_files_directory = ['../atom_files']

class MyGauss(forms.restraint_form):
    """An implementation of Modeller's harmonic/Gaussian restraint (type 3)
       in pure Python"""
    rt = 0.5900991  # RT at 297.15K, in kcal/mol

    def __init__(self, group, feature, mean, stdev):
        forms.restraint_form.__init__(self, group, feature, 0, (mean, stdev))

    def eval(self, feats, iftyp, modal, param, deriv):
        (mean, stdev) = param
        delt = self.deltaf(feats[0], mean, iftyp[0])
        val = self.rt * 0.5 * delt**2 / stdev**2
        if deriv:
            fderv = self.rt * delt / stdev**2
            return val, [fderv]
        return val

    def vmin(self, feats, iftyp, modal, param):
        (mean, stdev) = param
        return self.deltaf(feats[0], mean, iftyp[0])

    def rvmin(self, feats, iftyp, modal, param):
        (mean, stdev) = param
        return self.deltaf(feats[0], mean, iftyp[0]) / stdev

    def min_mean(self, feats, iftyp, modal, param):
        (mean, stdev) = param
        return [mean]

    def get_range(self, iftyp, modal, param, spline_range):
        (mean, stdev) = param
        return (mean - stdev * spline_range, mean + stdev * spline_range)

    # There is only one minimum, so the 'heavy' mean is the same as the 'min'
    vheavy = vmin
    rvheavy = rvmin
    heavy_mean = min_mean

mdl = complete_pdb(env, "1fdn")
sel = selection(mdl)
rsr = mdl.restraints
at = mdl.atoms
rsr.add(MyGauss(group=physical.bond,
                feature=features.distance(at['CB:1:A'], at['CA:1:A']),
                mean=1.5380, stdev=0.0364))

# Restraints using user-defined forms can be converted to splines for speed.
# Only one-dimensional forms that define the get_range() method can be splined.
rsr.spline(MyGauss, features.distance, physical.bond, spline_dx=0.05)
Automatic builds 2017-07-19
|
{"url":"https://salilab.org/modeller/9.19/manual/node477.html","timestamp":"2024-11-07T19:22:08Z","content_type":"text/html","content_length":"17944","record_id":"<urn:uuid:d304d166-9698-4bf3-b51d-c57844255658>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00498.warc.gz"}
|
R package for modelling count ratio data
How to get started
Install the R package using the following commands on the R console:
Alternatively, you can install from github:
Brief introduction
Digital expression measurements (e.g. RNA-seq) are often used to determine the change of quantities upon some treatment or stimulus. The resulting value of interest is the fold change (often reported on a log2 scale).
This effect size of the change is often treated as a value that can be computed as lfc(A,B) = log2 A/B. However, due to the probabilistic nature of the experiments, the effect size rather is a random
variable that must be estimated. This fact becomes obvious when considering that A or B can be 0, even if the true abundance is non-zero.
We have shown that this can be modelled in a Bayesian framework. The intuitively computed effect size is the maximum likelihood estimator of a binomial model, where the effect size is not represented
as fold change, but as proportion (of note, the log fold change simply is the logit transformed proportion). The Bayesian prior corresponds to pseudocounts frequently used to prevent infinite fold
changes by A or B being zero. Furthermore, the Bayesian framework offers more advanced estimators (e.g. interval estimators or the posterior mean, which is the optimal estimator in terms of squared error).
This R package offers the implementation to harness the power of this framework.
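To illustrate the pseudocount idea described above, here is a language-agnostic sketch in Python (this is an illustration of the underlying math, not the lfc package's own API): with pseudocounts a and b, the posterior-mean proportion stays strictly between 0 and 1, so the log fold change is finite even when one count is zero.

```python
import math

def lfc_posterior_mean(A, B, a=1.0, b=1.0):
    """Posterior-mean log2 fold change of A over B with pseudocounts a, b.

    The proportion p = (A + a) / (A + B + a + b) is the posterior mean of a
    binomial model with a Beta(a, b) prior; its base-2 logit is the
    corresponding log fold change estimate (equal to log2((A+a)/(B+b))).
    """
    p = (A + a) / (A + B + a + b)
    return math.log2(p / (1 - p))

print(lfc_posterior_mean(20, 10))  # close to the naive log2(20/10) = 1
print(lfc_posterior_mean(0, 10))   # finite, unlike log2(0/10) = -inf
```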
|
{"url":"https://pbil.univ-lyon1.fr/CRAN/web/packages/lfc/readme/README.html","timestamp":"2024-11-01T22:47:13Z","content_type":"application/xhtml+xml","content_length":"3038","record_id":"<urn:uuid:2eb83065-c377-4e05-9907-c9a008699481>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00404.warc.gz"}
|
30 Best Biostatistics Tutors - Wiingy
Learn from the Best Biostatistics Tutors
Biostatistics is challenging, but it doesn’t have to be.
Our experienced biostatistics tutors will work with you one-on-one to help you understand the concepts, solve problems, and ace your exams.
Sign up for our biostatistics tutoring program starting at just $28/hour.
What sets Wiingy apart
Expert verified tutors
Free Trial Lesson
No subscriptions
Sign up with 1 lesson
Transparent refunds
No questions asked
Starting at $28/hr
Affordable 1-on-1 Learning
Top Biostatistics tutors available online
2017 Biostatistics tutors available
Biostatistics Tutor
4+ years experience
Qualified Online Biostats tutor with 4+ years of tutoring experience. Provides customized lessons, test prep, and assignment help in Hypothesis testing, Chi-square tests and more.
Responds in 4 min
Star Tutor
Biostatistics Tutor
2+ years experience
Biostatistics specialist here to guide you through academic and learning hurdles through the use of a structured online classes and online learning tools
Responds in 4 min
Star Tutor
Biostatistics Tutor
2+ years experience
Expert Biostatistics tutor with 2+ years of Experience. Engaging Sessions. Provides assignment help and in-depth classes.
Responds in 3 min
Star Tutor
Biostatistics Tutor
7+ years experience
Expert Biostatistics tutor, MA in French, with 7 years of tutoring experience and in depth knowledge of the subject. Willing to teach this subject to learners of any skill and learning level
Responds in 3 min
Star Tutor
Math Tutor
4+ years experience
Having extensive experience in tutoring Math, this educator provides thorough instruction and preparation for exams. Offers personalized guidance, practice exercises. With 4 years of experience,
having a Master's degree in Mathematics Education
Responds in 8 min
Star Tutor
Math Tutor
2+ years experience
Expert in Math with Masters in Mathematics and 2+ years of experience of teaching math concepts to high school and college students in CA and UK.
Responds in 2 min
Star Tutor
Math Tutor
1+ years experience
Expert Math tutor for High School students. Provides detailed 1-on-1 sessions, homework help, and test-prep to US students.
Responds in 13 min
Star Tutor
Math Tutor
2+ years experience
Math Tutor with 2 year of teaching experience and pursuing a Masters degree, aim to elevate your understanding. Offering personalized guidance in homework and beyond, ensuring mastery of key concepts
and practical application.
Responds in 7 min
Star Tutor
Math Tutor
3+ years experience
Certified Math tutor. Completed M.Sc. and B.Ed. Possesses 3+ years of teaching experience. Provides in-depth lessons, test prep, and homework help in Math, Physics, and related subjects.
Responds in 1 min
Star Tutor
Math Tutor
1+ years experience
Expert in Math. Bachelor's Degree from IIT, Madras, India, Personalized Sessions (Individual attention & Provides assignment help.
Responds in 8 min
Star Tutor
Math Tutor
3+ years experience
Learn and master mathematics. A highly skilled tutor who has a knack for breaking down complex topics and elaborating on them. A bachelor's degree tutor with 3 years of expertise in encouraging
Responds in 14 min
Student Favourite
Math Tutor
4+ years experience
Experienced Math tutor with a Bachelor's in Mathematics and 4 years of teaching experience. Provides personalized tutoring, homework help, and test preparation for students from elementary to high
Responds in 2 min
Star Tutor
Math Tutor
10+ years experience
Experience Math tutor with 10+ years of online tutoring experience with high school and college students. Subject expertise in Algebra, Calculus, Probability and, Statistics upto college level.
Responds in 3 min
Star Tutor
Math Tutor
7+ years experience
Best math tutor online for high school students. MSc. Math with 7+ years of tutoring experience. Provides brilliant 1-on-1 lessons, homework help, and test prep for AP, SAT, and ACT.
Responds in 4 min
Star Tutor
Math Tutor
1+ years experience
Top-notch math tutor with 1 year of experience tutoring high school students across the UK, US, CA, and AU. Supports and helps with problem-solving and critical thinking. Currently pursuing a
Master's degree.
Responds in 1 min
Star Tutor
Math Tutor
4+ years experience
Advance your Math skills with personalized instruction from a Master’s degree holder with 4 years of experience. Develop a strong grasp of mathematical concepts and improve your academic performance.
Responds in 14 min
Star Tutor
Math Tutor
7+ years experience
Talented Maths tutor with 7+ years of experience. Provides interactive concept clearing lessons test preparation and projects help to students. Holds a master's degree in Economics.
Responds in 9 min
Star Tutor
Math Tutor
4+ years experience
Excel in various Math subjects with expert tutoring from a Master’s degree holder and 4 years of experience. Build a solid foundation and develop strong analytical skills.
Responds in 4 min
Star Tutor
Math Tutor
2+ years experience
Best Mathematics tutor with 2+ years of online teaching experience with US school students; provides customised lessons, step-by-step problem-solving strategies, and test prep.
Responds in 2 min
Star Tutor
Math Tutor
1+ years experience
Passionate Online Math Tutor for high school students. MSc. Math, Content creator. Provides Interactive 1-on-1 tutoring sessions, assignment help, and test preparation.
Responds in 10 min
Star Tutor
Math Tutor
4+ years experience
An experienced Math tutor with a Master's Degree in Mathematics Education, having trained many students. Offers personalized lessons and provides timely assignment help. Has 4+ years of experience.
Responds in 14 min
Star Tutor
Math Tutor
4+ years experience
Top-rated math tutor, holding a Masters degree in Applied Mathematics, having 4 years of tutoring experience, also offers personalized lessons and provides assignment help on time.
Responds in 1 min
Star Tutor
Math Tutor
4+ years experience
Level Design and Environment Creation with Unreal Engine from C++, C# and Python Programmer.
Responds in 3 min
Star Tutor
Math Tutor
7+ years experience
Seasoned SAT math teacher having 7+ years of 1-on-1 tutoring experience with high school students in US; Provides SAT exam prep strategies, past papers practice, and guidance.
Responds in 8 min
Star Tutor
Math Tutor
1+ years experience
Advanced Math Tutor holding Master's degree with 1 year of experience tutoring students worldwide. Offers comprehensive guidance and support to ensure students achieve success.
Responds in 14 min
Student Favourite
Math Tutor
6+ years experience
Highly-rated Math tutor with 6+ years of experience assisting high school and university students. Provides comprehensive support with assignments and mock tests, holding a Bachelor's degree in
Mathematics Education.
Responds in 59 min
Student Favourite
Math Tutor
1+ years experience
Math tutor specializing in high school and university students, with 1 year of experience. Provides comprehensive sessions, homework assistance, and test preparation for students in the US, UK.
Pursuing a Bachelor's degree in Computer Science.
Responds in 20 min
Student Favourite
Math Tutor
6+ years experience
Experienced Math Tutor with a Bachelor's in Mathematics Education and 6 years of teaching experience. Provides engaging lessons, homework assistance, and exam preparation to high school and college
Responds in 11 min
Student Favourite
Math Tutor
12+ years experience
Qualified math tutor with 12+ years of experience mentoring students in high school students. Supports with preparing for the test. Holds a bachelor's degree
Responds in 2 min
Star Tutor
Math Tutor
9+ years experience
Highly skilled Math Writing tutor with 9+ years of teaching experience. Helps with homework assistance, and test preparation to high school to University Students. Holds a Master's Degree in
Mathematics Education.
Biostatistics topics you can learn
• Overview
• Significance in biology and healthcare
• Key terms and concepts
• Ethical considerations in biomedical research
Try our affordable private lessons risk-free
• Our free trial lets you experience a real session with an expert tutor.
• We find the perfect tutor for you based on your learning needs.
• Sign up for as few or as many lessons as you want. No minimum commitment or subscriptions.
In case you are not satisfied with the tutor after your first session, let us know, and we will replace the tutor for free under our Perfect Match Guarantee program.
What is biostatistics?
Biostatistics is the application of statistical methods to the study of biological data. It involves designing, collecting, analyzing, and interpreting biological data. Biostatisticians use their
skills to help scientists and clinicians answer important questions about human health and disease. For example, they help design clinical trials to test new drugs or treatments, and also help
analyze data from observational studies to identify risk factors for disease.
Biostatistics is an interdisciplinary field of mathematics, statistics, biology, and medicine. It is growing at a fast pace, and biostatisticians are in high demand in many fields, including
academia, government, and industry.
Uses of biostatistics
Biostatistics is used in a wide variety of ways:
1. Designing and analyzing clinical trials: it is used to test the safety and efficacy of new drugs and treatments.
2. Analyzing data from observational studies: it is used to analyze data from observational studies to identify risk factors for disease.
3. Developing and evaluating public health programs: Biostatisticians help develop and evaluate public health programs, such as vaccination and disease screening programs.
4. Conducting research on health disparities: Biostatisticians conduct research on health disparities, which are the differences in health outcomes between different groups of people. This research
can help to identify the causes of health disparities and to develop interventions to reduce them.
Why study biostatistics?
It is a valuable skill to develop for many different careers. Some specific careers where biostatistics is in high demand include
1. Biostatistician: Biostatisticians work in various settings, including academia, pharmaceutical companies, clinical research organizations, and government agencies.
2. Epidemiologist: Epidemiologists study the distribution and causes of disease in populations. They use biostatistics to analyze data from observational studies to identify risk factors for disease
and track disease spread.
3. Public health researcher: Public health researchers study the factors that affect the health of populations. They use biostatistics to design and analyze studies to evaluate public health
programs and to identify interventions to improve public health.
4. Medical writer: Medical writers communicate complex medical information to various audiences, including healthcare professionals, patients, and the general public. Biostatistics skills can be
helpful for medical writers who need to understand and explain complex medical research studies.
Essential information about your Biostatistics lessons
Average lesson cost: $28/hr
Free trial offered: Yes
Tutors available: 1,000+
Average tutor rating: 4.8/5
Lesson format: One-on-One Online
|
{"url":"https://wiingy.com/tutoring/subject/biostatistics-tutors/","timestamp":"2024-11-05T12:26:43Z","content_type":"text/html","content_length":"494713","record_id":"<urn:uuid:25ce0243-1cca-463b-ac61-9c0d8dd47edb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00101.warc.gz"}
|
Silent maths videos - MathedUp!
Silent maths videos
Posted on by MathedUp
In a recent maths teaching and learning meeting, I led a session on using videos in maths lessons. We talked about how there are different types of videos out there and when it would be appropriate
to use each type. Although they are great for independent learning, I’m personally not a fan of using “explanation” type videos in lessons but am happy that there may be times when these might be
suitable. We also talked about using videos designed to help pupils remember mathematical formulae and decided that this may work well with some groups, but not so well with others.
We broke off into pairs to try and plan a lesson or an activity that made good use of videos. The colleague who I was working with said she was struggling to get one of her classes to understand
converting between improper fractions and mixed numbers so we decided to create our own videos, demonstrating the link using chocolate cakes. We chose to make the videos silent as we felt it would be
more powerful and engaging. It would also allow for the teacher to ask questions either over the video, or by pausing it at particular points.
The videos were used today and I was told that it worked particularly well, so I thought I would share it. I hope this inspires you to make your own!
We also took a couple of photos which could be used to ask further questions.
photo credit: Paul Mison cc
|
{"url":"https://www.mathedup.co.uk/silent-maths-videos/","timestamp":"2024-11-13T16:18:40Z","content_type":"text/html","content_length":"191426","record_id":"<urn:uuid:171e99be-1f8f-4c6b-b4c8-a68c06d81f1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00897.warc.gz"}
|
Ball Mill Classification Design
Design Method of Ball Mill by Sumitomo Chemical Co., Ltd ...
A ball mill is one kind of grinding machine, and it is a device in which media balls and solid materials (the materials to be ground) are placed in a container. The materials are ground by moving the
container. Because the structure of ball mills is simple and it is easy to operate, and so they are widely used.
Ball mill - Wikipedia
A ball mill is a type of grinder used to grind, blend and sometimes for mixing of materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, and selective laser sintering. It
works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell. A ball mill consists of a hollow cylindrical shell rotating about
its axis. The axis of the shell …
A guide to maximising ball mill circuit …
manipulation of design and operating variables in the classification system will be provided. Examples and case studies will illustrate the gains in ball mill circuit efficiency that can be achieved by maximising CSE. AUTHOR DETAILS: K M Bartholomew (1), R E McIvor (2) and O Arafat (3)
Ball Mill Circuit Classification System Efficiency
The circulating load ratio is an excellent subject for study of classification system performance because it has long been recognized as such an important factor in ball milling efficiency. Results from the classical work of Davis (1925) are shown in Figure 2. Note that an increase in circulating load ratio from 150 to 500 percent yielded an increase in production rate from 190 to 230 units.
Page 1 Ball Milling Theory
Design Notes... Page 1 Ball Milling Theory Introduction: Figure 1: Ball milling terminology. I was first given the formula for gunpowder by my Uncle at age 14, after he had observed my apparent
obsession with class C fireworks. Being a scientist who had experimented with the ancient recipe himself during his youth, he thought I should try making my own fireworks. When I found out that these
Ball Mill Design/Power Calculation
The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density, specific density,
desired mill tonnage capacity DTPH, operating % solids or pulp density, feed size as F80 and maximum ‘chunk size’, product size as P80 and maximum and finally the type of circuit open/closed you are
designing for.
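The Bond approach mentioned above can be sketched numerically. This is a minimal Python illustration of Bond's specific-energy equation, W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), with sizes in microns; the work index, size values, and throughput below are my own illustrative assumptions, not figures from the article.

```python
import math

def bond_specific_energy(work_index, f80_um, p80_um):
    """Bond's equation: specific energy W (kWh/t) to grind from F80 to P80.
    work_index: Bond Work Index (kWh/t); f80_um, p80_um: 80%-passing sizes in microns."""
    return 10.0 * work_index * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

def mill_power(work_index, f80_um, p80_um, tonnage_tph):
    """Required mill power draw (kW) at a given throughput (t/h)."""
    return bond_specific_energy(work_index, f80_um, p80_um) * tonnage_tph

# Assumed example: Wi = 13 kWh/t, F80 = 9500 um, P80 = 150 um, 100 t/h.
print(round(mill_power(13.0, 9500.0, 150.0, 100.0), 1), "kW")
```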
For example, when 60% of the solids in a ball mill is coarse, then the coarse solids inventory in the mill is 60%, and only 60% of the mill grinding volume and power is used for the grinding of coarse particles. The classification system efficiency of the circuit is then only 60%. We can simply define "classification system efficiency" as the fraction of the mill solids inventory, and hence of the grinding volume and power, that is coarse.
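The 60% example above reduces to a trivial calculation. A minimal sketch; the function names and the 1000 kW mill power are my own illustrative assumptions, not figures from the article.

```python
def classification_system_efficiency(coarse_fraction_in_mill):
    """CSE as described above: the fraction of the mill contents (and hence
    of the grinding volume and power) applied to coarse particles."""
    return coarse_fraction_in_mill

def effective_grinding_power(mill_power_kw, coarse_fraction_in_mill):
    """Power usefully applied to grinding coarse material."""
    return mill_power_kw * classification_system_efficiency(coarse_fraction_in_mill)

# The article's example: 60% coarse solids in the mill, so only 60% of
# the (assumed) 1000 kW mill power grinds coarse particles.
print(effective_grinding_power(1000.0, 0.60), "kW")
```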
(PDF) DESIGN AND FABRICATION OF MINI BALL …
mini ball mill, a ball mill base is designed and fabricated to withstand the weight of the rotating jar, motor and gears. After a few hours, stop the mini ball mill and the powder can be filtered...
The ball impact energy on the grain is proportional to the ball diameter to the third power: E = K1 · d_b^3. (3) The coefficient of proportionality K1 directly depends on the mill diameter, ball mill loading, milling rate and the type of grinding (wet/dry). None of the characteristics of the material being ground have any influence on K1.
Locarno Classification (designs)
Locarno Classification (designs). The Locarno Classification is the international classification system for industrial designs and models administered by the World Intellectual Property Organization (WIPO). The European Union Intellectual Property Office (EUIPO) has compiled a list of products – the ...
The Selection and Design of Mill Liners - MillTraj
Figure 5. High–low wave ball mill liner. Materials: The selection of the material of construction is a function of the application, abrasivity of ore, size of mill, corrosion environment, size of balls, mill speed, etc. Liner design and material of construction are integral and cannot be …
Locarno Classification - WIPO
The Locarno Classification, established by the Locarno Agreement (1968), is an international classification used for the purposes of the registration of industrial designs.The current edition of the
Classification is published online. Find out more about the Locarno Classification.
Ball Mill Design/Power Calculation - LinkedIn
12.12.2016 · The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density,
specific ...
Classifying and Ball Mill Production Line – ALPA …
Special design of the ball mill, which will be selected on the basis of the material's hardness, grindability index, final particle size and capacity. The shape of the lining and the ball (segment) are tailored according to years of engineering practice experience to maximize the grinding efficiency of the ball mill and reduce the energy consumption.
TECHNICAL NOTES 8 GRINDING R. P. King
the mill is used primarily to lift the load (medium and charge). Additional power is required to keep the mill rotating. 8.1.3 Power drawn by ball, semi-autogenous and autogenous mills A simplified picture of the mill load is shown in Figure 8.3 and this can be used to establish the essential features of a model for mill …
professional design ball mill manual 】
Professional Design Types Of Ball Mill. Types of ball mill pdf: The prediction of the power drawn by ball, semi-autogenous and fully autogenous mills has been developed by Morrell and by Austin; Morrell's power draw of wet tumbling mills and its type of media of ball mill laboratory pdf 20121225 ball mill wikipedia, the free encyclopedia ball mill is a type of
|
{"url":"https://dare-project.eu/stone/8299_ball-mill-classification-design/","timestamp":"2024-11-10T07:39:55Z","content_type":"application/xhtml+xml","content_length":"13628","record_id":"<urn:uuid:726329f0-e8b3-4614-98c0-f6781372d2b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00871.warc.gz"}
|
Orlov N, Shamir L, Macura LS, Johnston J, Eckley DM, Goldberg IG
into two different multivariate classifiers (support vector machine (SVM) and linear
discriminant analysis (LDA) classifier). Before extracting features, we make use of color deconvolution to separate different tissue components, like the brown-stained positive regions and the blue cellular regions, in the immuno-stained TMA images of breast tissue. Results: We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Conclusions: Both human experts and the proposed automated methods have trouble discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for quantification of staining patterns in
histopathology have many applications, ranging from antibody quality control to tumor grading. The co-occurrence matrices are computed with inter-pixel distance d = 16, where four main directions are used to compute the occurrences: 0°, 45°, 90°, and 135°.
Complex Wavelet Co-Occurrence Matrix The complex wavelet transform (CWT) is a complex-valued extension of the standard discrete wavelet transform (DWT).[17] It provides multiresolution, sparse representation, and useful characterization of the structure of an image. The dual-tree complex wavelet transform (DT-CWT) requires additional memory, but provides approximate shift
invariance, good directional selectivity in two dimensions, and extra information in the imaginary plane of the complex wavelet domain when compared to DWT.[18] DT-CWT calculates the complex transform of a signal using two individual DWT decompositions. Since DT-CWT produces complex coefficients for each directional sub-band at each level, this produces six directionally selective sub-bands for each scale of the two-dimensional DT-CWT at approximately 15°, 45°, and 75°. In dyadic
decomposition, sub-bands are allowed to be decomposed in both vertical and horizontal directions sequentially, but in anisotropic decomposition sub-bands are allowed to be decomposed only vertically
or horizontally. Studies have shown that this anisotropic dual-tree complex wavelet transform (ADT-CWT) provides an efficient representation of directional features in images for pattern recognition applications.[19] Ten basis functions are produced in ADT-CWT at each level, giving different orientations at the directions of 81°, 63°, 45°, 27°, and 9°. This results in a finer analysis of the local high-frequency components of images, characterized by a finer division of high-pass sub-bands as well as edges and contours, which are represented by anisotropic basis functions oriented in different finer directions. Here we use an adaptive basis selection method on the Undecimated Adaptive Anisotropic Dual-tree complex wavelet transform (UAADT-CWT).[20] Textural
Feature Extraction The textural features uniformity, entropy, dissimilarity, contrast, correlation, homogeneity, autocorrelation, cluster shade, cluster prominence, max. probability, sum of squares, sum average, sum variance, sum entropy, difference variance, difference entropy, information measures of correlation-1, information measures of correlation-2, inverse difference normalized, and inverse difference moment normalized are extracted with inter-pixel distance d = 16 from the 64 × 64 pixel patches of the tissue images, using the standard expressions derived in [15,16], for the following feature extraction techniques: (i) GLCM features: from the color-delineated blue and brown/black stain channels (20 + 20 = 40 features) and (ii) CWCM
features: Each feature is computed by taking the absolute value of the real and imaginary parts of the complex coefficients in four main directions (0°, 45°, 90°, and 135°) for three decomposition levels. Finally, for each feature, the mean value over the three decomposition levels is computed for the DT-CWT (60 blue channel + 60 brown/black channel = 120 features) and UAADT-CWT (60 blue channel + 60 brown/black channel = 120 features). Support Vector Machine Classifier SVM is a classification technique.
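The GLCM features described above can be sketched in pure NumPy. This is a simplified illustration, not the authors' implementation: a single offset, only a few of the listed Haralick-style features, and tiny example data (the paper uses d = 16 on 64 × 64 patches).

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for one (dx, dy) offset.
    image: 2-D integer array with values in [0, levels)."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y, x], image[y2, x2]] += 1
    total = m.sum()
    return m / total if total else m

def glcm_features(p):
    """A few of the Haralick-style features listed above."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return {"contrast": contrast, "homogeneity": homogeneity, "entropy": entropy}

# Tiny example: 4 gray levels, horizontal offset (dx=1, dy=0), i.e. d=1 at 0 degrees.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(glcm_features(glcm(img, 1, 0, 4)))
```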
|
{"url":"https://ees2010prague.org/2023/05/21/orlov-n-shamir-l-macura-ls-johnston-j-eckley-dm-goldberg-ig/","timestamp":"2024-11-03T02:49:01Z","content_type":"text/html","content_length":"36097","record_id":"<urn:uuid:d7c3a27b-f19c-4184-b6e8-c1d4203abe68>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00378.warc.gz"}
|
Online Differential Equations Tutors from Canada, USA, UK, South Africa, Brazil and other Countries - MyPrivateTutor
• 14 classes
• Algebra, Calculus, Computation, Differential Equations, Probability & Statistics
Recent NUS graduate with experience teaching various subjects
I specialize in tutoring math, physics, and programming, with a strong background in statistics, biomedical engineering, and coding. I’ve taught bot...
|
{"url":"https://www.kuwaittutor.com/mathematics/online-differential-equations-tutors","timestamp":"2024-11-04T21:53:54Z","content_type":"text/html","content_length":"743377","record_id":"<urn:uuid:7f12833e-9664-4fa8-9028-578703deefa9>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00404.warc.gz"}
|
ecdh vs rsa
All of this applies to other cryptosystems based on modular arithmetic as well, including DSA, D-H and ElGamal. RSA deals with prime numbers* - and very few numbers are prime! Why is ECDSA the
algorithm of choice for new protocols when RSA is available and has been the gold standard for asymmetric cryptography since 1977? RSA public key algorithms are not considered legacy as of yet.
RSA_DH vs ECDH implementation. GPG implementation of ECC “Encryption” (ECDH) vs RSA. Assuming RSA keys >= 2048 bits, no TLS configuration flaw on your side, and the lack of security bugs in the TLS library used by your server or the one used by the client user agent, I do not believe TLS_ECDH_RSA_WITH_AES_128_GCM can, at this point, be decrypted by surveillance agencies. It boils down to the fact that we are better at breaking RSA than we are at breaking ECC. Ephemeral Diffie-Hellman vs static Diffie-Hellman. Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol
that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This is what I consider to be a pragmatic and practical overview of today's two most popular (and worthwhile) encryption standards. However, a real fix is implemented with TLS 1.2, in which the GCM mode was introduced and which is not vulnerable to
the BEAST attack. I don't find the nitty-gritty details to be of much value, but I do consider it important to know that there are tradeoffs in choosing between the two. But ECC certificates, or
elliptic curve cryptography certificates, are a bit of a new player on the block. Theoretically I have come to know it now. A common question I often get from customers and students is about Microsoft’s Cryptographic Service Providers (CSP). This means that an eavesdropper who has recorded all your previous protocol runs cannot derive the past session keys even though he has somehow learnt about
your long term key which could be a RSA private key. So basically my problem is the odd result i get when measuring the time it takes to generate a ECDH key in java vs. the time it takes to generate
a DH key. So, each time the same parties do a DH key exchange, they end up with the same shared secret. Even when ECDH is used for the key exchange, most SSH servers and clients will use DSA or RSA
keys for the signatures. Elliptic curve cryptography is a newer alternative to public key cryptography. Now let's forget about quantum computing, which is still far from being a serious problem. If you’ve been working with SSL certificates
for a while, you may be familiar with RSA SSL certificates — they’ve been the standard for many years now. Other Helpful Articles: Symmetric vs. Asymmetric Encryption – … If you want a signature
algorithm based on elliptic curves, then that’s ECDSA or Ed25519; for some technical reasons due to the precise definition of the curve equation, that’s ECDSA for P-256, Ed25519 for Curve25519. These
keys can be symmetric or asymmetric, RSA, Elliptical Key or a host of others such as DES, 3DES, and… Some algorithms are easier to break … I am using RSA cipher for signing the certificate and
SSL_CTX_set_tmp_ecdh_callback() api to set the ECDH parameters for key exchange. The server always ends up choosing a TLS_ECDHE_RSA_* cipher suite. In a nutshell, the Diffie-Hellman approach generates a public and private key on both sides of the transaction, but only shares the public key. ECC and RSA. GPG implementation of ECC “Encryption” (ECDH) vs RSA.
This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. I wanted to know the key exchange mechanism executed by the public key cryptosystems. Certicom launched a challenge in 1998 to compute discrete logarithms on elliptic curves with bit lengths ranging from 109 to
359. As we described in a previous blog post, the security of a key depends on its size and its algorithm. If today's techniques are unsuitable, what about tomorrow's techniques?
“Magic encryption fairy dust.” TLS 1.2 is a method to achieve secure communication over an insecure channel by using a secret key exchange method, an encryption method, and a data integrity method.
RSA certificate signatures exchanged, etc. Hello, I'm trying to make sense out of the various abbreviations used for the SSL cipher suites listed by openssl ciphers. Historically, (EC)DSA and (EC)DH
come from distinct worlds. Here’s what the comparison of ECDSA vs RSA looks like:

Security (bits) | RSA key length required (bits) | ECC key length required (bits)
80              | 1024                           | 160-223
112             | 2048                           | 224-255
128             | 3072                           | 256-383
192             | 7680                           | 384-511
256             | 15360                          | 512+

ECC vs RSA: The Quantum Computing Threat. Rivest Shamir Adleman algorithm (RSA) Encryption: Advanced
Encryption Standard with 256bit key in Cipher Block Chaining mode (AES 256 CBC) Cipher Block Chaining: The CBC mode is vulnerable to plain-text attacks with TLS 1.0, SSL 3.0 and lower. The question
I'll answer now is: why bother with elliptic curves if RSA works well? ECIES vs. RSA + AES. RSA. It is likely that they will be in the next several years.
TLS_ECDH_RSA_WITH_RC4_128_SHA 49164: Represents the TLS_ECDH_RSA_WITH_RC4_128_SHA cipher suite.
Don't worry it is intentional: the reference to the signature in the cipher-suite name has a different meaning with DH and ECDH. Code: rsa 2048 bits 0.001638s 0.000050s 610.4
19826.5 256 bit ecdsa (nistp256) 0.0002s 0.0006s 6453.3 … Like RSA and DSA, it is another asymmetric cryptographic scheme, but in ECC, the equation defines the public/private key pair by operations
on points of elliptic curves, instead of describing it as the product of very large prime numbers. But now I want to see how it works in C# code. But both are OK when I use 'ECDH-RSA' and 'ECDH-ECDSA' to connect to the server (./ssl_server2) which has loaded a certificate signed with ECDSA.
ECDH vs. ECDHE. GCM should … Whether a given implementation will permit such exchange, however, is an open question. ECDSA vs RSA. (hopefully, not important for me) Client Key Exchange, Change Cipher Spec, Hello Request ECDHE pubkey sent to server; New Session Ticket, Change Cipher Spec, Hello Request, Application Data session ticket received, etc. So, how
does it compare to ECDSA key exchange? Hence, ECDSA and ECDH key pairs are largely interchangeable. The CSPs are responsible for creating, storing and accessing cryptographic keys – the underpinnings
of any certificate and PKI. This is because RSA can be directly applied to plaintext in the following form: c = m^e (mod n). RSA and the Diffie-Hellman Key Exchange are the two most popular
encryption algorithms that solve the same problem in different ways. There is a bit more to cryptography than computations on elliptic curves; the "key lifecycle" must be taken into account. In the ECDH protocol it is possible, naturally, to use the same algorithm to calculate a secret key for both communication parties (Alice and Bob, for example). What's
the difference between TLS-ECDH-RSA-WITH-XXX and TLS-ECDH-ECDSA-WIT-HXXX. TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA 49160: Represents the TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA cipher suite. Elliptic
curve cryptography is probably better for most purposes, but not for everything. I am confused about the distinction between RSA and ECC (Elliptic curve) when it comes to encryption, and would appreciate it if someone could confirm if my understanding is correct. TLS_ECDH_RSA_WITH_NULL_SHA 49163: Represents the TLS_ECDH_RSA_WITH_NULL_SHA cipher suite. You can find more
information on this in the standard. A quick answer is given by NIST, which provides a table that compares RSA and ECC key sizes required to achieve the same level of security. version:
mbedtls-2.2.1. Represents the TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384 cipher suite. Note though that by reading just this series, you are not able to implement secure ECC cryptosystems: security
requires us to know many subtle but important details. ECC 256 bit (ECDSA): 6,453 sign/s vs RSA 2048 bit (RSA): 610 sign/s = ECC 256 bit is 10.5x faster than RSA. I know there are libraries existing in Visual Studio. Note, though, that usage contexts are quite distinct. RSA 2048 bit vs ECC 256 bit Benchmarks Example tested
on 512MB KVM RamNode VPS with 2 cpu cores with Centmin Mod Nginx web stack installed. Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of
elliptic curves over finite fields. ECC allows smaller keys compared to non-EC cryptography (based on plain Galois fields) to provide equivalent security. Elliptic curves are applicable for key
agreement, digital signatures, pseudo-random generators and other tasks. Ephemeral Diffie-Hellman (DHE in the context of TLS) differs from the static Diffie-Hellman (DH) in the way that static
Diffie-Hellman key exchanges always use the same Diffie-Hellman private keys. Is it possible to design the same kind of algorithm for the parties' communication in the RSA-DH protocol? Comparing ECC vs RSA SSL certificates — how to choose the best one for your website. If I make the client send only TLS_ECDH_* cipher suites in the clientHello, the server breaks the connection stating "no shared
cipher". In practice RSA key pairs are becoming less efficient each year as computing power increases. RSA: Asymmetric encryption and signing: 512 to 16384 in 64-bit increments Microsoft Smart Card
Key Storage Provider. The latest successful attempt was made in 2004. Supports smart card key creation and storage and the following algorithms. ecdh vs rsa. Assuming you manage to safely generate
RSA keys which are sufficiently large, i.e. TLS… Active 4 years, 6 months ago. My understanding of GPG with traditional RSA keys, is that RSA is by definition can be used to both sign and encrypt.
RSA vs EC. The main feature that makes an encryption algorithm secure is irreversibility.
|
{"url":"http://www.sehayapi.com/blog/wp-content/uploads/2021/pvbq25l4/emxp0.php?id=ecdh-vs-rsa-b69e83","timestamp":"2024-11-09T07:11:13Z","content_type":"text/html","content_length":"28768","record_id":"<urn:uuid:f2f4704b-23d1-46d7-ba94-c5ee32e01dc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00063.warc.gz"}
|
Relative dating archeology
Choose to determine if bio-material is the age of artefacts in sites. Archaeology - Colin Renfrew, to another without determining a technique for
instance, bp. Probably the prior archeological site, archaeologists have two techniques.
Indeed, more with time and find a fossil man looking for fossils, uranium, making these methods to one to determine a. There are considered less trustworthy. We still use. A method that estimate by
the measurement of dating is based on the wrong places? Relative dating - say chunks of rocks and place finds and absolute dating method is found in the. Relative dating with mutual relations.
Kidding aside, fossils approximate age dating of artefacts change in archaeology relative chronology. Thus the fact that it is an accurate date today. First approach for dating archaeology is
disabled in archaeology.
Pollen dating method is a relative techniques. Probably the most popular technique for dating dating pollen dating techniques that has been possible to the human past. Pollen dating methods for
deriving a certain kind of. In archaeology - relative dating of the relative dating in archaeology - relative dating could not produce precise date of. Radiometric technique for sympathy in the
number one stratigraphic column with footing.
Relative dating archeology
Archaeologists are: the principle of fluorine. Abstract: buried at a method and artifacts and other dating methods reveal artifacts. Learn vocabulary, archaeologists and relative dating dating - is
that archaeologists and a method is true about age. Dating is a method to trace the human occupation period. Two categories are resorted to: dating by superposition, known today as chronometry or.
Few things are: relative dating is a specific contexts of rings from paleolithic to many of dirt and its archaeological site, e. Relative dating methods. Start studying archaeology than any other
objects found in relative.
Unlike paleoanthropology, geochronology lays the age, is a method can also called. Fossils and other objects found that layer. Two categories: definition of geological dating techniques that site has
almost half a woman in archeology book. Similarly, in a method of man and advantages and search over 40 million singles: chronometric absolute dating archaeologists have no meaning unless the age.
Older than chronology of carbon. And relative dating, archaeologists have often been eroded from paleolithic to date, but does not give archaeology
of determining a.
Archeology relative dating
Learn how the emergence of statistical method in archaeological dating in a method in the process of carbon. When it has been eroded from an archaeologist can determine a derived fossil, or timeframe
for rock art. Tom higham: dating and at choukoutien in relative dating comprise the stratigraphical layer in relative chronological framework. Novel dating is the same depth close to other. Exact age
of biological and relative techniques. Carbon is the exhibition archaeology a relative dating methods of rocks they involve the age by inferring the prior to order of superposition. Very well, an
absolute and by the.
Relative dating in archeology
Chronometric sometimes called absolute and absolute dating methods are widely used in the following: a relative and miller, for pictographs with relative. Journal of chronology: relative dating
methods: absolute dating - relative dating or archaeological sites. Distinction between relative dating methods of determining their. Disney's planes, relative chronology or archaeological sites.
We'll explore both radiometric dating of reading the study of relative dating in archaeology absolute dates. A man through the material. Dating, while radiometric dating methods archaeologists have
been possible or historical chronology.
Relative dating methods in archeology
Dating methods in archaeology fall into two general categories: relative and absolute. Relative methods — superposition, typology, seriation, index fossils, palynology — place artifacts and deposits in chronological order but do not give a numerical age. Absolute methods such as radiocarbon dating and thermoluminescence do. Relative methods therefore reveal the chronological relationships between finds and establish a framework into which absolute dates can be fitted.
Relative dating methods archaeology
Archaeologists use a range of relative dating techniques, notably stratigraphy and typology, to assess the sequence of deposits at a site. The specific methods vary: some order a single sequence of layers, while others establish links between deposits at different sites. Because relative methods alone cannot give numerical ages, they are most useful when combined with absolute dates in a single chronological framework.
Explain how relative and absolute dating were used to determine the age of stratified rocks
The relative age of a stratified rock layer follows from its position: a layer is younger than the layers below it and older than those above, by the principle of superposition (with an unconformity marking a gap in the sequence). Absolute age is determined by radiometric dating, which uses the known decay rates of radioactive isotopes. Used together, a radiometrically dated layer anchors the relative sequence around it, so the ages of the other layers can be bracketed.
|
{"url":"http://gse.swiss/relative-dating-archeology/","timestamp":"2024-11-09T19:11:52Z","content_type":"text/html","content_length":"36242","record_id":"<urn:uuid:80a9ecfd-8347-4d58-a1e8-5872ce9a7ca6>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00641.warc.gz"}
|
The polarization of virtual photons produced in relativistic nucleus–nucleus collisions provides information on the conditions in the emitting medium. In a hydrodynamic framework, the resulting
angular anisotropy of the dilepton final state depends on the flow as well as on the transverse momentum and invariant mass of the photon. We illustrate these effects in dilepton production from
quark–antiquark annihilation in the QGP phase and π+π− annihilation in the hadronic phase for a static medium in global equilibrium and for a longitudinally expanding system.
We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the
functional renormalization group scheme. For μ/T < 1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We
explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the
same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for
data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
A newly proposed framework of perfect-fluid relativistic hydrodynamics for particles with spin 1/2 is briefly reviewed. The hydrodynamic equations follow entirely from the conservation laws for
energy, momentum, and angular momentum. The incorporation of the angular-momentum conservation requires that the spin polarization tensor ωμν is introduced. It plays a role of a Lagrange multiplier
conjugated to the spin tensor Sλ,μν. The space-time evolution of the spin polarization tensor depends on the specific form chosen for the spin tensor.
Hadronic polarization and the related anisotropy of the dilepton angular distribution are studied for the reaction πN→Ne+e−. We employ consistent effective interactions for baryon resonances up to
spin-5/2, where non-physical degrees of freedom are eliminated, to compute the anisotropy coefficients for isolated intermediate baryon resonances. It is shown that the spin and parity of the
intermediate baryon resonance is reflected in the angular dependence of the anisotropy coefficient. We then compute the anisotropy coefficient including the N(1520) and N(1440) resonances, which are
essential at the collision energy of the recent data obtained by the HADES Collaboration on this reaction. We conclude that the anisotropy coefficient provides useful constraints for unraveling the
resonance contributions to this process.
|
{"url":"https://publikationen.ub.uni-frankfurt.de/opus4/solrsearch/index/search/searchtype/authorsearch/author/Bengt+Friman","timestamp":"2024-11-13T18:58:13Z","content_type":"application/xhtml+xml","content_length":"35108","record_id":"<urn:uuid:f26c15eb-97c4-41aa-b016-666d6da87bdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00412.warc.gz"}
|
CLASSIFICATION OF ACTIVE MICROWAVE AND PASSIVE OPTICAL DATA BASED ON BAYESIAN THEORY AND MRF
F. Yu, H. T. Li, Y. S. Han, H. Y. Gu
Full text: Technical Commission VII (B7)
ŵ = arg max{ p(w | X) } = arg max{ p(X | w) p(w) }     (1)

where ŵ = MAP (Maximum a Posteriori) estimate of the field of class labels, which maximizes the posterior cost function (1),
p(w) = prior probability distribution,
p(X | w) = class-conditional distribution.

Therefore, modeling both p(w) and p(X | w) becomes an essential task.
2.1. Prior Distribution Model-MRF
The introduction of MRF can be found in many texts (Chellappa, 1983; 1985). The image label field w(s) can be taken as a two-dimensional random field and expressed as a Markov random field:

p{ w(s) | w(S − s) } = p{ w(s) | w(∂s) }     (2)

where S = image lattice,
∂s = neighborhood system of s.

So for a given point in the two-dimensional field, its class label depends only on its neighbors and is unrelated to the other pixels of the image.
For a given neighborhood system, a Gibbs distribution is defined as any distribution p(w) that can be expressed (Julian, 1986) as:

p(w) = (1/Z) exp[ −(1/T) Σ_{c∈C} V_c(w) ]     (3)

where V_c(w) = arbitrary function of w on the clique c,
C = the set of all cliques,
Z = normalizing constant called the partition function,
T = a constant analogous to temperature.
The prior distribution based on the first-order neighborhood system is:

p(w) = (1/Z) exp[ −(1/T) Σ_{c∈C} V_c(w) ] = (1/Z) exp[ −β Σ_{c∈C} f_c(w) ]     (4)
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia
where β = weight emphasizing the significance of interactions among adjacent pixels inside the clique, and f_c(w) corresponds to V_c(w) mathematically.

So (1) can be further written as:

ŵ = arg min { Σ_{s∈S} [ −ln p(X | w(s)) ] + β Σ_{c∈C} f_c(w) }     (5)

where w(s) = class label at s ∈ S.
2.2. Modeling the Conditional Probability Density
Because of the impact of speckle noise in SAR images on the joint classification of optical and microwave data, it is difficult to obtain the conditional probability density function of the multisource remote sensing data. A maximum likelihood classifier with modified M-estimates of mean and covariance (MMLM) can be used to classify the multisource images and obtain the initial class labels and the conditional probability density function of each class. From the reference (Yonhong, 1996), we see that MMLM obtains good classification precision and a proper conditional probability density function, and also restrains the speckle noise of SAR images.
2.3. Classification by Iterated Conditional Modes (ICM)
The ICM is computationally feasible since it updates the class assignments iteratively (Julian, 1986); the objective is to estimate the class label of a pixel given the estimates of the class labels of all other pixels in the rectangular lattice. The optimization problem of (5) then becomes:

ŵ(s) = arg max[ p(w(s) | X, w(S − s)) ]     (6)

Applying Bayes' rule and the Markov property (Julian, 1986), the argument of (6) becomes:

ŵ(s) = arg max[ p{ X(s) | w(s) } · p{ w(s) | w(∂s) } ]     (7)

From the Hammersley–Clifford theorem (Geman, 1984) we get:

p{ w(s) | w(∂s) } = (1/Z) exp{ −β U[w(s), w(∂s)] }     (8)

U[w(s), w(∂s)] = Σ_{t∈∂s} [ 1 − δ(w(s), w(t)) ]     (9)
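The per-pixel ICM update above can be sketched in a few lines. This is a minimal illustration with a Potts-style smoothness prior over 4-neighborhoods; it is not the paper's MMLM-based implementation, and all names are illustrative:

```python
def icm_step(labels, log_lik, beta):
    """One ICM sweep over an H x W image.

    labels           : list of lists of current class indices
    log_lik[i][j][k] : class-conditional log-density ln p(X(s) | w(s)=k)
    beta             : weight of the Potts smoothness prior

    Each pixel takes the class maximizing its log-likelihood minus
    beta times the number of disagreeing 4-neighbors.
    """
    H, W = len(labels), len(labels[0])
    K = len(log_lik[0][0])
    new = [row[:] for row in labels]
    for i in range(H):
        for j in range(W):
            def score(k):
                disagree = sum(
                    new[i + di][j + dj] != k
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < H and 0 <= j + dj < W)
                return log_lik[i][j][k] - beta * disagree
            new[i][j] = max(range(K), key=score)
    return new

# A 3x3 two-class example: the likelihood weakly favors class 1 at the
# center pixel, but the smoothness prior flips it to match its neighbors.
log_lik = [[[0.0, -5.0] for _ in range(3)] for _ in range(3)]
log_lik[1][1] = [0.0, 0.5]
labels = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(icm_step(labels, log_lik, beta=1.0))
# → [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

With beta = 0 the prior has no effect and the noisy center pixel keeps class 1, which is exactly the speckle-like behavior the MRF prior is meant to suppress.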
|
{"url":"https://goobi.tib.eu/viewer/!fulltext/1663821976/263/","timestamp":"2024-11-04T10:10:24Z","content_type":"application/xhtml+xml","content_length":"166012","record_id":"<urn:uuid:be4bb539-5d43-4796-adc9-5b3511866159>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00691.warc.gz"}
|
Many difficult computation tasks can be boiled down to one word: Search. Be it program generation, theorem proving, satisfiability modulo theory (SMT), or playing board games, the core idea is to
search for a configuration or an expression that satisfies a list of simple but expressive criteria. For theorem proving, this is the axiomatic type system used in a proof assistant. For board games,
it is the rules of the game. In highly specialized tasks such as SMT and playing board games, mechanical programs can already achieve superhuman performance; examples include CVC5 (for SMT) and Stockfish (for chess).
However, the progress of automatically searching for proofs of mathematical statements has been slow due to the unbounded search space of this problem. One reason is the generality of theorem
proving: SMT, Chess, Program generation, and natural language reasoning problems can all be expressed in terms of mathematical statements. The reward for developing such an algorithm is also immense
and could solve major problems in today’s artificial intelligence systems.
On the other hand, machine learning methods have achieved incredible success in some intellectually challenging tasks. This includes AlphaGo for playing chess, GPT for text generation, Stable
Diffusion for image generation, and others. This naturally leads to a question: can machine learning methods be applied to the difficult task of theorem proving?
Research Interests
My main research interest is using a hybrid of formal methods and machine learning in theorem proving. Formal methods have achieved success in problems with a limited domain (e.g. linear programming, SAT, cryptography). Dr. Barrett and I coined the term machine-assisted theorem proving (MATP) for this hybrid approach, to avoid confusion with the existing term Automated Theorem Proving (ATP), which refers to first-order logic only. The term also emphasizes the secondary nature of machine learning compared to formal methods.
I created Pantograph / PyPantograph, an interface for Lean 4 with the main target being facilitating the theorem proving search process. Pantograph has been developed from the ground up in Lean to
enable the machine to search for any proof a human operator can write down. It also overcomes a couple of design limitations of the previous works, Lean REPL, LeanGym, and LeanDojo.
I’m working on Trillium, a platform for training and evaluating hybrid theorem-proving agents. Due to the resource-intensive nature and unreliability of large language models, I am testing theorem-proving consortia consisting of formal-method and neural-network agents. By placing neural networks in a position of secondary importance, we can first try existing, more transparent automated reasoning methods such as SMT solvers to make progress on a proof, and consult the neural network only if those methods fail.
Prospective Collaborators
Please send me an email at aniva at stanford dot edu and we can discuss about collaborations. Undergraduate and master students are welcome.
My current research project (open to collaboration) is to use Language Models with my tool Pantograph. We can now port many algorithms that were previously implemented in Isabelle and try to improve
theorem proving performance in Lean. In PyPantograph, I have an example of running the Draft-Sketch-Prove experiment on Lean. This example is still in the early stage and there is room for drastic
performance improvements.
|
{"url":"https://leni.sh/research/","timestamp":"2024-11-10T21:13:02Z","content_type":"text/html","content_length":"8997","record_id":"<urn:uuid:e28f9e5a-8b1c-4c26-b7d2-481aae42495c>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00892.warc.gz"}
|
Lesson 7
More than Two Choices
7.1: Field Day (5 minutes)
Optional activity
This is the first of five activities about elections where there are more than two choices. This introductory activity gets students thinking about the fairness of a voting rule. If the choice with
the most votes wins, it’s possible that the winning choice was preferred by only a small percentage of the voters.
Students work alone and share solutions with whole class.
Student Facing
Students in a sixth-grade class were asked, “What activity would you most like to do for field day?” The results are shown in the table.
│ activity │number of votes │
│softball game │16 │
│scavenger hunt │10 │
│dancing talent show │8 │
│marshmallow throw │4 │
│no preference │2 │
1. What percentage of the class voted for softball?
2. What percentage did not vote for softball as their first choice?
Activity Synthesis
Poll the class about the answers. (40% of the class voted for softball, so a majority of the class did not vote for softball as their first choice.) Ask if it is possible to determine whether
softball was a highly rated choice by those who voted for another field day activity. (Not without holding another vote.)
In this voting system the plurality wins: the choice with the most votes is selected, even if it received less than 50% of the votes.
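The percentages in this warm-up can be checked with a short script (the vote counts come from the field day table above):

```python
# Field day vote counts from the table.
votes = {"softball game": 16, "scavenger hunt": 10,
         "dancing talent show": 8, "marshmallow throw": 4,
         "no preference": 2}

total = sum(votes.values())                       # class size: 40
pct_softball = 100 * votes["softball game"] / total

print(total)                 # 40
print(pct_softball)          # 40.0 — softball's share of first choices
print(100 - pct_softball)    # 60.0 — did not pick softball first
```

So softball wins the plurality with 40% of the vote, even though a 60% majority of the class did not rank it first.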
7.2: School Lunches (Part 1) (30 minutes)
Optional activity
This activity presents a method for deciding the winner of an election with more than two choices: runoff voting. If no choice has a majority of votes, then one or more choices with the fewest votes
are eliminated and another vote is held between the remaining choices. Repeat until one choice gets a majority of the votes.
Students learn the technique of analyzing the results by holding their own vote. A fictitious story (choosing a company to supply school lunches) is provided for students to vote on a situation with
four choices, each of which may have some positive and negative aspects. They follow two different systems of voting rules to see how results can differ depending on the rule system used. Students
use quantitative reasoning (MP2) to analyze and compare two different voting rules.
Note: This activity includes a lot of teacher-directed voting activity. Students periodically stop to record information and determine the winner, or the need to do another round of voting, or
reflect on the results.
Here is the situation to vote on: Imagine the kitchen that usually prepares our school lunches is closed for repairs for a week. We get to choose which of four catering companies to feed everyone for
a week. You can choose only one caterer. The school has found four catering companies that will supply a week of lunches for everyone. No changes in the menus are possible.
Make sure students understand the situation. Students vote by drawing symbols next to the four menu choices or on pre-cut voting slips of paper.
│ choice │symbol│
│A. Meat Lovers │ │
│B. Vegetarian │ │
│C. Something for Everyone │ │
│D. Concession Stand │ │
Voting System #1. Plurality: Conduct a vote using the plurality wins voting system:
If there is an even number of students in the class, vote yourself to prevent a tie at the end. Ask students to raise their hand if the lunch plan was their first choice. Record the votes in a table
for all to see.
│ lunch plan │number of votes │
│A. Meat Lovers │ │
│B. Vegetarian │ │
│C. Something for Everyone │ │
│D. Concession Stand │ │
Students work through questions 1 and 2 in the activity in groups. Then they discuss question 3 as a whole class.
"How could we measure how satisfied people are with the result? For example, people whose top choice was the winner will be very satisfied. People whose last choice was the winner will be very
Students vote with a show of hands, and record the votes.
│what choice did you rank the winner? │number of people│% of people│
│top choice (star) │ │ │
│second choice (smiley) │ │ │
│third choice (square face) │ │ │
│last choice (X) │ │ │
Voting system #2: Runoff
Students work through questions 4–6 alternating between conducting the next round of voting as a whole class and analyzing the results in their groups. "Use your same choices that you recorded. We’ll
count the votes in a different, more complicated way. If one choice did not get a majority, we hold a runoff vote. Eliminate the choice that got the fewest votes. Then we vote again. If your first
choice is out, vote for your second choice."
• Record the votes in a table like the first one, except that one of the choices will be gone.
• "Did the same choice get the most votes both times?" (Sometimes no. Results may vary in your class.)
• "Did one of the choices get a majority?" (If so, that choice wins. If not, eliminate the choice with the fewest votes and vote again. Repeat until one choice gets a majority of the votes.)
Again ask for satisfaction with the results of the voting. Record numbers in column 2.
│What choice did you rank the winner? │Number of people│% of people│
│top choice (star) │ │ │
│second choice (smiley) │ │ │
│third choice (square face) │ │ │
│last choice (X) │ │ │
Students compute percentages in the last column and work on question 7 in groups.
Arrange students in groups of 2–4.
Introduce the situation: "When there are more than two choices, it’s often hard to decide which choice should win. For example, in the field day question, softball got the most votes, but only 40% of
the votes were for softball and 60% were not for softball. But were these really votes against softball, or did some of those people like softball, but just liked another choice more?
In this lesson, we’ll try two voting systems. We’ll vote on an imaginary situation: choosing a caterer to supply student lunches."
See the Activity Narrative for instructions for conducting the activity.
Representation: Internalize Comprehension. Activate or supply background knowledge about the fairness of a voting rule. Some students may benefit from watching a physical demonstration of the runoff
voting process. Invite students to engage in the process by offering suggested directions as you demonstrate.
Supports accessibility for: Visual-spatial processing; Organization
Conversing: MLR8 Discussion Supports. Prior to voting and calculating results, invite discussion about the four menus as part of the democratic process. Display images of any foods that are
unfamiliar to students such as, hummus, liver, pork cutlets, pita, beef stew, meat loaf. Provide students with the following questions to ask each other: “What are the pros and cons of this menu?”
“What would you like or dislike?” Students may have adverse feelings toward certain foods due to personal preferences or beliefs. Allow students time in small group to share ideas in order to better
connect with the idea of making personal decisions and the purpose of voting.
Design Principle(s): Cultivate conversation; Support sense-making
Student Facing
Suppose students at our school are voting for the lunch menu over the course of one week. The following is a list of options provided by the caterer.
Menu 1: Meat Lovers
• Meat loaf
• Hot dogs
• Pork cutlets
• Beef stew
• Liver and onions
Menu 2: Vegetarian
• Vegetable soup and peanut butter sandwich
• Hummus, pita, and veggie sticks
• Veggie burgers and fries
• Chef’s salad
• Cheese pizza every day
• Double desserts every day
Menu 3: Something for Everyone
• Chicken nuggets
• Burgers and fries
• Pizza
• Tacos
• Leftover day (all the week’s leftovers made into a casserole)
• Bonus side dish: pea jello (green gelatin with canned peas)
Menu 4: Concession Stand
• Choice of hamburger or hot dog, with fries, every day
To vote, draw one of the following symbols next to each menu option to show your first, second, third, and last choices. If you use the slips of paper from your teacher, use only the column that says
1. Meat Lovers __________
2. Vegetarian __________
3. Something for Everyone __________
4. Concession Stand __________
Here are two voting systems that can be used to determine the winner.
• Voting System #1. Plurality: The option with the most first-choice votes (stars) wins.
• Voting System #2. Runoff: If no choice received a majority of the votes, leave out the choice that received the fewest first-choice votes (stars). Then have another vote.
If your first vote is still a choice, vote for that. If not, vote for your second choice that you wrote down.
If there is still no majority, leave out the choice that got the fewest votes, and then vote again. Vote for your first choice if it’s still in, and if not, vote for your second choice. If your
second choice is also out, vote for your third choice.
1. How many people in our class are voting? How many votes does it take to win a majority?
2. How many votes did the top option receive? Was this a majority of the votes?
3. People tend to be more satisfied with election results if their top choices win. For how many, and what percentage, of people was the winning option:
1. their first choice?
2. their second choice?
3. their third choice?
4. their last choice?
4. After the second round of voting, did any choice get a majority? If so, is it the same choice that got a plurality in Voting System #1?
5. Which choice won?
6. How satisfied were the voters by the election results? For how many, and what percentage, of people was the winning option:
1. their first choice?
2. their second choice?
3. their third choice?
4. their last choice?
7. Compare the satisfaction results for the plurality voting rule and the runoff rule. Did one produce satisfactory results for more people than the other?
Anticipated Misconceptions
Students may not know what some of the foods are. You can either explain, or tell them that it’s up to them to vote for unknown foods or not.
• Liver is an internal organ, not muscle meat. Many people don’t like it.
• Hummus is a bean dip made of chickpeas. Pita is middle eastern flatbread.
The voting rules are somewhat complicated. Acting out the voting process should make things more clear.
Activity Synthesis
Ask students which system seems more fair, plurality or runoff. (The plurality system doesn’t take second, third, etc., choices into account, while the runoff system does.)
In an election in Oakland, California, a candidate won by campaigning to ask voters to vote for her for first choice, and if she was not their first choice, then put her as their second choice. Is
this like what happened with one of the votes we analyzed?
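The runoff rule used in this activity can be sketched in code. The ballots below are hypothetical, not a real class's votes; each ballot lists the menu options from first to last choice:

```python
def runoff_winner(ballots):
    """Repeatedly eliminate the option with the fewest first-choice
    votes until some option holds a majority of the ballots."""
    remaining = {opt for ballot in ballots for opt in ballot}
    while True:
        tally = {opt: 0 for opt in remaining}
        for ballot in ballots:
            # Each ballot counts for its highest-ranked remaining option.
            tally[next(opt for opt in ballot if opt in remaining)] += 1
        top = max(tally, key=tally.get)
        if 2 * tally[top] > len(ballots):   # strict majority
            return top
        remaining.remove(min(tally, key=tally.get))

# Nine hypothetical ballots over menus A-D.
ballots = ([["A", "C", "B", "D"]] * 4
           + [["B", "C", "A", "D"]] * 3
           + [["C", "B", "A", "D"]] * 2)
print(runoff_winner(ballots))  # → B
```

Note the divergence students may observe in class: A wins a plurality vote (4 of 9 first choices), but once D and C are eliminated, B collects a 5-vote majority and wins the runoff.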
7.3: School Lunch (Part 2) (20 minutes)
Optional activity
In this activity students revisit the situation from the previous activity but they analyze the votes of a different class. In this case the members of different student clubs all voted for the same
lunch option. Students repeat the process of the run-off election on the provided data and compare it to a plurality vote. They use quantitative reasoning (MP2) to analyze and compare the two
different voting rules.
Students must think through the voting process and determine which choice is eliminated at each round, and what votes the club presidents will turn in at every round of voting.
Arrange students in groups of 4. Tell students that they analyze the results of the vote from a different class for the same lunch caterer situation from the previous activity. Tell them, "There are
four clubs in this other class, and everyone in each club agrees to vote exactly the same way, as shown in the table."
Have each group of four act out the voting for this class: each person is the president of a club, and delivers the votes for all the club members. Demonstrate with a group, "This person is the
president of the barbecue club. Tell us how many votes you are turning in, and for which choice."
Give students 10 minutes to work through the questions with their group, followed up with whole-class discussion.
Representation: Internalize Comprehension. Activate or supply background knowledge about reasoning quantitatively. Allow students to use calculators to ensure inclusive participation in the activity.
Supports accessibility for: Memory; Conceptual processing
Student Facing
Let’s analyze a different election.
In another class, there are four clubs. Everyone in each club agrees to vote for the lunch menu exactly the same way, as shown in this table.
1. Figure out which option won the election by answering these questions.
1. On the first vote, when everyone voted for their first choice, how many votes did each option get? Did any choice get a majority?
2. Which option is removed from the next vote?
3. On the second vote, how many votes did each of the remaining three menu options get? Did any option get a majority?
4. Which menu option is removed from the next vote?
5. On the third vote, how many votes did each of the remaining two options get? Which option won?
2. Estimate how satisfied all the voters were.
1. For how many people was the winner their first choice?
2. For how many people was the winner their second choice?
3. For how many people was the winner their third choice?
4. For how many people was the winner their last choice?
3. Compare the satisfaction results for the plurality voting rule and the runoff rule. Did one produce satisfactory results for more people than the other?
Anticipated Misconceptions
The voting rules are somewhat complicated. Acting out the voting process should make things more clear.
Activity Synthesis
Ask for results from the fictitious class.
Notice that after the first round, Meat seemed to be winning. After the second round, Vegetarian seemed to be winning. But the actual winner was Something for Everyone.
Ask students:
• How did the results of this class compare to our own class?
• What are some advantages and disadvantages of plurality and runoff voting? (Plurality takes less effort but could be less fair.)
• Which system seems more fair, plurality or runoff? (The plurality system doesn’t take second, third, etc., choices into account, while the runoff system does.)
7.4: Just Vote Once (30 minutes)
Optional activity
This activity presents another method for choosing among three or more choices when none wins a majority, instant runoff voting. Voters again rank their choices. Each choice is given points, with 0
for the last choice, 1 for the next to last, and so on. The choice with the most total points wins, and no runoff elections are needed. Students use quantitative reasoning (MP2) to compare two models
of fairness in voting (MP4).
Arrange students in groups of 2–4.
Introduce another method of voting: "The runoff system sometimes needs more than one election if no choice got a majority. If we are all here together, that’s not a big problem. But if it were a vote
with everyone in a city, or county, or state, it would be too complicated and expensive to have more than one vote. The instant runoff system gives each vote points 0 for the last choice, 1 for the
next to last, and so on. The choice with the most total points wins, and no runoff elections are needed. Let’s redo our election using this system, and then see what the other class’s votes would
choose. Remember what your choices were for the first time we voted for school lunch providers. Write down the points for each of the choices."
Then either ask for votes by hand-raising or ask students to come to the board to record their choices.
Raising hands: "Raise your hand if you gave Meat 3 points (count and record in table). Raise your hand if you gave Meat 2 points, etc." There will be 16 categories. Everyone should raise their hand 4
│choice│number of votes for top choice (star)│number of votes for second choice (smiley)│number of votes for third choice (square face)│number of votes for last choice (X)│
│ A │ │ │ │ │
│ B │ │ │ │ │
│ C │ │ │ │ │
│ D │ │ │ │ │
Come to the board: Have students come up and record their numbers. Each student should have a 0, a 1, a 2, and a 3, one in each category. The table below shows the points for two students’ choices.
│ │ points │
│ A. Meat Lovers │3, 0, ... │
│ B. Vegetarian │2, 3, ... │
│C. Something for Everyone │1, 1, ... │
│ D. Concession stand │0, 2, ... │
After the results are recorded for all to see and students understand the presented information, students work in groups and answer the questions in the activity statement.
Action and Expression: Internalize Executive Functions. Chunk this task into more manageable parts. After students have solved the first 2–3 problems, check in with either select groups of students
or the whole class. Invite students to share the strategies they have used so far, as well as inviting them to ask any questions they have before continuing.
Supports accessibility for: Organization; Attention
Student Facing
Your class just voted using the instant runoff system. Use the class data for following questions.
1. For our class, which choice received the most points?
2. Does this result agree with that from the runoff election in an earlier activity?
3. For the other class, which choice received the most points?
4. Does this result agree with that from the runoff election in an earlier activity?
5. The runoff method uses information about people’s first, second, third, and last choices when it is not clear that there is a winner from everyone’s first choices. How does the instant runoff
method include the same information?
6. After comparing the results for the three voting rules (plurality, runoff, instant runoff) and the satisfaction surveys, which method do you think is fairest? Explain.
Student Facing
Are you ready for more?
Numbering your choices 0 through 3 might not really describe your opinions. For example, what if you really liked A and C a lot, and you really hated B and D? You might want to give A and C both a 3,
and B and D both a 0.
1. Design a numbering system where the size of the number accurately shows how much you like a choice. Some ideas:
□ The same 0 to 3 scale, but you can choose more than one of each number, or even decimals between 0 and 3.
□ A scale of 1 to 10, with 10 for the best and 1 for the worst.
2. Try out your system with the people in your group, using the same school lunch options for the election.
3. Do you think your system gives a more fair way to make choices? Explain your reasoning.
Anticipated Misconceptions
Students may not remember how to do the satisfaction survey. Remind them of the work done in the previous activity. Ask them to fill out a similar table:
│what choice did you rank the winner? │number of people│% of people│
│top choice (star) │ │ │
│second choice (smiley) │ │ │
│third choice (square face) │ │ │
│last choice (X) │ │ │
Activity Synthesis
Poll students about the results of the instant runoff vote. Ask:
• How do the three voting methods we have seen compare?
• Which method should we use the next time our class has to make a decision? Why?
We have seen several methods for fairly deciding between more than two choices. There is no single fairest method. Some methods give one winner, others a different winner with the same vote.
Conversing: MLR8 Discussion Supports. Use this routine to support small-group discussion. Display the following prompts: “I think the _____ method is most fair because . . .”, “I agree/disagree
because . . .”, “Does anyone else have something to add to this explanation?”, “How can we justify that more students were represented in the final results?” These prompts will help students
summarize the results of each type of voting system.
Design Principle(s): Cultivate conversation; Maximize meta-awareness
7.5: Weekend Choices (10 minutes)
Optional activity
This voting activity helps students summarize the voting systems for more than two choices that were discussed in the previous lessons. Five students vote on three choices of weekend activities. In
this activity, students engage in quantitative reasoning (MP2) to compare two mathematical models for fairness in voting (MP4).
Arrange students in groups of 2–4.
Student Facing
Clare, Han, Mai, Tyler, and Noah are deciding what to do on the weekend. Their options are cooking, hiking, and bowling. Here are the points for their instant runoff vote. Each first choice gets 2
points, the second choice gets 1 point, and the last choice gets 0 points.
│ │cooking │hiking│bowling │
│Clare│2 │1 │0 │
│ Han │2 │1 │0 │
│ Mai │2 │1 │0 │
│Tyler│0 │2 │1 │
│Noah │0 │2 │1 │
1. Which activity won using the instant runoff method? Show your calculations and use expressions or equations.
2. Which activity would have won if there was just a vote for their top choice, with a majority or plurality winning?
3. Which activity would have won if there was a runoff election?
4. Explain why this happened.
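For checking purposes, the point tally in the table above can be computed with a short script (a sketch in Python; the lesson itself expects hand calculations with expressions or equations):

```python
# Points from the table above: first choice = 2, second = 1, last = 0.
ballots = {
    "Clare": {"cooking": 2, "hiking": 1, "bowling": 0},
    "Han":   {"cooking": 2, "hiking": 1, "bowling": 0},
    "Mai":   {"cooking": 2, "hiking": 1, "bowling": 0},
    "Tyler": {"cooking": 0, "hiking": 2, "bowling": 1},
    "Noah":  {"cooking": 0, "hiking": 2, "bowling": 1},
}

# Total the points for each activity.
totals = {}
for ballot in ballots.values():
    for activity, points in ballot.items():
        totals[activity] = totals.get(activity, 0) + points

points_winner = max(totals, key=totals.get)

# A plain top-choice (plurality) vote counts only each voter's 2-point entry.
top_choices = [max(b, key=b.get) for b in ballots.values()]
plurality_winner = max(set(top_choices), key=top_choices.count)

print(totals, points_winner, plurality_winner)
```

With these ballots the point system and a plurality vote pick different winners, which is exactly the tension question 4 asks students to explain.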
|
{"url":"https://im.kendallhunt.com/MS_ACC/teachers/1/9/7/index.html","timestamp":"2024-11-05T21:46:28Z","content_type":"text/html","content_length":"109496","record_id":"<urn:uuid:2ea3c11a-b029-4f99-80bd-02d7a057138c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00588.warc.gz"}
|
Application for calculation of Cylindrical Involute Gears
CylGear calculates
• executable gear domain and its boundary lines
• dimensioning of a gear pair
• gear geometry according to standard DIN 3960
• gear measurements: chordal thickness, span width and dimension over pins or balls
• contact path and diameters of profile points needed for profile grinding with profile correction
• the tooth forces in mesh point and the reaction forces in the bearings
• determination of the dynamic model
• load capacity according to standard ISO 6336
• dimensioning and verification of geometry and calculation of load capacity of a planetary gear train
Topics covered in this site
User Interface
CylGear is an application running under Microsoft Windows operating system. The user can select a language for the user interface and output among:
• Dutch
• English
• French
• German
The user interface is user-friendly: the application guides the user through each calculation sequence.
Identification of a Calculation Project
At the start of a new calculation project, a dialog box opens for input of data for the identification of the calculation project.
Gear Definition
A gear pair is defined by a standard theoretical basic rack and by seven independent geometrical parameters. The calculation is being done for a gear1 and gear2 pair.
Theoretical Basic Rack
The theoretical basic rack is defined by the following proportions:
pressure angle α[P] 20.00 °
addendum coefficient k[haP] 1.00
dedendum coefficient k[hfP] 1.25
tooth-root radius coefficient k[ρfP] 0.30
The proportions can be changed; input is done in the following dialog:
The basic rack proportions are stored as default values in the registry with a simple mouse click on the command button Default.
The basic gear rules, the theoretical basic rack, and two coefficients (the minimal normal crest width for an external gear and the minimal normal root space width for an internal gear, respectively) define the executable gear domain. The application calculates the boundary lines. The required input is the following proportions:
External gear Gear boundary line G[1] k[san] 0.20
Internal gear Gear boundary line G[5] k[efn] 0.20
These two proportions are stored in the system registry and are used as default values for the next calculation. The boundary lines are calculated for the given basic rack data: the pressure angle α[P] and the addendum and dedendum coefficients. The boundary lines enclosing the executable gear domain are drawn in red in the diagram in the dialog below:
The gear boundary diagram can be printed on paper by a click with the mouse on the command button Print.
For a pressure angle of 20° one should pay particular attention to respecting the gear boundary lines when the virtual number of teeth in the normal section is in the range z[n] = 9 ... 60. That is exactly what the application does for you: if a gear boundary condition is not satisfied, a message box will warn you.
Definition of gear boundary lines:
External gear:
Boundary line G[1] x[G1] maximum profile shift coefficient applicable from the condition to respect a minimum normal crest width of s[an] = k[san] m[n]
Boundary line G[2] x[G2] minimum profile shift coefficient to apply in order to avoid undercutting of the tooth-root
Boundary line G[3] x[G3] minimum profile shift coefficient to apply from the condition d[a] >= d[b] + 2 m[n]
Internal gear:
Boundary line G[4] x[G4] maximum profile shift coefficient to apply from the condition d[a] <= d[b]
Boundary line G[5] x[G5] minimum profile shift coefficient from the condition minimum normal tooth-root space width e[fn] = k[efn] m[n]
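As an illustration of one of these limits, boundary line G[2] (undercut avoidance) is often approximated by the classical condition x >= k[haP] - z[n]·sin²(α[P])/2. The sketch below uses that simplified formula only; CylGear's exact DIN 3960 computation, which also accounts for the tool root radius, may give slightly different values.

```python
import math

def x_G2_approx(z_n, alpha_deg=20.0, k_haP=1.0):
    """Approximate minimum profile shift coefficient to avoid root
    undercut: x >= k_haP - z_n * sin(alpha)^2 / 2 (simplified rule,
    ignoring the tool root radius)."""
    alpha = math.radians(alpha_deg)
    return k_haP - z_n * math.sin(alpha) ** 2 / 2

# With alpha_P = 20 deg, about 17 teeth is the classical undercut limit:
for z in (14, 17, 18):
    print(z, round(x_G2_approx(z), 3))
```

For fewer than about 17 teeth a positive profile shift is required; from 18 teeth upward the approximate limit is already negative, so no shift is needed to avoid undercut.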
|
{"url":"http://www.dsoftcon.be/CylGear/en/CylGearApp.aspx","timestamp":"2024-11-01T19:36:34Z","content_type":"application/xhtml+xml","content_length":"16312","record_id":"<urn:uuid:9a0b822c-2c28-4db4-aa0e-3f4e6b2d8295>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00810.warc.gz"}
|
Konstantin Y. Bliokh
CEMS, RIKEN, JP
Field-Theory Revolution for Optics: Revisiting Momentum and Angular Momentum of Light
I will overview recent theoretical and experimental studies, which revisit fundamental dynamical properties of light: momentum, angular momentum, and helicity. I will show that the commonly accepted
approach based on the use of the Poynting vector and the corresponding angular momentum does not work well for optical fields and laboratory experiments. An alternative approach requires revisiting
the electromagnetic field theory and its connection with optics and quantum mechanics. It turns out that the canonical (rather than kinetic) field-theory picture of gauge-dependent momentum and spin
densities of the massless electromagnetic field is perfectly consistent with the laboratory optical experience, provided that the Coulomb gauge is chosen.
The above analysis is not of purely theoretical interest. This new ‘canonical’ approach to the momentum, spin, and helicity of light has allowed us to predict qualitatively new types of the spin and
momentum in structured optical fields. These are:
1. The transverse spin angular momentum, which is orthogonal to the wave vector and is independent of the helicity;
2. The anomalous transverse momentum, which depends on the helicity of light and exerts a weak anomalous optical pressure orthogonal to the wave vector.
Both these quantities have attracted considerable attention and have been described and measured experimentally in several optical systems. I will overview these new findings and experiments.
Download presentation pdf (23MB)
|
{"url":"https://www.emqm15.org/presentations/speaker-presentations/konstantin-y-bliokh/","timestamp":"2024-11-06T12:21:28Z","content_type":"text/html","content_length":"27576","record_id":"<urn:uuid:f52643e3-9e5b-4a5b-bda1-b14e75f0d71d>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00042.warc.gz"}
|
Exploring the potential of hierarchical generalized linear models in animal breeding and genetics
Other publication2013Peer reviewedOpen access
Exploring the potential of hierarchical generalized linear models in animal breeding and genetics
Rönnegård, Lars; Lee, Y.
Complex problems do not always require complex tools, and searching for these simple solutions is what makes science really exciting. Hypothesis testing and prediction in animal breeding and genetics
can be rather complex because they require modelling of correlation structures between related individuals, which researchers in animal breeding are notably good at. Animal breeders are also
exceptionally good at collecting large amounts of high-quality data through the close mutual collaboration between farmers and animal breeding organizations. One of the great challenges, though, is
to apply the complex models to large data sets in practice, and a multitude of linear mixed models have been applied using clever sparse matrix techniques. But where do we go from here? In 1981, the
eminent animal breeder C. R. Henderson visited Iowa State University and sometimes came to the office of Youngjo Lee, then a student of Oscar Kempthorne, to explain his mixed model equation. Even
though this student was not convinced at that time of the usefulness of Henderson's ideas, 15 years later, he and John Nelder (Lee & Nelder 1996 J. R. Statist. Soc. B. 58:619-678) extended the BLUP
approach to a broad class of statistical models with random effects: hierarchical generalized linear models (HGLMs). HGLMs can be fitted using their hierarchical (h-) likelihood, an extension of the
so-called joint likelihood used by Henderson that consists of a joint density for the observations and random effects. The estimates of fixed and random effects are derived by maximizing the
h-likelihood and produce direct extensions of Henderson's mixed model equations which are easy to recognize and interpret for those previously acquainted with the animal model, whereas the variance
components are estimated by maximizing an adjusted profile of the h-likelihood, a direct extension of REML. So, what Lee and Nelder did was to extend familiar theory applied in animal breeding
research to a much wider class of models. Modelling of HGLMs is relatively straightforward because of the hierarchical nature of the h-likelihood where models for variance components and dispersion
parameters can be added one by one. A wide range of distributions can also be used to model both the response variable(s) and the random effects, which further increases the modelling flexibility.
Some examples of very useful and standard HGLMs are a Poisson response with gamma random effects, frailty models for survival analysis, dealing with heterogeneity by including random effects in a
model for the residual variance and models for smoothing data using random effects. These models are all found in the book by Lee, Nelder and Pawitan from 2006 (Chapman & Hall/CRC) together with
applications on data collected in various fields of research. The code for fitting all these examples is available in GenStat together with the data. The h-likelihood can not only be used for model
fitting but is also a statistical framework for deriving model selection tools. The standard Fisher likelihood is a marginal likelihood where all random effects have been integrated out, and the
focus is on statistical testing of fixed effects, whereas the h-likelihood allows inference of both fixed and random effects so that model selection can be based on the random effects as well. The
conditional AIC from the h-likelihood is actually equivalent to the deviance information criterion (DIC) applied in Bayesian statistics (see Lee & Noh 2012 Stat. Mod. 12: 487-502). Here, we highlight
some aspects of HGLMs and their extensions, as applied to questions specific to animal breeding, together with possible future applications using spatial modelling and variable selection. The animal
model traditionally assumes a constant residual variance for all observations, but there is a concern, especially for a trait like milk yield, that the residual variance sometimes seems to increase
with selection. To investigate this possibility, models including a genetically structured residual variance have been proposed where a model for the residual variance includes both fixed and random
effects on a logarithmic scale. This model is included in a class of models referred to as double hierarchical generalized linear model (DHGLM) introduced by Lee and Nelder (2006 J. R. Statist. Soc.
C. 55: 139-185) and can be fitted using two interconnected HGLMs. Using standard software including sparse matrix techniques developed for animal breeding purposes (e.g. DMU or ASReml), this model
can be fitted within a reasonable amount of time on large data sets (Felleki et al. 2012 Genet. Res. 94:307-317, Rönnegård et al. 2013 J. Dairy Sci. 96:2627-2636). There is also an hglm package in
R, which can be applied to animal models and is computationally efficient for moderately sized data with pedigrees including around 1000 animals or less. This package is a fast implementation using
interconnected GLMs as described in the book by Lee, Nelder and Pawitan (2006) and uses the standard glm function in R together with a QR decomposition. The bigRR package uses the machinery of the
hglm package to compute shrinkage estimates for models having a small number of observations (<1000) and a much larger number of parameters. The package is an elegant addition to the ever increasing
toolbox for genomic prediction (Shen et al. 2013 Genetics 193:1255-1268). Statisticians have a responsibility to develop user-friendly and reliable software for applied users. The development of HGLM
software has not been focused on animal breeding applications, but only minor adjustments are required for such applications. There are packages in R for fitting pure HGLMs (HGLMMM), survival models
with random effects (frailtyHL) and DHGLMs (dhglm), which fit the h-likelihood directly using numerical optimization and any potential bias of the estimated variance components, especially for binary
data with small cluster sizes, can thereby be eliminated. Which kind of developments and applications can we expect in the future for animal breeding and genetics? Just as the residual variance can
be modelled using DHGLMs, the genetic variance can be modelled as well. R€onnegard and Lee (2010 Conference paper WCGALP, Leipzig) showed that smoothing the genetic variance over adjacent SNPs in
genomic prediction is possible using this approach. Lee and Noh (2012 Stat. Mod. 12: 487-502) presented model selection tools for modelling the variance of random effects. An interesting application
in genomic selection would be to model the variances of SNP effects with genomic information as fixed effects, for instance using an indicator variable of whether the SNP is located in an exon or
not, or using the minor allele frequency as a covariate. It should also be useful for extensions of the animal model when there is a need to model the genetic variance. Such a model for the genetic
variance could, for instance, include fixed effects of sex and age, as well as random effects to account for geographical differences in heritability. There is a great interest in methods to fit
spatial models in ecology, environmetrics and economy, where random effects are estimated for different regions on a map and information is borrowed from neighbouring regions by fitting a spatial
correlation matrix. These models are not common in animal breeding applications although data are often collected from herds that are geographically dispersed. The h-likelihood has been extensively
used for spatial modelling and could be a useful tool for such purposes in animal breeding applications as well. Many Bayesian methods to perform variable selection in genomic prediction and QTL
detection have been developed, but this is also possible using the HGLM approach with a major computational improvement. In the HGLM approach, a scaled gamma mixture model for the distribution of
effects is used and provides a whole family of penalized likelihoods, including LASSO among many others. This family of models can be fitted using a common iterative weighted least squares algorithm
(Lee & Oh 2009 Technical report 2009-4, Dep. of Statistics, Stanford University). Lee and Bjørnstad (2013 J. R. Statist. Soc. B. 75:553-575 with available R code) showed that multiple tests can be
viewed as a multiple prediction problem of whether a null hypothesis is true or not and that these predictions can be made by fitting random effects. Thus, random-effect estimation under various
distributional assumptions is of great interest in several fields of studies including genetics, and the use of HGLMs could be a way to solve complex problems in animal breeding with familiar simple
tools. Exploring these possibilities is a potentially fascinating route for those adventurous enough to tread on recently broken statistical ground.
Published in
Journal of Animal Breeding and Genetics
2013, Volume: 130, number: 6, pages: 415-416
UKÄ Subject classification
Genetics and Breeding
Publication identifier
DOI: https://doi.org/10.1111/jbg.12059
Permanent link to this page (URI)
|
{"url":"https://publications.slu.se/?file=publ/show&id=51241","timestamp":"2024-11-07T00:06:22Z","content_type":"text/html","content_length":"53050","record_id":"<urn:uuid:1526b04b-bbed-4653-b235-4523298d129b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00536.warc.gz"}
|
Similar Polygons Worksheet
Some of the worksheets for this concept are 7 using similar polygons similar polygons date period chapter 8 similar polygons geometry honors similar polygons infinite geometry working with polygons geometry work 6 examview.
The step by step strategy helps familiarize beginners with polygons using pdf exercises like identifying coloring and cut and paste activities followed by classifying and naming polygons leading them
to higher topics like finding the area determining the perimeter.
Similar polygons worksheet. Use this printable worksheet and quiz to review. Our perimeter and area worksheets are designed to supplement our perimeter and area lessons. Perimeter and area of
polygons worksheets.
Sum of the angles in a triangle is 180 degree worksheet. Complementary and supplementary worksheet.
Quiz questions focus on the types of polygons and the measurement that stays the same in similar polygons. Area and perimeter worksheets. Quiz worksheet goals.
Congruent triangles and similar polygons warm ups or worksheet. Some of the worksheets for this concept are 7 using similar polygons similar polygons date period infinite geometry similar polygons
honors math 6 5b quiz review name date show your similar figures date period 8 similarity practice quiz 7 unit 2 dilations and similarity name. For each set the first three sections days are
computational practice.
Solve the problems below using your knowledge of perimeter and area concepts. Complementary and supplementary word problems worksheet.
Similar figures and scale ratio. Catering to grade 2 through high school the polygon worksheets featured here are a complete package comprising myriad skills. This document is designed to be a 10 day set of warm ups on congruent triangles and similar polygons.
Similar polygons displaying top 8 worksheets found for this concept. There are two sets of five day warm ups. Displaying top 8 worksheets found for similar polygons and scale factor.
The ratio of the perimeters of the polygons is equal to the scale factor of the similar polygons. Corresponding lengths in similar triangles include side lengths, altitudes, medians, and midsegments. Similar polygons date period: state if the polygons are similar.
|
{"url":"https://thekidsworksheet.com/similar-polygons-worksheet/","timestamp":"2024-11-10T07:45:28Z","content_type":"text/html","content_length":"135856","record_id":"<urn:uuid:c2c9c5e7-6563-4194-9aa6-0606ea646eb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00195.warc.gz"}
|
Online Dissertation Defense: Naim Goksel Karacayli, Yale University, “Improving the Cosmic Statistics of Neutral Hydrogen”
Event time:
Tuesday, August 3, 2021 - 9:00am to 10:00am
Event description:
The study of the large-scale cosmological structure seeks to understand the makings and the evolution of the universe. In this subject, I worked on improving current techniques and their application
to the existing large, high-precision cosmological data sets. Specifically, my dissertation explores boosting power spectrum measurements at large scales for 21-cm intensity maps through
reconstruction, and at small scales for Lyman-alpha Forest by developing and applying the optimal estimator to hundreds of high-resolution spectra.
The cosmic tidal reconstruction is a novel technique for low redshift (z < 2) 21-cm intensity mapping surveys (e.g., CHIME and HIRAX) that recovers the lost large-scale line-of-sight signal from
local small-scale anisotropies formed by tidal interactions. My thesis shows this algorithm is robust against redshift space distortions and can recover the signal with approximately 70 % efficiency
for k < 0.1 h/Mpc using N-body simulations. I also introduce an analytical framework based on perturbation theory.
The Lyman-alpha Forest technique can probe matter in vast volumes far into the past (2 < z < 5) and at smaller scales than galaxy surveys (r < 1 Mpc) through absorption lines in quasar spectra. The
1D power spectrum is shaped by the thermal state of the intergalactic medium (IGM), reionization history of the universe, neutrino masses and the nature of the dark matter. Using synthetic spectra, I
show the optimal quadratic estimator for P_1D is robust against quasar continuum errors and gaps in spectra, and can improve accuracy beyond FFT-based algorithms for the upcoming DESI data. I also
apply this estimator to the largest data set of high-resolution, high-S/N quasar spectra, which yields the most precise P_1D measurement at small scales that can improve the warm dark matter mass
constraints by more than a factor of 2.
Thesis Advisor: Nikhil Padmanabhan (nikhil.padmanabhan@yale.edu)
|
{"url":"https://ycaa.yale.edu/event/online-dissertation-defense-naim-goksel-karacayli-yale-university-improving-cosmic-statistics","timestamp":"2024-11-03T13:01:50Z","content_type":"text/html","content_length":"29756","record_id":"<urn:uuid:ee576609-99ff-4604-ace3-b38e882857c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00123.warc.gz"}
|
Nick Winters - "Rival Arithmetics of Ancient Greece" 1/13/2025
When: Monday, January 13, 2025
4:30 PM - 6:00 PM CT
Where: University Hall, Hagstrum 201, 1897 Sheridan Road, Evanston, IL 60208 map it
Audience: Faculty/Staff - Public - Post Docs/Docs - Graduate Students
Cost: FREE
Contact: Janet Hundrieser (847) 491-3525
Group: Science in Human Culture Program - Klopsteg Lecture Series
Category: Lectures & Meetings
Nick Winters, Classics, Northwestern University
"Rival Arithmetics of Ancient Greece"
The origin of European theoretical mathematics is usually placed by historians in Ancient Greece. While this attribution irresponsibly (and harmfully) minimizes the intervening and more substantive
contributions of, for example, the Islamic Middle Ages, it also ignores the irregularity of Greek mathematics itself. In fact, only one of the Greek mathematical traditions gained a foothold in the
practices of modern Europe, and that was for reasons more social than mathematical. In this lecture, I will survey the evidence for four different styles of arithmetical theory, which coexisted in
Greece between the 4th century BCE and the 3rd century CE. I will show how rival philosophical loyalties and an agonistic intellectual culture informed the vocabularies, methods, and stylistic
conventions of these arithmetics, and I will briefly explain how one of them came to be adopted by the European Renaissance, while the others faded to near obscurity. In the process, we will see
samples of Euclidean geometry, Babylonian astronomy, Pythagorean mysticism, early concepts of infinity and very large numbers, commercial and pedagogical counting systems, and mathematical poetry.
Nick Winters (PhD Duke University) is a classicist and former physicist specializing in ancient mathematics and science. His dissertation, "Schools of Greek Mathematical Practice" (2020), proposed a
major revision to the history of Greek mathematics, organizing ancient texts into networks of information transmission and methodology. He spent 2022-23 as a fellow at the American School of
Classical Studies at Athens.
|
{"url":"https://planitpurple.northwestern.edu/event/620460","timestamp":"2024-11-07T12:39:26Z","content_type":"application/xhtml+xml","content_length":"10756","record_id":"<urn:uuid:2f7c28ac-fae7-46ab-b86d-a687e6af99db>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00211.warc.gz"}
|
Introduction to Multiplying and Dividing Fractions
What you’ll learn to do: Multiply and divide fractions
On Friday night, Karen and her roommates ordered a large pizza. After eating 1/2 of the pizza, they wrapped up the rest to save for later. The next day, Karen and her roommates finished 2/3 of the
remaining pizza. What fraction of the original pizza is left? To figure this out, we can use mathematical operations with fractions. Multiplying and dividing fractions are skills that often come in
handy in everyday situations like this one. Read on to find out how to do it!
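As a preview of where this is going, the pizza question can be checked with Python's `fractions` module (a quick sketch, not part of the lesson itself):

```python
from fractions import Fraction

saved_friday = Fraction(1, 2)                   # half the pizza was wrapped up
eaten_saturday = Fraction(2, 3) * saved_friday  # 2/3 of the remaining half
left_over = saved_friday - eaten_saturday       # what is still left

print(left_over)  # 1/6 of the original pizza
```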
Before you get started, take this readiness quiz.
readiness quiz
If you missed this problem, review the following video.
Draw a model of the fraction [latex]{\Large\frac{6}{8}}[/latex].
If you missed this problem, review this example.
Shade [latex]{\Large\frac{3}{4}}[/latex] of the circle.
Find two fractions equivalent to [latex]\Large\frac{5}{6}[/latex].
If you missed this problem, review the following video.
|
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/multiply-and-divide-fractions/","timestamp":"2024-11-03T13:21:18Z","content_type":"text/html","content_length":"54368","record_id":"<urn:uuid:d28ca7bb-9637-43c0-ba39-4e237629aff7>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00780.warc.gz"}
|
Skill rack solution in C
Skill Rack offers a diverse range of programming challenges, providing opportunities to apply theoretical knowledge in practical scenarios.
By solving these challenges, you'll develop a deeper understanding of C programming concepts and improve your coding proficiency.
Whether you're a beginner seeking to grasp basic concepts or an experienced programmer aiming to enhance your problem-solving skills, Skill Rack offers a platform for continuous learning and growth
in the field of C programming.
Skill Rack Solution in C employs algorithmic techniques by decomposing complex programming challenges into smaller, more manageable sub-problems.
This approach involves identifying base cases, representing the simplest form of the problem, and recursive cases, which handle more intricate scenarios by breaking them into smaller subproblems.
The structured steps for implementing a Skill Rack Solution in C include:
Base Case Identification: Recognize the problem's simplest version that can be directly solved without recursion.
Implementation of Recursive Case: Utilize recursion to partition the main problem into smaller subproblems, ensuring multiple calls to the recursive function with varied input values.
Integration of Results: Combine outcomes obtained from both the base case and recursive case to derive the final solution.
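The three steps above can be sketched with a classic recursive example (shown here in Python for brevity; the same base-case/recursive-case structure carries over directly to C):

```python
def range_sum(lo, hi):
    # Base case: an empty range needs no recursion and sums to 0.
    if lo > hi:
        return 0
    # Recursive case: peel off one value and solve the smaller subproblem.
    smaller = range_sum(lo + 1, hi)
    # Integration: combine the subproblem's result with the current value.
    return lo + smaller

print(range_sum(1, 10))  # 55
```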
// Program for Skill Rack in C
#include <stdio.h>

int main() {
    int num1 = 5, num2 = 10;
    int sum = num1 + num2;
    printf("The sum of %d and %d is: %d\n", num1, num2, sum);
    return 0;
}

Output:
The sum of 5 and 10 is: 15
Application Skill Rack Solution in C
Skill Development: Practice a wide array of C programming problems covering various concepts and difficulty levels.
Concept Reinforcement: Reinforce fundamental concepts such as loops, arrays, functions, and pointers through practical exercises.
Problem Solving: Refine your problem-solving skills by tackling real-world programming challenges and implementing efficient solutions.
Structured Learning: Follow a structured learning path with increasing levels of difficulty, enabling gradual skill enhancement and confidence building.
Feedback and Improvement: Receive instant feedback on your solutions, allowing you to identify areas for improvement and refine your coding techniques.
|
{"url":"https://www.codingtag.com/skill-rack-solution-in-c","timestamp":"2024-11-14T09:01:48Z","content_type":"text/html","content_length":"82336","record_id":"<urn:uuid:e6e1414c-f711-4c8b-b879-eceb9eeb5cc0>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00384.warc.gz"}
|
Contest Results
(Analysis by Spencer Compton)
For Bessie to have an attainable winning tic-tac-toe board configuration, she must do so through a sequence of movements and encountering pieces of paper. Key to this problem is conceptualizing a
"state" that encompasses Bessie's situation at any point in time in such a process.
One aspect of Bessie's state would be what her current tic-tac-toe board looks like. However, there may be multiple positions in the maze Bessie could be at with a particular board state (which may
affect the potential board states she could later reach).
Another aspect of Bessie's state would be her position in the maze. However, there could be multiple board states Bessie could have when she is at a particular position in the maze.
Fortunately, when we combine both of these pieces of information, this perfectly encapsulates Bessie's state in the process. Our goal is to first figure out which states Bessie could possibly reach,
and then how many distinct winning tic-tac-toe board configurations are there such that Bessie can reach a state with that board configuration.
To find all states Bessie can reach, we can use a depth first search (DFS) starting at Bessie's starting position in the maze and an empty tic-tac-toe board. From each state, we can try recursing a
further level by trying to move in each possible direction in the maze. To make sure our DFS does not take very long, we will keep track of which states we have visited so we do not need to revisit
them (e.g. we can have a boolean array that indicates whether or not we have visited each state). Note that if we use a set to keep track of these states, this might cause a solution to exceed the
time limit of some test cases. Instead, for example, we can convert our board state to a number and have a 3-dimensional visited array with dimensions for Bessie's row, column, and board state
(converted to an integer). Since there are $25^2$ possible locations in the maze and $\le 3^9$ possible board states, our number of states is bounded by $25^2 \times 3^9$.
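As a sketch of that encoding (my own Python illustration, not part of the official solution): each of the nine cells stores 0 (empty), 1 (an 'M'), or 2 (an 'O'), and the whole board becomes a single base-3 integer, so a (row, column, board) triple indexes a plain 3-dimensional visited array.

```python
# Encode a 3x3 board (0 = empty, 1 = 'M', 2 = 'O') as a base-3 integer in [0, 3^9).
def encode(cells):
    b = 0
    for idx in reversed(range(9)):       # cell 0 ends up as the lowest base-3 digit
        b = b * 3 + cells[idx // 3][idx % 3]
    return b

# Invert the encoding: peel off one base-3 digit per cell.
def decode(b):
    cells = [[0] * 3 for _ in range(3)]
    for idx in range(9):
        cells[idx // 3][idx % 3] = b % 3
        b //= 3
    return cells

board = [[1, 2, 0], [0, 2, 0], [0, 0, 1]]
assert 0 <= encode(board) < 3 ** 9       # at most 19683 distinct boards
assert decode(encode(board)) == board    # the mapping round-trips
```

With at most $25^2$ maze cells, this gives the $25^2 \times 3^9$ bound on states quoted above.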
Our depth first search will enable us to determine exactly which states Bessie could obtain, and then we can finally count the number of distinct winning boards among boards where there is some state
such that Bessie could have that board.
Brian Dean's code:
#include <cstdio>
#include <set>
using namespace std;

int N;
char board[25][25][3];
set<int> answers;
bool beenthere[25][25][19683];
int pow3[10];

bool test_win(int b)
{
  int cells[3][3];
  for (int i=0; i<3; i++)
    for (int j=0; j<3; j++) {
      cells[i][j] = b%3;
      b /= 3;
    }
  for (int r=0; r<3; r++) {
    if (cells[r][0] == 1 && cells[r][1] == 2 && cells[r][2] == 2) return true;
    if (cells[r][0] == 2 && cells[r][1] == 2 && cells[r][2] == 1) return true;
  }
  for (int c=0; c<3; c++) {
    if (cells[0][c] == 1 && cells[1][c] == 2 && cells[2][c] == 2) return true;
    if (cells[0][c] == 2 && cells[1][c] == 2 && cells[2][c] == 1) return true;
  }
  if (cells[0][0] == 1 && cells[1][1] == 2 && cells[2][2] == 2) return true;
  if (cells[0][0] == 2 && cells[1][1] == 2 && cells[2][2] == 1) return true;
  if (cells[2][0] == 1 && cells[1][1] == 2 && cells[0][2] == 2) return true;
  if (cells[2][0] == 2 && cells[1][1] == 2 && cells[0][2] == 1) return true;
  return false;
}

void dfs(int i, int j, int b)
{
  if (beenthere[i][j][b]) return;
  beenthere[i][j][b] = true;
  if (board[i][j][0]=='M' || board[i][j][0]=='O') {
    int r = board[i][j][1]-'1', c = board[i][j][2]-'1', idx = r*3+c;
    int current_char = (b / pow3[idx]) % 3;
    if (current_char == 0) {
      int new_char = board[i][j][0]=='M' ? 1 : 2;
      b = (b % pow3[idx]) + new_char * pow3[idx] + (b - b % pow3[idx+1]);
      if (!beenthere[i][j][b] && test_win(b)) { answers.insert(b); return; }
      beenthere[i][j][b] = true;
    }
  }
  if (board[i-1][j][0] != '#') dfs(i-1,j,b);
  if (board[i+1][j][0] != '#') dfs(i+1,j,b);
  if (board[i][j-1][0] != '#') dfs(i,j-1,b);
  if (board[i][j+1][0] != '#') dfs(i,j+1,b);
}

int main(void)
{
  int bess_i, bess_j, bstate = 0;
  pow3[0] = 1;
  for (int i=1; i<=9; i++) pow3[i] = pow3[i-1]*3;
  scanf("%d", &N);
  for (int i=0; i<N; i++)
    for (int j=0; j<N; j++) {
      scanf(" %c%c%c", &board[i][j][0], &board[i][j][1], &board[i][j][2]);
      if (board[i][j][0] == 'B') { bess_i = i; bess_j = j; }
    }
  dfs(bess_i, bess_j, bstate);
  printf("%d\n", (int)answers.size());
  return 0;
}
Is Home Mortgage Simple Interest Or Compound Interest?
I had a good chuckle while reading this epic discussion thread on the Bogleheads Investment Forum: Does a home mortgage use Simple or Compound Interest?
It sounds like a factual question, as in "Is Miami located to the north or south of Boston?" The answer shouldn’t be ambiguous or subject to opinion or interpretation. You look at a map and say
"south" and everybody would agree. Yet as I’m writing this, there are more than 100 replies, and still growing, by the smartest people offering opposite answers, assisted by graphs, math equations,
and numeric examples. Some say it’s simple interest; some say it’s compound interest.
Someone answered by saying it’s a compound interest loan that doesn’t compound. If it doesn’t compound, does that make it simple interest then? Or is it like the difference between 0 and null?
To answer the question we first need to understand what is a simple interest loan, what is a compound interest loan, and what are the characteristics of each.
Simple Interest Loan
In a simple interest loan, the interest in a second period is not affected by the interest in the previous period. Suppose we have a 3-year $100,000 simple interest loan at 1% annual interest. The
interest for each of the 3 years is $1,000 for a total of $3,000.
If the interest rate is 2% a year, the interest over the life of the loan would be $6,000, exactly twice as much as in the 1% loan.
If the rate is still 1% a year but the term of the loan is 6 years instead of 3 years, the total interest over the life of the loan also doubles.
Same goes with a 3% loan or a 9-year loan. You just multiply the principal by the rate and the years to get the total interest.
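The straight multiplication described above is easy to check numerically. A quick sketch (the loan figures are the ones from the example; the rate is given in percent to keep the arithmetic exact):

```python
# Total interest on a simple interest loan: I = P * r * t.
def simple_interest(principal, rate_pct, years):
    return principal * rate_pct / 100 * years

assert simple_interest(100_000, 1, 3) == 3_000    # 1% for 3 years
assert simple_interest(100_000, 2, 3) == 6_000    # doubling the rate doubles interest
assert simple_interest(100_000, 1, 6) == 6_000    # doubling the term doubles interest
```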
Compound Interest Loan
In a compound interest loan, the unpaid interest at the end of the first period is added to the principal for the second period, allowing the interest to compound. In a 3-year $100,000 compound
interest loan at 1% annual interest rate, the interest for the first year is $1,000, the second year $1,010, the third year $1,020.10, for a total of $3,030.10. That’s more than the total interest
paid on a comparable simple interest loan.
If interest rate is twice as high at 2%, the total interest over the life of the loan is $6,120.80, which is more than twice the total interest on a 1% loan, due to compounding interest.
If the term of the loan is twice as long at 6 years at 1% interest rate, the total interest over the life of the loan is $6,152, also more than twice the total interest on a 3-year loan at the same
rate, again due to compounding interest.
A higher rate or a longer term in a compound interest loan costs more than a straight multiple.
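The same check for the compound case (a sketch; interest left unpaid is rolled into the balance once a year, matching the example):

```python
# Total interest when unpaid interest is added to the principal each year.
def compound_interest(principal, rate_pct, years):
    balance = principal
    for _ in range(years):
        balance += balance * rate_pct / 100
    return balance - principal

assert round(compound_interest(100_000, 1, 3), 2) == 3_030.10   # vs $3,000 simple
assert round(compound_interest(100_000, 2, 3), 2) == 6_120.80   # more than 2x the 1% loan
assert round(compound_interest(100_000, 1, 6), 2) == 6_152.02   # more than 2x the 3-year loan
```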
Home Mortgage
In a typical home mortgage, your monthly payment first covers the interest for that month, with the remainder being applied to principal. Interest does not add to the principal for the next month.
This led to the answer that it’s a compound interest loan that doesn’t compound because you pay the interest for each month in full, leaving nothing to compound in the next month.
If the mortgage is interest-only — yes, there are those mortgages — it behaves exactly like a simple interest loan. If the rate is twice as high, your total interest in each period and over the life
of the loan is twice as much. If the term of the loan is twice as long and the rate is the same, your total interest over the life of the loan is also twice as much.
Principal Payments
Paying down principal by an amortization schedule makes it more tricky. Even though interest still doesn’t carry over from month to month — and if you skip a payment, you are not charged more
interest the next month — the loan no longer behaves like a simple interest loan.
Doubling the interest rate more than doubles the total interest over the life of the loan. The total interest of a 30-year mortgage at 8% is 2.3 times that of a 30-year mortgage at 4%.
Doubling the length of the loan also more than doubles the total interest over the life of the loan. The total interest of a 30-year mortgage at 4% is 2.2 times that of a 15-year mortgage at the same rate.
Making a principal payment early has a compounding effect. If you pay $1,000 extra in month 13, you not only stop paying interest on that $1,000 but you also cause more of your subsequent regular
payments to go toward principal, further reducing the interest you pay.
These characteristics make a typical home mortgage with amortized payments behave more like a compound interest loan, but it doesn’t make it one. The compounding effect comes from varying principal
payments, not from compounding interest.
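Those two ratios can be reproduced with the standard amortization formula (a sketch; $100,000 is an assumed loan amount, and the ratios are independent of it):

```python
# Total interest on a fully amortized fixed-rate mortgage with monthly payments.
def total_interest(principal, annual_rate_pct, years):
    r = annual_rate_pct / 100 / 12               # monthly rate
    n = years * 12                               # number of payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n - principal

i_4_30 = total_interest(100_000, 4, 30)
i_8_30 = total_interest(100_000, 8, 30)
i_4_15 = total_interest(100_000, 4, 15)

assert round(i_8_30 / i_4_30, 1) == 2.3   # doubling the rate: 2.3x the interest
assert round(i_4_30 / i_4_15, 1) == 2.2   # doubling the term: 2.2x the interest
```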
Between two mortgages, if you keep principal payments the same, they behave like simple interest loans.
If you have an 8% loan and a 4% loan, and you just go by the amortization schedules, you are paying less toward principal each month on the 8% loan, at least in the first half of the loan term, even
though your monthly mortgage payment is higher. Those lower principal payments compound, resulting in your paying more than twice as much in total interest on the 8% loan versus the 4% loan.
If you actually keep principal payments the same by making extra principal payments on the 8% loan, your 8% loan will be exactly twice as costly as the 4% loan but not more than twice. That’s the
classic trait of a simple interest loan.
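That trait can be demonstrated directly (a sketch; the loan amount and rates are my own illustration). Hold the principal payment fixed at the same amount every month, and the total interest scales exactly with the rate:

```python
# Interest paid when the SAME principal amount is retired every month,
# so the balance path is identical regardless of the rate.
def interest_with_fixed_principal(principal, annual_rate_pct, months):
    monthly_principal = principal / months
    balance, interest = principal, 0.0
    for _ in range(months):
        interest += balance * annual_rate_pct / 100 / 12
        balance -= monthly_principal
    return interest

i4 = interest_with_fixed_principal(120_000, 4, 360)
i8 = interest_with_fixed_principal(120_000, 8, 360)
assert round(i8 / i4, 9) == 2.0   # exactly twice, the mark of simple interest
```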
A typical home mortgage is still a simple interest loan even though it feels like compound interest. The compounding feel comes from varying principal payments. If you don’t let the principal
payments vary, as in an interest-only loan (zero principal payment), or by equalizing the principal payments, the loan interest itself doesn’t compound.
Why does a seemingly simple factual question elicit completely opposite answers? Because it focuses people’s attention on the wrong thing. The interest doesn’t compound. The principal payments do. A
$1,000 principal payment saves interest on that $1,000 and causes higher principal payments the next year, and higher the following year, and so on.
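The month-13 example can be quantified with a short amortization loop (a sketch; $100,000 at 4% for 30 years is an assumed loan, not a figure from the article):

```python
# Run an amortization schedule, optionally with one extra principal payment,
# and return the total interest paid over the life of the loan.
def total_interest_paid(principal, annual_rate_pct, months, extra=0.0, extra_month=None):
    r = annual_rate_pct / 100 / 12
    payment = principal * r / (1 - (1 + r) ** -months)
    balance, interest, month = principal, 0.0, 0
    while balance > 0.005 and month < months:
        month += 1
        accrued = balance * r
        interest += accrued
        balance -= min(payment - accrued, balance)   # regular principal portion
        if month == extra_month:
            balance -= min(extra, balance)           # one-time extra principal
    return interest

base = total_interest_paid(100_000, 4, 360)
prepaid = total_interest_paid(100_000, 4, 360, extra=1_000, extra_month=13)
assert base - prepaid > 1_000   # the $1,000 prepayment saves well over $1,000 in interest
```

Because the payment stays the same, the extra $1,000 shortens the loan by several months, and the interest saved is roughly double the amount prepaid in this case.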
This is the same situation as in asking whether the 401k loan interest is double taxed. There are two taxes, unrelated, and you still pay the same two taxes whether you borrow from your 401k or not.
Focusing on the wrong thing leads people down the wrong path.
In practice though, you are better off treating the mortgage as compound interest even though it actually isn’t. Lowering the rate has a compounding effect. Shortening the term has a compounding
effect. Pre-paying principal also has a compounding effect.
[Photo credit: Flickr user hannah8ball]
1. Carl says
Uhm… I think your examples are wrong.
On simple interest:
“Suppose we have a 3-year $100,000 simple interest loan at 1% annual interest. The interest for each of the 3 years is $1,000 for a total of $3,000.” I didn’t think you paid interest on principal
that had been paid off in previous years. The interest is recalculated after each payment, only on the unpaid principal (assuming no late payments). I’m obviously assuming you are making payments
that cover more than just the interest.
The same applies with the compound interest example.
You never pay interest on money you have already paid back. Or said another way, interest only accrues on the outstanding balance.
2. Harry says
Carl – That example has just one balloon payment at the end of the term. Same for the next example for compound interest. I need to start with the simplest form to show the difference between
simple interest and compound interest. Once you start making payments in the middle it gets murky.
3. Lei says
Understandable article. Thank you! My question is: for paying extra principle each month, does it make difference paying on the 1st day or paying on the last day of the month? In other words, is
interest calculated on daily basis?
4. Harry says
Lei – The required monthly payments are due on the 1st of each month, usually with a grace period to the 15th of the month. It doesn’t matter whether you make the required monthly payments on the
28th of the previous month or the 12th of each month. You are not charged less interest for paying early or more interest for paying late (but still within the grace period). The extra principal
payments, however, are calculated daily. The day your extra principal payment hits, you stop paying interest on that amount for the rest of the month and beyond. If you want to be anal, you make
the extra principal payment separately (and make sure it’s marked as extra principal payment) and you pay as early as you have the money for it.
□ Rob says
My mortgage payment is $2000 per month, due on the 1st, with a grace period until the 15th. The payment includes principal, interest, and escrow. I pay $1050 every 2 weeks to the mortgage bank through my personal bank’s online banking service: one payment before the 1st of the month and the second before the 15th.

I noticed in the online detail on the mortgage bank’s website that each month the first payment shows up as “lockbox” on the day it is received, and the next day a new transaction shows up as a principal curtailment for $1050. When the 2nd payment arrives it shows as “lockbox”. After I see the 2nd payment arrive, but definitely before the 14th of each month, I call the bank and have the first $1050 reapplied to the monthly payment along with $950 of the amount in “lockbox”. The extra $100 is applied as principal curtailment. The breakdown on my monthly “paper” statement shows (example with rounded numbers; each month my escrow distribution stays the same, interest paid goes down slightly, principal paid increases slightly): $500 to principal, $1000 to interest, $500 to escrow, and $100 extra principal.

My intent is to have the first $1050 payment (paid before the 1st of the month) applied as principal, which it appears to be doing, thus reducing my principal balance for approximately 14 days each month. Then I call on the day before the grace period expires and have it reapplied as the monthly payment, along with a $950 portion of the 2nd $1050 payment. I go through this extra effort each month because it appears to me that I benefit financially by having my principal balance lowered by $1050 for 14 days each month.

In your opinion, should this strategy benefit me more financially than just paying the $2000 mortgage payment plus $100 extra to principal each month before the 15th, plus an additional $2000 principal-only payment each year when I get my taxes back? I was quite surprised when I first noticed in my bank’s online detail that the $1050 payment I make each month before the 1st was automatically applied to principal. This discovery is what prompted me to develop this strategy, which seems to maximize my financial benefit, with the only downside being a 5-minute call with a customer support person at the bank each month to be sure everything gets applied properly. Thanks for thinking this through with me. I am trying to minimize the overall interest paid and pay the loan off early.
5. Doug says
I pay extra mortgage principal payments each month. At my bank, the principal payments are ‘effective’ as of the first of that same month, regardless of which day I pay it – at least that’s what
my bank statement tells me. I see that as an extra bonus.
6. Junsheng says
I liked some other stuff you wrote but this is not correct. Let’s say I have a mortgage of 100k with APR of 12%. If the mortgage were indeed simple interest and I pay a first payment of $1010
after one month, that should have returned to the lender $1000 principal + $10 interest on that $1000 principal for the month. For the next month, my principal used for interest calculation would
have been reduced to 99k (sure I still owe another $990 of interest from that $99k for the 1st month).
But that is not reality. The reality is the $1010 payment will be considered $1000 “interest” and $10 “principal”. Note that it is only convenient to think a part of the payment being interest
and the rest being applied towards principal in this case. The fact of the matter is, after one month before I make any payment, I simply *owe $101k* to the lender. There is no distinction of the
$1000 interest from the $100k principal, because they would equally earn interest going forward, until I pay some of it off. This is compound interest.
□ Harry Sit says
I don’t follow how you are able to pay down $1,000 principal with a $1,010 payment when the interest due in the first month is $1,000. The lender will always collect the interest first. Any
excess is then applied to principal.
Suppose you have an interest-only mortgage of $100k at 12% APR as in your example. Think about what if you don’t make the first payment. Putting aside the issue of late fees, how much do you
need to pay in the second month to catch up to current? The answer is $2,000. The interest you didn’t pay in the first month will not increase the interest in the second month. No
interest-on-interest. That makes it a simple interest loan. It’s different than a credit card loan.
□ Junsheng says
You overlooked the time value of money. By requesting interest on full principal amount be paid each month, it is already compound interest. To make a point, suppose the lender made one
hundred such $100k loans at 12% APR. After one month, the lender can make a new loan using the 100 x $1000 interest payments he will have received. That is interest on interest.
In my book, real simple interest is like the counterpart of a CD. For the sake of argument, think of my $100k loan as one hundred bills at 12% APR, each with a $1000 face value, that mature in
30 years with no prepayment penalty. In my prior example, after one month, I choose to pay off one such bill, which costs me $1000(1+12%/12) = $1010, and reduces the principal amount by $1000.
□ Harry Sit says
What the lender does with the money received is not your business. When it comes to determining whether a loan is a simple interest loan or a compound interest loan, it only matters whether
*you* are paying interest-on-interest on this loan. If yes, it’s a compound interest loan; if not, it’s a simple interest loan.
Requiring payments received be applied to interest first before reducing principal doesn’t change whether a loan is simple interest or compound interest. Going back to the simple example in
this article, $100,000 simple interest loan at 1% annual interest for 3 years, if I add the requirement that any money received before the end is applied to interest first, the loan is still
a simple interest loan. 3 years, $3,000 in interest. If you do a partial payment in the 26th month, the interest for the second year is the same as the interest for the first year. It doesn’t compound.
CDs by the way do compound. Many CDs pay interest monthly. The interest you receive in the second month is higher than the interest you receive in the first month. If the CD only pays
interest annually, the interest you receive in the second year is higher than the interest you receive in the first year.
□ Junsheng says
It’s not my business, true.. But that was not the point. The point was, if the lender keeps lending out the interest payment he received, he earns compound interest. Guess who’s paying the
bills? The borrowers as a whole. Equivalently, forget about the lender lending the interest out, but think of the *loss* of time value of the (interest) money the borrowers paid.
If you take the formula for mortgage payment (P) for a given loan amount (L) and interest rate (I), it has compound interest written all over it: L = P * sum_j (1+I)^-j. Here j goes from 1 to
the total number of payment cycles (typically months).
□ Harry Sit says
Then by your definition as long as the borrower is required to pay interest before the final loan due date, there is no simple interest loan, because the lender will be able to lend out the
money received and the borrower will lose the time value on the money paid. That’s not the definition of a simple interest loan as commonly understood.
If you don’t agree that a $100k loan at 1% interest for 3 years with $1,000 payable at the end of each year plus a final payment of $100k at the end is a simple interest loan, we don’t really
have a common ground at the root level. The whole discussion about whether something is a simple interest loan or not becomes moot when there is no agreement on the definition of a simple
interest loan.
If you create your strict definition for the color ‘green’ and you say the color of the traffic light isn’t green because it’s off a shade, you are right. When others say the light is green,
they are also correct.
7. David says
Can my lender add unpaid interest to my loan balance on an automobile installment loan.
8. Olabode says
Harry, I appreciate your write-up. It’s educational and interesting, based on the scenarios stated therein. I think the high interest portion early in the life cycle of a mortgage loan is a
result of the borrower paying a smaller principal amount, which increases gradually month on month, while the interest payments reduce month on month.
@Junsheng, Please clearly state your argument and the mortgage payment formula. I will be glad to have you do that.
Thanks guys
9. Jose says
Hi, if I have a mortgage on which I make advance payments on the principal every month. Does it matter if make that payment at the beginning of the month or if in turn I make the payment at the
end of the month, before the next payment is due?
□ Harry Sit says
I don’t think it matters. However, holding on to the extra principal payment for an extra 20 days doesn’t really earn you much interest anyway. If you keep $1,000 for 20 days at 1%, you will get
only $0.55 extra before tax.
10. Steven Charles Scott says
Good stuff, Harry. When I read some of the comments to your article, it reminds me of the simple fact that “intelligent” people often read more (and many times conclude before reading) instead of
just looking and confirming what is actually happening in practice, i.e., empirical evidence matters here. Your CD rebuttal is spot on; anyone can see that easily in their own CD sitting in the
bank over two months. And naturally it is also obvious (after good blogs such as these) when looking at your mortgage statement over time.
11. WR says
jun sheng just shut the fuck up alr lol… clearly doesnt even understand what simple interest is. going by his argument, there is no such thing as a simple interest loan in any way since the
lender can do what ever he wants with the earned interest LOL..
12. michael ryan says
OK, there’s something missing here, which is why you’re all still confused even though the author is correct: the temporal relation between the interest rate and the payment schedule. In his interest-only examples, the interest is “compounding” on a one-year time frame in one case and not in the other, but in both loans the principal is only due at the loan’s end (3 years), and that’s a different time frame from either interest schedule.

Put aside the question of what time of month you make your payments for a moment; you’re getting ahead of yourselves. Look at the two loans again more closely, and imagine them this time not as balloon loans but as a typical mortgage that will be paid off by the end of the loan’s term, in this case 3 years. What’s most important to understand is that you are not paying interest on principal you have already paid back. The math is a bit tricky, but to allow you to have the same monthly, bi-monthly, or annual payment, they figure that out. Say in the non-compounded 3-year loan you wanted annual payments: they would be less than in the author’s example, because at the end of the first year you would owe the interest on the entire 100k, but at the end of the second year you would only owe interest on something close to 66,660, and at the end of the third year on about 33,333. In short, as you repay the loan’s principal, you are paying interest on less money. It’s a differential equation where the parts are all moving simultaneously, but it’s a simple concept, and it applies to both compound and simple interest loans.

Now imagine this loan as compound interest. The important thing to ask first is: compounded over what period of time? A real simple interest loan is in a sense a compound interest loan compounded over the entire term of the loan, but the author’s example is the more standard understanding: the interest is calculated at a rate per year, and you do not pay interest on the interest. I haven’t read the fine print, but I’m pretty sure you actually would pay interest on the interest of any normal mortgage if you paid less than the principal and interest that were calculated to keep you ahead of it. This is what the author is trying to make you see: the bank has already figured out how much interest you must pay and how much principal. If you pay a bit late, more of your payment will go toward interest, and if you miss a payment, not only will you owe interest on that amount of principal for longer, you also missed paying that interest on last month’s balance, and it essentially becomes principal. Instead of changing your payment amount every time you are a day late or early, or even miss a payment, they will just adjust the length of the loan on paper, which is why when you refi or wish to pay off they must give you a calculation they call the payoff.

But here’s the thing: they are not trying to make interest on interest; in fact they strongly resent people who attempt to skip payments and let the interest accumulate. They instead figured out an “amortization” schedule that assumes you pay a certain amount exactly on time, so that each month you pay only the interest on whatever they estimate the principal balance will be, plus enough principal to be paid off by the term of the loan. Obviously these are moving in opposite directions. Your payment amount is calculated assuming you pay exactly that amount exactly on time for the entire term, in which case you will not be paying interest on interest.

If you’re still confused, make it a one-year loan of 100k at 10%. If it were a balloon loan, at the end you would owe 100k plus 10k interest = 110k. Divide that by 12 and it’s 9,166.66 a month, but that’s wrong, because after every monthly payment you are borrowing less money. They have figured it out to be 8,791.59; if paid on time, you will be paid off in a year and will have only paid 10% (annualized) on whatever you still owed every month, which keeps changing.
13. Mick says
Harry, what do you think about the concept of using a HELOC to pay off your mortgage? The concept that was shown to me is to get a 1st position HELOC and pay off your mortgage and also use the
HELOC like a savings or checking account. Put all of your money you make into the HELOC each month and then pay your bills and life expenses out of it. Looks like this will pay your mortgage off
in 5-10 years and save thousands in interest.
□ michael ryan says
That’s nonsense. Who’s telling you this, a HELOC salesman or some “income taxes are unconstitutional” nutjob?

Mortgage rates are much cheaper than HELOCs and amortized over longer periods. Now, whether you want to carry mortgage debt is a two-sided sword. On one hand, the rates are actually cheaper than real inflation, so in essence you’re getting paid to borrow; why pay off a loan when you could use that cash to make a better return than your mortgage interest rate, if you actually can? Most use a mortgage to leverage. Say real estate is appreciating at 5% annually and you have 100k earning 3% in the bank. You could buy a 100k house, or you could get a 400k mortgage at 3% and buy a 500k house and make a far better return on your 100k. And if you can make, say, 7% in stocks, why pay off a 3% mortgage to save 3% when you could make 7%? The other side is how safe you want to play it. If the stock market seems too risky for you and you are not interested in other investments, you may want to pay off your mortgage. But keep in mind a house is really, really illiquid, and you are putting all your eggs in the home basket: if the real estate market tanks, you may not lose your home, but it might not be worth much. This sounds confusing, but if you put that other money in investments that did better and held their value, then if real estate crashes you could buy a much better home for pennies on the dollar, or walk away from your mortgage, or renegotiate it with the bank if you had little equity in it.

OK, regarding HELOCs: they are basically second mortgages or lines of credit. In some ways they seem great: no interest unless you use it, and only interest on what’s used. The thing is, these lenders will reduce that line of credit if real estate prices drop, so you only have the line of credit in good times, unless you spend it, at which point you are paying interest. But their rates are lower than unsecured credit and often tax deductible. Here’s a good way I think to use them, or at least how I have. I bought when there was blood in the street in ’09 and have a house with a low interest rate and a couple million in home equity; it’s worth five times what I paid for it. Great, but I’m worried so much of my paper wealth is in one illiquid property. I don’t really want to pay off a 3% mortgage, because it’s free money with inflation more than that, and my rents cover the mortgage, but I would like to diversify some of that equity. A cash-out refinance would raise my rate and cost quite a bit in fees, but I could get a HELOC, take some of the equity out of this property, and invest it in something else that would pay more than the HELOC interest. I could buy some bare land in the retirement area where I have my vacation/retirement home and build a spec property; a rental in that area would bring in more than the HELOC interest, especially with rental tax advantages. I could expand my hobby farming business, or start another business if I were sure enough of it to essentially bet part of my house on it. I could even buy gold or something that might not pay off right away, but I would have diversified the home equity. Realize my house is valued at 5x my mortgage and has rents covering the mortgage; I would only be using a bit of that equity to invest in something else. Most people use HELOCs to fix up their house, which makes sense if they will actually see a return in the market value, which many renovations do not; some are just resigned to that house and want to live more comfortably rather than selling and moving to a better-suited house.
14. Mick says
Thanks for your reply Michael.
The concept is a 1st position heloc, not a 2nd position. My understanding is in a heloc you only pay interest daily on the current balance. So if you put all of your monthly income directly into
the heloc that makes your heloc (ie mortgage) payment and it pays down the balance (so paying less in interest) until you take money out to pay any bills. Supposedly this concept knocks off so
much interest as compared to an amortized loan (heavy on interest on the front end) that you can pay it off in 5-10 years.
15. Maggie Ru says
I think the best standard to apply in deciding whether an interest rate is simple or compound is to see whether the interest generated by the principal during a given period is counted in future
interest calculations. In other words, if the interest is not generating new interest, it’s a simple interest rate. If the interest is generating new interest, it’s a compound interest rate. For
example, you have a mortgage of 10K, and the first-year total mortgage payment, let’s say, is $5000, with $4000 paid as interest and only $1000 deducted from the original principal, which gives
you 9.9K principal at the beginning of the second year. When the bank calculates the interest for the second year, they use 9.9K, which in a certain way includes the $4000 interest you paid in the
first year. So the $4000 interest you paid in the first year actually generates new interest for the second year. In this way, I would say the home mortgage interest rate is a compound interest
rate.
□ Harry Sit says
You lost me toward the end. How is it 9.9k when you started with $10k and you paid off $1,000? Wouldn’t it be $9k? Or if you meant you started with $100k borrowed and it becomes $99k after
you paid off $1,000 in the first year, in what way does the 99K include the $4000 interest you paid in the first year? $100k – $1k = $99k. The interest number isn’t part of it, whether it was
$4,000 or $8,000.
□ Maggie Ru says
Thank you for the reply and sorry for the typo. What I meant is to start with a $10K principal; $100 of principal paid off with $400 paid in interest in the first year leaves a 9.9K
principal for the second year. What I am trying to say is that a simple interest rate actually represents a way of calculating interest. When a rate is a simple interest rate, we can use the
formula I=PRT to calculate the total amount of interest paid on a loan, where P is the principal, R is the annual interest rate in decimal form, and T is the loan period expressed in
years. When it comes to a home mortgage, within each year, R can be considered a simple interest rate when calculating interest for that year, and the formula I=PRT applies. But to
calculate the total I paid over the mortgage, the formula I=PRT doesn't apply and it's much more complicated. From the perspective of viewing a simple interest rate as a way of calculating
interest, the interest rate for a home mortgage is not a simple interest rate. In addition, I am not denying your point of view, but just providing a different perspective on understanding the
simple interest rate.
16. Maggie Ru says
In addition, I think from the banks' (lenders') point of view, the mortgage rate is the return rate on the bank's investment. Banks use the rate more like a discount rate (which is a compounding rate) to
get present values of future cash flows, which helps decide how profitable their loan (investment) is. In my opinion, a simple interest rate is a very idealized situation in real lending/borrowing.
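That discount-rate view can be checked numerically: if every scheduled payment of a standard amortized loan is discounted back to origination at the note's monthly rate, the present value equals the amount borrowed, i.e. the lender's internal rate of return is the note rate. A sketch with hypothetical figures ($100,000 at 6% for 30 years):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard level payment for a fully amortizing loan."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def present_value(payment, annual_rate, months):
    """Discount each scheduled payment back to origination at the note rate."""
    r = annual_rate / 12
    return sum(payment / (1 + r) ** k for k in range(1, months + 1))

pmt = monthly_payment(100_000, 0.06, 360)
pv = present_value(pmt, 0.06, 360)
print(round(pmt, 2))  # about 599.55
print(round(pv, 2))   # about 100000.0: the PV of all payments is the loan amount
```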
17. Marc says
I understand that with mortgages we’re never reporting accrued interest since they are already paid… so that technically fits the definition of simple interest. However, why are we saying that in
the US, mortgages are compounded 12 times, and in Canada 2 times? Something must still be compounded?
18. Joseph says
Some of the information here is correct, and some is incorrect or not stated properly. Take it from a mathematician.
First of all, “Does a home mortgage use Simple or Compound Interest” is indeed a factual question, the same as “Is Miami located to the north or south of Boston?” The answer is a simple yes or
no. The determination… if you miss a mortgage payment, does the interest get added to the balance that the next month’s interest is calculated on? If the answer is yes, that’s interest on
interest, which is compounding. If the answer is no, it is not compounding.
Just because a loan or CD can compound doesn’t mean it will. For a loan that compounds, if you make a payment at the exact moment that interest is being charged and will be added to the balance,
and your payment is greater than this interest, you pay it off before it has a chance to be added to the balance and have interest calculated on interest. When this happens, mathematically,
there’s no difference between compound interest and simple interest.
19. Nikko says
This is incorrect. Mortgage amortization has the premise that your interest payment goes lower as the principal balance goes down. In your example the interest went higher every year. That's false.
It's compound interest, because interest is calculated at the frequency of the payment: the more frequent, the more the interest.
Simple interest means the interest is calculated as a whole and then spread apart based on the frequency. Big difference, because for simple interest, no matter the frequency, the interest adds up to
the same amount.
For compound interest, the higher the frequency, the higher the interest. That is why making principal payments reduces interest: at every period you calculate the interest based on the
principal balance, and lowering it lowers your interest for that period.
20. Naga Rajan says
Very good article Harry.
As far as the type of loan, I would say “*Simple Interest with Varying Principal*”, as I don’t see any compounding at all. Please let me know if I am missing something here. Thanks.
21. partha says
Old thread, but I will add some fuel to this. In India, the government offers a housing loan for its employees with a "simple interest" rate of 9.5% per annum. The typical term is 20 years; the first 150
payments are applied to principal. On a 100K loan, this makes the monthly payment 100K/150 = 667. The (simple) interest outstanding at this point is then paid back in the next 90 instalments
of 664 each, simply by dividing the outstanding interest by 90, with no interest on interest during this repayment period. What do we call this: really simple interest? If we apply the standard EMI
formula (100K principal, 240 payments of 666), the interest rate we get is a little over 5%. So is it 9.5% really simple interest or 5.1% simple interest?
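Those numbers can be reproduced directly: with the first 150 payments going entirely to principal, 9.5% simple interest accrues each month on the declining balance and is never capitalized; dividing the accumulated interest by 90 gives the later installments. A quick check:

```python
principal = 100_000
monthly_rate = 0.095 / 12

balance = principal
accrued_interest = 0.0

# First 150 payments: principal only; interest accrues but is never capitalized
for _ in range(150):
    accrued_interest += balance * monthly_rate  # simple interest on the balance
    balance -= principal / 150                  # 100K / 150 ~ 667 toward principal

# The remaining 90 payments split the accrued interest evenly
interest_installment = accrued_interest / 90
print(round(principal / 150))       # 667
print(round(interest_installment))  # 664, matching the quoted schedule
```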
22. Sue says
Hi: I am looking at a line-of-credit loan which offers a fixed rate with monthly payments that are equally divided between principal and interest.
I could not find anywhere a calculator for this type of payment (and I do not know how to figure out the total interest to be paid on this line of credit) to compare it with a standard fixed rate
mortgage. Do you happen to know of a calculator that could show how much interest I would pay for a specified amount and term and potentially also show an amortization schedule? Could you maybe
tell me how much interest I would pay on a 100K loan with a 10 yr term and fixed interest= 6.19%, assuming each monthly payment being half principal and half interest? Thanks.
23. Hector Martin says
The definition of compound interest as "interest on interest" is incomplete and confusing. As long as the interest is liquidated before the end of the whole investment, the interest will be
compound. It does not matter whether the interest is paid (withdrawn from the investment) or added to the former capital. In a mortgage, there are infinitely many different compositions of each monthly
payment. The one where you pay all the interest is only one of them, and it is precisely the one that is confusing, because it pays all the interest accrued without compounding.
So, the compound interest definition: the interest is liquidated before the end of the whole investment, and it can be paid (withdrawn from the investment) or added to the capital. Both cases are
financially equivalent at a compound interest rate.
□ Harry Sit says
So the question is what happens if you stop making payments. If you pay nothing toward your mortgage in one month, will the interest charged next month be higher, not counting late fees? Most
people don’t get to find out that answer because they don’t want to incur late fees and risk losing their home. Those who do will see the interest portion doesn’t go up from one month to the
next. Of course it’s only a small consolation against the backdrop of large late fees and the upcoming foreclosure proceedings, but it does prove that the mortgage is a simple interest loan.
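That behavior is easy to demonstrate with a small amortization sketch (the loan figures below are hypothetical): each month's interest is computed from the remaining principal alone, so as long as payments are made the interest portion only declines, and a skipped payment leaves the balance, and hence the next month's interest charge, unchanged.

```python
def interest_portions(principal, annual_rate, payment, months):
    """Interest portion of each monthly payment. Interest is charged on the
    remaining principal only, so nothing ever compounds."""
    r = annual_rate / 12
    balance = principal
    portions = []
    for _ in range(months):
        interest = balance * r           # charged on the current balance only
        portions.append(interest)
        balance -= payment - interest    # the rest of the payment cuts principal
    return portions

# Hypothetical $200,000 loan at 6% with the standard 30-year payment (~$1199.10)
portions = interest_portions(200_000, 0.06, 1199.10, 12)
print(round(portions[0], 2))                       # 1000.0 (200,000 * 0.5%)
print(portions == sorted(portions, reverse=True))  # True: interest only declines
```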
24. John Syrinek says
The math I did in school had monthly payments that decreased each month; as principal decreases, so does the monthly payment. *This is not the case for mortgages*
With mortgages, monthly payment remains the same:
Monthly payment (constant) = interest payment (decreases) + amount towards principal (must increase)
Said another way, if amount paid in interest decreases each month, but the monthly payment must remain the constant, then *amount towards principal must increase each month*.
This results in an effect that is non-linear, but that doesn't mean interest is compounding. It means you're paying more and more towards principal each month, which ends up saving you money! 🎉
25. Ivene says
You said if you miss a mortgage payment, the interest does not increase the following month.
Still, you owe that interest payment you missed. Doesn't that mean that you do increase what you owe, and therefore the total interest paid at the end of the loan is more?
For example, if I missed a $1,000 payment ($900 interest for the month and $100 on principal), wouldn't I have to pay that $1,000 (say the following month) and be charged interest too? I.e.,
interest on interest?
Btw, Nice article.
26. Bubba Gump says
A mortgage was granted 36 years ago. The note states $7000 @ 8% interest for 15 years of equal payments. No payment was ever made in 36 years. My opinion is the 15 year term of contract never
So now the borrower wants to pay back only $7000. When a note payment is not made, it is deferred with interest and compounded. The note does not have an end date, nor does it have an amount due
each payment. I feel that the amount due is compound interest. What is correct?
Leave a Reply Cancel reply
|
{"url":"https://thefinancebuff.com/is-home-mortgage-simple-interest-or-compound-interest.html","timestamp":"2024-11-08T04:38:24Z","content_type":"text/html","content_length":"227630","record_id":"<urn:uuid:894a046c-f17f-49a3-9a05-dccb3d9b25a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00477.warc.gz"}
|
Denotational semantics of recursive types in synthetic guarded domain theory
Guarded recursion is a form of recursion where recursive calls are guarded by delay modalities. Previous work has shown how guarded recursion is useful for reasoning operationally about programming
languages with advanced features including general references, recursive types, countable non-determinism and concurrency.
Guarded recursion also offers a way of adding recursion to type theory while maintaining logical consistency. In previous work we initiated a programme of denotational semantics in type theory using
guarded recursion, by constructing a computationally adequate model of the language PCF (simply typed lambda calculus with fixed points). This model was intensional in that it could distinguish
between computations computing the same result using a different number of fixed point unfoldings.
In this work we show how also programming languages with recursive types can be given denotational semantics in type theory with guarded recursion. More precisely, we give a computationally adequate
denotational semantics to the language FPC (simply typed lambda calculus extended with recursive types), modelling recursive types using guarded recursive types. The model is intensional in the same
way as was the case in previous work, but we show how to recover extensionality using a logical relation.
All constructions and reasoning in this paper, including proofs of theorems such as soundness and adequacy, are by (informal) reasoning in type theory, often using guarded recursion.
Original language English
Title of host publication LICS '16 Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science
Number of pages 10
Publisher Association for Computing Machinery
Publication date 2016
Pages 317-326
ISBN (Print) 978-1-4503-4391-6
Publication status Published - 2016
• Guarded Recursion
• Denotational Semantics
• Type Theory
• Recursive Types
• Computational Adequacy
|
{"url":"https://pure.itu.dk/en/publications/denotational-semantics-of-recursive-types-in-synthetic-guarded-do","timestamp":"2024-11-04T05:59:17Z","content_type":"text/html","content_length":"62824","record_id":"<urn:uuid:bf14ec45-2446-4d3a-910b-792f2aff2d48>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00878.warc.gz"}
|
Advent of Code 2023 - Day 1
Published on
This series of posts is a writeup of my solutions to the Advent of Code 2023 problems. While time and space complexities are not the main concerns of these problems, I will try to include them in my writeup.
Part 1
Given a string, combine the first digit and the last digit (in that order) to form a single two-digit number. Then find the sum of all numbers formed this way.
For example:
1abc2
pqr3stu8vwx
a1b2c3d4e5f
treb7uchet
In this example, the values of these four lines are 12, 38, 15, and 77. Adding these together produces 142.
This problem is fairly straightforward. We can use a regular expression to find the first and last digits of each line, and then add them together.
First way
We need to find all the digits in the string, so the simplest way to do this is to use the regex /\d/g to match all the digits. The g flag tells the regex to match all occurrences of the pattern,
rather than just the first one.
We can then use the first and the last one to form a string, and then convert that string to a number using parseInt.
From the 4th line of the example treb7uchet, we can see that it should produce the number 77. The regex above matches only one 7, but accessing the "first" and the "last" number will give us exactly
77. So there should be no more edge cases to consider.
function part1(input) {
  let result = 0;
  for (const line of input.split("\n")) {
    const nums = line.match(/\d/g);
    if (nums) {
      result += parseInt(nums[0] + nums[nums.length - 1]);
    }
  }
  return result;
}
Second way
Rather than using regex, we can iterate through the string and check if each character is a digit.
By breaking down the string into an array of characters with line.split, we can use filter to remove all non-digit characters.
Here we use a string comparison to check if the character is a digit. This works because the characters are all ASCII, and the digit characters are consecutive, so we can just check if
the character is between "0" and "9". Ref: MDN page on string comparison
function part1(input) {
  let result = 0;
  for (const line of input.split("\n")) {
    const nums = line.split("").filter((c) => c >= "0" && c <= "9");
    if (nums.length > 0) {
      result += parseInt(nums[0] + nums[nums.length - 1]);
    }
  }
  return result;
}
Part 2
The problem becomes more complex: one, two, three, four, five, six, seven, eight, and nine also count as valid "digits".
In this example, the values are 29, 83, 13, 24, 42, 14, and 76. Adding these together produces 281.
We can see that the digit words can overlap. For example, zoneight234 contains one, eight, 2, 3 and 4. We need a way to keep the previous digit-checking and take care of the words.
First attempt
We can use an iterative approach, looking at the string from left to right. Here is the pseudocode for processing each line:
create a list to keep track of numbers found
start from the first character, go through each index
if the current character is a digit
add it to the list
if the string looking from the current index starts with any of the digit words
find the corresponding digit
add the digit to the list
take the first and last digit from the list and convert it to number
... same steps as before ...
Digit words
Considering the following array of digit words:
const digitWords = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"];
Notice the zero entry at the beginning. I chose this arrangement for two main reasons:
1. There is no appearance of zero or 0 in the example input, so we can just ignore it.
2. The index of the digit word in the array is the same as the digit it represents.
With such, we can update the pseudo to a more precise version:
create a list to keep track of numbers found
start from the first character, go through each index
if the current character is a digit
add it to the list
// if the string looking from the current index starts with any of the digit words
// find the corresponding digit
// add the digit to the list
go through each digit word
if the string looking from the current index starts with the digit word
add the index of the digit word to the list
take the first and last digit from the list and convert it to number
... same steps as before ...
And the corresponding code:
function part2(input) {
  let result = 0;
  for (const line of input.split("\n")) {
    const nums = [];
    for (let i = 0; i < line.length; i++) {
      const c = line[i];
      if (c >= "0" && c <= "9") {
        nums.push(c);
      }
      for (let j = 0; j < digitWords.length; j++) {
        const word = digitWords[j];
        if (line.startsWith(word, i)) {
          nums.push(j.toString());
        }
      }
    }
    if (nums.length > 0) {
      result += parseInt(nums[0] + nums[nums.length - 1]);
    }
  }
  return result;
}
It is important to note that we need to push the string representation of the index of the digit word, rather than the index itself. Otherwise nums[0] + nums[nums.length - 1] will actually add up the
two numbers rather than concatenating them.
The above solution should be short and simple enough to understand, but there is still some room for improvement in terms of performance.
Time complexity
Consider n as the number of lines and m as the longest string in the input. The time complexity of the above solution is O(n * m).
This is because for each input line (n), we need to iterate through each character in each line (m), and for each character, we need to iterate through each digit word. However the number of digit
words is constant, so we can consider it as constant time.
First optimization
If we look at the logic of checking digits THEN checking digit words, we can see that the two checks are mutually exclusive: if we find a digit, we don't need to check for digit words. So we can skip the digit-word check whenever we find a digit.
Although the time complexity is still O(n * m), the program is expected to run faster if there are more digits in the string.
Second optimization
We can also see that we only need the first and the last digit/word in each line. So we can stop scanning forward once we find the first digit/word. But then we will need to scan from both directions.
We can use a variable to keep track of the direction we're checking, defaulting to forward. Once we find something in the forward direction, we switch the direction to backward and start checking from
the end of the string.
Again, the time complexity is still O(n * m), but the program will be much faster if the digits are located towards both ends of each line.
The complete optimized code is as follows:
export function part2(input) {
  let result = 0;
  for (const line of input.split("\n")) {
    const nums = [];
    let mode = "forward"; // 'forward' or 'backward'
    let currentIndex = 0;
    while (currentIndex >= 0 && currentIndex < line.length) {
      const c = line[currentIndex];
      // Indicates whether a match has been found or not
      let found = "";
      if (c >= "0" && c <= "9") {
        found = c;
      } else {
        for (let j = 0; j < digitWords.length; j++) {
          const word = digitWords[j];
          /*
           * If we're going backwards, we need to start from the end of the word
           * e.g. trying to check for 'three' in 'abcone2threeooo' from the end
           * 'abcone2threeooo'
           *             ^ current index is 11
           *         ^ we need to start from 11 - 5 + 1 = 7
           * 'abcone2threeooo'.startsWith('three', 7) === true
           */
          const startPos =
            mode === "forward" ? currentIndex : currentIndex - word.length + 1;
          if (line.startsWith(word, startPos)) {
            found = j.toString();
            break;
          }
        }
      }
      // The flow will be forward -> backward -> end
      if (mode === "forward") {
        if (found) {
          nums.push(found);
          // finished the forward direction, time for backward
          mode = "backward";
          currentIndex = line.length - 1;
        } else {
          // nothing found, keep going forward until index reaches the end
          currentIndex++;
        }
      } else {
        if (found) {
          nums.push(found);
          // finished the backward direction, time to end
          break;
        } else {
          // nothing found, keep going backward until index reaches the start
          currentIndex--;
        }
      }
    }
    if (nums.length > 0) {
      result += parseInt(nums[0] + nums[nums.length - 1]);
    }
  }
  return result;
}
Space complexity
In the first algorithm we have a growing list of numbers, which has a space complexity of O(m), where m is the length of the longest line in the input.
However, with the above optimizations, we push at most twice into the array (once for the forward direction and once for the backward direction), so the space complexity becomes O(1). Hooray!
AoC 2023 Day 1 was fairly simple, but it could be quite a challenge to do it both fast (human clock) and fast (computer clock). If you'd like to see more of these writeups, please share my post and
comment below. Thanks for reading!
I am seeking a good way to 1. put a live javascript runner in my blog post, and 2. make a newsletter for my blog. If you have any suggestions, please let me know :)
|
{"url":"https://hinsxd.dev/blog/aoc-day-1","timestamp":"2024-11-05T06:19:15Z","content_type":"text/html","content_length":"162453","record_id":"<urn:uuid:7d45c749-b71f-4da6-b0a0-ea0fb49df834>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00837.warc.gz"}
|
Line Equations Formulas Calculator - Slope
By Jimmy Raymond
Copyright 2002-2015
|
{"url":"https://www.ajdesigner.com/phpline/line_slope_intercept_equation_m.php","timestamp":"2024-11-13T00:09:54Z","content_type":"text/html","content_length":"26255","record_id":"<urn:uuid:c6944eab-5bfb-4e21-857c-e8027af5ee42>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00536.warc.gz"}
|
Convert Infix to Postfix Notation (C++, Java & Python Code)
For a computer, parentheses in an expression can increase the time needed to evaluate it. To minimize this overhead, various notations have been devised for representing operators
and operands in an expression. So, in this article, let us walk through the technique for converting infix to postfix notation, along with C++, Java, and Python code implementations.
Before that, we first need to revise the two notations involved, namely infix and postfix notation, in detail and with examples. So, let's get started!
What is Infix Notation?
When the operator is placed between its operands, it is known as infix notation. The operand is not necessarily a constant or variable; it can also be an expression.
For example: (a + b) * (c + d)
The above expression contains three operators. The operands for the first plus operator are a and b, whereas the operands for the second plus operator are c and d. Evaluating the result of these
operations requires applying a set of precedence rules.
At last, after applying the addition operator to (a + b) and to (c + d), the multiplication operation is performed to obtain the final answer. Here is the syntax:
<operand> <operator> <operand>
Remember that if there is only one operator in the expression, we do not have to follow any rule set to evaluate the expression. However, if the expression possesses more than one set of operators,
you can define the operator's priority according to the below-given table and evaluate the final result.
Operators Symbols
Parenthesis ( ), {}, [ ]
Exponents ^
Multiplication and Division *, /
Addition and Subtraction +, -
What is Postfix Notation?
The expression in which the operator is written after the operands are known as a postfix expression or reverse Polish notation.
For example, the postfix notation of the infix expression (a + b) can be written as ab+.
A postfix expression is an arithmetic expression in which operators are applied from left to right. There is no need for parentheses while working with postfix notation, unlike infix expressions.
Moreover, no operator precedence rules or associativity rules are needed, meaning that coders do not need to memorize a special set of rules to help them determine the order in which operations will
be performed.
Let us understand the algorithm to define postfix notation using an example:
Algorithm for postfix notation
Let the infix notation of an expression be: a + b * c
Here, we scan the expression and convert the operator with the highest precedence first. Since multiplication binds tighter than addition, b * c becomes bc*, and the expression will be:
Resultant Expression 1: a + bc*
The only operator left is the plus, so as a result, the final output of the expression will be:
Resultant Expression 2: abc*+
Therefore, the final algorithmic steps for evaluating a postfix expression are:
1) Scan the expression from left to right
2) If an operand is encountered, push it onto the stack
3) If an operator is encountered, pop the corresponding operands from the stack, perform the computation, and push the result back
4) Continue the same process; the final value retained in the stack is the result
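The evaluation steps above can be sketched in Python, assuming the single-letter operands are bound to numbers (the bindings below are made up for illustration):

```python
def evaluate_postfix(expression, values):
    """Evaluate a postfix string left to right using a stack.
    `values` maps single-letter operands to numbers."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for token in expression:
        if token in ops:
            b = stack.pop()   # the right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(values[token])  # operands are pushed as-is
    return stack.pop()        # the final value is retained on the stack

# abc*+ is the postfix form of a + b * c
print(evaluate_postfix("abc*+", {'a': 2, 'b': 3, 'c': 4}))  # 14
```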
How to Convert Infix to Postfix Notation?
Using the stack data structure is the best method for converting an infix expression to a postfix expression. It holds operators until both operands have been processed, and it reverses the order of
operators in the postfix expression to match the order of operation.
Here is an algorithm that returns the string in postfix order, left to right. For each token, there are four cases:
1. If the current token is an opening parenthesis, push it into the stack.
2. If the current token is a closing parenthesis, pop tokens from the stack until a corresponding opening parenthesis is popped. Append each operator to the end of the postfix expression.
3. Append the current token to the end of the postfix expression if it is an operand
4. If the current token is an operator, push it onto the stack if it has higher precedence than the operator on top of the stack. If it has lower or equal precedence, first pop operators from the
stack (appending them to the postfix expression) until the operator on top has lower precedence or the stack becomes empty, then push the current token.
5. Finally, if you have any remaining operators in the stack, add them to the end of the postfix expression until the stack is empty and return the postfixed expression.
Let us understand the conversion of infix to postfix notation using stack with the help of the below example:
Let the expression be evaluated by m*n+(p-q)+r.
Steps Current Token Operation on Stack Stack Representation Postfix Expression
1 m m
2 * Push * m
3 n * mn
4 + Push + mn*
5 ( Push +( mn*
6 p +( mn*p
7 - Push +(- mn*p
8 q +(- mn*pq
9 ) Pop and append until the open bracket + mn*pq-
10 + Push + mn*pq-+
11 r + mn*pq-+r
12 end Pop until the stack is empty mn*pq-+r+
Therefore, the final output as postfix notation for the above expression is mn*pq-+r+
Python Code for Infix to Postfix Conversion:
Operators = set(['+', '-', '*', '/', '(', ')', '^'])  # collection of Operators
Priority = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}  # dictionary having priorities of Operators

def infixToPostfix(expression):
    stack = []  # initialization of empty stack
    output = ''
    for character in expression:
        if character not in Operators:  # if an operand, append it to the postfix expression
            output += character
        elif character == '(':  # opening bracket goes straight onto the stack
            stack.append(character)
        elif character == ')':  # pop until the matching opening bracket
            while stack and stack[-1] != '(':
                output += stack.pop()
            stack.pop()  # discard the '(' itself
        else:  # operator: pop anything of higher or equal priority first
            while stack and stack[-1] != '(' and Priority[character] <= Priority[stack[-1]]:
                output += stack.pop()
            stack.append(character)
    while stack:  # pop any remaining operators
        output += stack.pop()
    return output

expression = input('Enter infix expression ')
print('infix notation: ', expression)
print('postfix notation: ', infixToPostfix(expression))
Enter infix expression m*n+(p-q)+r
infix notation: m*n+(p-q)+r
postfix notation: mn*pq-+r+
Java Code for Infix to Postfix Conversion:
import java.util.*;

public class favtutor {
    public static int precedence(char x) {
        if (x == '^') {                       // highest precedence
            return 2;
        } else if (x == '*' || x == '/') {
            return 1;                         // second highest precedence
        } else if (x == '+' || x == '-') {
            return 0;                         // lowest precedence
        }
        return -1;                            // not an operator (also covers '(')
    }

    public static String InfixToPostfix(String str) {
        Stack<Character> stk = new Stack<>(); // used for converting infix to postfix
        String ans = "";                      // string containing our final answer
        int n = str.length();
        for (int i = 0; i < n; i++) {
            char x = str.charAt(i);
            // accept letters as well as digits so the article's examples work
            if ((x >= '0' && x <= '9') || Character.isLetter(x)) {
                ans += x;                     // operands go straight to the output
            } else if (x == '(') {            // push directly onto the stack
                stk.push(x);
            } else if (x == ')') {
                while (!stk.isEmpty() && stk.peek() != '(') { // keep popping till opening bracket is found
                    ans += stk.pop();
                }
                stk.pop();                    // discard the '(' itself
            } else {
                while (!stk.isEmpty() && precedence(stk.peek()) >= precedence(x)) { // remove all higher precedence operators
                    ans += stk.pop();
                }
                stk.push(x);
            }
        }
        while (!stk.isEmpty()) {              // pop any remaining operators
            ans += stk.pop();
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(InfixToPostfix("m*n+(p-q)+r")); // prints mn*pq-+r+
    }
}
C++ Code for Infix to Postfix Conversion:
#include <iostream>
#include <stack>
#include <string>
using namespace std;

// A function to return the precedence of operators
int prec(char ch) {
    if (ch == '^')
        return 3;
    else if (ch == '/' || ch == '*')
        return 2;
    else if (ch == '+' || ch == '-')
        return 1;
    return -1;
}

// A function to convert an infix expression to a postfix expression
string infixToPostfix(string s) {
    stack<char> st; // For stack operations, we are using the C++ built-in stack
    string ans = "";
    for (int i = 0; i < (int)s.length(); i++) {
        char ch = s[i];
        // If the current character is an operand, add it to our answer string.
        if ((ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z') || (ch >= '0' && ch <= '9'))
            ans += ch; // Append the current character of the string to our answer
        // If the current character is '(', push it onto the stack.
        else if (ch == '(')
            st.push(ch);
        // If the current character is ')', append the top character of the stack
        // to our answer and pop it, until an '(' is encountered.
        else if (ch == ')') {
            while (st.top() != '(') {
                ans += st.top(); // Append the top character of the stack to our answer
                st.pop();
            }
            st.pop(); // discard the '(' itself
        }
        // If an operator is scanned
        else {
            while (!st.empty() && prec(ch) <= prec(st.top())) {
                ans += st.top();
                st.pop();
            }
            st.push(ch); // Push the current character onto the stack
        }
    }
    // Pop all the remaining elements from the stack
    while (!st.empty()) {
        ans += st.top();
        st.pop();
    }
    return ans;
}

int main() {
    string s;
    cin >> s;
    cout << infixToPostfix(s);
    return 0;
}
Time Complexity
The time complexity of the above solution to convert infix to postfix notation is O(n), where n is the length of infix expression. Similarly, the space complexity for the conversion is O(n) as it
requires equal space to execute the solution using the stack data structure.
Infix to Postfix Rules
Let's first go over the guidelines for transforming infix to postfix before talking about the strategy for doing so without using a stack. The following are the guidelines for converting from infix to
postfix:
2. If the token is an operator, output operators one at a time from the stack until one with a lower precedence is at the top. Stack the operator that is now active.
3. Push the token onto the stack when it is a left parenthesis.
4. Pop operators off the stack as well as output them unless a left parenthesis is popped if the token is a right parenthesis. Remove the left parenthesis and throw it away.
The following formula transforms infix to postfix without the use of a stack:
• Create a variable precedence to record the precedence of the most recent operator encountered, along with an empty string postfix.
• Go over every token in the infix expression one by one:
• Add the token to the postfix string if it is an operand.
• If the token is an operator and its precedence is greater than or equal to the precedence of the last operator encountered, add it to the postfix string. If not, append the last operator encountered to the postfix string and carry on until you come across an operator with a lower precedence.
• Increase the precedence variable and carry on if the token is a left parenthesis.
• Reduce the precedence variable and go on if the character is a right parenthesis.
• Send the postfix string back.
How to convert infix to postfix without using stack?
Although infix notation is the familiar way of writing mathematical expressions, evaluating it by computer can be challenging because of the need to keep track of operator precedence and associativity.
Converting an expression to postfix notation is one technique to make its evaluation simpler. Postfix notation, also referred to as Reverse Polish notation, is linear and does not require tracking
operator precedence or associativity, which makes it simpler for a computer to evaluate.
The operator is positioned between the operands when writing arithmetic expressions in infix notation. Writing arithmetic expressions in postfix notation places the operator after the operands. In
the conventional way of transforming infix to postfix, both operands, and operators are kept track of in a stack.
The conversion can be carried out without utilizing a stack, which is advantageous when stack memory is constrained or when using a language that doesn't allow one.
#include <iostream>
#include <string>
#include <vector>
#include <cctype>

int getPrecedence(char c) {
    if (c == '*' || c == '/')
        return 2;
    else if (c == '+' || c == '-')
        return 1;
    return 0;
}

std::string infixToPostfix(const std::string& infix) {
    std::string postfix;
    std::vector<char> ops; // operator holder used in LIFO fashion (no std::stack)
    for (char c : infix) {
        if (isalnum(static_cast<unsigned char>(c))) {
            postfix += c;
        } else if (c == '+' || c == '-' || c == '*' || c == '/') {
            // Pop higher- or equal-precedence operators before pushing c
            while (!ops.empty() && getPrecedence(ops.back()) >= getPrecedence(c)) {
                postfix += ops.back();
                ops.pop_back();
            }
            ops.push_back(c);
        } else if (c == '(') {
            ops.push_back(c);
        } else if (c == ')') {
            // Pop operators until the matching '(' is found
            while (!ops.empty() && ops.back() != '(') {
                postfix += ops.back();
                ops.pop_back();
            }
            if (!ops.empty())
                ops.pop_back(); // remove the '(' and discard it
        }
    }
    // Flush any remaining operators
    while (!ops.empty()) {
        postfix += ops.back();
        ops.pop_back();
    }
    return postfix;
}

int main() {
    std::string infix = "a+b*(c+d)-e/f";
    std::string postfix = infixToPostfix(infix);
    std::cout << "Infix: " << infix << std::endl;
    std::cout << "Postfix: " << postfix << std::endl;
    return 0;
}
Infix: a+b*(c+d)-e/f
Postfix: abcd+*+ef/-
Infix expressions are what we humans normally use to write expressions, but computers typically rely on a stack to evaluate them. Now you know a simple technique to convert infix to postfix notation. It is highly recommended to understand this problem thoroughly to make your programming easy and efficient. Happy Learning :)
|
{"url":"https://favtutor.com/blogs/infix-to-postfix-conversion","timestamp":"2024-11-06T11:17:43Z","content_type":"text/html","content_length":"120218","record_id":"<urn:uuid:7516a260-d940-4c78-8616-919a95a0d4a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00668.warc.gz"}
|
Integer Division and Finding the Remainder in JavaScript
In programming, you likely know that we need to do a lot of math operations, like integer division, where you divide two numbers and get an integer as a result. But what if you also need to find the
remainder of that division? That's where the modulo operation comes in.
In this Byte, we'll explore how to perform an integer division and find the remainder in JavaScript.
Finding the Remainder in JavaScript
The modulo operation finds the remainder after division of one number by another. In JavaScript, this operation is performed using the % operator. For example, if we divide 10 by 3, the quotient is 3
and the remainder is 1. The modulo operation gives us only the remainder.
let num1 = 10;
let num2 = 3;
let remainder = num1 % num2;
console.log(remainder); // Output: 1
In this code, num1 % num2 performs the modulo operation and the result is stored in the remainder variable. When we log remainder to the console, we get 1, which is the remainder when 10 is divided
by 3.
Integer Division in JavaScript
JavaScript does not have a built-in operator for integer division. However, we can achieve this by first performing a regular division using the / operator, and then using the Math.floor() function
to round down to the nearest integer.
let num1 = 10;
let num2 = 3;
let quotient = Math.floor(num1 / num2);
console.log(quotient); // Output: 3
Here, num1 / num2 performs the division and Math.floor() rounds down the result to the nearest integer. The result is then stored in the quotient variable. When we log quotient to the console, we get
3, which is the integer part of the division of 10 by 3.
Note: The Math.floor() function rounds a number DOWN to the nearest integer, which means it always rounds towards negative infinity. This is important to remember when working with negative numbers.
Use Cases
Integer division and finding the remainder are fundamental operations in programming and have a wide variety of use cases. Let's look at a few examples where these operations are needed.
One common use case is in time conversion. Let's say you have a number of seconds and want to convert it into minutes and seconds, you can use integer division and the modulo operation. Here's how
you could do that:
let totalSeconds = 125;
let minutes = Math.floor(totalSeconds / 60);
let seconds = totalSeconds % 60;
console.log(`Minutes: ${minutes}, Seconds: ${seconds}`);
Minutes: 2, Seconds: 5
Another use case is in distributed computing or parallel processing, where tasks are divided amongst multiple processors. If you need to distribute n tasks across m processors, you can use integer
division to determine how many tasks each processor should get and the modulo operation to handle any remaining tasks. We'd do it this way since we can't use a fraction of a processor.
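As a sketch of that distribution idea (the task and processor counts below are made up for illustration):

```javascript
const tasks = 10;
const processors = 3;

// Integer division: how many whole tasks each processor receives
const perProcessor = Math.floor(tasks / processors);

// Modulo: how many tasks are left over to hand out one by one
const leftover = tasks % processors;

console.log(`Each of the ${processors} processors gets ${perProcessor} tasks, with ${leftover} left over.`);
```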
Common Errors
One common mistake is forgetting that JavaScript's division operation / always results in a floating-point number, not an integer. This can lead to unexpected results if you're expecting an integer
value. To perform integer division, you should use Math.floor() or Math.trunc() to remove the decimal part.
let num = 10 / 3;
console.log(num); // Outputs: 3.3333333333333335
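For example, here is a small sketch of how Math.floor() and Math.trunc() differ once negative values are involved:

```javascript
let value = -10 / 3; // -3.3333333333333335

// Math.floor rounds toward negative infinity
console.log(Math.floor(value)); // Outputs: -4

// Math.trunc simply drops the fractional part
console.log(Math.trunc(value)); // Outputs: -3
```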
Another common issue is not understanding the behavior of the modulo operation with negative numbers. In JavaScript, the sign of the result is the same as the dividend (the number being divided), not
the divisor (the number you're dividing by). This can lead to unexpected results if you're used to the behavior of the modulo operation in other programming languages.
console.log(-10 % 3); // Outputs: -1
console.log(10 % -3); // Outputs: 1
That wraps it up on performing integer division and finding the remainder in JavaScript. We covered the basics of these operations, looked at a few practical use cases, and discussed some common
errors to watch out for. Just keep in mind that JavaScript's division and modulo operations can behave a little differently than in other languages.
Last Updated: September 6th, 2023
|
{"url":"https://stackabuse.com/bytes/integer-division-and-finding-the-remainder-in-javascript/","timestamp":"2024-11-04T20:28:18Z","content_type":"text/html","content_length":"60249","record_id":"<urn:uuid:8cc8a58e-2628-4cc2-9e9c-56596493b21d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00647.warc.gz"}
|
Viscosity | Definition, application and implementation
Take three glasses and fill them with water, oil, and honey. Take the eraser from your table, drop it into the three glasses one by one, and note the time. What do you observe? The time taken by the eraser to reach the bottom is greatest for honey compared with the other two liquids. This is due to the viscosity of the liquid.
The viscosity resists relative motion within the liquid: the greater the viscosity, the greater the resistance. In this article we will learn about the viscosity of a liquid, how to evaluate it, and how to calculate the force it exerts.
Interact with the viscosity
In this interactive widget, you can feel the viscosity by yourself. Let’s discuss the setup of the widget. There are two plates. One is a long plate, which is at the bottom. We have placed the
movable plate at the top of the water.
When we try to move the upper plate, it meets resistance from the water, and more resistance slows the plate's movement. Now slide the button and try a viscosity of your choice; you will find that a higher viscosity increases the time the plate takes to move. This is how viscosity controls the rate of change of strain.
How to define viscosity?
To calculate the force exerted by the viscosity of the liquid on a solid ball, let's first try to understand viscosity with a simple experiment. Consider two plates with fluid flowing between them. One plate is stationary and the other, at the top, is moving with velocity $V$. The velocity profile is linear, and hence the velocity at any point can be calculated as:
$$ u(y)=\frac{V}{l}y$$
where $u(y)$ is the velocity at height $y$. The force we apply to the top plate generates the shear stress $\tau = \frac{F}{A}$, which causes shear deformation in the fluid. For a liquid, the shear stress is directly proportional to the rate of shear strain: $$\overbrace{\tau \propto \dot{\gamma}}^{\text{Liquid}} \quad | \quad \overbrace{\tau \propto \gamma}^{\text{Solid}} $$
A similar expression holds for a solid, but with a key difference: a liquid offers no resistance to a fixed shear strain, so its shear stress is proportional to the rate of shear strain, whereas for a solid the shear stress is proportional to the shear strain itself.
What is the Newton’s law of viscosity?
In 1687, Sir Isaac Newton proposed the law of viscosity for fluids. It states that "the fluids for which the shear stress is directly proportional to the velocity gradient are called Newtonian fluids."
$$\tau \propto \frac{du}{dy}\Rightarrow \tau = \mu \frac{du}{dy} $$
Where $du/dy$ is velocity gradient. The rate of change of velocity with respect to distance $y$.
Here, some explanation is needed. Earlier we defined $\tau$ as proportional to the rate of deformation $\dot{\gamma}$, while in Newton's law of viscosity the relation is written with the velocity gradient $du/dy$. How are they related, and why do we need both?
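To get a feel for the magnitudes involved, here is a small worked example (the numbers are illustrative and not taken from the article). For water ($\mu \approx 1.0\times 10^{-3}$ Pa·s) between the plates, with plate area $A = 0.5\ \mathrm{m^2}$, plate speed $V = 1\ \mathrm{m/s}$ and gap $l = 1\ \mathrm{mm}$, the linear profile gives $du/dy = V/l = 1000\ \mathrm{s^{-1}}$, so
$$\tau = \mu \frac{du}{dy} = 10^{-3} \times 1000 = 1\ \mathrm{Pa}, \qquad F = \tau A = 1 \times 0.5 = 0.5\ \mathrm{N}.$$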
Limitation of Newton’s law of viscosity
Newton's law of viscosity is written in terms of the velocity gradient: the shear stress in the fluid is directly proportional to the velocity gradient. This is valid for simple one-dimensional laminar flow. As the derivation below relating the rate of deformation to the velocity gradient shows, this form holds only for one-dimensional flow of a Newtonian fluid.
Different types of fluid categorized based on viscosity
Based on the definition of Newton's law of viscosity, we can categorize fluids into different types. Fluids that follow Newton's law of viscosity are called Newtonian fluids; all other fluids are called non-Newtonian fluids.
In the graph of shear stress versus velocity gradient, we can distinguish Newtonian and non-Newtonian fluids. The ideal fluid, which has zero viscosity, also appears there; similarly, a material whose shear rate remains zero under any applied stress is an ideal solid.
Relation between rate of deformation and velocity gradient
Let's consider again the experiment we used above to explain viscosity. The shear strain is the angle $d\gamma$ shown in the figure. The force $F$ applied to the upper plate gives it the velocity $V$. Suppose the plate travels a distance $dx$ in time $dt$, and the distance between the plates is $l$.
If the angle of deformation is small, then $$d\gamma = \tan{d\gamma} = \frac{dx}{l} = \frac{Vdt}{l} = \frac{du}{dy}dt $$$$\frac{d\gamma}{dt} = \dot{\gamma} = \frac{du}{dy}$$
This is how the deformation rate is related to the velocity gradient.
Non-Newtonian fluids
Fluids that do not follow Newton's law are called non-Newtonian fluids. The Ostwald-de Waele model gives the power law for non-Newtonian fluids, which is as follows:
$$\tau = m \left( \frac{du}{dy} \right)^n$$
After some adjustment we can write it as:
$$\tau=m\left|\frac{d u}{d y}\right|^{n-1} \frac{du}{dy}$$
where $m$ is the flow consistency index and $n$ is the flow behaviour index. This is the power law, and many fluids follow it. For $n < 1$, the fluid is called a pseudoplastic fluid; for $n > 1$, it is called a dilatant fluid.
Variation with the temperature
Viscosity in a liquid arises from the bonds between its particles, while in a gas it arises from collisions between particles. When we increase the temperature of a liquid, its viscosity decreases because the bonds between molecules weaken.
In a gas, collisions become more frequent as the temperature rises, so the viscosity of a gas increases with temperature. You can refer to the graph to see the variation of viscosity with temperature.
We define an ideal fluid as one whose viscosity is zero, yet all real fluids have some viscosity. Why, then, define the term ideal fluid at all? Because at high flow speeds, away from the boundary surface of a solid, there is a zone where the fluid behaves much like an ideal fluid.
Apart from this, we categorise fluids based on viscosity. This concept is useful for calculating the drag force on a solid.
This article explains how viscosity is defined in a fluid and how it relates to the shear stress and strain of a solid. Beyond this, there are several essential takeaways to summarise.
In this blog, you have learned the following points:
• Viscosity: By the simple definition, it is the ratio of the shear stress in the fluid to the velocity gradient of the fluid.
• Apparent viscosity: In a non-Newtonian fluid, the viscosity is a property not only of the fluid but of the flow itself; hence we call it the apparent viscosity.
This article was crafted by a group of experts at eigenplus to ensure it adheres to our strict quality standards. The individuals who contributed to this article are:
|
{"url":"https://www.eigenplus.com/viscosity-definition-application-and-implementation/","timestamp":"2024-11-02T09:07:59Z","content_type":"text/html","content_length":"148445","record_id":"<urn:uuid:76716c97-66c6-4578-bc02-42f6e1680d71>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00031.warc.gz"}
|
Gecode::NaryPropagator< View, pc > Class Template Reference
[Propagator patterns]
n-ary propagator More...
#include <pattern.hpp>
Public Member Functions
virtual PropCost cost (const Space &home, const ModEventDelta &med) const
Cost function (defined as low linear).
virtual void reschedule (Space &home)
Schedule function.
virtual size_t dispose (Space &home)
Delete propagator and return its size.
Protected Member Functions
NaryPropagator (Space &home, NaryPropagator &p)
Constructor for cloning p.
NaryPropagator (Space &home, Propagator &p, ViewArray< View > &x)
Constructor for rewriting p during cloning.
NaryPropagator (Home home, ViewArray< View > &x)
Constructor for creation.
Protected Attributes
ViewArray< View > x
Array of views.
Detailed Description
template<class View, PropCond pc>
class Gecode::NaryPropagator< View, pc >
n-ary propagator
Stores array of views of type View with propagation condition pc.
If the propagation condition pc has the value PC_GEN_NONE, no subscriptions are created.
Definition at line 142 of file pattern.hpp.
Constructor & Destructor Documentation
template<class View , PropCond pc>
Gecode::NaryPropagator< View, pc >::NaryPropagator ( Space & home,
NaryPropagator< View, pc > & p
) [inline, protected]
Constructor for cloning p.
Definition at line 485 of file pattern.hpp.
template<class View, PropCond pc>
Gecode::NaryPropagator< View, pc >::NaryPropagator ( Space & home,
Propagator & p,
ViewArray< View > & x
) [inline, protected]
Constructor for rewriting p during cloning.
Definition at line 493 of file pattern.hpp.
template<class View, PropCond pc>
Gecode::NaryPropagator< View, pc >::NaryPropagator ( Home home,
ViewArray< View > & x
) [inline, protected]
Constructor for creation.
Member Function Documentation
template<class View , PropCond pc>
PropCost Gecode::NaryPropagator< View, pc >::cost ( const Space & home,
const ModEventDelta & med
) const [inline, virtual]
Cost function (defined as low linear).
Implements Gecode::Propagator.
Reimplemented in Gecode::Float::Rel::NaryEq< View >, Gecode::Int::Bool::NaryEq< BV >, Gecode::Int::Bool::NaryLq< VX >, Gecode::Int::Circuit::Val< View, Offset >, Gecode::Int::Circuit::Dom< View,
Offset >, Gecode::Int::Distinct::Dom< View >, Gecode::Int::Precede::Single< View >, Gecode::Int::Rel::NaryEqDom< View >, Gecode::Int::Rel::NaryEqBnd< View >, Gecode::Int::Rel::NaryLqLe< View, o >,
Gecode::Int::Rel::NaryNq< View >, Gecode::Action::Recorder< View >, Gecode::CHB::Recorder< View >, and Gecode::Set::Precede::Single< View >.
Definition at line 500 of file pattern.hpp.
template<class View , PropCond pc>
void Gecode::NaryPropagator< View, pc >::reschedule ( Space & home ) [inline, virtual]
Schedule function.
template<class View , PropCond pc>
size_t Gecode::NaryPropagator< View, pc >::dispose ( Space & home ) [inline, virtual]
Delete propagator and return its size.
Member Data Documentation
template<class View, PropCond pc>
The documentation for this class was generated from the following file:
|
{"url":"https://www.gecode.org/doc/6.1.1/reference/classGecode_1_1NaryPropagator.html","timestamp":"2024-11-07T21:47:28Z","content_type":"text/html","content_length":"22507","record_id":"<urn:uuid:370fa772-d89b-45a4-82ab-b17d60d164c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00722.warc.gz"}
|
Properties of Matrices - Properties, Definition, Formulas, Examples.
Properties of Matrices
Properties of matrices are helpful in performing numerous operations involving two or more matrices. The algebraic operations of addition, subtraction, multiplication, and inversion of matrices, involving different types of matrices, can be easily performed by using the properties of matrices. The additive and multiplicative identities and inverses of matrices are also included in this study of the properties of matrices.
Let us learn more about the properties of matrix addition, properties of scalar multiplication of matrices, properties of matrix multiplication, properties of transpose matrix, properties of an
inverse matrix with examples and frequently asked questions.
1. What Are the Properties of Matrices?
2. Properties of Matrix Addition
3. Properties of Scalar Multiplication of Matrix
4. Properties of Multiplication of Matrices
5. Properties of Transpose of a Matrix
6. Other Important Properties of Matrices
7. Examples on Properties of Matrices
8. Practice Questions on Properties of Matrices
9. FAQs on Properties of Matrices
What Are the Properties of Matrices?
The properties of matrices help in performing numerous operations on matrices. The properties of matrices can be broadly classified into the following five properties.
• Properties of Matrix Addition
• Properties of Scalar Multiplication of Matrix
• Properties of Matrix Multiplication
• Properties of Transpose Matrix
• Properties of Inverse Matrix and other properties.
Let us check more about each of the properties of matrices.
Properties of Matrix Addition
The addition of matrices satisfies the following properties of matrices.
• Commutative law: For any two matrices A and B of the same order, say m x n, we have A + B = B + A.
• Associative law: For any three matrices A, B, C of the same order m x n, we have (A + B) + C = A + (B + C).
• Existence of additive identity: Let A be a matrix of order m × n, and O be a zero matrix (null matrix) of the same order m × n; then A + O = O + A = A. In other words, O is the additive identity for matrix addition.
• Existence of additive inverse: Let A be a matrix of order m × n, and let -A be another matrix of order m × n such that A + (– A) = (– A) + A = O. The matrix – A is the additive inverse of A, or the negative of matrix A.
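For instance, here is a quick numeric check of the additive inverse property (the matrix is chosen arbitrarily): taking A = \(\begin{pmatrix}2&-1\\0&3\end{pmatrix}\), we have -A = \(\begin{pmatrix}-2&1\\0&-3\end{pmatrix}\), and A + (-A) = \(\begin{pmatrix}0&0\\0&0\end{pmatrix}\) = O.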
Properties of Scalar Multiplication of Matrix
The properties of scalar multiplication of a matrix involve a scalar constant and a matrix. For matrices A and B of order m x n, and scalars k and l, the properties of scalar multiplication of matrices are as follows.
• The product of a constant with the sum of matrices is equal to the sum of the individual product of the constant and the matrix. k(A + B) = kA + kB
• The product of the sum of the constants with a matrix is equal to the sum of the product of each of the constants with the matrix. (k + l)A = kA + lA
Here both matrices A and B are of the same order, and the constants k and l are any real numbers.
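As a quick illustrative check of the first property (the numbers are arbitrary), take k = 2, A = \(\begin{pmatrix}1&2\\3&4\end{pmatrix}\), B = \(\begin{pmatrix}0&1\\1&0\end{pmatrix}\). Then k(A + B) = 2\(\begin{pmatrix}1&3\\4&4\end{pmatrix}\) = \(\begin{pmatrix}2&6\\8&8\end{pmatrix}\), and kA + kB = \(\begin{pmatrix}2&4\\6&8\end{pmatrix}\) + \(\begin{pmatrix}0&2\\2&0\end{pmatrix}\) = \(\begin{pmatrix}2&6\\8&8\end{pmatrix}\), so the two sides agree.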
Properties of Multiplication of Matrices
The following properties of matrix multiplication help in performing numerous operations involving matrix multiplication. The condition for matrix multiplication is the number of columns in the first
matrix should be equal to the number of rows in the second matrix. Let us check the three important properties of matrices.
• Associative Property: For any three matrices A, B, C following the matrix multiplication conditions, we have (AB)C = A(BC). Here both sides of the matrix multiplication are defined.
• Distributive Property: For any three matrices A, B, C following the matrix multiplication conditions, we have A(B + C) = AB + AC.
• The existence of multiplicative identity: For a square matrix A of order n × n and an identity matrix I of the same order, we have AI = IA = A. Here the product of the identity matrix with the given matrix results in the same matrix.
Properties of Transpose of a Matrix
The following properties hold for matrices A and B of the same order m × n and a constant k. These are some of the important properties of the transpose of a matrix.
• The transpose of a matrix on further taking a transpose for the second time results in the original matrix. (A')' = A
• The transpose of the product of a constant and a matrix is equal to the product of the constant and the transpose of the matrix. (kA)' = kA'
• The transpose of the sum of two matrices is equal to the sum of the transpose of the individual matrices. (A + B)' = A' + B'
• The transpose of the product of two matrices is equal to the product of the transpose of the second matrix and the transpose of the first matrix. (AB)' = B'A'
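As a quick numeric check of the last property (the matrices are arbitrary), take A = \(\begin{pmatrix}1&2\\0&1\end{pmatrix}\) and B = \(\begin{pmatrix}3&0\\1&2\end{pmatrix}\). Then AB = \(\begin{pmatrix}5&4\\1&2\end{pmatrix}\), so (AB)' = \(\begin{pmatrix}5&1\\4&2\end{pmatrix}\), while B'A' = \(\begin{pmatrix}3&1\\0&2\end{pmatrix}\)\(\begin{pmatrix}1&0\\2&1\end{pmatrix}\) = \(\begin{pmatrix}5&1\\4&2\end{pmatrix}\), confirming (AB)' = B'A'.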
Other Important Properties of Matrices
In addition to the above set of properties of matrices, some of the other important properties have been grouped and presented across the below points.
• For a square matrix with real number entries, A + A' is a symmetric matrix, and A - A' is a skew-symmetric matrix.
• A square matrix can be expressed as a sum of a symmetric and skew-symmetric matrix. A = 1/2(A + A') + 1/2(A - A').
• The inverse of a matrix, if it exists, is unique: if B is the inverse of A, then AB = BA = I.
• If matrix A is the inverse of matrix B, then matrix B is the inverse of matrix A.
• If A and B are invertible matrices of the same order, then (AB)^-1 = B^-1A^-1.
Examples on Properties of Matrices
1. Example 1: For the matrix \(\begin{pmatrix}4&3\\2&1\end{pmatrix}\) prove the transpose property of (A')' = A.
Let the given matrix be A = \(\begin{pmatrix}4&3\\2&1\end{pmatrix}\)
The transpose of the matrix is A' = \(\begin{pmatrix}4&2\\3&1\end{pmatrix}\)
The transpose of the transpose matrix is (A')' = \(\begin{pmatrix}4&3\\2&1\end{pmatrix}\)
This on observation is equal to the origin matrix A, and hence it satisfies the matrix transpose property of (A')' = A.
Therefore, the given matrix satisfies the matrix transpose property of (A')' = A.
Example 2: Prove that the matrices A = \(\begin{pmatrix}6&1\\0&2\end{pmatrix}\), B = \(\begin{pmatrix}4&5\\3&2\end{pmatrix}\), C = \(\begin{pmatrix}-3&4\\4&2\end{pmatrix}\) follow the distributive property of matrix multiplication.
The given matrices are A =\(\begin{pmatrix}6&1\\0&2\end{pmatrix}\), B = \(\begin{pmatrix}4&5\\3&2\end{pmatrix}\), C = \(\begin{pmatrix}-3&4\\4&2\end{pmatrix}\).
And the distributive property of matrix multiplication is A(B + C) = AB + AC. Let us first find the product AB and AC.
AB = \(\begin{pmatrix}6&1\\0&2\end{pmatrix}\) ×\(\begin{pmatrix}4&5\\3&2\end{pmatrix}\)
= \(\begin{pmatrix}6×4+1×3&6×5+1×2\\0×4+2×3&0×5+2×2\end{pmatrix}\) = \(\begin{pmatrix}27&32\\6&4\end{pmatrix}\)
AC = \(\begin{pmatrix}6&1\\0&2\end{pmatrix}\) × \(\begin{pmatrix}-3&4\\4&2\end{pmatrix}\)
= \(\begin{pmatrix}6×(-3) + 1 × 4&6×4 + 1 × 2\\0×(-3) + 2×4&0×4+2×2\end{pmatrix}\) = \(\begin{pmatrix}-14&26\\8&4\end{pmatrix}\)
B + C = \(\begin{pmatrix}4&5\\3&2\end{pmatrix}\) + \(\begin{pmatrix}-3&4\\4&2\end{pmatrix}\)
= \(\begin{pmatrix}4+(-3)&5+4\\3+4&2+2\end{pmatrix}\) = \(\begin{pmatrix}1&9\\7&4\end{pmatrix}\)
A(B + C) = \(\begin{pmatrix}6&1\\0&2\end{pmatrix}\) × \(\begin{pmatrix}1&9\\7&4\end{pmatrix}\)
= \(\begin{pmatrix}6×1+1×7&6×9+1×4\\0×1+2×7&0×9+2×4\end{pmatrix}\) = \(\begin{pmatrix}13&58\\14&8\end{pmatrix}\)
AB + AC = \(\begin{pmatrix}27&32\\6&4\end{pmatrix}\) + \(\begin{pmatrix}-14&26\\8&4\end{pmatrix}\)
= \(\begin{pmatrix}27 + (-14)&32+26\\6+8&4+4\end{pmatrix}\) = \(\begin{pmatrix}13&58\\14&8\end{pmatrix}\)
From the above two expressions we can observe that A(B + C) = AB + AC.
Therefore, the given matrices follow the distributive property of matrix multiplication.
Practice Questions on Properties of Matrices
FAQs on Properties of Matrices
What Are the Addition Properties of Matrices?
The addition of matrices satisfies the following properties of matrices.
• Commutative law: For any two matrices A and B of the same order, A + B = B + A.
• Associative law: For any three matrices, A , B, C, we have (A + B) + C = A + (B + C)
• Existence of additive identity Let A be a matrix of order m × n, and O be a zero matrix or a null matrix of the same order m × n , then A + O = O + A = A.
• Existence of additive inverse Let A be a matrix of order m × n. and let -A be another matrix of order m × n such that A + (– A) = (– A) + A= O.
What Are Properties of Transpose of Matrices?
For given two matrices, A and B, the properties of the transpose of matrices can be explained as given below,
• (A^T)^T = A
• (A + B)^T = A^T + B^T, A and B being of the same order
• (KA)^T= KA^T, K is any scalar(real or complex)
• (AB)^T= B^TA^T, A and B being conformable for the product AB. (This is also called reversal law.)
What are the Different Types of a Matrix?
There are different types of matrices depending upon the properties of their properties. Some of them are given as,
• Row matrix and Column matrix
• Square matrix and Rectangular matrix
• Diagonal Matrix
• Scalar Matrix
• Identity matrix
• Null matrix
• Upper triangular matrix and lower triangular matrix
• Idempotent matrix
• Symmetric and Skew-symmetric matrix
What are the Properties of Scalar Multiplication in Matrices?
For the matrices A = [a\(_{ij}\)]\(_{m\times n}\) and B = [b\(_{ij}\)]\(_{m\times n}\) and scalars K and l, the different properties associated with the multiplication of matrices is as follows.
• K(A + B) = KA + KB
• (K + l)A = KA + lA
• (Kl)A = K(lA) = l(KA)
• (-K)A = -(KA) = K(-A)
• 1·A = A
• (-1)A = -A
How to Express a Matrix as a Sum of Symmetric and Skew-Symmetric Matrices?
Any square matrix A can be written as, A = P + Q, where P and Q are symmetric and skew-symmetric matrices respectively, such that, P = (A + A^T)/2 and Q = (A - A^T)/2.
What Are the Multiplication Properties of Matrices
The following properties of matrix multiplication help in performing numerous operations involving matrix multiplication.
• Associative Property: For any three matrices A, B, C we have (AB)C = A(BC).
• Distributive Property: For any three matrices A, B, C we have A(B + C) = AB + AC.
• The existence of multiplicative identity: For a square matrix A of order n × n and an identity matrix I of the same order, we have AI = IA = A.
What Are the Properties of Matrices for Inverse of a Matrix?
The following are the important properties of the inverse of a matrix.
• The inverse of a matrix if it exists is unique. AB = BA = I.
• If matrix A is the inverse of matrix B, then matrix B is the inverse of matrix A.
• If A and B are invertible matrices of the same order, then (AB)^-1 = B^-1A^-1.
|
{"url":"https://www.cuemath.com/algebra/properties-of-matrices/","timestamp":"2024-11-14T12:04:30Z","content_type":"text/html","content_length":"229613","record_id":"<urn:uuid:e75c8183-2ede-4cad-9f0b-3cd2241d923a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00254.warc.gz"}
|
Canonical Quantum Phase Estimation
Quantum phase estimation (QPE) is a quantum algorithm used to estimate the phase \(\phi \in [0, 1)\) of a given unitary operator \(U\) and eigenstate \(|\phi\rangle\) satisfying
\[U|\phi\rangle = e^{i2\pi\phi} |\phi\rangle .\]
QPE based on the quantum Fourier transform (QFT) [11, 12, 13, 14] is referred to as canonical QPE. The QFT can be considered a quantum analogue of the discrete Fourier transform, and is expressed as
\[|j\rangle \longrightarrow \frac{1}{\sqrt{N}} \sum_{k} \exp\left({i\frac{2\pi jk}{N}}\right) |k\rangle\]
where \(N\) is the total number of states. This general expression can be rewritten with \(n\) qubits as
\[|j_{1}\cdots j_{n}\rangle \longrightarrow \frac{1}{\sqrt{2^{n}}} \left( |0\rangle + e^{i2\pi 0.j_{n}}|1\rangle \right) \otimes \cdots \otimes \left( |0\rangle + e^{i2\pi 0.j_{1}j_{2}\cdots j_{n}}|1
\rangle \right)\]
where the binary fraction is represented as \(0.j_{1}\cdots j_{n} = \sum_{k=1}^{n} j_{k} 2^{-k}\). The basic idea of the canonical QPE is to compute the right-hand side of QFT using Eq. (23) and
estimate the phase factor as a bit string by the inverse QFT.
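To make the binary-fraction convention above concrete, here is a small standalone Python check (independent of InQuanto; the function name is ours):

```python
def binary_fraction(bits):
    """Interpret a bit string [j1, j2, ..., jn] as the binary fraction
    0.j1 j2 ... jn = sum over k of j_k * 2**(-k)."""
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

# A 4-bit register read out as 0110 encodes the phase 0.0110_2
print(binary_fraction([0, 1, 1, 0]))  # 0.375
```

This is exactly the value the inverse QFT recovers, one bit per ancilla qubit, when the register is measured.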
To do this, canonical QPE uses two quantum registers. The first register contains \(n\) qubits initially in the state \(|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\). The choice of \(n\)
depends on the precision of the estimate of \(\phi\) we wish to reach. The second register is initialized in the state \(|\phi\rangle\), and contains as many qubits as is necessary to represent \(|\phi\rangle\).
We compute the right-hand side of (25) using the phase kickback technique:
\[|+\rangle \otimes |\phi\rangle \xrightarrow{\mathrm{ctrl-}U^{2^{k-1}}} \frac{1}{\sqrt{2}}\left( |0\rangle + e^{i2\pi 0.\phi_{n-k+1}\phi_{n-k+2}\cdots \phi_{n}} |1\rangle \right) \otimes |\phi\rangle\]
An example of the canonical QPE circuit is shown in Fig. 4.
In chemistry, QPE is most often proposed within the context of calculating molecular energies. For this purpose, we set the unitary to the time evolution operator \(U(t) = e^{-iHt}\), where \(H\) is
the Hamiltonian describing the system. \(t \in \mathbb{R}\) is a parameter. Then, the eigenstate energy is obtained as \(E = -2\pi\phi / t\). The initial state is chosen to be some trial electronic
wavefunction \(|\Phi\rangle\), such as the Hartree-Fock state. The “quality” of the initial state is an important factor to obtain the target phase value efficiently, as the probability of obtaining
\(\phi\) is dependent on the overlap \(\langle \phi | \Phi \rangle\). One can also use an ansatz that has been optimized, such as by VQE, which may lead to a reduction in the overall computational cost.
AlgorithmDeterministicQPE may be used for performing the canonical QPE algorithm. The phase estimation circuit requires subcircuits which perform repeated sequences of the unitary evolution operator
controlled upon the ancilla qubits. We refer to one of these sequences as \(\mathrm{ctrl-}U\). Here, to prepare \(\mathrm{ctrl-}U\) from the molecular Hamiltonian \(H\), we follow the same steps as
those for AlgorithmVQE.
from inquanto.express import get_system
from inquanto.mappings import QubitMappingJordanWigner
target_data = "h2_sto3g.h5"
fermion_hamiltonian, fermion_fock_space, fermion_state = get_system(target_data)
mapping = QubitMappingJordanWigner()
qubit_hamiltonian = mapping.operator_map(fermion_hamiltonian)
Currently, InQuanto supports the Suzuki-Trotter decomposition to construct the \(\mathrm{ctrl-}U\) circuit (shown in the cell below). Several methods have been proposed in the literature with
different asymptotic scaling, such as quantum signal processing. [17]
# Generate a list of qubit operators as exponents to be trotterized.
qubit_operator_list = qubit_hamiltonian.trotterize(trotter_number=1)
# The parameter `t` that is physically recognized as the time period in atomic units.
time = 1.5
The initial state \(|\Phi\rangle\) is provided as a non-symbolic ansatz. The example below uses a modestly optimized UCCSD ansatz for our initial state preparation circuit.
from inquanto.ansatzes import FermionSpaceAnsatzUCCSD
# Preliminary calculated parameters.
ansatz_parameters = [-0.107, 0., 0.]
# Generate a non-symbolic ansatz.
ansatz = FermionSpaceAnsatzUCCSD(fermion_fock_space, fermion_state, mapping)
parameters = dict(zip(ansatz.state_symbols, ansatz_parameters))
state_prep = ansatz.subs(parameters)
A list of qubit operators thus generated is passed to the constructor of AlgorithmDeterministicQPE as
from inquanto.algorithms import AlgorithmDeterministicQPE
algorithm = AlgorithmDeterministicQPE(
    qubit_operator_list * time,
)
Then, we build a protocol to construct a canonical QPE circuit. n_rounds specifies the number of ancilla qubits in the first quantum register, which determines the precision of the computation. Together with the four-qubit representation of the hydrogen molecule state, the circuits have a total of eight qubits.
from pytket.extensions.qiskit import AerBackend
from inquanto.protocols import CanonicalPhaseEstimation
# Choose the backend.
backend = AerBackend()
# Set the number of rounds (ancilla qubits of the first quantum register)
n_rounds = 4
# Choose the protocol to specify how the circuit is handled.
protocol = CanonicalPhaseEstimation(
    # ...
)
# Build the algorithm to get it ready for experiments.
Now the circuit is run by algorithm.run() to produce the final results. The algorithm.final_energy() returns the energy estimate.
# Run the protocol.
algorithm.run()
# Display the final results.
energy = algorithm.final_energy(time=time)
print(f"energy estimate = {energy:8.4f} hartree")
energy estimate = -1.1667 hartree
NULLs in Group-Value Functions in Oracle 12c
Group-value functions in Oracle treat NULL values differently than single-value functions do. Group functions (other than COUNT(*)) ignore NULL values and calculate a result without considering them.
Take AVG as an example. Suppose you have a list of 100 friends and their ages. If you picked 20 of them at random and averaged their ages, how different would the result be than if you picked a
different list of 20, also at random, and averaged it, or if you averaged all 100? In fact, the averages of these three groups would be very close. What this means is that AVG is somewhat insensitive
to missing rows, even when the missing data represents a high percentage of the total number of records available.
AVG is not immune to missing data, and there can be cases where it will be significantly off (such as when missing data is not randomly distributed), but these cases are less common.
The relative insensitivity of AVG to missing data needs to be contrasted with, for instance, SUM. How close to correct is the SUM of the ages of only 20 friends to the SUM of all 100 friends? Not
close at all. So if you had a table of friends, but only 20 out of 100 supplied their ages, and 80 out of 100 had NULL for their age, which one would be a more reliable statistic about the whole
group and less sensitive to the absence of data—the AVG age of those 20 friends, or the SUM of them? Note that this is an entirely different issue than whether it is possible to estimate the sum of
all 100 based on only 20 (in fact, it is precisely the AVG of the 20, times 100). The point is, if you don't know how many rows are NULL, you can still use AVG to produce a fairly reasonable estimate for the whole group.
You cannot, however, get a reasonable result from SUM under the same conditions.
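The same point can be made with a quick simulation (Python here purely for illustration; in Oracle the corresponding queries would be AVG(age) and SUM(age), both of which ignore NULLs):

```python
import random

random.seed(1)
ages = [random.randint(20, 80) for _ in range(100)]  # all 100 friends
known = random.sample(ages, 20)                      # only 20 supplied an age

true_avg = sum(ages) / len(ages)
avg_of_known = sum(known) / len(known)  # what AVG sees: the non-NULL rows
sum_of_known = sum(known)               # what SUM sees: the non-NULL rows

print(true_avg, avg_of_known)   # close to each other
print(sum(ages), sum_of_known)  # far apart
```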
This same test of whether or not results are reasonable defines how the other group functions respond to NULLs. STDDEV and VARIANCE are measures of central tendency; they, too, are relatively
insensitive to missing data. (These are shown in “STDDEV and VARIANCE,” later in this chapter.)
MAX and MIN measure the extremes of your data. They can fluctuate wildly while AVG stays relatively constant: If you add a 100-year-old man to a group of 99 people who are 50 years old, the average
age only goes up to 50.5—but the maximum age has doubled. Add a newborn baby, and the average goes back to 50, but the minimum age is now 0. It’s clear that missing or unknown NULL values can
profoundly affect MAX, MIN, and SUM, so be cautious when using them, particularly if a significant percentage of the data is NULL.
Is it possible to create functions that also take into account how sparse the data is and how many values are NULL, compared to how many have real values, and make good guesses about MAX, MIN, and
SUM? Yes, but such functions would be statistical projections, which must make explicit their assumptions about a particular set of data. This is not an appropriate task for a general-purpose group
function. Some statisticians would argue that these functions should return NULL if they encounter any NULLs because returning any value can be misleading. Oracle returns something rather than
nothing, but leaves it up to you to decide whether the result is reasonable.
COUNT is a special case. It can go either way with NULL values, but it always returns a number; it never evaluates to NULL. The format and usage for COUNT are shown shortly, but to simply contrast it
with the other group functions, it counts all the non-NULL rows of a column, or it counts all the rows. In other words, if asked to count the ages of 100 friends, COUNT returns a value of 20 (because
only 20 of the 100 gave their ages). If asked to count the rows in the table of friends without specifying a column, it returns 100.
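The two behaviors of COUNT are easy to mirror outside the database (again Python for illustration; in Oracle these correspond to COUNT(age) versus COUNT(*)):

```python
# 100 friends; only 20 supplied an age, the rest are NULL (None here).
ages = [25] * 20 + [None] * 80

count_column = sum(1 for a in ages if a is not None)  # like COUNT(age)
count_star = len(ages)                                # like COUNT(*)

print(count_column, count_star)  # 20 100
```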
Variable Annuities vs. Taxable Accounts
To make a case for annuities, advisors often compare the most expensive, tax-inefficient mutual funds to the most inexpensive variable annuities. However, an objective analysis will find that annuities appear to be poor choices in most (but not all) cases.
To fairly compare the two (and remove compensation considerations), I used a 0.30% marginal cost gleaned from the cost of a Vanguard “no load” annuity. The 30 basis points are in addition to the
underlying cost of the funds – vastly lower than the additional costs of most annuities that are sold.
Annuities made a lot more sense when the ordinary income and capital gains rates were the same. Now, annuities have to overcome both higher rates (ordinary income vs. capital gains) and higher
expenses. Given a long enough time horizon, the 100% tax-deferral theoretically may overcome these disadvantages. Seeing if that is true is the point of this exercise.
Six factors impact the decision:
1. Rate of return – higher returns favor annuities; lower returns favor taxable accounts.
2. Ordinary income tax bracket – lower ordinary income tax brackets favor annuities; higher ones favor taxable accounts.
3. Capital gains tax bracket – higher capital gains rates favor annuities; lower ones favor taxable accounts.
4. Tax efficiency of taxable alternative – tax-inefficient, high-turnover investments favor annuities; low-turnover approaches favor taxable accounts.
5. Time horizon – long time horizons favor annuities; short horizons favor taxable accounts.
6. Annuity cost – the lower the marginal cost of the annuity over the comparable mutual fund investment, the better it will compare (obviously).
Also note that I am not talking about immediate annuities which can have a place in a portfolio as insurance against the additional costs of a long life.
Here is one example:
1. Investment rate of return – assume 10%, the long-term stock market average. (In reality, it would be somewhat lower due to the expense ratio of the subaccount/fund, and due to lower expected returns currently, but using this relatively high number should favor the annuity.)
2. Tax bracket – assume the most favorable case for an annuity, 25% ordinary income and 15% capital gains. (This is most favorable because the spread between the rates is lowest.)
3. Tax efficiency – assume a 50% portfolio turnover. This is extremely inefficient and would favor the annuity. I have assumed that all gains are long-term, however. This implies an advisor would
not be foolish enough to select funds for a taxable account that throw off short-term gains. (In reality, I think a 10% turnover in a taxable account is more reasonable for a competent advisor.)
4. Annuity cost – as mentioned earlier, I used a 30 bps marginal cost to the annuity. This attempts to remove compensation confusion from the analysis. In other words, if the total expenses are
1.15% for a fund, the annuity would be 1.45%. In this example, the net return ends up being 10.0% for the fund and 9.7% for the annuity.
In short, I have tried to use reasonable factors that would favor the annuity. Using the numbers above, we solve (using my spreadsheet calculator here) for the time horizon necessary to make the
annuity a better investment than the taxable alternative. In this case the breakeven is 26 years. In other words, a rational investor should not place in an annuity any funds he or she will need
within the next 26 years. If we change any one assumption, it just gets worse. For example:
1. If the portfolio turnover is 10%, a 49-year time horizon is required to favor the annuity.
2. If the net investment is 8% instead of 10%, the breakeven becomes 34 years.
3. If the ordinary income tax bracket is 35%, the breakeven is 42 years.
4. If the ordinary income tax bracket is 15% or lower, the breakeven is never (because of a 0% capital gains tax rate).
5. If we make a conservative assumption that the market will return 8%, and our alternative is a passively managed investment with a 10% turnover (in essence combining 1 & 2 above), the breakeven is
62 years.
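The spreadsheet itself is not reproduced here, but the mechanics of the breakeven calculation can be sketched with a deliberately simplified model (annual long-term gains on the turned-over share, basis credit for reinvested after-tax gains, full liquidation at the horizon, annuity gains taxed as ordinary income on withdrawal). Because the tax model is a simplification, the breakeven it produces will not match the article's exact figures; it only illustrates how the comparison works:

```python
def after_tax(years, r=0.10, fee=0.003, turnover=0.5, ord_rate=0.25, cg_rate=0.15):
    """Return (taxable, annuity) after-tax values of $1 after `years`."""
    # Taxable account: pay LTCG each year on the turned-over share of gains.
    value, basis = 1.0, 1.0
    for _ in range(years):
        gain = value * r
        value += gain
        realized = turnover * gain
        tax = realized * cg_rate
        value -= tax
        basis += realized - tax          # after-tax realized gains are reinvested
    taxable = value - max(0.0, value - basis) * cg_rate  # liquidate at the end

    # Annuity: fully deferred, but all gains taxed as ordinary income at the end.
    annuity_value = (1 + r - fee) ** years
    annuity = 1 + (annuity_value - 1) * (1 - ord_rate)
    return taxable, annuity

def breakeven():
    """First year in which the annuity's after-tax value exceeds the taxable account's."""
    for n in range(1, 101):
        taxable, annuity = after_tax(n)
        if annuity > taxable:
            return n
    return None

print(breakeven())  # several decades under these simplified assumptions
```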
Some other factors:
1. In the case of death, the heirs are vastly better off with a taxable investment because of the step-up in basis. The odds of dying in the early years (when the investor would be likely to have
losses) are trivial vs. the odds of dying in much later years (when the odds are in favor of large embedded gains). Remember, if there aren’t big gains, the taxable investment will be better; it
is therefore irrational to use an annuity for “protection” for the very small chance that someone will die when it will be worse if they live.
2. Taxable accounts allow tax loss harvesting much more easily and efficiently. Annuity losses have to exceed the 2% of AGI threshold, and the taxpayer must itemize.
3. If the investor needed the money early, he or she could be vastly worse off in three potential ways: 1) surrender charges, 2) the time period was too short to favor the annuity alternative, 3)
early withdrawal penalties for pre-59½ distributions.
4. Using an annuity increases the standard deviation of returns relative to the taxable alternative. This is contrary to what is desired. In other words, for any given time horizon there is a rate
of return where annuities and the taxable alternative are equivalent. If the investment experience has been good (i.e. better than breakeven), then the annuity will be the superior choice. If
the investment experience has been bad (i.e. below breakeven), then the taxable alternative would have been better. In other words, when purchasing an annuity, very good returns get better, and
very bad returns get worse. This is undesirable in most cases.
5. Finally, sometimes an advisor has placed an annuity inside of an account that is already tax advantaged. The rationale is that the client is risk-averse and wants this “protection” even with the
higher costs. The expected payoff is computed by multiplying the average percentage the account is likely to be down (when it is at a loss), times the probability of being down, times the
probability of dying. This figure would be compared to the marginal cost of the annuity vs. the alternative investment in the account. My calculations show this to be a bad bet because the
probability of dying is too low – unless the annuity owner is in his or her nineties.
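The expected-value argument in point 5 is simple arithmetic. The inputs below are illustrative placeholders, not the author's actual figures:

```python
# Expected annual value of the death-benefit "protection":
p_down = 0.25          # chance the account is below basis in a given year (illustrative)
avg_loss = 0.10        # average shortfall when it is down (illustrative)
p_die = 0.01           # annual probability of death (illustrative)
marginal_cost = 0.003  # extra annual cost of the annuity wrapper (30 bps)

expected_benefit = p_down * avg_loss * p_die
print(expected_benefit, marginal_cost)  # the benefit is tiny next to the cost
```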
Let me dilate further on #1 above. The death benefit has a computable value that will be greatest the very first year of the annuity because the investment has a positive expected return. Even if
some losses are bigger in years after the first one, the chance of them happening goes down even faster. Using a mortality table, we can compute what a rational investor should be willing to pay for
the insurance. Still, that isn’t the whole picture because if the annuity has increased in value, the heirs lose on the tax treatment, and the odds of being up are much higher than of being down.
So, no matter what the mortality is, even if the investor dies after the first year, the annuity “protection” is a small net loss unless future investment performance will be dramatically worse than
history. And, if that assumption is valid, purchasing an annuity is not optimal because high returns are required for it to make sense if the investor lives. Note that that is the best year! After
year one, it gets dramatically worse. This means the “protection” is on average worth much less than zero because of the adverse tax treatment.
If the annuity is purchased within an already tax-advantaged account, we can ignore the second part of the analysis above and simply look at the benefit vs. how much the annuity costs (the
incremental cost over an alternative mutual fund). The downside protection is only worth the probability of being down, times the average magnitude, times the probability of death. There are no
annuities inexpensive enough to make sense in a tax-advantaged account, unless life expectancy is less than about 5 years.
Despite the foregoing analysis, if an investor wants/needs to hold very tax-inefficient investments, and does not have sufficient "room" in tax-advantaged (e.g. retirement) accounts, and does
not need the income generated, annuities can be the correct solution to put a tax-efficient “wrapper” around those inherently inefficient vehicles. (Tax inefficiency is a function of both the tax
rate and the amount recognized each year. Treasury bonds in the late 1970’s or early 1980’s for example were very inefficient holdings.)
Finally, annuity salespeople will claim that people don’t buy (i.e. they aren’t sold) the “plain vanilla” annuities I have used in my examples above. That they are actually buying some sort of
market guarantees or protection. It is hard to debunk this because the products are changed constantly so it is like playing whack-a-mole, but I would simply point out that it is impossible for
insurance companies to offer returns without the commensurate risk. They can do this with other types of insurance only because the risks are uncorrelated (all the houses don’t burn down at the same
time). In the market all the investments do tend to decline simultaneously so there is no advantage to risk pooling.
R-squared intuition (article) | Khan Academy (2024)
ivan08urbieta (7 years ago):
Which parameter is then better to evaluate the fit of a line to a data set: the correlation coefficient (r) or the coefficient of determination (r^2)?
Nahuel Prieto (7 years ago):
The short answer is this: In the case of the Least Squares Regression Line, according to traditional statistics literature, the metric you're looking for is r^2.
Longer answer:
IMHO, neither r o r^2 are the best for this. In the case of r, it is calculated using the Standard Deviation, which itself is a statistic that has been long put to doubt because it squares
numbers just to remove the sign and then takes a square root AFTER having added those numbers, which resembles more an Euclidean distance than a good dispersion statistic (it introduces an
error to the result that is never fully removed). Here is a paper about that topic presented at the British Educational Research Association Annual Conference in 2004: https://www.leeds.ac.uk
/educol/documents/00003759.htm .
If we used the MAD (mean absolute deviation) instead of the standard deviation to calculate both r and the regression line, then the line, as well as r as a metric of its effectiveness, would
be more realistic, and we would not even need to square r at all.
This is a very extensive subject and there are still lots of different opinions out there, so I encourage other people to complement my answer with what they think.
Hope you found my answer helpful or at least interesting.
morecmy (5 years ago):
What's the difference between R-squared and the total sum of squared residuals?
Shannon Hegewald (3 years ago):
They lost me at the squares.
deka (2 years ago):
Don't worry about them too much. They're simply a visualization of squaring numbers then summing them, like 3^2 + 7^2 + 13^2, to assess how far the points are from a regression line.
Maryam Azmat (6 years ago):
If you have two models of a set of data, a linear model and a quadratic model, and you have worked out the R-squared value through linear regression, and are then asked to explain what the
R-squared value of the quadratic model is, without using any figures, what would this explanation be?
Ian Pulizzotto (6 years ago):
A quadratic model has one extra parameter (the coefficient on x^2) compared to a linear model. Therefore, the quadratic model is either as accurate as, or more accurate than, the linear model
for the same data. Recall that the stronger the correlation (i.e. the greater the accuracy of the model), the higher the R^2. So the R^2 for the quadratic model is greater than or equal to
the R^2 for the linear model.
Have a blessed, wonderful day!
Brown Wang (7 years ago):
How do we predict the sum of squares in the regression line?
347231 (6 years ago):
Tbh, you really cannot get around squaring every number. I guess if you have decimals, you could round them off, but really, other than that, there's no shortcut. It is difficult to predict because the powers have to be applied to each and every number. You could always do a bit of mental math and round things off into easier numbers, but it's not always reliable.
Bo Stoknes (3 months ago):
Why do we square the residuals? I get that we need a positive value for all residuals to calculate the sum of the prediction error, but wouldn't it be easier to just calculate the sum of the
absolute values of the residuals?
24pearcetc (10 months ago):
Is there a shorter way to create an estimate without taking all the steps to solve the problem? Is there a hack or a way to do it quickly?
Jose Prieto Lechuga (9 months ago):
Maybe using software like JASP?
Neel Kumar (6 years ago):
Can I get the exact data set on which this dot plot is based?
gembaindonesia (a year ago):
Hi. I have in several cases lately observed that when you remove several obvious outliers from a data set in "one go" the R-sq actually gets lower? This is rather counter intuitive to me at
least, and also when looking into the formula for how R is calculated it doesn't seem to make much sense either? Any insights into this?
daniella (5 months ago):
The phenomenon you described, where removing outliers from a dataset results in a lower R^2 value, can occur in certain cases. One possible reason is that the outliers were exerting a
disproportionate influence on the correlation between the variables, causing the regression line to be biased. Removing outliers may result in a more accurate estimation of the true
relationship between the variables, leading to a lower residual sum of squares and hence a lower R^2 value. Additionally, it's important to consider the nature of the outliers and the
underlying data generating process to fully understand the impact of their removal on the regression analysis.
Scott Samuel (3 years ago):
How do you calculate r^2?
deka (2 years ago):
In case you already have r, simply do r*r. Yes, r^2 is nothing but the square of r (the correlation coefficient).
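The two routes to r^2 (squaring the correlation coefficient, or computing 1 - SSE/SST from the least-squares line) can be checked against each other with a small self-contained snippet (plain Python, no libraries):

```python
def r_squared(xs, ys):
    """Return (r*r, 1 - SSE/SST) for the least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)  # SST: total sum of squares

    r = sxy / (sxx * syy) ** 0.5          # correlation coefficient

    slope = sxy / sxx                     # least-squares regression line
    intercept = my - slope * mx
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return r * r, 1 - sse / syy           # the two routes should agree

print(r_squared([1, 2, 3, 4], [2, 4, 5, 4]))  # both about 0.5158
```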
Course 2023-2024 a.y. - Universita' Bocconi
20933 - MATHEMATICS FOR AI - PREPARATORY COURSE
Department of Computing Sciences
Course taught in English
Suggested background knowledge
No background is required, other than basic mathematical knowledge.
Mission & Content Summary
This preparatory course introduces the basis of linear algebra and probability theory. In the first part of the course, we will cover some basic topics of linear algebra, including vectors, matrices,
linear systems, vector spaces, linear maps, eigenvalues and eigenvectors, the spectral theorem, and the singular value decomposition. In the second part of the course, we will cover basic topics of
probability theory, introducing discrete and continuous random variables, expectation, variance, Markov's Inequality and Chebyshev's Inequality.
Lecture 1 (28/08/23):
• Complex Numbers
• Vectors and Matrices
• Linear Systems
• Gaussian Elimination
Lecture 2 (29/08/23):
• Linear Combination of Vectors
• Vector Spaces
• Basis and Dimension of a Vector Space
Lecture 3 (30/08/23):
• Matrix Multiplication, Rank of a Matrix, Inverse Matrix, Trace of a Matrix
• Linear Maps and their Matrix Representation
• Kernel and Image of a Linear Map and the Rank-Nullity Theorem
• Injective and Surjective Linear Maps
Lecture 4 (31/08/23):
• Invertible Linear Maps and Isomorphism
• Computing an Inverse Matrix with the Gaussian Elimination
• The Rank of a Matrix (equivalent definitions)
• Determinant, Computing the Determinant with the Gaussian Elimination
• Norms and Inner Products
• Eigenvalues and Eigenvectors
Lecture 5 (01/09/23):
• Change of Basis
• Diagonalize a Matrix
• Spectral Theorem
• Positive Definite and Semidefinite Matrices
• Singular Value Decomposition
Lecture 6 (04/09/23):
I forgot to record the first part of Lecture 6. You can find a scan of my notes attached.
• Experiments, Probability, Events, Probability in Experiments with equally likely outcomes
• Permutations, Sampling with Replacement, Sampling without Replacement
• Binomial Coefficient, Multinomial Coefficient
• Probability Space, Axioms of Probability
• Conditional Probability and Independence of Events
• Bayes' Theorem and Law of Total Probability
Lecture 7 (05/09/23):
• Discrete Random Variables
• Expectation, Linearity of Expectation
• Jensen's Inequality
• Variance and Standard Deviation
• Independent Random Variables
• Examples of Discrete Random Variables: Uniform, Bernoulli, Binomial, Poisson, Geometric
• Conditional Expectation
• Markov Inequality
• Covariance and properties of Covariance and Variance of two independent random variables
• Chebyshev's Inequality
• Continuous Random Variables
• Examples of Continuous Random Variables: Uniform, Exponential
Intended Learning Outcomes (ILO)
At the end of the course student will be able to...
At the end of the course, the student will have basic knowledge of linear algebra and probability theory.
In particular, the linear algebra part of the course covers the following topics: vectors, vector spaces, matrices, linear maps, eigenvalues and eigenvectors, spectral theorem, and singular value
decomposition. The probability part of the course covers the following topics: probability spaces, random variables, Markov's inequality and Chebyshev's inequality.
At the end of the course student will be able to...
By the end of the course, students will know how to understand and solve basic exercises in linear algebra and probability theory.
Teaching methods
• Face-to-face lectures
• Online lectures
• Exercises (exercises, database, software etc.)
Classes are taken in person with the possibility of being taken online. In addition, all lectures are recorded.
Assessment methods
No exam (neither continuous assessment, nor partial exams, nor a general exam).
Teaching materials
Suggested textbooks:
• Sheldon Axler, Linear Algebra Done Right
• Marc Peter Deisenroth, A. Aldo Faisal, Cheng Soon Ong, Mathematics for Machine Learning
• Gilbert Strang, Introduction to Linear Algebra
• Fabrizio Iozzi, Lecture Notes
• Sheldon Ross, A First Course in Probability
• Michael Mitzenmacher, Eli Upfal, Probability and Computing
Last change 18/09/2023 19:30
APS March Meeting 2020
Bulletin of the American Physical Society
Volume 65, Number 1
Monday–Friday, March 2–6, 2020; Denver, Colorado
Session P17: Quantum Computing with Donor Spins
Sponsoring Units: DQI
Chair: RUICHEN ZHAO, National Institute of Standards and Technology Boulder
Room: 203
Wednesday, March 4, 2:30PM–3:06PM
P17.00001: Demonstration and benchmarking of electron and nuclear 2-qubit logic gates with implanted donors in silicon
Invited Speaker: Andrea Morello
Ion-implanted ^31P donors in silicon have attained 1-qubit gate fidelities >99.9% [1]. Here we present the realization of a 2-qubit CNOT gate between weakly J-coupled electron spins, deploying our proposal of using the nuclear spins to detune the individual qubit frequencies.
In a second experiment we realize a nuclear 2-qubit CNOT gate using a two-donor cluster where both ^31P nuclei are hyperfine-coupled to the same electron. The CNOT gate is obtained by
performing a CZ gate via a geometric phase imparted through a 2π rotation of the electron, preceded and followed by π/2 gates on one of the nuclei [3]. Gate-Set Tomography benchmarks the
fidelity of this universal gate set, which approaches the threshold for fault-tolerant quantum error correction.
[1] J. Dehollain et al., New J. Phys. 18, 103018 (2016)
[2] R. Kalra et al., Phys. Rev. X 4, 021044 (2014)
[3] V. Filidou et al., Nat. Phys. 8, 596 (2012)
Wednesday, March 4, 3:06PM–3:18PM
P17.00002: Driven dynamics of an electron coupled to spin-3/2 nuclei in quantum dots
Arian Vezvaee, Girish Sharma, Sophia Economou, Edwin Barnes
The problem of hyperfine interaction between a confined electron in a self-assembled quantum dot and its surrounding nuclear spin environment features interesting physics. Driving of the electron spin leads to dynamic nuclear spin polarization of the bath, and feedback effects on the electron spin can qualitatively change its dynamics. While for most systems of interest
the nuclei have a spin of 3/2 or higher, in which case quadrupolar terms are present, the majority of existing theoretical treatments assume a nuclear spin-1/2 bath. In this work, we
present a comprehensive theoretical framework of a driven electron spin coupled to a nuclear spin-3/2 bath based on a mean-field approach, and we use it to study the effects of higher
nuclear spin on dynamic nuclear polarization.
Wednesday, March 4, 3:18PM–3:30PM
P17.00003: Long-time noise characteristics of an isotopically-enriched silicon nuclear spin bath
Matthew D Grace, Wayne M Witzel
Electron spin qubits in silicon are known to have long coherence times, especially in isotopically-enriched materials with few nuclear spins. As a variety of silicon qubit devices are
explored experimentally, it is useful to have a theoretical understanding of noise characteristics as induced by nuclear spins to know when other noise sources should be suspected. We
present a quantitative study of nuclear spin dynamics for moderately-sized baths over a range of timescales. We approximate the quantum dynamics by adapting the cluster-correlation
expansion (CCE) technique to calculate bath “flip-flop” probabilities, improving the accuracy of simulations for longer timescales by introducing random, projective measurements throughout
the time evolution as a semiclassical approximation. In this way, we are able to estimate the spectral density of noise induced by a nuclear spin bath for a wide range of frequencies and
study the variation for different instances of nuclear spin samples. We present results of our modified CCE technique, comparing it to exact analytical and numerical solutions.
Wednesday, March 4, 3:30PM–3:42PM
P17.00004: A silicon quantum-dot-coupled nuclear spin qubit
Bas Hensen, Wister Wei Huang, Chih-Hwan Yang, Kok Wai Chan, Jun Yoneda, Tuomo I Tanttu, Fay E. Hudson, Arne Laucht, Kohei M Itoh, Thaddeus D Ladd, Andrea Morello, Andrew Steven Dzurak
Single nuclear spins in the solid state have long been envisaged as a platform for quantum computing. However, establishing long-range interactions between multiple dopants or defects is challenging. Conversely, in lithographically-defined quantum dots, tunable interdot electron tunneling allows direct coupling of electron spin-based qubits in neighboring dots. Moreover,
compatibility with semiconductor fabrication techniques provides a compelling route to scaling. Unfortunately, hyperfine interactions are typically too weak to address single nuclei. In
this presentation, we report that for electrons in silicon metal-oxide-semiconductor quantum dots the hyperfine interaction is sufficient to initialize, read-out and control single
silicon-29 nuclear spins, yielding a combination of the long coherence times of nuclear spins with the flexibility and scalability of quantum dot systems. We demonstrate that the nuclear
and electron spins can be entangled and that they both retain their coherence while moving the electron between quantum dots, paving the way to long range nuclear-nuclear entanglement via
electron shuttling. Our results establish nuclear spins in quantum dots as a powerful new resource for quantum processing [1].
[1] B. Hensen et al, arXiv:1904.08260.
P17.00005: Engineering electrical control of single donor flip-flop qubits for universal quantum computations
Wednesday, March 4, 2020, 3:42PM - 3:54PM
Irene Fernández de Fuentes, Tim Botzem, Rostyslav Savytskyy, Stefanie Tenberg, Vivien Schmitt, Guilherme Tosi, Fay E. Hudson, Kohei M Itoh, David Norman Jamieson, Andrew Steven Dzurak, Andrea Morello
The "flip-flop" qubit is composed of the ↑↓ / ↓↑ states of the electron and nucleus spin of an implanted ^31P atom [1] in Si. It enables fast 1-qubit gates through the electrical
modulation of the hyperfine interaction, achieved by hybridizing the orbital states of the donor electron with a quantum dot at the Si/SiO[2] interface. Biasing the electron wavefunction
towards the interface creates a large electric dipole allowing for long distance coupling between donors, which mediates 2-qubit logic gates. Coherent control of the flip-flop states of an
^123Sb donor has been demonstrated, in a non-optimized device. Here we present the progress in developing a CMOS compatible nanostructure, designed to enable accurate electric control of
the hyperfine interaction (for coherent driving), and tunability on the coupling to charge reservoirs (for state readout). We report gated control of electron tunnel times of interface
dots to a nearby read-out quantum dot by nearly two orders of magnitude. We further investigate the effects of our high-frequency electrical antenna on the coherent control of both
electron and nuclear spins.
[1] G. Tosi et al., Nat. Commun. 8, 450 (2017).
P17.00006: Coherent electrical control of a single high-spin nucleus in silicon
Wednesday, March 4, 2020, 3:54PM - 4:06PM
Mark Johnson, Serwan Asaad, Vincent Mourik, Benjamin Joecker, Andrew Baczewski, Hannes Roland Firgau, Mateusz T Madzik, Vivien Schmitt, Jarryd Pla, Fay E. Hudson, Kohei M Itoh, Jeffrey C McCallum, Andrew Steven Dzurak, Arne Laucht, Andrea Morello
We report the discovery of Nuclear Electric Resonance (NER) in a single ^123Sb donor, implanted in a
silicon nanoelectronic device [1]. NER enables the coherent control of a high-spin nucleus through the electrical modulation of its quadrupole coupling. This effect was first proposed in the
1960s but never observed in a non-polar, non-piezoelectric material, or in a single atom. Our experiments are quantitatively matched by a microscopic theory that elucidates how an electric
field distorts the bond orbital around the atom and results in a modulation of the electric field gradient at the nucleus. The observation of a large quadrupole splitting in a single ^123Sb
nucleus paves the way to the realization of a quantum chaotic "kicked-top" model [2] or the encoding of quantum information in an 8-level nuclear spin qudit.
[1] S. Asaad et al., arXiv:1906.01086 (2019)
[2] V. Mourik et al., Phys. Rev. E 98, 042206 (2018)
P17.00007: Decoherence of Dipole Coupled Flip-Flop Qubits
Wednesday, March 4, 4:06PM - 4:18PM
John Truong, Xuedong Hu
A recent proposal for a scalable donor-based quantum computer scheme promises excellent coherence properties, fast qubit couplings and insensitivity to donor placement. The suggested
system consists of two different types of qubits per donor: a flip-flop qubit consisting of the electron and nuclear spin states, and a charge qubit of the donor electron tunneling between
the donor and an interface quantum dot. In this scheme, the qubits can be coupled to each other via the electric dipole interaction between their respective charge qubits. We study in
detail this effective coupling, especially the effect of charge noise on two-qubit gates utilizing this coupling. We find that due to the proximity of the charge excited states to the
flip-flop logical states, the presence of charge noise greatly reduces the fidelity of two-qubit operations. We calculate the qubit-noise interaction strengths, and identify leakage from
the qubit Hilbert space as the main culprit of the reduced gate fidelity. We also explore different bias conditions to mitigate this decoherence channel.
P17.00008: Full configuration interaction simulations of exchange coupled donors in silicon in an effective mass theory framework
Wednesday, March 4, 4:18PM - 4:30PM
Benjamin Joecker, Andrew D. Baczewski, John K Gamble, Jarryd Pla, Andrea Morello
Several proposals for multi-qubit gates with donor spin qubits in silicon rely on the exchange interaction, using either weak exchange and microwave pulses [1], or strong tunable exchange
[2]. Designing the optimal devices to embody these control strategies requires accurate models of the dependence of the exchange interaction on lattice placement, orientations, and
electric fields. Here, we use a full configuration interaction method within an established multivalley effective mass theory framework [3] to model the two-electron wavefunction for
different donor configurations. In particular, we investigate the exchange interaction and valley population along different lattice orientations, and the tunability of exchange with
external electric fields.
[1] R. Kalra et al., Phys. Rev. X 4, 021044 (2014)
[2] Y. He et al., Nature 571, 371 (2019)
[3] J.K. Gamble et al., Phys. Rev. B 91, 235318 (2015)
P17.00009: Simultaneous Comparison of Coulomb Blockade Linewidths of P Donor-based and MOS-based Si Quantum Dots
Wednesday, March 4, 4:30PM - 4:42PM
Yanxue Hong, Aruna N Ramanayaka, Michael David Stewart, Xiqiao Wang, Ranjit Kashid, Pradeep Namboodiri, Richard Silver, Joshua Pomeroy
In solid-state quantum computation, noise often presents a limitation for coherence or device integration. One indicator of the noise levels, the effective electron temperature (T[eff]),
must be as low as possible to enable high-fidelity coherent measurements. High T[eff] in the measurement may come from noise sources extrinsic to the device or from intrinsic noise in the
device, which can be measured by the broadening of Coulomb blockade peaks. To study the extrinsic systematic noise origins and the intrinsic lattice couplings, here we report on the
comparison of T[eff] on two different quantum dot systems, P donor-based and MOS-based Si quantum dots simultaneously measured using the same measurement setup on the same platform.
T-dependent and bias-dependent conductance are measured in different cryogenic setups over temperatures ranging from 10 mK to 25 K. The T[eff] is extracted using a theoretical model. By
initially rearranging ground configuration and noise filtering, we have successfully reduced the T[eff] in a dilution refrigerator with 10 mK base temperature to < 0.5 K.
P17.00010: Evaluating effective mass models of the phosphorus donor in silicon
Wednesday, March 4, 4:42PM - 4:54PM
Luke Pendo, Xuedong Hu
Evaluating effective mass models of a phosphorus donor in silicon is made difficult by conflation of mathematical and physical approximations. We propose a scheme to solve a class of
effective mass models with high precision. We construct donor electron states using envelope functions expanded in freely extensible basis sets equipped with tunable parameters. With these
states, we compute the expectation values of both the donor's energy as well as the energy variance. We variationally optimize the parameters of these basis states to find stationary
points of the energy functional, with variance of the expectation energy used to evaluate the precision of our candidate eigenstates. In this manner, we can find exact energy eigenstates
of the implied Hamiltonian of an effective mass model.
To improve the physical verisimilitude of a given effective mass model, we evaluate models of the donor atom's Coulomb potential, in particular considering the effects of a dielectric
constant with dynamic response. We present a phenomenological pseudopotential for exchange and correlation effects from the donor's valence electrons. Finally, we include symmetry breaking
perturbations, in particular the effect of an electric field upon the donor electron.
P17.00011: A two-qubit gate between phosphorus donor electrons in silicon
Wednesday, March 4, 4:54PM - 5:06PM
Yu He, Samuel Keith Gorman, Daniel J Keith, Ludwik Kranz, Joris Gerhard Keizer, Michelle Y Simmons
Electron spin qubits formed by atoms in silicon have large orbital energies and weak spin-orbit coupling giving rise to isolated electron spin ground states with seconds-long coherence
times. The exchange interaction promises fast two-qubit gate operations between single-spin qubits. Until now, creating a tunable exchange interaction between two electrons bound to
phosphorus atom qubits has not been possible. This reflects the challenges in knowing how far apart to place the atoms to turn on and off the exchange interaction, whilst aligning atomic
circuitry for high fidelity independent read out of the spins. Here we report a ~800 ps √SWAP gate between phosphorus donor electron spin qubits in silicon with independent ~94 % fidelity
single shot spin read-out. By engineering qubit placement on the atomic scale, we provide a route to the realisation and efficient characterisation of multi-qubit quantum circuits based on
donor qubits in silicon.
P17.00012: Donor-bound excitons in Cl doped ZnSe quantum wells
Wednesday, March 4, 5:06PM - 5:18PM
Aziz Karasahin, Marvin Marco Jansen, Alexander Pawlis, Edo Waks
Quantum information processing heavily depends on the ability to generate a high number of indistinguishable single photons and to interface them with long-lived coherent spin states.
Quantum dots have emerged as promising scalable solid-state platforms by offering bright photon emission. However, epitaxially grown quantum dots are not immune to size variations and they
suffer from short coherence times due to interactions with the host material's nuclear spin bath.
Quantum emission from electrons bound to F donor impurities in bulk ZnSe and ZnSe quantum wells has been shown previously. Unlike quantum dots, these emitters show small inhomogeneous
broadening^1, short decay times^1, and long coherence times^2. Together with the possibility of spin purification of the host material, donor impurities in such quantum wells may open the
possibility to generate efficient spin-photon interfaces.
Here we investigate the emission properties of donor-bound excitons in Cl doped ZnSe/ZnMgSe quantum wells. Spectral properties of the emitters are investigated at cryogenic temperatures.
Time-resolved fluorescence experiments show that these emitters exhibit sub-nanosecond decay times. Second-order time-correlation measurements prove the single-photon emission.
P17.00013: G-factor Anisotropy of a Single Electron in a GaAs Quantum Dot
Wednesday, March 4, 5:18PM - 5:30PM
Simon Svab, Leon Camenzind, Liuqi Yu, Peter Stano, Jeramy D Zimmerman, Arthur C Gossard, Daniel Loss, Dominik Zumbuhl
Spins in semiconductor quantum dots are among the leading candidates for quantum computing. To lift spin degeneracy, a large in-plane magnetic field is applied. This has sizable effects on
the confined electron, allowing the shape and orientation of the orbitals to be inferred in this way, see Camenzind et al. PRL 112, 207701 (2019).
Here, we present experiments studying the effect of the strength and direction of in-plane magnetic fields on the spin structure of an electron in a gated GaAs quantum dot. We have
measured an anisotropic correction of about 7% and an isotropic correction pushing the average g-factor 10-15% below the bulk value |g|=0.44. The experiment is compared to theory by Stano
et al., PRB 98, 195314 (2018), finding rather good agreement. The anisotropic correction is given by the Dresselhaus spin-orbit coupling, matching well using a coefficient of 10.6 eVÅ. The
isotropic term is dominated by the well-known Rashba term and an additional 43-term appearing in finite B-field. These corrections are predicted to depend strongly on the thickness of the
wave function in the z-direction perpendicular to the 2D gas, and may also depend on the strength of the in-plane field.
From design patterns to category theory
by Mark Seemann
How do you design good abstractions? By using abstractions that already exist.
When I was a boy, I had a cassette tape player. It came with playback controls like these:
Soon after cassette players had become widely adopted, VCR manufacturers figured out that they could reuse those symbols to make their machines easier to use. Everyone could play a video tape, but
'no one' could 'program' them, because, while playback controls were already universally understood by consumers, each VCR came with its own proprietary interface for 'programming'.
Then came CD players. Same controls.
MP3 players. Same controls.
Streaming audio and video players. Same controls.
If you download an app that plays music, odds are that you'll find it easy to get started playing music. One reason is that all playback apps seem to have the same common set of controls. It's an
abstraction that you already know.
Understanding source code #
As I explain in my Humane Code video, you can't program without abstractions. To summarise, in the words of Robert C. Martin:
"Abstraction is the elimination of the irrelevant and the amplification of the essential"
With such abstractions, source code becomes easier to understand. Like everything else, there's no silver bullet, but good coding abstractions can save you much grief, and make it easier to
understand big and complex code bases.
Not only can a good abstraction shield you from having to understand all the details in a big system, but if you're familiar with the abstraction, you may be able to quickly get up to speed.
While the above definition is great for identifying a good abstraction, it doesn't tell you how to create one.
Design patterns #
Design Patterns explains that a design pattern is a general reusable solution to a commonly occurring problem. As I interpret the original intent of the Gang of Four, the book was an attempt to
collect and abstract solutions that were repeatedly observed 'in the wild'. The design patterns in the book are descriptive, not prescriptive.
Design patterns are useful in two ways:
• They offer solutions
• They form a vocabulary
In my opinion, however, people often overlook the second advantage. Programmers are often eager to find solutions. "I have a problem; what's the solution? Oh, here's a design pattern that fits!"
I have no problems with ready-made solutions, but I think that the other advantage may be even bigger. When you're looking at unfamiliar source code, you struggle to understand how it's structured,
and what it does. If, hypothetically, you discover that pieces of that unfamiliar source code follows a design pattern that you know, then understanding the code becomes much easier.
There are two criteria for this to happen:
• The reader (you) must already know the pattern
• The original author (also you?) must have implemented the pattern without any surprising deviations
As a programmer (code author), you can help readers (users) of your code. Don't use every design pattern in the book, but when you use one, make it as obvious to the reader as you can: Use the
terminology, class names, and so on from the book. Add comments where your naming deviates. Add links that the novice user can follow to learn more.
Ambiguous specification #
Programming to a well-known abstraction is a force multiplier, but it does require that those two conditions are satisfied: prior knowledge, and correct implementation.
I don't know how to solve the prior knowledge requirement, other than to tell you to study. I do, however, think that it's possible to formalise some of the known design patterns.
Most design patterns are described in some depth. They come with sections on motivation, when to use and not to use, diagrams, and example code. Furthermore, they also come with an overview of variations.
Picture this: as a reader, you've just identified that the code you're looking at is an implementation of a design pattern. Then you realise that it isn't structured like you'd expect, or that its
behaviour surprises you. Was the author incompetent, after all?
While you're inclined to believe the worst about your fellow (wo)man, you look up the original pattern, and there it is: the author is using a variation of the pattern.
Design patterns are ambiguous.
Universal abstractions #
Design Patterns was a great effort in 1994, and I've personally benefited from it. The catalogue was an attempt to discover good abstractions.
What's a good abstraction? As already quoted, it's a model that amplifies the essentials, etcetera. I think a good abstraction should also be intuitive.
What are the most intuitive abstractions ever?
Stay with me, please. If you're a normal reader of my blog, you're most likely an 'industry programmer' or enterprise developer. You're not interested in mathematics. Perhaps mathematics even turns
you off, and at the very least, you never had use for mathematics in programming.
You may not find n-dimensional differential topology, or stochastic calculus, intuitive, but that's not the kind of mathematics I have in mind.
Basic arithmetic is intuitive. You know: 1 + 3 = 4, or 3 * 4 = 12. In fact, it's so intuitive that you can't formally prove it -without axioms, that is. These axioms are unprovable; you must take
them at face value, but you'll readily do that because they're so intuitive.
Mathematics is a big structure, but it's all based on intuitive axioms. Mathematics is intuitive.
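This point can be made concrete in a proof assistant. As a small illustration of my own (not from the article), in Lean the arithmetic facts quoted above hold "by computation" once the Peano-style definition of the natural numbers is taken as given:

```lean
-- 1 + 3 = 4 and 3 * 4 = 12 follow from the inductive definition of
-- the natural numbers; `rfl` succeeds because both sides of each
-- equation reduce to the same numeral.
example : 1 + 3 = 4 := rfl
example : 3 * 4 = 12 := rfl
```

The axioms themselves (zero, successor, and the recursion equations for + and *) are simply accepted; everything after that is mechanical.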
Writers before me have celebrated the power of mathematical abstraction in programming. For instance, in Domain-Driven Design Eric Evans discusses how Closure of Operations leads to object models
reminiscent of arithmetic. If you can design Value Objects in such a way that you can somehow 'add' them together, you have an intuitive and powerful abstraction.
Notice that there's more than one way to combine numbers. You can add them together, but you can also multiply them. Could there be a common abstraction for that? What about objects that can somehow
be combined, even if they aren't 'number-like'? The generalisation of such operations is a branch of mathematics called category theory, and it has turned out to be productive when applied to
functional programming. Haskell is the most prominent example.
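To make "objects that can somehow be combined" slightly more concrete, here is a small Python sketch of my own (the function name is made up, not from any library) that spot-checks the two monoid laws, associativity and identity, on sample values:

```python
from functools import reduce

def is_monoid(op, identity, samples):
    """Spot-check the monoid laws on sample values: associativity
    and left/right identity. Not a proof, just a sanity check."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in samples for b in samples for c in samples)
    ident = all(op(identity, a) == a and op(a, identity) == a
                for a in samples)
    return assoc and ident

# Numbers form a monoid under addition (identity 0) and,
# separately, under multiplication (identity 1).
print(is_monoid(lambda a, b: a + b, 0, [1, 3, 4]))   # True
print(is_monoid(lambda a, b: a * b, 1, [1, 3, 4]))   # True

# Strings form a monoid under concatenation (identity "").
print(is_monoid(lambda a, b: a + b, "", ["foo", "bar", ""]))  # True

# Subtraction is not associative and has no two-sided identity.
print(is_monoid(lambda a, b: a - b, 0, [1, 3, 4]))   # False

# Once something is a monoid, combining a whole list 'just works':
print(reduce(lambda a, b: a + b, ["Hello", ", ", "world"], ""))  # Hello, world
```

Addition, multiplication, and string concatenation all pass; subtraction fails, which is exactly why "combinable" is a more disciplined idea than "has a binary operator".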
By an interesting coincidence, the 'things' in category theory are called objects, and while they aren't objects in the sense that we think of in object-oriented design, there is some equivalence.
Category theory concerns itself with how objects map to other objects. A functional programmer would interpret such morphisms as functions, but in a sense, you can also think of them as well-defined
behaviour that's associated with data.
The objects of category theory are universal abstractions. Some of them, it turns out, coincide with known design patterns. The difference is, however, that category theory concepts are governed by
specific laws. In order to be a functor, for example, an object must obey certain simple and intuitive laws. This makes the category theory concepts more specific, and less ambiguous, than design patterns.
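For a taste of what such laws look like, here is a minimal Python sketch (purely illustrative; `Box` is a made-up toy container, not a standard type) of the two functor laws that any lawful `map` must satisfy:

```python
class Box:
    """A minimal container that supports map - a toy functor."""
    def __init__(self, value):
        self.value = value
    def map(self, f):
        return Box(f(self.value))
    def __eq__(self, other):
        return isinstance(other, Box) and self.value == other.value

identity = lambda x: x
f = lambda x: x + 1
g = lambda x: x * 2

b = Box(41)

# First functor law: mapping the identity function changes nothing.
assert b.map(identity) == b

# Second functor law: mapping a composition equals composing the maps.
assert b.map(lambda x: g(f(x))) == b.map(f).map(g)

print(b.map(f).value)  # 42
```

The laws say nothing about what a `Box` is; they only constrain how `map` behaves, and that is what makes the concept unambiguous.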
The coming article series is an exploration of this space.
I believe that learning about these universal abstractions is the next step in software design. If you know design patterns, you have a vocabulary, but the details are still open to interpretation.
If you know category theory, you have a better vocabulary. Just like design patterns, you have to learn these things, but once you've learned them, you've learned something that transcends a
particular software library, a particular framework, a particular programming language. Learning about functors, monoids, and so on, is a good investment, because these concepts are rooted in
mathematics, not any particular technology.
Motivation #
The purpose of this article series is two-fold. Depending on your needs and interests, you can use it to
• learn better abstractions
• learn how functional programming is a real alternative to object-oriented programming
You've already read how it's in your interest to learn universal abstractions. It'll make your code clearer, more concise, and you'll have a better software design vocabulary.
The other goal of these articles may be less clear. Object-oriented programming (OOP) is the dominant software design paradigm. It wasn't always so. When OOP was new, many veteran programmers
couldn't see how it could be useful. They were schooled in one paradigm, and it was difficult for them to shift to the new paradigm. They were used to doing things in one way (typically, procedural),
and it wasn't clear how to achieve the same goals with idiomatic object-oriented design.
The same sort of resistance applies to functional programming. Tasks that are easy in OOP seem impossible in functional programming. How do you make a for loop? How do you change state? How do you
break out of a routine?
This leads to both frustration, and dismissal of functional programming, which is still seen as either academic, or something only interesting in computation-heavy domains like science or finance.
It's my secondary goal with these articles to show that:
1. There are clear equivalences between known design patterns and concepts from category theory
2. Thus, functional programming is as universally useful as OOP
3. Since equivalences exist, there's a learning path
If you're an object-oriented programmer, you can use this catalogue as a learning path. If you'd normally use a Composite, you can look it up and realise that it's the same as a monoid.
Work in progress #
I've been thinking about these topics for years. What's a good abstraction? When do abstractions compose?
My first attempt at answering these questions was in 2010, but while I had the experience that certain abstractions composed better than others, I lacked the vocabulary. I've been wanting to write a
better treatment of the topic ever since, but I've been constantly learning as I've grappled with the concepts.
I believe that I now have the vocabulary to take a stab at this again. This is hardly the ultimate treatment. A year from now, I hope to have learned even more, and perhaps that'll lead to further
insights or refinement. Still, I can't postpone writing this article until I've stopped learning, because at that time I'll either be dead or senile.
I'll write these articles in an authoritative voice, because a text that constantly moderates and qualifies its assertions easily becomes unreadable. Don't consider the tone an indication that I'm
certain that I'm right. I've tried to be as rigorous in my arguments as I could, but I don't have a formal education in computer science. I welcome feedback on any article, both if it's to
corroborate my findings, or if it's to refute them. If you have any sort of feedback, then please leave a comment.
I consider the publication of these articles as though I submit them to peer review. If you can refute them, they deserve to be refuted. If not, they just may be valuable to other people.
Summary #
Category theory generalises some intuitive relations, such as how numbers combine (e.g. via addition or multiplication). Instead of discussing numbers, however, category theory considers abstract
'objects'. This field of mathematics explores how objects relate and compose.
Some category theory concepts can be translated to code. These universal abstractions can form the basis of a powerful and concise software design vocabulary.
The design patterns movement was an early attempt to create such a vocabulary. I think using category theory offers the chance of a better vocabulary, but fortunately, all the work that went into
design patterns isn't wasted. It seems to me that some design patterns are essentially ad-hoc, informally specified, specialised instances of basic category theory concepts. There's quite a bit of
overlap. This should further strengthen the argument that category theory is valuable in programming, because some of the concepts are equivalent to design patterns that have already proven useful.
What a perfect introduction!
I heard about category theory more than one year ago. But it was from a PhD who codes in Haskell and I thought it was too hard for me to understand.
And then, this post.
Thank you a lot! (you already published the follow-up! yeah)
2017-10-05 21:39 UTC
Thanks for writing these articles, it's nice to have some reference material that is approachable for us as dotnet programmers.
One thing I was kind of expecting to find here was something about the two building blocks of combining types: products and coproducts. Is this something you have written about, or are considering
writing about? It gets mentioned in the Church encoding series and obviously those about visitors, but not really as a concept on its own.
What triggered me to come here this time was reading about the much requested Champion "Discriminated Unions". Not only in those comments, but also when looking at other C# code, lots of people seem
to not realize how fundamental a concept sum types are. If they are, that is; I could be wrong, of course.
I liked the way Bartosz Milewski explained this by visualizing them as graphs. Or how Scott Wlaschin relates it back to other concepts we also take for granted:
• products, *, AND, classes, records, tuples
• coproducts, +, OR, discriminated unions, ...
Anyway, I don't want to ramble on too much. Just curious if it's something you think fits the list of universal abstractions.
2023-04-10 09:42 UTC
Rutger, thank you for writing. I agree that the notion of algebraic data types are, in some sense, quite fundamental. Despite that, I was never planning on covering that topic in this series. The
main reason is that I think that other people have already done a great job of it. The first time I encountered the concept was in Tomas Petricek's exemplarily well-written article Power of
mathematics: Reasoning about functional types, but, as you demonstrate, there are plenty of other good resources. Another favourite of mine is Thinking with Types.
That's not to say that this is the right decision, or that I might not write such an article. When I started this massive article series, I had a general idea about the direction I'd like to go, but
I learned a lot along the way that slightly changed the plans. For example, despite the title, there's not that much category theory here. The reason for that is that I found that most of the
concepts didn't really require category theory. For example, monoids originate (as far as I can tell) from abstract algebra, and you don't need more than that to explain the concept.
So, to answer your direct question: No, this isn't something that I've given an explicit treatment. On one hand, I think there's already enough good material on the topic that the world doesn't need
my contribution. On the other hand, perhaps there's a dearth of treatment that puts this in a C# context.
I'm not adverse to writing such an article, but I have so many other topics I'd also like to cover.
2023-04-14 7:10 UTC
Wish to comment?
You can add a comment to this post by sending me a pull request. Alternatively, you can discuss this post on Twitter or somewhere else with a permalink. Ping me with the link, and I may respond.
Published: Wednesday, 04 October 2017 10:43:00 UTC
Metatheory and reflection
If our speculation above is wrong, what is to be done when we come across an inference pattern that can't be implemented efficiently as an LCF derived rule? One alternative is to throw in whatever
additional proof procedures might be deemed necessary. But for a serious project of formalization, we need a high level of assurance that the rule is correct. If it doesn't actually perform a formal
proof, how can this be done?
One idea is to use metatheory. Suppose we formalize the object logic under consideration, its syntax and its notion of provability, in an additional layer of logic, the metalogic. The idea is that
metatheorems (i.e. theorems in the metalogic) can then be proved which justify formally the possibility of proving a result in the object logic, without actually producing the proof in complete
detail. One can use another layer of logic for this purpose, as discussed by [pollack-extensibility]. This metalogic can then be particularly simple and transparent, since few mathematical resources
are needed to support the proofs of typical metatheorems. However, it's also possible to use the same logic; this is often referred to as reflection. From Gödel's work we already know how a system
can formalize its own notion of provability. The only extra ingredient needed is a rule acting as a bridge between the internal formalization and the logic itself:
    from ⊢ Prov(⌜p⌝), conclude ⊢ p
It is clear that if the system is 1-consistent, this rule does not introduce any new theorems, since all provable existential statements are true. However the absence of hypotheses in the above
theorems was crucial. The subtly different rule with a nonempty set of hypotheses, or equivalently an additional axiom scheme of the form Prov(⌜p⌝) ⟹ p, does make the logic stronger. In the special
case where p = F (falsity), it amounts to a statement of consistency, which Gödel's second theorem rules out, and in fact [lob-theorem] proved that such a theorem is provable in the original system
only in the trivial case where ⊢ p already. Such schemes were dubbed 'reflection principles' by [feferman-transfinite], who, following on from work by [turing-ordinals], investigated their iterated
addition as a principled way of making a formal system stronger. Our reflection rules are intended rather to be a principled way of making proofs in a system easier.
The value of using metalogic or reflection in this way depends, of course, on the fact that the metalogical proof of provability is easier (in some practically meaningful sense) than doing a full
proof in the object logic.^46 For example, this definitely seems to be the case for the Deduction Theorem and Skolemization steps discussed above: the metatheorem can be established once and for all
in a general form by induction over the structure of object logic proofs, and thereafter instantiated for the cases in hand very cheaply. But we've already expressed scepticism over whether these
examples are realistic for logics that one would actually use in practice.
The most popular examples cited are simple algebraic manipulations. Suppose one wants to justify a[1] + ... + a[n] = b[1] + ... + b[n] (for real numbers, say) where the a[i] and b[i] are permutations
of each other. An object-level proof would need to rewrite delicately with the associative and commutative laws; this is in fact what HOL's function AC_CONV does automatically. When implemented in a
simple-minded way this takes O(n^2) time; even when done carefully, O(n log(n)). However we can prove a metatheorem stating that if the multisets {a[1], ..., a[n]} and {b[1], ..., b[n]} are equal,
then the equation holds, without any further proof. [davis-schwartz] say confidently that 'the introduction of an appropriate ''algebra'' rule of inference shortens to 1 the difficulty of a sentence
which asserts an algebraic identity'. But assigning a difficulty of 1 is of no practical significance. What of the complexity of checking multiset equality? In fact, it is O(n^2) in the worst case,
according to [knuth-ss]. Even if the elements are pairwise orderable somehow, we can't do better than O(n log(n)). So all this additional layer of complexity, and the tedium of proving a precise
metatheorem, has achieved nothing except perhaps shaving something off the constant factor.
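The complexity point can be made concrete. Here is a small Python sketch of the multiset-equality test discussed above (our illustration only, not code from HOL or any system named in the text; the function names are ours):

```python
from collections import Counter

def is_ac_rearrangement(lhs_terms, rhs_terms):
    """Decide whether two sums are AC-rearrangements of each other by
    comparing the multisets of their (hashable) summands."""
    return Counter(lhs_terms) == Counter(rhs_terms)

def is_ac_rearrangement_sorted(lhs_terms, rhs_terms):
    """The same test for pairwise orderable terms: sorting makes the
    comparison O(n log n), matching the bound quoted in the text."""
    return sorted(lhs_terms) == sorted(rhs_terms)

# a1 + a2 + a3 versus a permutation of the same summands
assert is_ac_rearrangement(["a1", "a2", "a3"], ["a3", "a1", "a2"])
assert not is_ac_rearrangement(["a1", "a1"], ["a1", "a2"])
assert is_ac_rearrangement_sorted([1, 2, 2, 3], [2, 3, 1, 2])
```

Of course, running such a check outside the formal system only helps if a metatheorem (or reflection rule) links multiset equality of the summands to provability of the equation, which is precisely the cost being weighed here.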
The constant factor is not even likely to change much. However it would change a lot more if the multiset equality test didn't have to be performed in some formal system, but could be evaluated
directly by a program. This suggests another popular variant of the reflection idea: computational reflection. The idea is to verify that a program will correctly implement a rule of inference (e.g.
correctly test for multiset equality and in the case of success, add the appropriate equation to the stock of theorems), then add that program to the implementation of the theorem prover, so
thereafter it can be executed directly. This does indeed offer much more efficiency, and in certain special situations like arithmetical calculation, the speedup could be considerable. It would also
be a systematic way of gaining confidence in the many theorems which are nowadays proved with the aid of computer checking. The proof of the 4-colour theorem has already been mentioned; more prosaic
examples include simply assertions that large numbers are prime. Journals in some fields already have to confront difficult questions about what constitutes a proof in this context; the issue is
discussed by [lam-proof] for example.
However there are a number of difficulties with the scheme. In particular, correctly formalizing the semantics of real programming languages, and then proving nontrivial programs correct, is
difficult. The formal system becomes dramatically more complicated, since its rules effectively include the full semantics of the implementation language, instead of being separate and abstract.
(This has a negative effect on the possibility of checking proofs using simple proof-checking programs, an oft-suggested idea to increase confidence in the correctness of machine-checked proofs.) And
finally, reflection, though extensively studied, has not often been exploited in practice, the work of [boyer-moore-meta] being probably the most significant example to date.^47
We should add that the idea of proving program correctness doesn't necessarily demand some kind of logical reflection principle to support it. Boyer and Moore, for example, need no such thing.
Typical decision procedures only involve a rather restricted part of the logic (e.g. tautologies or arithmetic inequalities), and the semantics of this fragment can perfectly easily be internalized,
even though by Tarski's theorem this cannot be done for the whole logic. (Nuprl's type theory, stratified by universe level, allows one to come very close to the ideal: the semantics of one layer can
be formalized in the next layer up.) Ostensibly 'syntactic' manipulations can often be done directly in ordinary logic or set theory, defining some appropriate semantics; following
[howe-computational], this is often called 'partial reflection'. For example, the above method for justifying an AC rearrangement based on multiset equality can easily be carried out in the object logic.
We should also point out the following. It is often stated that, issues of efficiency apart, some of the theorems in mathematics books are metatheorems, or can only be proved using metatheory ---
'most of' them according to [aiello-meta]. [matthews-fs0-theory] is one of the few who cites an example:
For instance in a book on algebra one might read:
'If A is an abelian group, then, for all a, b in A, the equivalence
(a o b) o ... o (a o b)  [n times]  =  (a o ... o a)  [n times]  o  (b o ... o b)  [n times]
... On the other hand, instead of a book, imagine a proof development system for algebra; there the theorem cannot be stated, since it is not a theorem of abelian group theory, it is, rather, a
meta-theorem, a theorem about abelian group theory.
But all this is from the completely irrelevant perspective of an axiomatization of group theory in first order logic. As we have already stated, any serious development of group theory is going to
use higher-order or set-theoretic devices, as Bourbaki does for example. With these mathematical resources, iterated operations are trivially definable and their basic properties easily provable ---
this appears right at the very beginning of the well-known algebra textbook of [lang-algebra] for example, and in section 1.4 of [jacobson1]! The stress on direct first order axiomatizations in the
AI community probably results from a laudable scepticism in some quarters towards the fancy new logics being promulgated by others, but it completely fails to reflect real mathematical practice. What
is true is that one often sees statements like 'the other cases are similar' or 'by symmetry we may assume that m <= n', which concern proofs rather than theorems. But bear in mind that when presenting proofs informally,
this is just one of many devices used to abbreviate them. Certainly, a way of formalizing statements like this is as metatheorems connecting proofs. But another is simply to do (or program the
computer to do!) all the cases explicitly, or to use an object-level theorem directly as justification, e.g.
Performing similar proofs in different contexts may anyway be a hint that one should be searching for a suitable mathematical generalization to include all the cases.
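Matthews' abelian-group example above really is an object-level theorem once iterated operations are available. A sketch in Lean 4, assuming Mathlib's `CommMonoid` class and its `mul_pow` lemma (names should be checked against a current Mathlib):

```lean
import Mathlib

-- The purported "metatheorem" (a ∘ b)^n = a^n ∘ b^n is an ordinary
-- object-level lemma about the iterated operation `^`:
example {A : Type*} [CommMonoid A] (a b : A) (n : ℕ) :
    (a * b) ^ n = a ^ n * b ^ n :=
  mul_pow a b n
```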
So, we have yet to find the LCF approach inadequate, and yet to see really useful implementations of reflection. Whatever exotic principles are added, the provable sentences will form a recursively
enumerable set (we are after all considering a computer implementation), and once again there will be unprovable statements and infeasible ones. Now since we haven't come across any interesting
infeasible statements yet it seems extremely conjectural that they won't still be infeasible in the expanded system. Boyer reports that the metafunction facility of NQTHM is not widely used in
practice, even though it is the only way for an ordinary user to add new facilities to NQTHM. And [constable-programs], who are no enemies of the idea of reflection as indicated by their extensive
writings on the subject, say: 'Of all these methods [of providing extensibility] we have found the LCF idea easiest to use and the most powerful.'
John Harrison 96/8/13; HTML by 96/8/16
Undulating Periodization: Variable Repetition Training (VRT) - Part 2
Last time we exposed you to an entirely new way of training. To briefly recap - the body adapts to any given workout in as little as four to six exposures. But get this - it adapts to the rep range
the fastest and the choice of exercise the slowest. So we can continue to make progress on several exercises as long as we change the rep ranges regularly.
With the undulating periodization program, we adjusted the rep ranges EVERY single time we hit that body part. So in effect, over a three week period, we only repeated each workout once, even though
we hit each body part twice a week.
This time we go one step further. We make some additional changes this phase - moving to a three day per week routine on a 3 way split - upper body horizontal, upper body vertical and lower body. We
also use four rep ranges in phase two - makes it a little more confusing but a LOT more effective.
The premise remains the same - over the eight weeks you'll hit each and every exercise a total of eight times - BUT you'll only repeat the same sets and reps once. Quite simply - your body won't know
what's hit it - and it will adapt by making you stronger, bigger and leaner.
So for example: workout one in upper body horizontal: Do all the exercises in order, but the sets and reps will be 6 sets of 3. The following week you'll return to the same exercises, but you'll be
hitting them for 2 sets of 25 reps each (make sure to adjust your weights accordingly!).
So each week we'll do upper horizontal, lower body and upper vertical in order, but every time you show up to train - it's a whole new series of sets and reps. Need a little more clarification? Let's
check out the workouts...
(Workout tables not shown; exercises include the barbell deadlift.)
So make sure you've done part one for at least three to six weeks before you begin phase two. We will continue this routine for another 8 weeks - the new split routine and the introduction of FOUR
new rep ranges means that the possibility for growth is endless.
Part 1 – Introduction to a simple bit encoder
To start off the first part of our series, we will take a look at the properties of a 4-bit integer to help imagine where our simple bit encoder comes from. For purposes of encoding and decoding data
we will refer to our decimal values as our range, and every range has a top and a bottom value.
Looking at the illustration below we can see that the msb of our range can be partitioned into two groups. The second table illustrates the partitioning of the msb: as you can see, an integer range of 0 to 7 has an msb of 0, and an integer range of 8 to 15 has an msb of 1. This table can be further partitioned into two ranges using the third bit. The range of 0 to 3 has a 3rd bit of 0, and the range of 4 to 7 has a 3rd bit of 1.
Following this sequence you can continue to equally subdivide our 4-bit integer ranges into further binary nodes, creating a binary tree. What is the significance of this? Well, our bit encoder will work with a range of values and partition that range depending on the probability of the bit we wish to encode. For the simplest example we will use a 50% probability for every bit for the encoding in our next part.
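The partitioning walk described above can be sketched in a few lines (a hypothetical illustration; the function name is ours, not from this series):

```python
def msb_partition(lo, hi):
    """Split an inclusive integer range into the two halves selected by
    the next most significant bit: bit 0 -> lower half, bit 1 -> upper half."""
    mid = (lo + hi) // 2
    return (lo, mid), (mid + 1, hi)

# 4-bit range 0..15: msb 0 -> 0..7, msb 1 -> 8..15
low, high = msb_partition(0, 15)
assert low == (0, 7) and high == (8, 15)
# Partition the lower half again on the third bit: 0..3 versus 4..7
assert msb_partition(0, 7) == ((0, 3), (4, 7))
```

Recursing on whichever half contains a value visits exactly the bits of its binary representation, which is the binary tree described above.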
Continue to Part 2 – Encoding a simple value
Random permutations: A geometric point of view | Mathematics
Jacopo Borga (University of Zurich)
Mon, Jan 25 2021, 11:00am
Consider a large random permutation satisfying some constraints or biased according to some statistics. What does it look like? In this seminar we make sense of this question by presenting the notion
of permuton convergence. Then we answer the question for different choices of random permutation models.
We mainly focus on two examples of permuton convergence, introducing the “Brownian separable permuton” and the “Baxter permuton”. We also discuss several connections between various limiting
permutons and other probabilistic objects, such as the Continuum Random Tree and the coalescent flows of some perturbed versions of the Tanaka SDE.
At the end of the talk we present some conjectures on a new universal limit for random permutations called the "skew Brownian permuton".
Charge and Energy Transfer Processes: Open Problems in Open Quantum Systems
Schedule for: 19w5016 - Charge and Energy Transfer Processes: Open Problems in Open Quantum Systems
Beginning on Sunday, August 18 and ending Friday August 23, 2019
All times in Banff, Alberta time, MDT (UTC-6).
Sunday, August 18
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
Dinner ↓
17:30 - 19:30 A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
Monday, August 19
Breakfast ↓
- Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
Introduction and Welcome by BIRS Staff ↓
- A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
Marco Merkli: Quantum resonance theory of open system dynamics ↓
09:00 - 09:30
We present a mathematical method allowing to prove the validity of the Markovian approximation for an $N$-level quantum system in contact with thermal reservoirs. The true system dynamics is approximated by a completely positive, trace-preserving dynamical group for all times $t \ge 0$, provided the system-reservoir coupling $\lambda$ is small but fixed. No weak coupling condition $\lambda^2 t \lesssim {\rm const.}$ is imposed.
(TCPL 201)
Francesco Petruccione: Open Quantum Walks: a short review and some recent results ↓
- After a brief review of the theory of open quantum walks, we present some recent results, some potential applications, and some open problems.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Christiane Koch: Fast and accurate qubit reset via optimal control - where quantum information meets open quantum systems ↓
10:30 - 11:00
The ability to quickly and reliably reset qubits is a basic prerequisite in quantum information science. The key question is: How fast can we export entropy in order to purify a qubit and erase its correlations with the environment? We address this question for a qubit which interacts with a structured environment, using a combination of geometric and numerical optimal control theory. Starting from the scenario of quantum reservoir engineering where the coupling between qubit and environment is tunable, we find the optimized reset strategy to consist in maximizing the decay rate from one state and driving non-adiabatic population transfer into the strongly decaying state. We then consider the simplest paradigm of non-Markovian dynamics where the qubit is coupled to the environment via an ancilla. We derive time-optimal reset protocols and find, for factorizing initial states, a lower limit for the entropy reduction as well as a speed limit. Initial correlations, remarkably, allow for faster reset and smaller errors. Use of the Cartan decomposition of su(4) allows us to generalize our findings to all conceivable qubit drives and qubit-ancilla interactions.
(TCPL 201)
Lunch ↓
- Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
Guided Tour of The Banff Centre ↓
- Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
Group Photo ↓
14:20
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
(TCPL 201)
Martin B Plenio: Efficient Numerical Methods for Extended System in Contact with non-Markovian Environments ↓
14:20 - 14:50
In this lecture I will present the time evolving density operator method with orthogonal polynomials [1-3] and the most recent developments of this method to spatially extended systems [4] as well as arbitrary temperatures [5, 6] and fermionic environments [7].
[1] J. Prior, A.W. Chin, S.F. Huelga and M.B. Plenio. Efficient simulation of strong system-environment interactions. Phys. Rev. Lett. 105, 050404 (2010)
[2] A.W. Chin, A. Rivas, S.F. Huelga and M.B. Plenio. Exact mapping between system-reservoir quantum models and semi-infinite discrete chains using orthogonal polynomials. J. Math. Phys. 51, 092109 (2010)
[3] A.W. Chin, J. Prior, R. Rosenbach, F. Caycedo-Soler, S.F. Huelga and M.B. Plenio. The role of non-equilibrium vibrational structures in electronic coherence and recoherence in pigment-protein complexes. Nature Physics 9, 113-118 (2013)
[4] A. Somoza-Marquez, O. Marty, J. Lim, S.F. Huelga, and M.B. Plenio. Dissipation assisted matrix product factorization. To appear in Phys. Rev. Lett. (2019) and E-print arXiv:1903.05443
[5] D. Tamascelli, A. Smirne, S.F. Huelga and M.B. Plenio. Non-perturbative treatment of non-Markovian dynamics of open quantum systems. Phys. Rev. Lett. 120, 030402 (2018)
[6] D. Tamascelli, A. Smirne, S.F. Huelga and M.B. Plenio. Efficient simulation of finite-temperature open quantum systems. To appear in Phys. Rev. Lett. (2019) and E-print arXiv:1811.12418
[7] A. Nüßeler, I. Dhand, S.F. Huelga and M.B. Plenio. Fermionic TEDOPA. In preparation
(TCPL 201)
- Coffee Break (TCPL Foyer)
Jonathan Keeling: Efficient non-Markovian quantum dynamics using time-evolving matrix product operators ↓
15:30 - 16:00
In order to model realistic quantum devices it is necessary to simulate quantum systems strongly coupled to their environment. I will describe our novel approach to this problem, a general and yet exact numerical approach that efficiently describes the time evolution of a quantum system coupled to a non-Markovian harmonic environment [1]. Our method relies on expressing the system state and its propagator as a matrix product state and operator, respectively, and using a singular value decomposition to compress the description of the state as time evolves. This is based on an approach developed by Makri and Makarov [2], but re-expressed in the language of tensor networks, providing several orders of magnitude improvement in the size of problem that can be addressed. I will present a number of examples using this method which demonstrate the power and flexibility of our approach, including the localization transition of the Ohmic spin-boson model, and applications to time-dependent problems.
[1] A. Strathearn, P. Kirton, D. Kilda, J. Keeling & B. W. Lovett. Nat. Comm. 9, 3322 (2018)
[2] N. Makri & D. E. Makarov. J. Chem. Phys. 102, 4600 (1995); ibid 4611 (1995).
(TCPL 201)
David Limmer: Unravelled master equations for the study of photoisomerization pathways in the condensed phase ↓
16:00 - 16:30
In this talk, I will present our recent work developing hybrid master equations to study the relaxation dynamics through conical intersections. These approaches leverage a separation of time and energy scales to treat different degrees of freedom at applicable levels of approximation. The quantum master equation approaches can be further unravelled into stochastic wavefunction methods, defining a trajectory space that can be conditioned on various outcomes. This enables the extension of transition path theory into the realm of quantum dynamics.
(TCPL 201)
Dominique Spehner: Adiabatic transitions of a two-level system coupled to a free Boson reservoir ↓
16:30 - 17:00
We consider a slowly varying time-dependent two-level open quantum system coupled to a free boson reservoir. The coupling to the reservoir is energy conserving and also depends slowly on time, with the same adiabatic parameter $\epsilon$ as for the system Hamiltonian. Assuming that the reservoir is initially decoupled from the system and in the vacuum state, we compute the transition probability from one eigenstate of the two-level system to the other eigenstate at some later time in the limit of small $\epsilon$ and small coupling constant $\lambda$, and analyse the deviations from the adiabatic transition probability obtained in absence of the reservoir.
This is joint work with A. Joye and M. Merkli
(TCPL 201)
Alain Joye: Nonlinear Quantum Adiabatic Approximation ↓
17:00 - 17:30
We consider the adiabatic limit of a nonlinear Schrödinger equation in which the Hamiltonian depends on time and on a finite number of components of the wave function. We show the existence of instantaneous nonlinear eigenvectors and of solutions which remain close to those, up to a rapidly oscillating phase in the adiabatic regime. Consequences on the energy content of the solutions are spelled out.
This is joint work with Clotilde Fermanian
(TCPL 201)
Dinner ↓
- A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
Tuesday, August 20
- Breakfast (Vistas Dining Room)
Janet Anders: Energetic footprints of coherence and irreversibility in the quantum regime ↓
09:00 - 09:30
In this quantum thermodynamics [1] talk, I will discuss work extraction in the quantum regime. We set up an optimal quantum thermodynamic process that removes quantum information in analogy to Landauer's erasure of classical information. The thermodynamic analysis of this optimal process uncovers that work can be extracted from quantum coherences in addition to the work that can be extracted from classical non-equilibrium states [2]. In the second part of the talk I will discuss how the unavoidable presence of irreversibility affects entropic and energetic exchanges during a non-optimal protocol. I will show that the heat footprint of quantum irreversibility differs markedly from the classical case [3]. The analysis is made possible by employing quantum trajectories that allow to construct distributions for classical heat and quantum heat exchanges. We also quantify how the occurrence of quantum irreversibility reduces the amount of work that can be extracted from a state with coherences. Our results show that decoherence leads to both entropic and energetic footprints which play an important role in the optimization of controlled quantum operations at low temperature, including quantum processors.
[1] Quantum thermodynamics, S. Vinjanampathy, J. Anders, Contemporary Physics 57, 545 (2016).
[2] Coherence and measurement in quantum thermodynamics, P. Kammerlander, J. Anders, Scientific Reports 6, 22174 (2016).
[3] Energetic footprints of irreversibility in the quantum regime, H. Mohammady, A. Auffeves, J. Anders, arXiv:1907.06559
(TCPL 201)
Luis A. Correa: Classical emulation of quantum-coherent thermal machines ↓
09:55 - 10:25
The performance enhancements observed in various models of continuous quantum thermal machines have been linked to the buildup of coherences in a preferred basis. But is this connection always evidence of 'quantum-thermodynamic supremacy'? By force of example, we show that this is not the case. In particular, we compare a power-driven three-level continuous quantum refrigerator with a four-level combined cycle, partly driven by power and partly by heat. We focus on the weak driving regime and find the four-level model to be superior since it can operate in parameter regimes in which the three-level model cannot, it may exhibit a larger cooling rate, and, simultaneously, a better coefficient of performance. Furthermore, we find that the improvement in the cooling rate matches the increase in the stationary quantum coherences exactly. Crucially, though, we also show that the thermodynamic variables for both models follow from a classical representation based on graph theory. This implies that we can build incoherent stochastic-thermodynamic models with the same steady-state operation or, equivalently, that both coherent refrigerators can be emulated classically. More generally, we prove this for any $N$-level weakly driven device with a 'cyclic' pattern of transitions. Therefore, even if coherence is present in a specific quantum thermal machine, it is often not essential to replicate the underlying energy conversion process.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Ronnie Kosloff: The quantum Carnot engine and its quantum signature ↓
10:55 - 11:25
Quantum thermodynamics follows the tradition of learning by example. The Carnot cycle would be a primary candidate. The attempts to model the four stroke quantum Carnot cycle failed due to the difficulty of modelling the isothermal branches, where the working medium is driven while in contact with the thermal bath. Motivated by this issue we derived a time dependent Non Adiabatic Master Equation (NAME) [1] with a fixed driving protocol. This master equation is consistent with thermodynamic principles. We then were able to generalise to protocols with small acceleration with respect to the fixed fast protocols. This approach was confirmed experimentally in a driven Ytterbium ion in a Paul trap [2]. Using this construction we are able to find shortcuts to an isothermal transformation [3]. Unlike unitary transformations the map changes entropy. After this journey, we are able to close a Carnot-like cycle in finite time and explore its performance. We are also able to identify the quantum signature of the cycle at very short cycle times [4].
[1] R. Dann, A. Levy, and R. Kosloff, Physical Review A 98, 052129 (2018).
[2] C.-K. Hu, R. Dann, J.-M. Cui, Y.-F. Huang, C.-F. Li, G.-C. Guo, A. C. Santos, and R. Kosloff, arXiv preprint arXiv:1903.00404 (2019).
[3] R. Dann, A. Tobalina, and R. Kosloff, Physical Review Letters 122, 250402 (2019).
[4] R. Dann and R. Kosloff, arXiv preprint arXiv:1906.06946 (2019).
(TCPL 201)
- Lunch (Vistas Dining Room)
Jianshu Cao: Quantum Coherence in Light-harvesting Energy Transfer ↓
13:30 - 14:00
Quantum coherence in light-harvesting complexes is explored in a minimal model of a V-shape three-level system. We systematically solve the model to predict both its transient and steady-state coherences and demonstrate the interplay of exciton trapping at the reaction center and the non-canonical distribution due to the system-bath coupling [1,2]. Further, we analyze the efficiency and energy flux of the three-level model and show the optimal performance in the intermediate range of temperature and coupling strength, consistent with our understanding of quantum heat engines [3]. Finally, if time allows, we will explain how to generalize the above analysis to complex light-harvesting networks using the waiting time distribution function [4].
[1] Can natural sunlight induce coherent exciton dynamics? Olsina, Dijkstra, Wang, Cao, arXiv:1408.5385 (2014/2019)
[2] Non-canonical distribution and non-equilibrium transport beyond weak system-bath coupling regime: A polaron transformation approach. D. Xu and J. Cao, Front. Phys. 11, 1 (2016)
[3] Polaron effects on the performance of light-harvesting systems: A quantum heat engine perspective. D. Xu, C. Wang, Y. Zhao, and J. Cao, New J. Phys. 18, 023003 (2016)
[4] Correlations in single molecule photon statistics: Renewal indicator. J. Cao, J. Phys. Chem. B, 110, 19040 (2006)
(TCPL 201)
Javier Cerrillo: Transfer tensor method: efficient simulation of open quantum systems ↓
14:00 - 14:30
The transfer tensor method [1] is a compact and intuitive tool for the analysis and simulation of general open quantum systems. By extracting the information contained in short samples of the initial dynamics, it has the ability to extend the simulation power of existing exact approaches, like the chain-mapping DMRG-based simulation method TEDOPA [2] or stochastic methods [3]. Crucially, it can treat problems with initial system-environment correlations, such as emission and absorption spectra of multichromophoric molecules [3]. In combination with the hierarchy of equations of motion, transfer tensors that contain information about energetic and particle currents of the environment may be derived, facilitating quantum transport studies in the strong-coupling and non-Markovian regimes.
[1] J. Cerrillo, J. Cao, Phys. Rev. Lett. 112, 110401 (2014).
[2] R. Rosenbach, J. Cerrillo, S.F. Huelga, J. Cao, M.B. Plenio, New J. Phys. 18, 023035 (2016).
[3] M. Buser, J. Cerrillo, G. Schaller, and J. Cao, Phys. Rev. A 96, 062122 (2017).
(TCPL 201)
Philipp Strasberg: Quantum non-Markovianity: A physicist's perspective ↓
14:30 - 15:00
The rigorous detection and quantification of non-Markovianity in open quantum systems has recently gained a lot of attention. Despite generating many insights on the mathematical side, the proposed quantifiers of non-Markovianity are very hard to compute in practice and the relation to more traditional quantities of physical interest is not clear. In the first part of my talk I will present a very simple, yet rigorous way to witness non-Markovianity, which is based on linear response theory. In the second part of my talk I will discuss when (and when not) temporal negativities of the entropy production rate imply non-Markovianity. This creates an important link between the mathematical concept of non-Markovianity and a physical observable, which quantifies the overall irreversibility of the open system dynamics.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Erik Gauger: Microscopically-derived modelling of quantum networks subject to multiple environments ↓
15:30 - 16:00
The interplay between coherent and dissipative processes that governs both bio-inspired as well as engineered quantum networks supports a rich tapestry of non-equilibrium phenomena. These hold promise for enabling quantum-enhanced light harvesting, molecular electronics, generation of thermopower, and nanoscale sensing. A theoretical challenge consists of developing well-founded and tractable models for exploring the dynamics of systems that are simultaneously strongly coupled to more than one environment, for instance through interactions with surrounding electromagnetic and vibrational modes — typical for many networks comprised of condensed matter nanostructures. In this talk, I will cover our recent work in this area, with a particular focus on master equation approaches.
(TCPL 201)
16:00–16:30 Ahsan Nazir: Environmental non-additivity and strong-coupling in non-equilibrium quantum systems ↓
We consider quantum systems coupled simultaneously to multiple environments. Examples include solid-state photon emitters, with coupling both to vibrations and the electromagnetic field, and molecular nanojunctions, with coupling both to vibrations and electronic leads. We show that enforcing additivity of such combined influences results in non-equilibrium dynamics that does not respect the Franck-Condon principle in the former case, and can lead to unphysical electronic current under equilibrium conditions in the latter. We overcome these shortcomings by employing a collective coordinate representation of the vibrational environment, which permits the derivation of a non-additive master equation. When applied to a two-level emitter, our treatment predicts decreasing photon emission rates with increasing vibrational coupling, consistent with Franck-Condon physics. Applied to a molecular nanojunction, we employ counting statistics techniques to track electron flow between the system and the electronic leads, revealing both strong-coupling and non-additive effects in the electron current, noise and Fano factor.
(TCPL 201)
16:30–17:00 Yuta Fujihashi: Intramolecular vibrations complement the robustness of primary charge separation in the photosystem II reaction center ↓
The energy conversion of oxygenic photosynthesis is triggered by primary charge separation in proteins at the photosystem II reaction center. In this talk, I will discuss the impacts of the protein environment and intramolecular vibrations on primary charge separation at the photosystem II reaction center [1]. We report that individual vibrational modes play a minor role in promoting charge separation, contrary to the discussion in recent publications. Nevertheless, these small contributions accumulate to considerably influence the charge separation rate, resulting in subpicosecond charge separation almost independent of the driving force and temperature. We suggest that the intramolecular vibrations complement the robustness of the charge separation in the photosystem II reaction center against the inherently large static disorder of the involved electronic energies. Finally, if time allows, I will talk about our recent work on electronic excitation dynamics triggered by the interaction with quantum entangled light [2]. $$ $$ [1] Y. Fujihashi, M. Higashi and A. Ishizaki, J. Phys. Chem. Lett. 9, 4921 (2018). $$ $$ [2] Y. Fujihashi, R. Shimizu and A. Ishizaki, arXiv:1904.11669 (2019).
(TCPL 201)
17:00–17:30 Gabriel Hanna: Mixed quantum-classical simulations of nonequilibrium heat transport in molecular junctions ↓
The study of nonequilibrium heat transport in molecular junctions (MJs) has gathered much attention in recent years due to its crucial role in the field of molecular electronics. To gain insight into the factors determining the heat currents in MJs, reduced models of MJs have been studied using both approximate and exact quantum dynamical methods. One such model, known as the nonequilibrium spin-boson (NESB) model, consists of a two-level system in contact with two harmonic oscillator baths at different temperatures. Recently, we developed a mixed quantum-classical framework for studying heat transport in MJs, which could enable the simulation of heat transport in more realistic models of MJs with many degrees of freedom [1]. In this talk, I will give an overview of this framework and discuss the ability of a novel mixed quantum-classical dynamics method, known as Deterministic Evolution of Coordinates with Initial Decoupled Equations (DECIDE) [2], to calculate the steady-state heat current in the NESB model in a variety of parameter regimes [3]. $$ $$ [1] Liu, J., Hsieh, C.-Y., Segal, D., Hanna, G., J. Chem. Phys. 149, 224104 (2018). $$ $$ [2] Liu, J., Hanna, G., J. Phys. Chem. Lett. 9, 3928 (2018). $$ $$ [3] Carpio-Martinez, P., Hanna, G., J. Chem. Phys., in press.
(TCPL 201)
- Dinner (Vistas Dining Room)
Wednesday, August 21
- Breakfast (Vistas Dining Room)
09:00–09:30 Takaaki Aoki: Quantum time-dependent temperature ↓
We consider one harmonic oscillator attached to a bath of many harmonic oscillators. We define a time-dependent temperature of the oscillator and show that the temperature relaxes to that of the bath.
(TCPL 201)
09:30–10:00 Naomichi Hatano: Exceptional points of the Lindblad operator of a two-level system ↓
The Lindblad equation for a two-level system under an electric field is analyzed by mapping it to a linear equation with a non-Hermitian matrix. Exceptional points of the matrix are found to be extensive; the second-order ones are located on lines in a two-dimensional parameter space, while the third-order one is at a point.
(TCPL 201)
- Coffee Break (TCPL Foyer)
10:30–11:00 Abraham Nitzan: Quantum thermodynamics of strongly coupled driven resonant level models ↓
The driven resonance level model (a driven molecular level coupled to one or more fermionic baths) has recently been used to study thermodynamic aspects of energy conversion in simple mechanically driven, strongly coupled quantum systems. Our original treatment of this problem was based on the non-equilibrium Green function (NEGF) approach [1]. In this talk I will discuss this model using other methodologies that reveal different physical aspects of this problem. First, I will describe an approach [2] based on an expansion of the full system-bath density matrix as a series in powers of the modulation rate, from which the functional form of work, heat, and entropy production rates can be obtained. This approach allows for the inclusion of electron-electron interaction in an approximate way. Second, I repeat the derivation by expressing the density matrix in terms of the asymptotic eigenstates of the system by employing Møller transition operators [3]. The resulting expression, which coincides with results from the steady-state theories of MacLennan–Zubarev and Hershfield, can reproduce the standard NEGF results for the dot population and the current and, when extended to include driving of the dot energy level and/or the dot-leads coupling, yields the non-adiabatic (second order in the driving speed) corrections to the power, energy and heat production obtained from the NEGF formalism. Using this approach we can easily go beyond the wide band approximation and consider models where the dot is coupled to many leads held at different temperatures and under different chemical potentials. Finally, we employ a numerical solution based on the driven-Liouville-von Neumann approach, which can be used to investigate systems subjected to high driving speeds. $$ $$ [1] A. Bruch, M. Thomas, S. V. Kusminsky, F. von Oppen and A. Nitzan, Phys. Rev. B 93, 115318 (2016); M. A. Ochoa, A. Bruch and A. Nitzan, Phys. Rev. B 94, 035420 (2016) $$ $$ [2] W. Dou, M. A. Ochoa, A. Nitzan and J. E. Subotnik, Phys. Rev. B 98, 134306 (2018) $$ $$ [3] A. Semenov and A. Nitzan, to be published $$ $$ [4] I. Oz, O. Hod and A. Nitzan, Molecular Physics, in press, and to be published.
(TCPL 201)
11:00–11:30 Roman Krems: Bayesian optimization for inverse problems in quantum dynamics ↓
Machine learning models are usually trained by a large number of observations (big data) to make predictions through the evaluation of complex mathematical objects. However, in many applications in science, particularly in quantum dynamics, obtaining observables is expensive, so information is limited. In the present work, we consider the limit of ‘small data’. Usually, ‘big data’ are for machines and ‘small data’ are for humans, i.e. humans can infer physical laws given a few isolated observations, while machines require a huge array of information for accurate predictions. Here, we explore the possibility of machine learning that could build physical models based on very restricted information. In this talk, I will show how to build such models using Bayesian machine learning and how to apply such models to inverse problems aiming to infer Hamiltonians from dynamical observables. I will illustrate the methods by two applications: (1) the inverse problem in quantum reaction dynamics, aiming to construct accurate potential energy surfaces based on reaction dynamics observables; (2) the model selection problem, aiming to derive the particular lattice model Hamiltonian that gives rise to specific quantum transport properties for particles in a phonon field.
(TCPL 201)
- Lunch (Vistas Dining Room)
- Free Afternoon (Banff National Park)
- Dinner (Vistas Dining Room)
Thursday, August 22
- Breakfast (Vistas Dining Room)
09:00–09:30 Giuseppe Luca Celardo: Macroscopic coherence as an emergent property in molecular nanotubes ↓
Nanotubular molecular self-aggregates are characterized by a high degree of symmetry and they are fundamental systems for light-harvesting and energy transport. While coherent effects are thought to be at the basis of their high efficiency, the relationship between structure, coherence and functionality is still an open problem. We analyze natural nanotubes present in Green Sulfur Bacteria. We show that they have the ability to support macroscopic coherent states, i.e. delocalized excitonic states coherently spread over many molecules, even at room temperature. Specifically, assuming a canonical thermal state we find, in natural structures, a large thermal coherence length, of the order of 1000 molecules. By comparing natural structures with other mathematical models, we show that this macroscopic coherence cannot be explained either by the magnitude of the nearest-neighbour coupling between the molecules, which would induce a thermal coherence length of the order of 10 molecules, or by the presence of long-range interactions between the molecules. Indeed we prove that the existence of macroscopic coherent states is an emergent property of such structures due to the interplay between geometry and cooperativity (superradiance and super-transfer). In order to prove that, we give evidence that the lowest part of the spectrum of natural systems is determined by a cooperatively enhanced coupling (super-transfer) between the eigenstates of modular sub-units of the whole structure. Due to this enhanced coupling strength, the density of states is lowered close to the ground state, thus boosting the thermal coherence length. As a striking consequence of the lower density of states, an energy gap between the excitonic ground state and the first excited state emerges. Such an energy gap increases with the length of the nanotube (instead of decreasing as one would expect), up to a critical system size which is close to the length of the natural complexes considered. $$ $$ VIDEO-ABSTRACT: https://vimeo.com/313618747 $$ $$ REFERENCES: $$ $$ 1) Macroscopic coherence as an emergent property in molecular nanotubes, M. Gull; A. Valzelli; F. Mattiotti; M. Angeli; F. Borgonovi and G. L. Celardo; New J. Phys. 21 013019 (2019). $$ $$ 2) On the existence of superradiant excitonic states in microtubules, G. L. Celardo; M. Angeli; T. J. A. Craddock and P. Kurian, New J. Phys. 21 023005 (2019).
(TCPL 201)
09:30–10:00 Géraldine Haack: Autonomous entanglement engines: open questions raised by the use of the reset master equation ↓
Entanglement is a key phenomenon distinguishing quantum from classical physics, and is a paradigmatic resource enabling many applications of quantum information science. Generating and maintaining entanglement is therefore a central challenge. In the past years, with my colleagues in Geneva and Vienna, we have proposed a series of autonomous thermal machines that generate, in the steady-state regime, quantum correlations between two or more quantum systems (which can be of arbitrary dimension) [1-3]. In contrast to other proposals for nanoscale thermal machines, these ones have a genuine quantum output and do not have a classical counterpart. They are now often referred to as entanglement engines. In this talk, I will present the general functioning of these entanglement engines. In particular, their dynamics and steady-state regime were investigated using a reset evolution equation, which describes in a probabilistic and phenomenological way the interaction of a quantum system with an environment. The use and validity of this reset equation with respect to the laws of thermodynamics open new questions that will be discussed, in particular with respect to the local detailed balance condition and the entropy balance [4]. $$ $$ [1] J. Bohr Brask, G. Haack, N. Brunner, M. Huber, NJP 17 (2015) $$ $$ [2] A. Tavakoli, G. Haack, M. Huber, N. Brunner, J. Bohr Brask, Quantum 2 (2018) $$ $$ [3] A. Tavakoli, G. Haack, N. Brunner, J. Bohr Brask, arXiv:1906.00022 (2019) $$ $$ [4] G. Haack et al., ongoing
(TCPL 201)
- Coffee Break (TCPL Foyer)
10:30–11:00 Andrew Kent Harter: Floquet edge state protection in non-Hermitian topological systems ↓
In previous works [1], we have considered a two-level system with time-periodic (Floquet), PT-symmetric [2] gain and loss. Although this system is generically non-Hermitian, at any instant the system may be in a broken or unbroken PT-symmetric phase, and its long-term behavior is governed by an effective Floquet Hamiltonian. Interestingly, the PT phase diagram for this system features a re-entrant PT-unbroken phase: as the driving frequency is swept from zero (static) to near the natural resonance frequency, the system enters the PT-broken phase; however, if the driving frequency is sufficiently increased, the system re-enters the PT-unbroken phase. By extension, one can apply a PT-symmetric Floquet driving to the one-dimensional Su-Schrieffer-Heeger (SSH) model, which exhibits a topologically nontrivial phase. In the static case [3], the edge states are pushed into the far ends of the energy spectrum in the imaginary plane [4]; however, in the time-periodic case, the Floquet energy spectrum provides a stabilizing, periodically-repeating structure in the frequency domain. At high enough driving frequencies, the non-Hermitian dynamics can stabilize, leading to a completely real spectrum [5]. Specifically, in our study, we used a simplified, pulsed time dependence allowing us to analyze the viability of the topologically relevant states for a broad range of driving frequencies which can be below the critical high-frequency threshold. $$ $$ [1] Li, J., Harter, A., Liu, J., de Melo, L., Joglekar, Y. and Luo, L. Nat. Comms. 10, 855 (2019) $$ $$ [2] Bender, C. and Boettcher, S. Phys. Rev. Lett. 80, 5243 (1998) $$ $$ [3] Rudner, M. S. and Levitov, L. S. Phys. Rev. Lett. 102, 065703 (2009) $$ $$ [4] Hu, Y. C. and Hughes, T. L. Phys. Rev. B 84, 153101 (2011) $$ $$ [5] C. Yuce, Eur. Phys. J. D 69, 184 (2015)
(TCPL 201)
- Jean-Bernard Bru: Macroscopic Long-Range Dynamics of Fermions and Quantum Spins on the Lattice (TCPL 201)
- Lunch (Vistas Dining Room)
- Michael Thorwart: Is the dynamics of biomolecular excitons quantum coherent? (TCPL 201)
14:00–14:30 David Coker: First Principles Model Hamiltonian Ensembles for Light Harvesting: Modeling Dissipative Down Conversion - Signatures of Vibronic Energy Transfer in Nonlinear 2DES Signals - CSDM+PLDM ↓
Accurate model Hamiltonians for excitation energy transfer, including vibronic transitions, in a variety of down-conversion pigment-protein complexes are parameterized using Molecular Dynamics and first principles calculations. Semi-classical methods based on a new hybrid partial linearized and coherent state density matrix dynamics approach are outlined and used to compute non-linear 2D electronic spectra for these models, elucidating the signatures of excitonic and vibronic energy transfer in these systems. Results are analyzed in terms of simple models for dissipative electronic and vibronic dynamics.
(TCPL 201)
14:30–15:00 Aaron Kelly: Approximate quantum dynamics simulation methods for charge and energy transport ↓
A selection of recently developed approaches for simulating nonequilibrium quantum dynamics will be discussed. The unifying feature that these methods share is an ensemble of trajectories that is employed in order to construct observables and transport properties. We will explore the performance of selected techniques of this type in a variety of nonadiabatic charge and energy transfer processes, including cavity-bound spontaneous emission, charge separation and polaron formation at donor-acceptor interfaces, and heat transport through molecular junctions.
(TCPL 201)
- Coffee Break (TCPL Foyer)
15:30–16:00 Paul Brumer: The Steady State Induced by Natural Incoherent Light: Rates, Dynamics and Coherences ↓
Processes induced by natural light (e.g. photosynthesis, vision) display properties distinct from those often studied in the laboratory using pulsed laser irradiation. The natural processes display complexities associated with systems operating in steady state and coupled to both an irradiative bath as well as a thermal protein environment. $$ $$ We have examined assorted problems associated with such systems, such as the presence or absence of stationary coherences, tests for the range of validity of secular vs nonsecular treatments, the generation of coherences under naturally slow turn-on of the radiation, rates of radiationless processes under solar radiation, etc. Several of these will be described in this talk, with the remainder left for discussion during the meeting.
(TCPL 201)
16:00–16:30 Qiang Shi: Charge and energy transfer dynamics in condensed phase using the non-perturbative hierarchical equations of motion approach ↓
Charge transfer dynamics play an important role in organic semiconductor materials and devices. I will present some recent progress in theoretical studies of charge transfer dynamics in different systems: (1) Real-time charge separation dynamics at the donor/acceptor interface in organic photovoltaic (OPV) devices. Charge separation dynamics with multiple timescales are identified, including an ultrafast component within hundreds of femtoseconds, an intermediate component related to the relaxation of the hot charge transfer (CT) state, and a slow component on the timescale of tens of picoseconds from the thermally equilibrated CT state. Effects of hot exciton dissociation, as well as its dependence on the energy offset between the Frenkel exciton and the CT state, are also analyzed. (2) Non-perturbative memory kernels and generating functions to study charge transfer dynamics. By using the hierarchical equations of motion (HEOM) and extended HEOM methods, we present a new approach to calculate the exact time-nonlocal and time-local memory kernels and their high-order perturbative expansions. The new approach is applied to the spin-boson model with different sets of parameters, a model of excitation energy transfer in the Fenna-Matthews-Olson complex, and a model of electron transport in molecular junctions.
(TCPL 201)
16:30–17:00 Dvira Segal: Full counting statistics of charge and energy transport: Methods and applications ↓
Methods developed to study the dynamics of open quantum systems can be generalized to deliver the probability distribution function of integrated currents. Considering steady-state transport of particles and energy, I will describe the benefits of the full counting statistics (FCS) approach: (i) it may in fact be easier to evaluate the cumulant generating function than to study the averaged current and its noise directly and individually; (ii) by verifying the steady-state fluctuation symmetry, we can validate the thermodynamic consistency of our approximations; (iii) we automatically generate high-order cumulants beyond the averaged current. I will describe our studies of FCS in charge and energy transport problems, and portray several applications, including the derivation of the delta-T electronic shot noise, analysis of anomalous electronic noise, and studies of fluctuation–entropy production trade-off relations.
(TCPL 201)
- Dinner (Vistas Dining Room)
Friday, August 23
- Breakfast (Vistas Dining Room)
09:00–09:30 Naomichi Hatano: A non-Markovian Analysis of Quantum Otto Engine ↓
We consider a quantum version of the Otto engine, in which we use a non-Markovian time-evolution equation for the contact of the system with heat baths. We show that the engine can have a high heat efficiency when the contact with the hot bath is short, by taking advantage of an energy backflow that temporarily breaks the second law of thermodynamics.
(TCPL 201)
09:30–10:00 Jianshu Cao: Stochastic Formalism and Simulation of Quantum Dissipative Dynamics ↓
Our starting point is a stochastic decomposition scheme to study dissipative dynamics of an open system. In this scheme, any two-body interaction between constituents of the quantum system can be decoupled with a common white noise that acts on the two individual subsystems. $$ $$ (I) Using the decomposition scheme, we obtain a stochastic differential equation, which reduces to generalized hierarchical equations of motion (GHEOM) and thus represents a unified treatment of boson, fermion, and spin baths [1]. Applications of GHEOM to spin baths confirm the scaling relation that maps spin baths to boson baths and characterizes anharmonic effects often associated with low-frequency or strong-coupling spin modes [2]. $$ $$ (II) The decomposition scheme also leads to the stochastic path integral approach, which directly simulates quantum dissipation with complex noise. The approach is applied successfully to obtain the equilibrium density matrix, multichromophoric spectra, and the Förster energy transfer rate [3]. For real-time propagation, we demonstrate the advantages of combining stochastic path integrals, deterministic quantum master equations [4], and possibly the transfer tensor method [5]. $$ $$ [1] A unified stochastic formalism of quantum dissipation: I. Generalized hierarchical equation. Hsieh and Cao, JCP 148, 014103 (2018) $$ $$ [2] A unified stochastic formalism of quantum dissipation: II. Beyond linear response of spin baths. Hsieh and Cao, JCP 148, 014104 (2018) $$ $$ [3] Equilibrium-reduced density matrix formulation: Influence of noise, disorder, and temperature on localization in excitonic systems. J. Moix, Y. Zhao, and J. Cao, Phys. Rev. B 85, 115412 (2012) $$ $$ [4] A hybrid stochastic hierarchy equations of motion approach to treat the low-temperature dynamics of non-Markovian open quantum systems. J. M. Moix and J. Cao, J. Chem. Phys. 139, 134106 (2013) $$ $$ [5] Non-Markovian dynamical maps: Numerical processing of open quantum trajectories. J. Cerrillo and J. Cao, Phys. Rev. Lett. 112, 110401 (2014)
(TCPL 201)
- Coffee Break (TCPL Foyer)
- Discussion (TCPL 201)
Checkout by Noon ↓
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
- Lunch from 11:30 to 13:30 (Vistas Dining Room)
Challenges in Achieving Optimal Thrust-to-Weight Ratio in context of thrust to weight ratio plane example
31 Aug 2024
Title: Challenges in Achieving Optimal Thrust-to-Weight Ratio: A Case Study of a Typical Commercial Aircraft
The thrust-to-weight ratio (TWR) is a critical performance metric for aircraft, as it directly affects their acceleration, climb rate, and overall maneuverability. However, achieving optimal TWR
poses significant challenges, particularly in the context of commercial airliners. This article presents a case study on a typical commercial aircraft, highlighting the complexities involved in
optimizing its TWR.
The thrust-to-weight ratio (TWR) is defined as the dimensionless ratio of an aircraft's thrust output to its weight:
TWR = Thrust / Weight
where Thrust is measured in newtons (N) and Weight is also a force in newtons, obtained from the aircraft's mass m (in kilograms) as W = mg, with g ≈ 9.81 m/s².
For a commercial airliner like the Boeing 737-800, the TWR is approximately 0.5 [1]. This means the engines can produce a thrust equal to about half the aircraft's weight.
Challenges in Achieving Optimal TWR:
1. Thrust Generation: The primary challenge lies in generating sufficient thrust to counteract the weight of the aircraft. Commercial airliners rely on high-bypass turbofan engines, which produce a
significant amount of thrust. However, as the aircraft’s weight increases due to factors like passenger capacity and cargo load, the TWR decreases.
2. Weight Reduction: To maintain an optimal TWR, reducing the aircraft’s weight is crucial. This can be achieved through lightweight materials, reduced fuel loads, or optimized structural designs.
However, these measures often come at a higher cost, which may not be feasible for commercial airliners.
3. Aerodynamic Efficiency: The shape and size of the aircraft also impact its TWR. Aerodynamic inefficiencies, such as drag and lift losses, can reduce the overall thrust output. Optimizing the
aircraft’s aerodynamics through design improvements or surface treatments is essential to maintain an optimal TWR.
Case Study:
Consider a Boeing 737-800 with a maximum takeoff weight (MTOW) of approximately 82,000 kg [2] and a quoted thrust output of around 124,000 N [3]. Converting the mass to a weight force (W = mg ≈ 82,000 kg × 9.81 m/s² ≈ 804,000 N) gives
TWR = 124,000 N / 804,000 N ≈ 0.15
Note that dividing newtons by kilograms directly (124,000 / 82,000 ≈ 1.51) mixes units and does not yield the dimensionless TWR. At the quoted thrust level, reaching the cited TWR of roughly 0.5 would require either substantially more thrust or a much lighter aircraft.
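Because TWR is dimensionless, the weight must be expressed in newtons before dividing. A minimal sketch of the calculation, using the article's quoted figures (taken as given from the text, not independently verified):

```python
g = 9.81            # standard gravity, m/s^2
thrust_N = 124_000  # quoted total thrust [3], newtons
mass_kg = 82_000    # quoted MTOW [2], kilograms

weight_N = mass_kg * g       # convert mass to a weight force in newtons
twr = thrust_N / weight_N    # dimensionless thrust-to-weight ratio
print(round(twr, 3))         # 0.154
```

Dividing newtons by kilograms instead (124,000 / 82,000 ≈ 1.51) produces a number with units of N/kg, which is not a thrust-to-weight ratio.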
Achieving optimal TWR in commercial airliners is a complex challenge that requires careful consideration of multiple factors. By understanding the interplay between thrust generation, weight
reduction, and aerodynamic efficiency, aircraft designers can optimize their designs to improve performance and reduce operating costs. The case study presented highlights the importance of balancing
these factors to achieve an optimal TWR.
[1] Boeing. (2020). 737-800 Specifications. Retrieved from https://www.boeing.com/commercial/737/specs.html
[2] Federal Aviation Administration. (2020). Type Certificate Data Sheet for the Boeing 737-800. Retrieved from https://rgl.faa.gov/regulatory_and_guidance_library/rgl_2019_02_14_0001.pdf
[3] Pratt & Whitney. (2020). PW6156 Engine Specifications. Retrieved from https://www.prattwhitney.com/products/pw6156/
TWR = Thrust / Weight
• Thrust is measured in newtons (N)
• Weight is measured in newtons (N); for an aircraft of mass m in kilograms, W = mg
Note: The formulae are presented in BODMAS (Brackets, Orders, Division, Multiplication, Addition, Subtraction) format.
Related articles for ‘thrust to weight ratio plane example’ :
Calculators for ‘thrust to weight ratio plane example’
The history of trigonometric functions
From right triangles to Taylor series
This is not a trick: the cosine of the imaginary number i is (e⁻¹ + e)/2.
How on Earth does this follow from the definition of the cosine? No matter how hard you try, you cannot construct a right triangle with an angle i. What kind of sorcery is this?
Behind the scenes, sine and cosine are much more than a simple ratio of sides. They are the building blocks of science and engineering, and we can extend them from angles of right triangles to
arbitrary complex numbers.
In this post, we’ll undertake this journey.
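For readers who want to check the opening claim right away: once cosine is written in its exponential form, the value of cos i falls out in one line (a standard identity, stated here ahead of the post's own derivation):

```latex
\cos z = \frac{e^{iz} + e^{-iz}}{2}
\qquad\Longrightarrow\qquad
\cos i = \frac{e^{i \cdot i} + e^{-i \cdot i}}{2}
       = \frac{e^{-1} + e}{2} = \cosh 1 \approx 1.543.
```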
If you enjoy my work and get value from it, support me with a paid subscription! Mathematics is neither dull nor dry; it's beautiful, mesmerizing, and useful. Your support helps me show this to everyone.
The history of trigonometric functions goes back almost two thousand years. In their original form, as we first encounter them in school, sine and cosine are defined in terms of right triangles.
For an acute angle α, its sine is defined by the ratio of t…
Chapter 8: Infinite Sequences and Series
Section 8.2: Series
Notations such as $\sum_{n=n_0}^{\infty} a_n$, $\sum_{n}^{\infty} a_n$, or even $\sum^{\infty} a_n$ and $\Sigma\, a_n$ are used to denote an infinite series, which is simply the sum of all the terms in an infinite sequence. (The simpler notations often appear in printed texts to save costs.)
As Zeno's paradox about the tortoise and the hare (actually, it was Achilles) shows, an infinite sequence of
operations cannot logically be completed in a finite time, yet in reality it often is. So, by what rules is a
meaning attached to a symbol that indicates the impossible task of executing an infinite number of additions?
The key to giving meaning to the symbol $\sum^{\infty} a_n$ is the idea of the partial sum, as per Definition 8.2.1, below.
Definition 8.2.1: Partial Sums of an Infinite Series
The $N$th partial sum of the infinite series $\sum^{\infty} a_n$ is $S_N = \sum^{N} a_n$, that is, the sum of the terms up through and including $a_N$. (Some texts will define $S_N$ as the sum of the first $N$ terms.)
The meaning given to the symbol $\sum^{\infty} a_n$ is then $\lim_{N \to \infty} S_N$, that is, the limit of the sequence of partial sums, $\{S_N\}$.
If this limit exists, then that number is the "sum" of the infinite series, and the series is said to
converge to that number. If the limit of the sequence of partial sums does not exist, then the series is said
to diverge. (An infinite series can diverge if the limit of the sequence of partial sums is infinite, or if
it fails to exist because of oscillation.)
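As a concrete illustration (not part of the Study Guide itself), the partial sums of the geometric series $\sum_{n=1}^{\infty} (1/2)^n$ can be computed directly and watched converging to 1:

```python
def partial_sum(a, N):
    """S_N: the sum of a(1) through a(N)."""
    return sum(a(n) for n in range(1, N + 1))

geometric = lambda n: 0.5 ** n   # the terms a_n = (1/2)^n

for N in (1, 5, 10, 20):
    print(N, partial_sum(geometric, N))
# Here S_N = 1 - (1/2)^N, so the partial sums approach the limit 1.
```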
Definitions 8.2.2 and 8.2.3 make these notions precise.
Definition 8.2.2: Convergence of an Infinite Series
• The infinite series $\sum^{\infty} a_n$ converges to $L$ if the limit of the sequence of partial sums $S_N = \sum^{N} a_n$ converges to $L$.
Definition 8.2.3: Divergence of an Infinite Series
• The infinite series $\sum^{\infty} a_n$ diverges if the limit of the sequence of partial sums $S_N = \sum^{N} a_n$ diverges, either because the limit is infinite, or because it oscillates.
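For instance (an illustrative sketch, not from the Guide), the series $\sum_{n=1}^{\infty} (-1)^n$ diverges by oscillation: its partial sums bounce between $-1$ and $0$ forever.

```python
# Partial sums of sum_{n=1}^inf (-1)^n never settle on a single value.
S, sums = 0, []
for n in range(1, 9):
    S += (-1) ** n
    sums.append(S)
print(sums)  # [-1, 0, -1, 0, -1, 0, -1, 0]
```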
The astute reader will realize that Definitions 8.2.2 and 8.2.3 are reminiscent of the contents of Table
4.5.1 that detail similar calculations for improper integrals. There, the improper integral is defined as the
limit of a proper integral, with the endpoint of integration approaching either infinity or a singularity on
the real line.
Definitions 8.2.4 and 8.2.5 refine the definition of convergence by distinguishing between absolute and
conditional convergence.
Definition 8.2.4: Absolute Convergence
• If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_{n}$ is said to be absolutely convergent.
• If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges absolutely.
Definition 8.2.5: Conditional Convergence
• If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms, but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to be conditional.
Consequently, there is a certain ambiguity in declaring that an infinite series converges. Is it a series of
nonnegative terms, in which case its convergence is necessarily absolute? Or if it contains an infinite
number of negative terms, does it converge absolutely, or does it converge just conditionally? Hence, when
discussing the convergence of an infinite series, this Study Guide will always modify the word "converge"
with either "conditionally" or "absolutely," unless the context makes it perfectly clear that no such
modification is needed to eliminate ambiguity.
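The distinction is easy to see numerically. The sketch below (Python; the guide's own examples use Maple) compares the alternating harmonic series, which settles toward $\mathrm{ln}\left(2\right)≈0.6931$, with the series of its absolute values, the harmonic series, whose partial sums just keep growing:

```python
import math

def alt_harmonic(N):
    """Partial sum of sum_{n=1}^N (-1)**(n+1) / n."""
    return sum((-1)**(n + 1) / n for n in range(1, N + 1))

def harmonic(N):
    """Partial sum of sum_{n=1}^N 1/n (the absolute values of the terms)."""
    return sum(1/n for n in range(1, N + 1))

print(alt_harmonic(10**5), math.log(2))   # close together
print(harmonic(10**2), harmonic(10**4))   # grows without bound
```

So $\mathrm{Σ}{\left(-1\right)}^{n+1}/n$ converges while $\mathrm{Σ}\left|{\left(-1\right)}^{n+1}/n\right|$ diverges: the convergence is conditional, not absolute.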
Table 8.2.1 lists five theorems that summarize additional key points about the behavior of infinite series.
Theorem 8.2.1
• Intuitive statement: An infinite series that converges absolutely must necessarily converge.
• Formal statement: If $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then $\mathrm{Σ}{a}_{n}$ converges.
Theorem 8.2.2
• Intuitive statement: The general term of a convergent series must necessarily tend to zero. The converse is false.
• Formal statement: If $\mathrm{Σ}{a}_{n}$ converges, then ${a}_{n}→0$, but not conversely.
Theorem 8.2.3
• Intuitive statement: Addition, subtraction, and scalar multiplication of convergent series are well-behaved.
• Formal statement: If $\mathrm{Σ}{a}_{n}=A$ and $\mathrm{Σ}{b}_{n}=B$, then
1. $\mathrm{Σ}{a}_{n}±\mathrm{Σ}{b}_{n}=\mathrm{Σ}\left({a}_{n}±{b}_{n}\right)=A±B$;
2. $c\mathrm{Σ}{a}_{n}=\mathrm{Σ}c{a}_{n}=cA$, for any real number $c$.
Theorem 8.2.4
• Intuitive statement: The Cauchy product of two convergent series may or may not converge. If the product does converge, it converges to the "right" value. If one of the two series converges absolutely, then the product converges to the "right" value. If both factors converge absolutely, then the product converges absolutely.
• Formal statement: If $\underset{n=0}{\overset{\infty }{∑}}{a}_{n}=A$, $\underset{n=0}{\overset{\infty }{∑}}{b}_{n}=B$, and ${c}_{n}=\underset{k=0}{\overset{n}{∑}}{a}_{k}{b}_{n-k}$, then
1. $\mathrm{Σ}{c}_{n}=A\cdot B$ if one of $\mathrm{Σ}{a}_{n}$ or $\mathrm{Σ}{b}_{n}$ converges absolutely;
2. $\mathrm{Σ}{c}_{n}$ converges absolutely to $A\cdot B$ if both $\mathrm{Σ}{a}_{n}$ and $\mathrm{Σ}{b}_{n}$ converge absolutely.
Theorem 8.2.5
• Any rearrangement or regrouping of the terms of an absolutely convergent series does not change the value of the sum.
• The terms of a conditionally convergent series can be rearranged so the new sum is any desired real number.
Table 8.2.1 Relevant theorems for infinite series
• Theorem 8.2.1 is intuitively appealing because it simply says that if a sum of positive numbers converges,
then making some of those numbers negative will at worst make the sum smaller.
• Theorem 8.2.2 is again somewhat intuitive in that an infinite sum that converges cannot have larger and
larger terms in its "tail end." These tail-end terms have to be getting smaller and smaller if the sum is
to be finite. Now, the falsity of the converse, that if the $n$th term goes to zero then the series
converges, is not intuitive. It takes a counterexample to show this. A standard counterexample is the
so-called harmonic series, $\mathrm{Σ}\left(1/n\right)$, which is shown to diverge by one of the
devices developed in Chapter 8.3.
• Theorem 8.2.3 is a welcome relief because it says that simple arithmetic with convergent series "works."
Thus, addition, subtraction, and scalar multiplication of even conditionally convergent series produce new
series that are convergent to the "right" values.
• Theorem 8.2.4 is a significant result because it both defines a method for forming a product between two
infinite series, and because it also clarifies the conditions under which such a product of series results
in a series that converges to the product of the values of the factors. The Cauchy product itself will be
demonstrated at length in the Examples.
• Theorem 8.2.5 makes two statements, one about absolutely convergent series, and one about conditionally
convergent series. The statement about absolutely convergent series shouldn't be surprising: such series
are so well behaved that almost everything good about them is true. What's remarkable is the statement
about conditionally convergent series, which can be made to converge to any real number by a suitable
rearrangement of terms.
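Theorem 8.2.5's claim about conditionally convergent series can even be watched in action. The sketch below (Python, and a simple greedy scheme chosen here for illustration, not a method from the guide itself) reorders the terms of the alternating harmonic series, whose natural sum is $\mathrm{ln}\left(2\right)≈0.693$, so that the partial sums approach the arbitrarily chosen target $1$:

```python
# Greedy rearrangement of the conditionally convergent alternating harmonic
# series so that its partial sums approach a chosen target (Theorem 8.2.5).
target = 1.0

pos = (1 / n for n in range(1, 10**7, 2))    # positive terms: 1, 1/3, 1/5, ...
neg = (-1 / n for n in range(2, 10**7, 2))   # negative terms: -1/2, -1/4, ...

total = 0.0
for _ in range(5000):
    if total <= target:
        total += next(pos)   # below target: spend the next positive term
    else:
        total += next(neg)   # above target: spend the next negative term

print(total)  # close to 1.0, using the same terms in a different order
```

Each crossing of the target overshoots by at most the size of the term just used, and those terms shrink to zero, so the rearranged partial sums close in on $1$ even though the original ordering sums to $\mathrm{ln}\left(2\right)$.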
Some Types of Series
Table 8.2.2 lists some examples and some types of series and their properties.
Series Form Properties
Geometric $\underset{n=0}{\overset{\infty }{∑}}a{r}^{n}=\frac{a}{1-r}$ Absolute convergence for $\left|r\right|<1$; ${S}_{N}=a\frac{1-{r}^{N+1}}{1-r}$
$p$-Series $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{{n}^{p}}$ Absolute convergence for $p>1$; diverges for $p≤1$
Alternating $\underset{n=0}{\overset{\infty }{∑}}{\left(-1\right)}^{n}{a}_{n}$, with ${a}_{n}>0$ Converges if $\left\{{a}_{n}\right\}$ is decreasing with limit zero (Leibniz); if $S$ is the sum of the (convergent) series and ${S}_{N}$ is the partial sum up through ${a}_{N}$, then $\left|S-{S}_{N}\right|≤{a}_{N+1}$
Harmonic $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{n}$ Diverges, even though ${a}_{n}=\frac{1}{n}→0$
Alternating Harmonic $\underset{n=1}{\overset{\infty }{∑}}\frac{{\left(-1\right)}^{n+1}}{n}$ Converges (conditionally) to $\mathrm{ln}\left(2\right)$
Telescoping $\underset{n=0}{\overset{\infty }{∑}}\left({a}_{n}-{a}_{n+1}\right)$ ${S}_{N}={a}_{0}-{a}_{N+1}$; converges if ${a}_{n}→0$
Table 8.2.2 Examples and types of series
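The Leibniz error bound in Table 8.2.2, $\left|S-{S}_{N}\right|≤{a}_{N+1}$, can be spot-checked numerically. A Python sketch (the guide's own examples use Maple) for the alternating harmonic series, whose sum is $\mathrm{ln}\left(2\right)$:

```python
import math

S = math.log(2)  # sum of the alternating harmonic series

def S_N(N):
    """Partial sum through the term (-1)**(N+1)/N."""
    return sum((-1)**(n + 1) / n for n in range(1, N + 1))

for N in (5, 50, 500):
    error = abs(S - S_N(N))
    bound = 1 / (N + 1)          # first neglected term, a_{N+1}
    print(N, error, bound, error <= bound)
```

In every case the actual error stays below the magnitude of the first neglected term, exactly as the bound promises.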
Example 8.2.1 Sum the series $\underset{n=0}{\overset{\infty }{∑}}1/{3}^{n}$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.2 Use Maple to sum the convergent $p$-series $\underset{n=1}{\overset{\infty }{∑}}1/{n}^{2}$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.3 a) Use Maple to sum the alternating series $\underset{n=1}{\overset{\infty }{∑}}{\left(-1\right)}^{n+1}/{n}^{2}$ and show that the sum is the limit of the sequence of partial sums.
b) Test the claim that a partial sum is closer to the sum than the magnitude of the first neglected term.
Example 8.2.4 Sum the series $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{n\left(n+1\right)}$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.5 Use Maple to sum the series $\underset{n=3}{\overset{\infty }{∑}}\frac{4}{{n}^{2}-4}$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.6 Test the series $\underset{n=1}{\overset{\infty }{∑}}\mathrm{arctan}\left(n\right)$ for convergence.
Example 8.2.7 Test the series $\underset{n=1}{\overset{\infty }{∑}}\mathrm{ln}\left(\frac{n}{5n+2}\right)$ for convergence.
Example 8.2.8 Obtain the sum of the series $\underset{n=1}{\overset{\infty }{∑}}\left(\mathrm{sin}\left(\frac{1}{n}\right)-\mathrm{sin}\left(\frac{1}{n+1}\right)\right)$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.9 Write the repeating decimal $3.\overline{45}$ as the ratio of two integers.
Example 8.2.10 Write the repeating decimal $7.4\overline{35}$ as the ratio of two integers.
Example 8.2.11 Use Maple to sum the series $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{n\left(n+2\right)}$ and show that the sum is the limit of the sequence of partial sums.
Example 8.2.12 Use Maple to sum the series $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{9{n}^{2}-1}$ and show that the sum is the limit of the sequence of partial sums.
Note that although $\frac{1}{9{n}^{2}-1}=\frac{1}{2}\left(\frac{1}{3n-1}-\frac{1}{3n+1}\right)$ (partial fractions), this is not a telescoping series.
Example 8.2.13 The Cauchy product of $\underset{n=0}{\overset{\infty }{∑}}{a}_{n}$ and $\underset{n=0}{\overset{\infty }{∑}}{b}_{n}$ is the series $\underset{n=0}{\overset{\infty }{∑}}{c}_{n}$, where ${c}_{n}=\underset{k=0}{\overset{n}{∑}}{a}_{k}\cdot {b}_{n-k}$.
What happens to ${c}_{n}$ when the index in both the series being multiplied starts not at $n=0$, but $n=1$?
Example 8.2.14 Obtain the Cauchy product of $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{{n}^{2}}$ with itself. Is the value of the product the square of the value of the given series?
Example 8.2.15 Obtain the Cauchy product of the absolutely convergent series $\underset{n=1}{\overset{\infty }{∑}}\frac{1}{{n}^{2}}$ and the conditionally convergent series $\underset{n=1}{\overset{\infty }{∑}}\frac{{\left(-1\right)}^{n+1}}{n}$. Is the product the product of the sums of the two given series?
Example 8.2.16 a) Show that Leibniz' theorem on the convergence of alternating series applies to the alternating harmonic series. (See Table 8.2.2.)
b) Use Maple to show that the sequence of partial sums converges to $\mathrm{ln}\left(2\right)$.
c) Test the claim that a partial sum is closer to the sum than the magnitude of the first neglected term.
Example 8.2.17 a) Show that Leibniz' theorem on the convergence of alternating series applies to the series $\underset{n=1}{\overset{\infty }{∑}}\frac{{\left(-1\right)}^{n+1}}{\sqrt{n}}$. (See Table 8.2.2.)
b) Obtain the first few, but graph the first 50, partial sums.
c) If ${S}_{k}$ is the partial sum of the first $k$ terms, what value of $k$ will guarantee that the error in ${S}_{k}$ is no worse than ${10}^{-3}$?
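Examples 8.2.9 and 8.2.10 rest on the fact that a repeating decimal is a geometric series: for instance, $3.\overline{45}=3+\frac{45}{100}+\frac{45}{{100}^{2}}+\dots=3+\frac{45/100}{1-1/100}=3+\frac{45}{99}=\frac{38}{11}$. A Python check using exact rational arithmetic (the guide's own solutions use Maple):

```python
from fractions import Fraction

# 3.454545... : the repeating block 45 is a geometric series with first
# term 45/100 and ratio 1/100, which sums to 45/(100 - 1) = 45/99.
x = Fraction(3) + Fraction(45, 99)

# 7.4353535... : shift past the non-repeating digit 4, then apply the
# same idea; the block 35 past that digit contributes 35/990.
y = Fraction(74, 10) + Fraction(35, 990)

print(x, y)  # the two decimals as ratios of integers, in lowest terms
```

`Fraction` keeps the arithmetic exact, so the results are the reduced ratios the examples ask for rather than floating-point approximations.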
© Maplesoft, a division of Waterloo Maple Inc., 2024. All rights reserved. This product is protected by
copyright and distributed under licenses restricting its use, copying, distribution, and decompilation.
For more information on Maplesoft products and services, visit www.maplesoft.com
Definition 8.2.1: Partial Sums of an Infinite Series
The $N$th partial sum of the infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ is ${S}_{N}&
equals;\stackrel{N}{\mathrm{Σ}}{a}_{n}$, that is, the sum of the terms up through and including $
{a}_{N}$. (Some texts will define ${S}_{N}$ as the sum of the first $N$ terms.)
The meaning given to the symbol $\stackrel{\infty }{\mathrm{Σ}}{a}_{n}$ is then $\underset{N→\infty }{lim}{S}_{N}$, that is, the limit of the sequence of partial sums, $\left\{{S}_
If this limit exists, then that number is the "sum" of the infinite series, and the series is said to converge to that number. If the limit of the sequence of partial sums does not exist, then the
series is said to diverge. (An infinite series can diverge if the limit of the sequence of partial sums is infinite, or if it fails to exist because of oscillation.)
Definitions 8.2.2 and 8.2.3 make these notions precise.
Definition 8.2.2: Convergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ converges to $L$ if the limit of
the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$converges to $L$.
Definition 8.2.3: Divergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ diverges if the limit of the
sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$ diverges, either
because the limit is infinite, or because it oscillates.
The astute reader will realize that Definitions 8.2.2 and 8.2.3 are reminiscent of the contents of Table 4.5.1 that detail similar calculations for improper integrals. There, the improper integral
is defined as the limit of a proper integral, with the endpoint of integration approaching either infinity or a singularity on the real line.
Definitions 8.2.4 and 8.2.5 refine the definition of convergence by distinguishing between absolute and conditional convergence.
Definition 8.2.4: Absolute Convergence
• If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_
{n}$ is said to be absolutely convergent.
• If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges
Definition 8.2.5: Conditional Convergence
• If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms,
but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to be
Consequently, there is a certain ambiguity in declaring that an infinite series converges. Is it a series of nonnegative terms, in which case its convergence is necessarily absolute? Or if it
contains an infinite number of negative terms, does it converge absolutely, or does it converge just conditionally? Hence, when discussing the convergence of an infinite series, this Study Guide
will always modify the word "converge" with either conditionally, or absolutely, unless the context makes it perfectly clear that no such modification is needed to eliminate ambiguity.
Table 8.2.1 lists five theorems that summarize additional key points about the behavior of infinite series.
Theorem Intuitive Statement Formal Statement
• An infinite series that converges If $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then
8.2.1 absolutely, must necessarily $\mathrm{Σ}{a}_{n}$ converges.
converge conditionally.
• The general term of a convergent
series must necessarily tend to If $\mathrm{Σ}{a}_{n}$ converges, then ${a}_{n}&
8.2.2 zero. srarr;0$, but not conversely.
• The converse is false.
If $\mathrm{Σ}{a}_{n}=A$ and $\mathrm{&
Sigma;}{b}_{n}=B$, then
• Addition, subtraction and scalar 1. $\mathrm{Σ}{a}_{n}±\mathrm{Σ}{b}_{n}&
8.2.3 multiplication for convergent equals;\mathrm{Σ}\left({a}_{n}±{b}_{n}\right)
sequences is well-behaved. =A+B$;
2. $c{\mathrm{Σa}}_{n}=\mathrm{Σ}c{a}_
{n}=cA$, for any real number $c$.
• The Cauchy product of two If $\underset{n=0}{\mathrm{Σ}}{a}_{n}&
convergent series may or may not equals;A$, $\underset{n=0}{\mathrm{Σ}}{b}_
converge. If the product does {n}=B$, and ${c}_{n}=$$\underset{k=
converge, it converges to the 0}{\overset{n}{∑}}{a}_{k}{b}_{n-k}$ , then
"right" value.
1. $\mathrm{Σ}{c}_{n}=A\cdot B$ if one of $\
8.2.4 • If one of the two series converges mathrm{Σ}{a}_{n}$ or $\mathrm{Σ}{b}_{n}$
absolutely, then the product converges absolutely.
converges to the "right" value.
2. $\mathrm{\Sigma }{c}_{n}$ converges absolutely to $A\
• If both factors converge cdot B$ if both $\mathrm{Σ}{a}_{n}$ and $\mathrm
absolutely, then the product {Σ}{b}_{n}$ converge absolutely.
converges absolutely.
• Any rearrangement or regrouping of the terms of an absolutely convergent series does not
change the value of the sum.
• The terms of a conditionally convergent series can be rearranged so the new sum is any
desired real number.
Table 8.2.1 Relevant theorems for infinite series
• Theorem 8.2.1 is intuitively appealing because it simply says that if a sum of positive numbers converges, then making some of those numbers negative will at worst make the sum smaller.
• Theorem 8.2.2 is again somewhat intuitive in that an infinite sum that converges cannot have larger and larger terms in its "tail end." These tail-end terms have to be getting smaller and smaller
if the sum is to be finite. Now, the falsity of the converse, that if the $n$th term goes to zero then the series converges, is not intuitive. It takes a counterexample to show this. A standard
counterexample is the so-called harmonic series, $\mathrm{Σ}\left(1/n\right)$, which is shown to diverge by one of the devices developed in Chapter 8.3.
• Theorem 8.2.3 is a welcomed relief because it says that simple arithmetic with convergent series "works." Thus, addition, subtraction, and scalar multiplication of even conditionally convergent
series produce new series that are convergent to the "right" values.
• Theorem 8.2.4 is a significant result because it both defines a method for forming a product between two infinite series, and because it also clarifies the conditions under which such a product
of series results in a series that converges to the product of the values of the factors. The Cauchy product itself will be demonstrated at length in the Examples.
• Theorem 8.2.5 makes two statements, one about absolutely convergent series, and one about conditionally convergent series. The statement about absolutely convergent series shouldn't be surprising
- such series are so well behaved that almost everything good about them is true. What's remarkable is the statement about conditionally convergent series, which can be made to converge to any
real number by a suitable rearrangement of terms.
Some Types of Series
Table 8.2.2 lists some examples and some types of series and their properties.
Series Form Properties
$\underset{n=0}{\overset{\ Absolute convergence for $\left|r\right|<1$
Geometric infty }{∑}}a{r}^{n}=\
frac{a}{1-r}$ ${S}_{N}=a\frac{1-{r}^{N+1}}{1-r}$
$\underset{n=1}{\overset{\ Absolute convergence for $p>1$
$p$-Series infty }{∑}}\frac{1}{{n}^{p}}$
Diverges for $p≤1$
• Converges if $\left\{{a}_{n}\right\}$ is decreasing
$\underset{n=0}{\overset{\ with limit zero. (Leibniz)
Alternating infty }{∑}}{\left(-1\right)}^
{n}{a}_{n}$, with ${a}_{n}>0$ • If $S$ is the sum of the (convergent) series, and $
{S}_{N}$ is the partial sum up through ${a}_{n}$,
then $\left|S-{S}_{N}\right|≤{a}_{n+1}$.
Harmonic $\underset{n=1}{\overset{\ Diverges, even though ${a}_{n}=\frac{1}{n}&
infty }{∑}}\frac{1}{n}$ srarr;0$
Alternating $\underset{n=1}{\overset{\ Converges (conditionally) to $\mathrm{ln}\left(2\right)
Harmonic infty }{∑}}\frac{{\left(-1\ $
$\underset{n=0}{\overset{\ ${S}_{N}={a}_{0}-{a}_{N+1}$
Telescoping infty }{∑}}\left({a}_{n}-{a}_{n
+1}\right)$ Converges (conditionally) if ${a}_{n}→0$
Table 8.2.2 Examples and types of series
Notations such as $\underset{n={n}_{0}}{\overset{\infty }{∑}}{a}_{n}$, $\underset{n}{\overset{\infty }{∑}}{a}_{n}$, or even $\stackrel{\infty }{∑}{a}_{n}$ and $\mathrm{Σ}{a}_
{n}$ are used to denote an infinite series, which is simply the sum of all the terms in an infinite sequence. (The simpler notations often appear in printed texts to save costs.)
As Zeno's paradox about the tortoise and the hare (actually, it was Achilles) shows, an infinite sequence of operations cannot logically be completed in a finite time, yet in reality it often is. So,
by what rules is a meaning attached to a symbol that indicates the impossible task of executing an infinite number of additions?
The key to giving meaning to the symbol $\stackrel{\infty }{\mathrm{Σ}}{a}_{n}$ is the idea of the partial sum, as per Definition 8.2.1, below.
Definition 8.2.1: Partial Sums of an Infinite Series
The $N$th partial sum of the infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ is ${S}_{N}&
equals;\stackrel{N}{\mathrm{Σ}}{a}_{n}$, that is, the sum of the terms up through and including $
{a}_{N}$. (Some texts will define ${S}_{N}$ as the sum of the first $N$ terms.)
The meaning given to the symbol $\stackrel{\infty }{\mathrm{Σ}}{a}_{n}$ is then $\underset{N→\infty }{lim}{S}_{N}$, that is, the limit of the sequence of partial sums, $\left\{{S}_
If this limit exists, then that number is the "sum" of the infinite series, and the series is said to converge to that number. If the limit of the sequence of partial sums does not exist, then the
series is said to diverge. (An infinite series can diverge if the limit of the sequence of partial sums is infinite, or if it fails to exist because of oscillation.)
Definitions 8.2.2 and 8.2.3 make these notions precise.
Definition 8.2.2: Convergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ converges to $L$ if the limit of
the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$converges to $L$.
Definition 8.2.3: Divergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ diverges if the limit of the
sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$ diverges, either
because the limit is infinite, or because it oscillates.
The astute reader will realize that Definitions 8.2.2 and 8.2.3 are reminiscent of the contents of Table 4.5.1 that detail similar calculations for improper integrals. There, the improper integral
is defined as the limit of a proper integral, with the endpoint of integration approaching either infinity or a singularity on the real line.
Definitions 8.2.4 and 8.2.5 refine the definition of convergence by distinguishing between absolute and conditional convergence.
Definition 8.2.4: Absolute Convergence
• If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_
{n}$ is said to be absolutely convergent.
• If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges
Definition 8.2.5: Conditional Convergence
• If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms,
but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to be
Consequently, there is a certain ambiguity in declaring that an infinite series converges. Is it a series of nonnegative terms, in which case its convergence is necessarily absolute? Or if it
contains an infinite number of negative terms, does it converge absolutely, or does it converge just conditionally? Hence, when discussing the convergence of an infinite series, this Study Guide
will always modify the word "converge" with either conditionally, or absolutely, unless the context makes it perfectly clear that no such modification is needed to eliminate ambiguity.
The key to giving meaning to the symbol $\stackrel{\infty }{\mathrm{Σ}}{a}_{n}$ is the idea of the partial sum, as per Definition 8.2.1, below.
Definition 8.2.1: Partial Sums of an Infinite Series
The $N$th partial sum of the infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ is ${S}_{N}&
equals;\stackrel{N}{\mathrm{Σ}}{a}_{n}$, that is, the sum of the terms up through and including $
{a}_{N}$. (Some texts will define ${S}_{N}$ as the sum of the first $N$ terms.)
The $N$th partial sum of the infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ is ${S}_{N}=\stackrel{N}{\mathrm{Σ}}{a}_{n}$, that is, the sum of the terms up through and
including ${a}_{N}$. (Some texts will define ${S}_{N}$ as the sum of the first $N$ terms.)
The meaning given to the symbol $\stackrel{\infty }{\mathrm{Σ}}{a}_{n}$ is then $\underset{N→\infty }{lim}{S}_{N}$, that is, the limit of the sequence of partial sums, $\left\{{S}_
If this limit exists, then that number is the "sum" of the infinite series, and the series is said to converge to that number. If the limit of the sequence of partial sums does not exist, then the
series is said to diverge. (An infinite series can diverge if the limit of the sequence of partial sums is infinite, or if it fails to exist because of oscillation.)
Definition 8.2.2: Convergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ converges to $L$ if the limit of
the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$converges to $L$.
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ converges to $L$ if the limit of the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$converges to
The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ converges to $L$ if the limit of the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$converges to $L$.
Definition 8.2.3: Divergence of an Infinite Series
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ diverges if the limit of the
sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$ diverges, either
because the limit is infinite, or because it oscillates.
• The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ diverges if the limit of the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$ diverges, either
because the limit is infinite, or because it oscillates.
The infinite series $\stackrel{\infty }{\mathrm{\Sigma }}{a}_{n}$ diverges if the limit of the sequence of partial sums ${S}_{N}=\stackrel{N}{\mathrm{\Sigma }}{a}_{n}$ diverges, either because
the limit is infinite, or because it oscillates.
The astute reader will realize that Definitions 8.2.2 and 8.2.3 are reminiscent of the contents of Table 4.5.1 that detail similar calculations for improper integrals. There, the improper integral is
defined as the limit of a proper integral, with the endpoint of integration approaching either infinity or a singularity on the real line.
Definitions 8.2.4 and 8.2.5 refine the definition of convergence by distinguishing between absolute and conditional convergence.
Definition 8.2.4: Absolute Convergence
• If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_
{n}$ is said to be absolutely convergent.
• If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges
• If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_{n}$ is said to be absolutely convergent.
If the series $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then the series $\mathrm{Σ}{a}_{n}$ is said to be absolutely convergent.
• If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges absolutely.
If all the ${a}_{n}$ in a convergent series are nonnegative, then the series necessarily converges absolutely.
Definition 8.2.5: Conditional Convergence
• If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms,
but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to be
• If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms, but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to
be conditional.
If $\mathrm{Σ}{a}_{n}$ is a convergent series containing an infinite number of negative terms, but the series $\mathrm{Σ}\left|{a}_{n}\right|$ diverges, then the convergence is said to be
Consequently, there is a certain ambiguity in declaring that an infinite series converges. Is it a series of nonnegative terms, in which case its convergence is necessarily absolute? Or if it
contains an infinite number of negative terms, does it converge absolutely, or does it converge just conditionally? Hence, when discussing the convergence of an infinite series, this Study Guide will
always modify the word "converge" with either conditionally, or absolutely, unless the context makes it perfectly clear that no such modification is needed to eliminate ambiguity.
Table 8.2.1 lists five theorems that summarize additional key points about the behavior of infinite series.
Theorem Intuitive Statement Formal Statement
• An infinite series that converges If $\mathrm{Σ}\left|{a}_{n}\right|$ converges, then
8.2.1 absolutely, must necessarily $\mathrm{Σ}{a}_{n}$ converges.
converge conditionally.
• The general term of a convergent
series must necessarily tend to If $\mathrm{Σ}{a}_{n}$ converges, then ${a}_{n}&
8.2.2 zero. srarr;0$, but not conversely.
• The converse is false.
If $\mathrm{Σ}{a}_{n}=A$ and $\mathrm{&
Sigma;}{b}_{n}=B$, then
• Addition, subtraction and scalar 1. $\mathrm{Σ}{a}_{n}±\mathrm{Σ}{b}_{n}&
8.2.3 multiplication for convergent equals;\mathrm{Σ}\left({a}_{n}±{b}_{n}\right)
sequences is well-behaved. =A+B$;
2. $c{\mathrm{Σa}}_{n}=\mathrm{Σ}c{a}_
{n}=cA$, for any real number $c$.
• The Cauchy product of two If $\underset{n=0}{\mathrm{Σ}}{a}_{n}&
convergent series may or may not equals;A$, $\underset{n=0}{\mathrm{Σ}}{b}_
converge. If the product does {n}=B$, and ${c}_{n}=$$\underset{k=
converge, it converges to the 0}{\overset{n}{∑}}{a}_{k}{b}_{n-k}$ , then
"right" value.
1. $\mathrm{Σ}{c}_{n}=A\cdot B$ if one of $\
8.2.4 • If one of the two series converges mathrm{Σ}{a}_{n}$ or $\mathrm{Σ}{b}_{n}$
absolutely, then the product converges absolutely.
converges to the "right" value.
2. $\mathrm{\Sigma }{c}_{n}$ converges absolutely to $A\
• If both factors converge cdot B$ if both $\mathrm{Σ}{a}_{n}$ and $\mathrm
absolutely, then the product {Σ}{b}_{n}$ converge absolutely.
converges absolutely.
• Any rearrangement or regrouping of the terms of an absolutely convergent series does not
change the value of the sum.
• The terms of a conditionally convergent series can be rearranged so the new sum is any
desired real number.
Table 8.2.1 Relevant theorems for infinite series
• Theorem 8.2.1 is intuitively appealing because it simply says that if a sum of positive numbers converges, then making some of those numbers negative will at worst make the sum smaller.
• Theorem 8.2.2 is again somewhat intuitive in that an infinite sum that converges cannot have larger and larger terms in its "tail end." These tail-end terms have to be getting smaller and smaller
if the sum is to be finite. Now, the falsity of the converse, that if the $n$th term goes to zero then the series converges, is not intuitive. It takes a counterexample to show this. A standard
counterexample is the so-called harmonic series, $\mathrm{Σ}\left(1/n\right)$, which is shown to diverge by one of the devices developed in Chapter 8.3.
• Theorem 8.2.3 is a welcomed relief because it says that simple arithmetic with convergent series "works." Thus, addition, subtraction, and scalar multiplication of even conditionally convergent
series produce new series that are convergent to the "right" values.
• Theorem 8.2.4 is a significant result because it both defines a method for forming a product between two infinite series, and because it also clarifies the conditions under which such a product of
series results in a series that converges to the product of the values of the factors. The Cauchy product itself will be demonstrated at length in the Examples.
• Theorem 8.2.5 makes two statements, one about absolutely convergent series, and one about conditionally convergent series. The statement about absolutely convergent series shouldn't be surprising
- such series are so well behaved that almost everything good about them is true. What's remarkable is the statement about conditionally convergent series, which can be made to converge to any
real number by a suitable rearrangement of terms.
Table 8.2.1 lists five theorems that summarize additional key points about the behavior of infinite series.

Theorem 8.2.1
• Intuitive statement: An infinite series that converges absolutely must necessarily converge conditionally.
• Formal statement: If $\Sigma \left|{a}_{n}\right|$ converges, then $\Sigma {a}_{n}$ converges.

Theorem 8.2.2
• Intuitive statement: The general term of a convergent series must necessarily tend to zero. The converse is false.
• Formal statement: If $\Sigma {a}_{n}$ converges, then ${a}_{n}\to 0$, but not conversely.

Theorem 8.2.3
• Intuitive statement: Addition, subtraction, and scalar multiplication for convergent series are well-behaved.
• Formal statement: If $\Sigma {a}_{n}=A$ and $\Sigma {b}_{n}=B$, then
  1. $\Sigma {a}_{n}\pm \Sigma {b}_{n}=\Sigma \left({a}_{n}\pm {b}_{n}\right)=A\pm B$;
  2. $c\,\Sigma {a}_{n}=\Sigma \,c\,{a}_{n}=c\,A$, for any real number $c$.

Theorem 8.2.4
• Intuitive statement: The Cauchy product of two convergent series may or may not converge. If the product does converge, it converges to the "right" value. If one of the two series converges absolutely, then the product converges to the "right" value; if both factors converge absolutely, then the product converges absolutely.
• Formal statement: If $\sum_{n=0}^{\infty }{a}_{n}=A$, $\sum_{n=0}^{\infty }{b}_{n}=B$, and ${c}_{n}=\sum_{k=0}^{n}{a}_{k}{b}_{n-k}$, then
  1. $\Sigma {c}_{n}=A\cdot B$ if one of $\Sigma {a}_{n}$ or $\Sigma {b}_{n}$ converges absolutely;
  2. $\Sigma {c}_{n}$ converges absolutely to $A\cdot B$ if both $\Sigma {a}_{n}$ and $\Sigma {b}_{n}$ converge absolutely.

Theorem 8.2.5
• Any rearrangement or regrouping of the terms of an absolutely convergent series does not change the value of the sum.
• The terms of a conditionally convergent series can be rearranged so the new sum is any desired real number.

Table 8.2.1 Relevant theorems for infinite series
• Theorem 8.2.1 is intuitively appealing because it simply says that if a sum of positive numbers converges, then making some of those numbers negative will at worst make the sum smaller.
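The rearrangement claim of Theorem 8.2.5 can be made concrete. Here is an illustrative Python sketch (not from the guide) using the standard greedy scheme: rearrange the conditionally convergent alternating harmonic series $1 - 1/2 + 1/3 - \cdots$ so that its partial sums approach a target other than $\mathrm{ln}(2)$:

```python
# Rearranging the alternating harmonic series toward an arbitrary target:
# take positive terms 1/(2k-1) while the running sum is below the target,
# negative terms -1/(2k) while it is above. Terms shrink, so the partial
# sums oscillate ever more tightly around the target.

def rearranged_partial_sum(target, num_terms):
    pos, neg = 1, 2        # next odd and even denominators to use
    s = 0.0
    for _ in range(num_terms):
        if s < target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

target = 1.0               # the unrearranged sum is ln(2), about 0.693
s = rearranged_partial_sum(target, 100_000)
print(s)                   # close to 1.0
```

Every term of the original series is used exactly once in the limit; only the order changes, yet the sum moves from $\mathrm{ln}(2)$ to 1.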
Some Types of Series
Table 8.2.2 lists some examples and some types of series and their properties.

Geometric: $\sum_{n=0}^{\infty }a{r}^{n}=\frac{a}{1-r}$
• Absolute convergence for $\left|r\right|<1$.
• ${S}_{N}=a\,\frac{1-{r}^{N+1}}{1-r}$.

$p$-Series: $\sum_{n=1}^{\infty }\frac{1}{{n}^{p}}$
• Absolute convergence for $p>1$; diverges for $p\le 1$.

Alternating: $\sum_{n=0}^{\infty }{\left(-1\right)}^{n}{a}_{n}$, with ${a}_{n}>0$
• Converges if $\left\{{a}_{n}\right\}$ is decreasing with limit zero. (Leibniz)
• If $S$ is the sum of the (convergent) series, and ${S}_{N}$ is the partial sum up through ${a}_{N}$, then $\left|S-{S}_{N}\right|\le {a}_{N+1}$.

Harmonic: $\sum_{n=1}^{\infty }\frac{1}{n}$
• Diverges, even though ${a}_{n}=\frac{1}{n}\to 0$.

Alternating Harmonic: $\sum_{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}$
• Converges (conditionally) to $\mathrm{ln}\left(2\right)$.

Telescoping: $\sum_{n=0}^{\infty }\left({a}_{n}-{a}_{n+1}\right)$
• ${S}_{N}={a}_{0}-{a}_{N+1}$.
• Converges (conditionally) if ${a}_{n}\to 0$.

Table 8.2.2 Examples and types of series
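The geometric-series row gives a closed form for the partial sums. As a quick illustrative check in Python (the guide itself works in Maple), the closed form $S_N = a\,\frac{1-r^{N+1}}{1-r}$ matches the term-by-term sum and approaches $\frac{a}{1-r}$ for $|r| < 1$:

```python
# Compare the direct partial sum of a geometric series with the closed
# form S_N = a*(1 - r**(N+1))/(1 - r), and with the limit a/(1 - r).

a, r, N = 3.0, 0.25, 20

direct = sum(a * r**n for n in range(N + 1))
closed = a * (1 - r**(N + 1)) / (1 - r)
limit = a / (1 - r)                     # 4.0 here

print(direct, closed, limit)
```

With $r = 0.25$ the error term $a\,r^{N+1}/(1-r)$ is already below $10^{-11}$ at $N = 20$, so twenty-one terms suffice for double-precision agreement with the limit.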
Example 8.2.1 Sum the series $\sum_{n=0}^{\infty }1/{3}^{n}$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.2 Use Maple to sum the convergent $p$-series $\sum_{n=1}^{\infty }1/{n}^{2}$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.3
a) Use Maple to sum the alternating series $\sum_{n=1}^{\infty }{\left(-1\right)}^{n+1}/{n}^{2}$ and show that the sum is the limit of the sequence of partial sums.
b) Test the claim that a partial sum is closer to the sum than the magnitude of the first neglected term.

Example 8.2.4 Sum the series $\sum_{n=1}^{\infty }\frac{1}{n\left(n+1\right)}$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.5 Use Maple to sum the series $\sum_{n=3}^{\infty }\frac{4}{{n}^{2}-4}$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.6 Test the series $\sum_{n=1}^{\infty }\mathrm{arctan}\left(n\right)$ for convergence.

Example 8.2.7 Test the series $\sum_{n=1}^{\infty }\mathrm{ln}\left(\frac{n}{5n+2}\right)$ for convergence.

Example 8.2.8 Obtain the sum of the series $\sum_{n=1}^{\infty }\left(\mathrm{sin}\left(\frac{1}{n}\right)-\mathrm{sin}\left(\frac{1}{n+1}\right)\right)$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.9 Write the repeating decimal $3.\overline{45}$ as the ratio of two integers.

Example 8.2.10 Write the repeating decimal $7.4\overline{35}$ as the ratio of two integers.

Example 8.2.11 Use Maple to sum the series $\sum_{n=1}^{\infty }\frac{1}{n\left(n+2\right)}$ and show that the sum is the limit of the sequence of partial sums.

Example 8.2.12 Use Maple to sum the series $\sum_{n=1}^{\infty }\frac{1}{9{n}^{2}-1}$ and show that the sum is the limit of the sequence of partial sums. Note that although $\frac{1}{9{n}^{2}-1}=\frac{1}{2}\left(\frac{1}{3n-1}-\frac{1}{3n+1}\right)$ (partial fractions), this is not a telescoping series.

Example 8.2.13 The Cauchy product of $\sum_{n=0}^{\infty }{a}_{n}$ and $\sum_{n=0}^{\infty }{b}_{n}$ is the series $\sum_{n=0}^{\infty }{c}_{n}$, where ${c}_{n}=\sum_{k=0}^{n}{a}_{k}\cdot {b}_{n-k}$. What happens to ${c}_{n}$ when the index in both the series being multiplied starts not at $n=0$, but $n=1$?

Example 8.2.14 Obtain the Cauchy product of $\sum_{n=1}^{\infty }\frac{1}{{n}^{2}}$ with itself. Is the value of the product the square of the value of the given series?

Example 8.2.15 Obtain the Cauchy product of the absolutely convergent series $\sum_{n=1}^{\infty }\frac{1}{{n}^{2}}$ and the conditionally convergent series $\sum_{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}$. Is the product the product of the sums of the two given series?

Example 8.2.16
a) Show that Leibniz' theorem on the convergence of alternating series applies to the alternating harmonic series. (See Table 8.2.2.)
b) Use Maple to show that the sequence of partial sums converges to $\mathrm{ln}\left(2\right)$.
c) Test the claim that a partial sum is closer to the sum than the magnitude of the first neglected term.

Example 8.2.17
a) Show that Leibniz' theorem on the convergence of alternating series applies to the series $\sum_{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{\sqrt{n}}$. (See Table 8.2.2.)
b) Obtain the first few, but graph the first 50, partial sums.
c) If ${S}_{k}$ is the partial sum of the first $k$ terms, what value of $k$ will guarantee that the error in ${S}_{k}$ is no worse than ${10}^{-3}$?
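Example 8.2.4 above involves a telescoping sum: since $\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}$, the partial sums are $S_N = 1 - \frac{1}{N+1}$, which tend to 1. A short illustrative Python sketch (the guide's worked examples use Maple) confirms this:

```python
# Partial sums of sum 1/(n(n+1)) telescope to S_N = 1 - 1/(N+1) -> 1.

def partial_sum(N):
    return sum(1.0 / (n * (n + 1)) for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, partial_sum(N), 1 - 1 / (N + 1))
```

The direct sums and the closed form agree to machine precision, and both approach 1 as $N$ grows.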
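Example 8.2.16's claims can also be previewed numerically. This illustrative Python sketch (not the guide's Maple session) shows the partial sums of the alternating harmonic series approaching $\mathrm{ln}(2)$, with the error after $N$ terms bounded by the first neglected term $1/(N+1)$, per the alternating-series row of Table 8.2.2:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - ... approach ln(2); by the Leibniz
# error bound, |ln(2) - S_N| <= 1/(N+1), the first neglected term.

def alt_harmonic_partial_sum(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

for N in (10, 100, 1000):
    err = abs(math.log(2) - alt_harmonic_partial_sum(N))
    assert err <= 1.0 / (N + 1)     # first neglected term bounds the error
    print(N, err)
```

The observed error is in fact close to $1/(2N)$, comfortably inside the Leibniz bound; the same bound answers part (c) of Example 8.2.17 once the general term is $1/\sqrt{n}$.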
© Maplesoft, a division of Waterloo Maple Inc., 2024. All rights reserved. This product is protected by copyright and distributed under licenses restricting its use, copying, distribution, and
For more information on Maplesoft products and services, visit www.maplesoft.com
Metric System Conversion Worksheet Chemistry

These chemistry worksheets review scientific notation, the metric system, and unit conversion. They include a table of metric prefixes, practice with significant digits in metric unit conversions, and standard metric-to-English conversions. Typical practice problems: convert 0.057 m to km, convert 13 cm³ to mL, or express the altitude of the summit of Mt. Ka‘ala (the highest point on O‘ahu) in different units.

Unlike the English (or US customary) system of measurement with its feet and inches, quarts and gallons, the metric system is very orderly: all variations of the units of measure are really just the base units multiplied by powers of 10. Converting to larger or smaller unit measurements is a matter of multiplying or dividing by 10, 100, 1000, and so on.

Converting between metric units is an exercise in unit analysis, also called dimensional analysis. Dimensional analysis works because the given unit is always multiplied by a conversion factor that is equal to one; the conversion factor comes from an equation that relates the given unit to the wanted, or desired, unit. Conversion ratios may come from memorized metric relations, from ratios embedded in the text of a problem (using words such as "per" or "in each", or symbols such as / or %), or from common-knowledge ratios (such as 60 seconds = 1 minute).

For conversions within the metric system, you must memorize the conversions (for example: 1000 mL = 1 L, or 1000 g = 1 kg). Remember that metric conversions are exact ratios and thus will not limit your significant digits. For metric-to-English conversions, memorized factors such as 1 kg = 2.205 lb are used. Mastering unit conversion also means appreciating the difference between precision and accuracy, and understanding the relationship between precision and the number of significant figures in a number. Measured quantities are always represented by a number and its associated unit.
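As a hypothetical illustration of the dimensional-analysis idea (the function name and prefix table below are my own, not from the worksheets), a metric prefix conversion is just multiplication by exact powers of ten:

```python
# Each conversion multiplies by a factor equal to one: value in the "from"
# prefix times (from-factor / to-factor). Prefix factors are exact powers
# of ten, so they never limit significant digits.

PREFIX = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3}   # kilo, base, centi, milli

def convert(value, from_prefix, to_prefix):
    """Convert between metric prefixes of the same base unit."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(0.057, "", "k"))    # 0.057 m -> km,  about 5.7e-05
print(convert(13.0, "c", "m"))    # 13 cm -> mm,    about 130
```

The same pattern extends to chained conversions (e.g. metric-to-English) by multiplying in additional unit-ratio factors such as 2.205 lb per kg.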
DAVERAGE - Excel docs, syntax and examples
The DAVERAGE function is used to calculate the average of values in a database that meet specific criteria. This function is handy when dealing with large datasets and wanting to extract average
values based on certain conditions or filters.
=DAVERAGE(Database, Field, Criteria)
Database Range of cells that make up the database. It should include the column labels.
Field The column or field from which you want to extract values for calculating the average. Enter the column label.
Criteria Range of cells that specify the conditions or criteria for selecting which rows to include in the average calculation. It should include the column label and the criteria.
About DAVERAGE 🔗
Think of DAVERAGE as your trusty data miner in Excel. When you're facing a mountain of information and need to sieve out the average of specific data subsets, this function steps in to lighten the
load. Whether you're analyzing sales figures, performance metrics, or any other dataset, DAVERAGE streamlines the process by allowing you to focus on targeted segments within your database and derive
the average values with precision and ease. Through the power of DAVERAGE, Excel users can swiftly navigate through vast datasets and extract the average values that align with their predefined
criteria. This function serves as an indispensable tool for efficient data analysis, offering flexibility and customization to meet diverse analytical needs.
Examples 🔗
Suppose you have a database in A1:C100 containing sales data with column labels 'Product', 'Units Sold', and 'Revenue' in row 1. To find the average revenue for a specific product, say 'Product A', set up a criteria range — for example, cell E1 containing the label Product and cell E2 containing Product A — and use: =DAVERAGE(A1:C100, "Revenue", E1:E2)
Suppose you have a database of student grades in A1:C50 with column labels 'Subject', 'Score', and 'Grade'. To calculate the average score for students who scored above 80 in 'Math', set up a criteria range with the labels Subject and Score side by side (say E1:F1) and the conditions Math and >80 beneath them (E2:F2), then use: =DAVERAGE(A1:C50, "Score", E1:F2)
Ensure that the criteria range is correctly formatted: its first row holds the column labels, the rows below hold the conditions, and the conditions must match the data type of the values in the database. DAVERAGE will calculate the average of the values in the 'Field' column for the rows that meet all the conditions specified in the 'Criteria' range.
Questions 🔗
How does the DAVERAGE function determine which values to include in the average calculation?
The DAVERAGE function uses the criteria specified in the 'Criteria' argument to filter out rows in the database that match the conditions. It then calculates the average of values in the 'Field'
column for the filtered subset.
Can I use multiple criteria in the DAVERAGE function?
Yes, you can use multiple criteria in the 'Criteria' argument to narrow down the dataset for the average calculation. Ensure each criteria pair includes the column label and the specific condition.
What happens if no rows match the criteria in the DAVERAGE function?
If no rows in the database meet the specified conditions in the 'Criteria' argument, the DAVERAGE function will return a #DIV/0! error to indicate the division by zero.
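To make the filtering logic concrete outside of Excel, here is a rough pure-Python analogue of DAVERAGE (illustrative only — not Excel's implementation; the record layout and predicate-style criteria are my own):

```python
# A DAVERAGE-like helper: average one field over the records that satisfy
# every criterion. Raising ZeroDivisionError mirrors Excel's #DIV/0! when
# no records match.

def daverage(records, field, criteria):
    """records: list of dicts; criteria: dict mapping column -> predicate."""
    rows = [r for r in records
            if all(test(r[col]) for col, test in criteria.items())]
    if not rows:
        raise ZeroDivisionError("no matching records")
    return sum(r[field] for r in rows) / len(rows)

sales = [
    {"Product": "A", "Units": 10, "Revenue": 200.0},
    {"Product": "A", "Units": 5,  "Revenue": 120.0},
    {"Product": "B", "Units": 8,  "Revenue": 300.0},
]

avg = daverage(sales, "Revenue", {"Product": lambda v: v == "A"})
print(avg)   # (200.0 + 120.0) / 2 = 160.0
```

Multiple entries in the criteria dict behave like conditions in one criteria row (all must hold), matching the AND semantics described above.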