newRPL - build 1255 released! [updated to 1299]
07-30-2018, 10:41 PM
Post: #241
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
Thank you so much for not including the unit as a factor! That has always frustrated me in OldRPL. Not factoring negative numbers is an acceptable compromise.
I personally would have preferred two lists (of prime factors and their exponents) and then the unit, but this is more compatible with old programs.
Either the polynomial factoring isn't working right or I'm not understanding it, though. First I tried 'X^2-2*X+1' and it said it wanted a vector of numbers. So I tried [1 -2 1]... And got [1 0 -1 2]
as a response, which I can't make heads nor tails of.
07-31-2018, 06:14 PM
(This post was last modified: 07-31-2018 06:15 PM by Claudio L..)
Post: #242
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
Devel ROM at hpgcc3.org was updated to 1090.
Not factoring negatives is NOT an acceptable compromise, try it now.
Regarding FACTORS: polynomials are not accepted in symbolic form yet; more symbolic work will come in the future. As you correctly guessed (since it's not documented), you can provide polynomials the same way you do for all other polynomial functions. The answer you got is a BUG, fixed in 1090, that happened for roots with multiplicity.
Now it should provide the correct answer, in the same form as for real numbers: a pair of numbers representing the factor A and multiplicity N in (x+A)^N
Thanks for testing and reporting!
08-01-2018, 02:47 AM
Post: #243
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
ROMs updated to 1092 now.
I added a ROOT command with a bisection root finder. Arguments are similar to NUMINT (a function, an interval and a tolerance), it returns a root, or it throws an error "No root found".
The algorithm is a bisection with some improvements (heavily inspired by Namir's posts, thanks!!) that reduce the number of evaluations by roughly 30%.
Next I'll implement Brent and compare them, if it is really faster I'll switch to Brent, otherwise I like simpler methods.
I'm planning to introduce a new command CROOT using Muller, to be able to get complex roots as well. Since complex roots are not so commonly used, I think it's best to have a faster algorithm for
real roots, then only if the user needs complex roots he can choose to use the slower algorithm.
As usual, keep testing and report any issues.
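For readers following along, the bisection idea behind ROOT can be sketched in Python. This is a plain bisection without the improvements Claudio mentions, and `bisect_root` is an invented name, not newRPL's actual code:

```python
def bisect_root(f, a, b, tol=1e-6, max_iter=200):
    """Plain bisection on [a, b]; raises if no sign change is detected."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("No root found")  # mirrors ROOT's error message
    for _ in range(max_iter):
        mid = (a + b) / 2
        fm = f(mid)
        if fm == 0 or (b - a) / 2 < tol:
            return mid
        if fa * fm < 0:
            b = mid
        else:
            a, fa = mid, fm
    return (a + b) / 2

# Root of x^2 - 4 on the interval [0, 10]
root = bisect_root(lambda t: t * t - 4, 0, 10)
print(root)  # close to 2.0
```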
08-01-2018, 05:09 AM
(This post was last modified: 08-01-2018 03:45 PM by The Shadow.)
Post: #244
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
Okay, so FACTORS now makes the first prime factor negative if the original number was negative. I assume this is just a marker, because otherwise it's flat-out wrong. (ie, it says that -180 factors
as (-2)^2*3^2*5.)
Of course, this means that -1, 1, and 0 have the same factorization! Since they have no prime factors at all.
Two possible options:
1) Bite the bullet and give two outputs, one the list and the other of the unit. So -180 would give:
{ 2 2 3 2 5 1 }
and -1 would give:
{ }
2) Or, go all OldRPL and include the unit, but be consistent and do it every time, not just for negatives. So +180 would give:
{ 1 1 2 2 3 2 5 1 }
I dislike this option but if it's always there it's much easier to manage. Likewise, the factorization of 0 would be:
{ 0 1 }
(EDIT: There's also option 3: Once tagged objects are in, tag the list of factors with the unit.)
FACTORS also chokes on some non-integer inputs without throwing an error. For example, 1.5 gives:
{ 1.5 1 -1 0 }
which is more than a little odd. Other times it seems to round off first. OldRPL would actually factor rational numbers, so 1.5 (well, '3/2', but the difference isn't as important in NewRPL) would give:
{ 2 -1 3 1 }
which strikes me as much more desirable.
The polynomial factorization is still a little odd. It throws an error when the polynomial has complex roots, but still manages to calculate them. I think it's more sensible to have a flag to just
not break up factors irreducible in the reals. Of course, this gives headaches in the output syntax, as the vector notation breaks down. You'd need something like this for [1 0 0 1]:
{ [ 1 1 ] 1 [1 -1 1] 1 }
EDIT: I think this is desirable anyway, since currently the input and output are both vectors, but vectors interpreted in entirely different ways.
08-01-2018, 05:11 PM
(This post was last modified: 08-01-2018 05:13 PM by Claudio L..)
Post: #245
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
Quote:The Shadow wrote:
Okay, so FACTORS now makes the first prime factor negative if the original number was negative. I assume this is just a marker, because otherwise it's flat-out wrong. (ie, it says that -180
factors as (-2)^2*3^2*5.)
Ouch, and I thought I had accomplished a lot in the 5 minutes I had available. Of course the negative sign doesn't work with even multiplicity. So I guess having -1 1 at the beginning is the only
real solution.
Quote:The Shadow wrote:
Of course, this means that -1, 1, and 0 have the same factorization! Since they have no prime factors at all.
Two possible options:
1) Bite the bullet and give two outputs, one the list and the other of the unit. So -180 would give:
{ 2 2 3 2 5 1 }
and -1 would give:
{ }
I'm liking this idea. While it will break compatibility, I think it goes a long way to make the result more usable.
Actually, once you break compatibility we can completely break it and output:
* A unity factor (or sign actually), which would allow the quantity being factored to even have a physical unit attached to it.
* A list of factors
* A list of multiplicity
Arguments returned in this order allow the original number to be reconstructed simply by:
^ ΠLIST *
For example, factoring 180_m would output:
1_m
{2 3 5}
{2 2 1}
The "unity" contains the sign, and any other strange "features" in the original number.
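Leaving the physical unit aside, the numeric part of this three-output scheme can be sketched in Python (`factor3` is an invented helper for illustration, not a newRPL command):

```python
def factor3(n):
    """Return (unity, factors, multiplicities) for an integer n, so that
    unity * prod(f**m for f, m in zip(factors, multiplicities)) == n."""
    unity = -1 if n < 0 else 1
    n = abs(n)
    factors, mults = [], []
    d = 2
    while d * d <= n:
        if n % d == 0:
            count = 0
            while n % d == 0:
                n //= d
                count += 1
            factors.append(d)
            mults.append(count)
        d += 1
    if n > 1:
        factors.append(n)
        mults.append(1)
    # Note: 1, -1 and 0 all come back with empty lists -- the very
    # corner cases debated in this thread.
    return unity, factors, mults

print(factor3(-180))  # → (-1, [2, 3, 5], [2, 2, 1])
```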
Quote:The Shadow wrote:
2) Or, go all OldRPL and include the unit, but be consistent and do it every time, not just for negatives. So +180 would give:
{ 1 1 2 2 3 2 5 1 }
I dislike this option but if it's always there it's much easier to manage. Likewise, the factorization of 0 would be:
{ 0 1 }
(EDIT: There's also option 3: Once tagged objects are in, tag the list of factors with the unit.)
I think it would be cleaner to do the above.
Then 1 would be:
{ }
{ }
(the only catch is that ΠLIST errors on empty lists, so this case needs to be trapped separately)
or we could actually do:
{ 1 }
{ 1 }
to make it more manageable by programs
zero would factor as:
{ 0 }
{ 1 }
Quote:The Shadow wrote:
FACTORS also chokes on some non-integer inputs without throwing an error. For example, 1.5 gives:
{ 1.5 1 -1 0 }
which is more than a little odd. Other times it seems to round off first. OldRPL would actually factor rational numbers, so 1.5 (well, '3/2', but the difference isn't as important in NewRPL)
would give:
{ 2 -1 3 1 }
which strikes me as much more desirable.
I thought there was a trap for integers only, but I like the factorization of the fraction!
For example 0.15 should be factored as 15 (*10^-2), then it's almost trivial to find all '2's and '5's in the list (or add them) then subtract 2 from their exponent.
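The fraction route the two of you converge on can be sketched with Python's standard library (illustrative only; `prime_powers` and `factor_rational` are invented names):

```python
from fractions import Fraction

def prime_powers(n):
    """Prime -> exponent map for a positive integer (returns {} for 1)."""
    out, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            out[d] = out.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def factor_rational(s):
    """Factor a positive decimal string exactly; denominator primes get
    negative exponents, like OldRPL's { 2 -1 3 1 } for 3/2."""
    fr = Fraction(s)  # Fraction("1.5") is exactly 3/2, no binary rounding
    powers = prime_powers(fr.numerator)
    for p, e in prime_powers(fr.denominator).items():
        powers[p] = powers.get(p, 0) - e
    return powers

print(factor_rational("1.5"))   # → {3: 1, 2: -1}
print(factor_rational("0.15"))  # → {3: 1, 2: -2, 5: -1}
```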
Quote:The Shadow wrote:
The polynomial factorization is still a little odd. It throws an error when the polynomial has complex roots, but still manages to calculate them.
It's on purpose (same as a square root): it throws an error if the result is complex and you are not in complex mode. Other than that, the results should be good, and if you set the complex mode flag the error will quietly vanish.
Quote:The Shadow wrote:
I think it's more sensible to have a flag to just not break up factors irreducible in the reals. Of course, this gives headaches in the output syntax, as the vector notation breaks down. You'd
need something like this for [1 0 0 1]:
{ [ 1 1 ] 1 [1 -1 1] 1 }
EDIT: I think this is desirable anyway, since currently the input and output are both vectors, but vectors interpreted in entirely different ways.
I think the output should be the same as I proposed above for reals. Forget the vectors and use lists, after all the result is a list of factors, not another polynomial.
So, we would leave the leading factor, then a list of factors (or perhaps we should change the signs and output the roots?) and then a list of multiplicities.
For example for 4*x^2-8*x+4 the output would be:
4
{ -1 }
{ 2 }
(we could also use +1 in the middle list and define the factors as (x-an) rather than (x+an) )
Then the factored polynomial equation can be easily obtained by:
'X' ROT + SWAP ^ ΠLIST *
08-01-2018, 05:21 PM
(This post was last modified: 08-01-2018 05:24 PM by ijabbott.)
Post: #246
ijabbott Posts: 1,307
Senior Member Joined: Jul 2015
RE: newRPL - build 1089 released! [update:build 1089]
Claudio L. Wrote:
The Shadow Wrote:Okay, so FACTORS now makes the first prime factor negative if the original number was negative. I assume this is just a marker, because otherwise it's flat-out wrong. (ie, it
says that -180 factors as (-2)^2*3^2*5.)
Ouch, and I thought I had accomplished a lot in the 5 minutes I had available. Of course the negative sign doesn't work with even multiplicity. So I guess having -1 1 at the beginning is the only
real solution.
How about { -2 1 2 1 3 2 5 1 }? I.e. extract just one power of the first factor and make it negative. This would only need to be done if there is an even power of the first factor.
— Ian Abbott
08-01-2018, 06:26 PM
Post: #247
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
Quote:ijabbott wrote:
How about { -2 1 2 1 3 2 5 1 }? I.e. extract just one power of the first factor and make it negative. This would only need to be done if there is an even power of the first factor.
Clever! Although I'm currently leaning towards breaking compatibility and splitting the list as discussed. It seems to make much more sense in newRPL, especially since the operators work as expected
on lists. It just takes much less effort to use the results.
It also allows consistency between FACTORS of a number and of a poly, which is a nice-to-have feature.
08-02-2018, 02:30 AM
(This post was last modified: 08-02-2018 08:50 PM by The Shadow.)
Post: #248
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
Claudio L. Wrote:I'm liking this idea. While it will break compatibility, I think it goes a long way to make the result more usable.
Actually, once you break compatibility we can completely break it and output:
* A unity factor (or sign actually), which would allow the quantity being factored to even have a physical unit attached to it.
* A list of factors
* A list of multiplicity
I vastly prefer this solution, but didn't realize it was on the table. By all means, do this!
EDIT: The reason why I strongly prefer it is that to actually USE the factorization for anything, you have to separate the multiplicities from the primes anyway. It's an extra loop (or extra caution
on a single loop) in every single program that uses prime factors, and it shouldn't be necessary.
I hadn't thought of your idea of being able to give the unit, well, units.
Quote:I think it would be cleaner to do the above.
Then 1 would be:
{ }
{ }
(the only catch is that ΠLIST errors on empty lists, so this case needs to be trapped separately)
or we could actually do:
{ 1 }
{ 1 }
to make it more manageable by programs
I can live with the latter, as long as 1 never shows up as a factor in anything but 1, -1, or 0.
(I actually wouldn't have a problem error-trapping on empty lists, but I see your point.)
Quote:zero would factor as:
{ 0 }
{ 1 }
Bad idea. You're basically defining zero as positive. Better to do:
{ 1 }
{ 1 }
Yes, technically zero is not a unit... but it just makes more sense this way.
Quote:So, we would leave the leading factor, then a list of factors (or perhaps we should change the signs and output the roots?) and then a list of multiplicities.
For example for 4*x^2-8*x+4 the output would be:
4
{ -1 }
{ 2 }
(we could also use +1 in the middle list and define the factors as (x-an) rather than (x+an) )
Looks good. (x-an) is more traditional in the math community, because x-an=0 yields an rather than -an.
If you decide to let FACTORS handle rational numbers (please do!) don't forget to let it also handle rational functions. Just allow for negative exponents on polynomial factors.
08-02-2018, 08:29 PM
Post: #249
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
Oh, and now that we're getting factoring stuff, can I throw in a request for FXND? I use it constantly.
08-03-2018, 03:39 AM
(This post was last modified: 08-03-2018 11:53 AM by The Shadow.)
Post: #250
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
I've been mulling over FACTORS some more, and I think the special cases of 1, -1, and 0 should have this form:
1 (or -1 or 0)
{ 1 }
{ 0 }
My reasoning is as follows: 1 is *not* a prime factor, and hence should contribute nothing to the sum of prime factors. By having the multiplicity be 1, many equations are messed up - the
sum-of-divisors function, to name just one.
08-03-2018, 02:15 PM
(This post was last modified: 08-03-2018 02:16 PM by Claudio L..)
Post: #251
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
(08-03-2018 03:39 AM)The Shadow Wrote: I've been mulling over FACTORS some more, and I think the special cases of 1, -1, and 0 should have this form:
1 (or -1 or 0)
{ 1 }
{ 0 }
My reasoning is as follows: 1 is *not* a prime factor, and hence should contribute nothing to the sum of prime factors. By having the multiplicity be 1, many equations are messed up - the
sum-of-divisors function, to name just one.
While you are of course correct, I don't think you'll get much information from the sum of divisors of 1, -1 or 0.
Check out build 1093 (I updated it), it works now per your previous post. Fractional number handling is not as good as it could be, because of course a real that cannot be represented exactly, like 1/3, converts to the fraction 3333333333333.../1000000000000... instead of 1/3. I guess I could improve it by calling ->Q first and then factoring the numerator and denominator independently.
Regarding the polynomials with negative exponents, I don't know how to handle this case, can you elaborate?
PS: Finally, the forum quoting function is back again!
08-03-2018, 04:59 PM
(This post was last modified: 08-03-2018 05:00 PM by The Shadow.)
Post: #252
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1089]
(08-03-2018 02:15 PM)Claudio L. Wrote: While you are of course correct, I don't think you'll get much information from the sum of divisors of 1, -1 or 0.
Of course not, but that's another bit of error trapping that isn't necessary.
Quote: I guess I could improve it by calling ->Q before and then factoring numerator and denominator independently.
Sounds like a good idea.
Quote:Regarding the polynomials with negative exponents, I don't know how to handle this case, can you elaborate?
Basically just factor the numerator and denominator, and make the latter multiplicities negative.
08-03-2018, 11:29 PM
Post: #253
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
(08-03-2018 04:59 PM)The Shadow Wrote:
(08-03-2018 02:15 PM)Claudio L. Wrote: Regarding the polynomials with negative exponents, I don't know how to handle this case, can you elaborate?
Basically just factor the numerator and denominator, and make the latter multiplicities negative.
No, really. I don't understand and need you to elaborate! Are you talking about a single polynomial in which some of the terms have negative exponents? Or a rational expression where the numerator
and denominator are polynomials?
The first case can easily be transformed by multiplying by x^n, with n being the lowest exponent in the polynomial, right? Then FACTORS will find a factor 0 with multiplicity n that wasn't there in
the original. Am I even going in the right direction?
The second case is more problematic, since FACTORS accepts only a vector for now, how do you express the rational as a vector? It's best left as 2 vectors, then you can use FACTORS independently.
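The two-vector route can be sketched numerically with NumPy (illustrative only; this finds numeric roots rather than exact factors, and the helper name is invented). Each numerator root gets multiplicity +1 and each denominator root -1, per The Shadow's suggestion:

```python
import numpy as np

def factor_rational_expr(num, den):
    """Roots of a rational expression given as two coefficient vectors
    (highest degree first, as FACTORS expects)."""
    return ([(r, 1) for r in np.roots(num)] +
            [(r, -1) for r in np.roots(den)])

# (x^2 - 2x + 1) / (x - 3): double root at 1 on top, root at 3 below
pairs = factor_rational_expr([1, -2, 1], [1, -3])
print(pairs)
```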
08-09-2018, 03:47 PM
Post: #254
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1089]
All unofficial ROMs updated to 1099.
The main change is the addition of the multiple non-linear equation solver (command MSOLVE).
It works like this:
* List with equations in the stack (any number of equations, at least one of course)
* List with variable names (the variables that will be used as variables in the search)
* Initial guess range: List with minimum values for each of the names
* Initial guess range: List with maximum values for each of the names
* A real number with the desired tolerance
The main difference with ROOT is that equations don't need to be in the form 'F(X)=expression'. Since this is a more general solver, expressions can be of any type, mainly:
'expression' which will be interpreted as 'expression=0' when searching for roots
'expression=expression' your typical equation
'expression<expression' All kinds of inequalities accepted (>, <, <=, >=)
The list of variable names can have any number of variables. Any other variables in the equations need to be defined elsewhere (as globals or locals) and will remain constant during the analysis.
The initial guess is a range of values where the algorithm will begin the search. This is not a bracketed method, so it may converge to roots outside this range, this is only an initial guess.
The range of variables can be constrained by adding inequalities like 'X>-5' to the list of equations.
Finally, the tolerance is up to your own patience. Always start with 2 or 3 digits (0.001), and if it finds a root, then run it again with tighter tolerance.
The solver uses an optimization method, so it actually minimizes the sum of the squares of the equations (*). This means it may find local minima instead of actual roots.
Also, the method only works properly with real numbers, so don't even try it in the complex plane. Make sure all equations return real values. Division by zero or any kind of infinity needs flag -22
to be set so your equations don't error but return +/-Inf, which the solver can handle, no problem.
Also, the solver is derivative-free, so functions don't need to be continuous.
To help distinguish between the two (an actual root versus a local minimum), MSOLVE returns:
* A list of values for each of the variables in the list
* A list of the value of each equation after replacing the solution found, so the user can see whether the equations are satisfied by this solution or not.
(*) For inequalities, when an inequality is True, it adds no value to the sum. When it is false, it turns the result of the sum into +Inf, making it the worst choice of solution, hence the algorithm
will turn away from areas that don't satisfy inequalities. In other words, regular equations are "minimized", while inequalities are "enforced".
This needs HEAVY testing. Please help test it thoroughly and report any anomalies (like looping forever, crashing, etc.).
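To make the mechanics concrete, here is a toy Python version of the idea (my own crude derivative-free pattern search, not newRPL's algorithm; all names are invented): equations contribute squared residuals, and a violated inequality sends the objective to +Inf so the search turns away from it.

```python
import math

def objective(eqs, ineqs, xs):
    """Sum of squared residuals; +Inf as soon as any inequality fails."""
    if not all(g(*xs) for g in ineqs):
        return math.inf
    return sum(f(*xs) ** 2 for f in eqs)

def msolve(eqs, ineqs, x0, tol=1e-6):
    """Crude compass search: try +/-step along each axis, halve when stuck."""
    xs, step = list(x0), 1.0
    best = objective(eqs, ineqs, xs)
    while step > tol:
        improved = False
        for i in range(len(xs)):
            for d in (step, -step):
                trial = list(xs)
                trial[i] += d
                val = objective(eqs, ineqs, trial)
                if val < best:
                    xs, best, improved = trial, val, True
        if not improved:
            step /= 2
    return xs, best

# 'X^2=4' with the constraint 'X>0', starting the search at X = 10
sol, resid = msolve([lambda x: x * x - 4], [lambda x: x > 0], [10.0])
print(sol, resid)  # converges to X = 2 with residual 0
```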
08-09-2018, 03:57 PM
Post: #255
rprosperi Posts: 6,637
Super Moderator Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1099]
(08-09-2018 03:47 PM)Claudio L. Wrote: The main change is the addition of the multiple non-linear equation solver (command MSOLVE).
It works like this:
* List with equations in the stack (any number of equations, at least one of course)
* List with variable names (the variables that will be used as variables in the search)
* Initial guess range: List with minimum values for each of the names
* Initial guess range: List with maximum values for each of the names
* A real number with the desired tolerance
This needs HEAVY testing. Please help test it thoroughly and report any anomalies (like looping forever, crashing, etc.).
Claudio - Suggest you provide 2 examples, one trivial and one non-trivial, to illustrate the exact syntax of the arguments; this avoids time wasted figuring out what your explanations do/don't mean.
I'm not suggesting the wording is poor, simply that for such use, interpretation can vary. For example is the list with variable names like { A B C } or {'A' 'B' 'C'}, etc.
--Bob Prosperi
08-09-2018, 07:34 PM
(This post was last modified: 08-09-2018 07:36 PM by Claudio L..)
Post: #256
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1099]
(08-09-2018 03:57 PM)rprosperi Wrote: Claudio - Suggest you provide 2 examples, one trivial and one non-trivial, to illustrate the exact syntax of the arguments; this avoids time wasted figuring
out what your explanations do/don't mean. I'm not suggesting the wording is poor, simply that for such use, interpretation can vary. For example is the list with variable names like { A B C } or
{'A' 'B' 'C'}, etc.
Fair enough. Here's the trivial one: get the roots of x^2=4
{ 'X^2=4' } @ List of equations
{ 'X' } @ List of variables (quoted/unquoted doesn't matter)
{ 1 } @ Initial guess range: left point
{ 10 } @ Initial guess range: right point (can be anything as long as all variables are different from the left point)
0.0001 @ TOLERANCE
It will return:
{ -2.000022888184 } @ List of values for each of the variables
{ 0.000091553258. } @ List of residuals for each expression (same as storing the result and ->NUM on the equations)
In this case it converged to the negative. If we want the positive there's 2 options:
a) Try a different initial guess range, for example { 0 } { 1 } does the trick.
b) Coerce the system with a constraint: change the list of equations to:
{ 'X^2=4' 'X>0' }
And sure enough, you'll get the positive root.
A less trivial example:
To test the Beale function (from the Wikipedia list of test functions for optimization):
{ '1.5-X+X*Y' '2.25-X+X*Y^2' '2.625-X+X*Y^3' } @ We input them as 3 separate expressions
{ 'X' 'Y' } @ 2 Variables
{ -4.5 -4.5 } @ Same range as in Wikipedia
{ 4.5 4.5 } @ Same range as in Wikipedia
0.0001 @ Tolerance
The results are this:
{ -317914.438741247053. 1.000003117366. } @Values
{ 0.508944245104. 0.267885400724. -0.348176533149. } @ Residues
And from the residues we can see the algorithm didn't find a root, the value of X diverged to the negative side while Y converged to 1. Not what we expected, let's try inverting the range:
{ '1.5-X+X*Y' '2.25-X+X*Y^2' '2.625-X+X*Y^3' } @ We input them as 3 separate expressions
{ 'X' 'Y' } @ 2 Variables
{ 4.5 4.5 } @ Same range as in Wikipedia
{ -4.5 -4.5 } @ Same range as in Wikipedia
0.0001 @ Tolerance
And now we get:
{ 2.99999918766. 0.50000000274. }
{ 0.000000414389. 0.000000617475. 0.000000716962. }
And now the algorithm went the other way (towards the positive side of X), converging to the proper root.
EDIT: By the way, you could also force a constraint by adding 'X>-4.5' to the list of equations.
08-10-2018, 02:14 AM
Post: #257
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1099]
I forgot to mention, MSOLVE also accepts a program instead of a list of equations. The program must take N arguments from the stack (N=number of variables provided in the list of variable names) and
return a single real number. The algorithm will try to minimize the result of the program.
A program that does the sums of the squares of various expressions would be equivalent to providing the list of expressions directly.
08-10-2018, 09:28 AM
Post: #258
pier4r Posts: 2,248
Senior Member Joined: Nov 2014
RE: newRPL - build 1089 released! [update:build 1099]
Claudio, really good work. I admire all of you (Claudio, the WP team, Thomas, of course HP, Casio, etc., Scary, Aricalculator, HRASTprogrammer, the guys making emulators, etc.) who code calculator functions, because I feel the experience alone is worth plenty for refreshing math and learning more concepts and tools.
I see this activity as a great one, but I have not yet started (even with limited scope, say: bc libraries).
Wikis are great, Contribute :)
08-11-2018, 03:19 AM
(This post was last modified: 08-11-2018 03:26 AM by The Shadow.)
Post: #259
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1099]
I'm getting a persistent crash when trying to enter functions. Typing most letters after a single quote crashes NewRPL. One of the few exceptions is N.
So I tried to type 'N(X)=X^2-4', but this time it crashed when I entered the second X. Indeed, entering anything after an equals sign seems to cause the crash.
EDIT: Entering any number seems to crash NewRPL, in fact. I have reinstalled the update and get the same issue.
P.S. Forget what I said about rational functions, I wasn't thinking things through.
08-11-2018, 05:01 AM
(This post was last modified: 08-11-2018 05:14 AM by The Shadow.)
Post: #260
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1099]
I tried downgrading to 1089, but I'm getting the same crash. I think I see the problem - one of my libraries has become corrupted, and appears as DIRObj in the Libraries menu.
I'm not sure how to fix the problem, though, since in the absence of being able to enter letters or numbers, I can't revert to one of my backups.
EDIT: I managed to wipe the memory by taking the batteries out, then using SDRESTORE.
Incidentally, the option under ON-A-F to wipe the memory doesn't seem to work.
Python: solving multiple systems of linear equations in three unknowns (complete runnable code)
Problem Description:
A CSV file contains N rows of coefficients for the three unknowns [x, y, z], and the goal is to solve for their values. Each row encodes one equation of the form a*x + b*y + c*z = p. Given these N rows of coefficients and their result values p, we want to find the values of the three unknowns.
1, Tool kit
The first tools to use are numpy and pandas. Pandas is also a tool based on numpy. The DataFrame in it is very suitable for opening and modifying CSV files.
2, Use steps
1. Read in file
The code is as follows:
import numpy as np
import pandas as pd
df = pd.read_csv(r'C:/Users/hanhan/PycharmProjects/pythonProject/data.csv', encoding='gbk')
2. Write equation
The code is as follows:
#Parameter definition
x = []
y = []
z = []
for i in range(len(df) - 2):  # each 3-row window (rows i..i+2) forms one system
    a = np.array(df['Coefficient 1'].iloc[i:i+3])
    b = np.array(df['Coefficient 2'].iloc[i:i+3])
    c = np.array(df['Coefficient 3'].iloc[i:i+3])
    # The three equations of this window:
    #   a1*x + b1*y + c1*z = p1
    #   a2*x + b2*y + c2*z = p2
    #   a3*x + b3*y + c3*z = p3
    # p = [p1, p2, p3] -- replace with the constants from your CSV
    p = [1, 2, 3]
    m = np.array([[a[0], b[0], c[0]], [a[1], b[1], c[1]], [a[2], b[2], c[2]]])
    n = np.array(p)  # the constants on the right of the equations
    solution = np.linalg.solve(m, n)  # solution format: np.array([x, y, z])
    print('solution=', solution)
    x.append(solution[0])
    y.append(solution[1])
    z.append(solution[2])
① First, define the parameters (that is, the three unknowns to be solved)
② Three lines of coefficient data and constant term data are taken each time, and each three lines of data constitutes an equation group.
a1*x + b1*y + c1*z = p1
a2*x + b2*y + c2*z = p2
a3*x + b3*y + c3*z = p3
③ The np.linalg.solve() function is used to solve the equations. The function gives the solution of the linear equation in the form of matrix. The coefficients of each equation are written into m
group by group, and the constant term is written into n.
m = np.array([[a[0], b[0], c[0]], [a[1], b[1], c[1]], [a[2], b[2], c[2]]])
n = np.array(p) # can be replaced by the constant to the right of the formula
④ Each system of equations will get a solution set, which corresponds to the solutions of three unknowns [x, y, z].
⑤ It is stored in the empty list of previously defined parameters in order to facilitate the later storage of files.
⑥ Save the file by column and export it to csv.
Each column here is the feasible solution of x. because my demand is a relatively large project, I take the average value of each column as my final solution. However, if it is just a system of
equations, the output is a set of solutions.
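Under the author's averaging approach, the column-wise mean can be taken like this (a sketch with made-up numbers; in the real script the rows come from the solving loop):

```python
import numpy as np

# Each row is one [x, y, z] solution produced by one 3-row window.
solutions = np.array([[1.0, 2.0, 3.0],
                      [1.2, 1.8, 3.1],
                      [0.8, 2.2, 2.9]])
final = solutions.mean(axis=0)  # column-wise average -> one [x, y, z]
print(final)  # column means of x, y, z
```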
ls = np.array([x, y, z])
df_out = pd.DataFrame(ls.transpose())
df_out.to_csv(r'C:/Users/hanhan/PycharmProjects/pythonProject/data_answer.csv', encoding='gbk')
Here is the full version code:
import numpy as np
import pandas as pd
#Data table
df = pd.read_csv(r'C:/Users/hanhan/PycharmProjects/pythonProject/data.csv', encoding='gbk')
#Parameter definition
x = []
y = []
z = []
for i in range(len(df) - 2):  # each 3-row window (rows i..i+2) forms one system
    a = np.array(df['Coefficient 1'].iloc[i:i+3])
    b = np.array(df['Coefficient 2'].iloc[i:i+3])
    c = np.array(df['Coefficient 3'].iloc[i:i+3])
    # a1*x + b1*y + c1*z = p1
    # a2*x + b2*y + c2*z = p2
    # a3*x + b3*y + c3*z = p3
    # p = [p1, p2, p3] -- replace with the constants from your CSV
    p = [1, 2, 3]
    m = np.array([[a[0], b[0], c[0]], [a[1], b[1], c[1]], [a[2], b[2], c[2]]])
    n = np.array(p)  # the constants on the right of the equations
    solution = np.linalg.solve(m, n)  # solution format: np.array([x, y, z])
    print('solution=', solution)
    x.append(solution[0])
    y.append(solution[1])
    z.append(solution[2])
ls = np.array([x, y, z])
df_out = pd.DataFrame(ls.transpose())
df_out.to_csv(r'C:/Users/hanhan/PycharmProjects/pythonProject/data_answer.csv', encoding='gbk')
You can also plot the solutions in advance to see the effect:
import matplotlib.pyplot as plt
# Drawing
fig = plt.figure()
x1 = [j for j in range(len(x))]
ax1 = fig.add_subplot(3, 2, 1)
ax1.scatter(x1, x)
ax2 = fig.add_subplot(3, 2, 2)
ax2.scatter(x1, y)
ax3 = fig.add_subplot(3, 2, 3)
ax3.scatter(x1, z)
plt.show()
3.142: a π round-up
‘Tis the season to celebrate the circle constant! ((Pedants would have me revise that to “a circle constant”.)) Yes, that’s right: in some calendar systems using some date notation, the day and month
coincide with the first three digits of π, and mathematicians all over the world are celebrating with thematic baked goods and the wearing of irrational t-shirts.
And the internet’s maths cohort isn’t far behind. Here’s a round-up (geddit – round?!) of some of our favourites. In case you were wondering, we at The Aperiodical hadn’t forgotten about π day –
we’re just saving ourselves for next year, when we’ll celebrate the magnificent “3.14.15”, which will for once be more accurate to the value of π than π approximation day on 22/7. (Admittedly, for
the last few years, 3.14.14 and so on have strictly been closer to π than 22/7. But this will be the first time you can include the year and feel like you’re doing it right.)
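That accuracy claim is a one-liner to verify (plain Python, just for fun):

```python
import math

# 3.1415 (i.e. 3.14.15) is about an order of magnitude closer to pi
# than 22/7 is -- and even 3.1414 beats 22/7.
for approx in (3.1415, 3.1414, 22 / 7):
    print(approx, abs(approx - math.pi))
```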
Mathematical wordsmith Alex Bellos is consistently brilliant in his column for the Guardian’s Science section, and today he’s made not one but two posts: an article on constrained writing using the
digits of π, and a collection of pictures of art which is based on the digits of π.
Internet maths superstar Vi Hart is here to explain why π isn’t all that (and in doing so, explains some other cool stuff):
[youtube url=http://youtu.be/5iUh_CSjaSw]
Evelyn Lamb, whom we love, has also written a blog post on her page at Scientific American about the prime number counting function, which uses the symbol π and makes a refreshing change from talking
about the actual circle constant. Science magazine has a special π day page, which includes a list of their favourite pie recipes, while NASA has posted a collection of π-related puzzles.
Numberphile never misses a chance to get in on the action, and Aperiodifriend James Grime has a video about π and how it relates to the length of rivers, which is a lot more interesting than it sounds:
[youtube url=https://www.youtube.com/watch?v=TUErNWBOkUM]
This isn’t the first video Numberphile has done about π: you can view their entire collection of π-related videos on their π playlist. Also today, they’ve created a piece of prog rock, which uses π
in its construction, and turns out to be not half bad:
[youtube url=https://www.youtube.com/watch?v=E36qMxXGo3A]
People around the world celebrate π day in their own way: restaurants worldwide are offering special menus, mostly involving pie, and in Chicago, the Illinois Science Council is organising a
3.14-mile walk (I strongly hope it’s around a circle of diameter one mile), starting at τ (6.28pm).
While you’re here, I feel compelled to remind you (in case you ironically hadn’t remembered) that our All Squared podcast last π day was about memorising digits of π, and comes in under 10π minutes
long. Also from The Past, Simon Singh did a programme about π in his ‘Five Numbers’ series for BBC Radio 4.
Someone’s set up a “Pi Day official website“. It’s got some things about π on it, no doubt.
And finally:
2 Responses to “3.142: a π round-up”
1. Evelyn Lamb
Oh man, I’m already excited about what a pedant I can be next year when people are super excited about celebrating 3/14/15, and I can explain that I won’t be celebrating until 3/14/16 (because π
is closer to 3.1416 than it is to 3.1415). I can hardly wait.
□ Anonymous
Except that 3/14/15 at 9:26 am is more accurate. | {"url":"https://aperiodical.com/2014/03/3-142-a-pi-round-up/","timestamp":"2024-11-05T22:08:39Z","content_type":"text/html","content_length":"44619","record_id":"<urn:uuid:f224584d-5f19-4400-8e9a-c34ac74b8979>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00432.warc.gz"} |
Partial derivative/Related Articles
A list of Citizendium articles, and planned articles, about Partial derivative.
See also changes related to Partial derivative, or pages that link to Partial derivative or to this page or whose text contains "Partial derivative".
Parent topics
Other related topics
Bot-suggested topics
Auto-populated based on Special:WhatLinksHere/Partial derivative. Needs checking by a human.
• Chain rule: A rule in calculus for differentiating a function of a function.
• Chemical thermodynamics: The study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics.
• Complex analysis: Field of mathematics, precisely of mathematical analysis, that studies those properties which characterize functions of complex variables.
• Connectionism: An approach in the fields of artificial intelligence, cognitive science, neuroscience, psychology and philosophy of mind which models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units.
• Derivative: The rate of change of a function with respect to its argument.
• Differential equation: An equation relating a function and its derivatives.
• Green's Theorem: A vector identity, equivalent to the curl theorem in two dimensions, which relates a line integral around a simple closed curve to a double integral over the enclosed plane region.
• Jacobian: Determinant of the matrix whose ith row lists all the first-order partial derivatives of the function ƒ_i(x_1, x_2, …, x_n).
• Lambert W function: Used to solve equations in which the unknown appears both outside and inside an exponential function or a logarithm.
• Normal distribution: A symmetrical bell-shaped probability distribution representing the frequency of random variations of a quantity from its mean.
• Total derivative: Derivative of a function of two or more variables with respect to a single parameter in terms of which these variables are expressed.
Articles related by keyphrases (Bot populated)
• Total derivative: Derivative of a function of two or more variables with respect to a single parameter in terms of which these variables are expressed.
• Multi-index: Notation which simplifies formulae used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to a vector of indices.
• Stokes' theorem: The integral of a form over the boundary of a manifold equals the integral of the exterior derivative over the manifold.
• Right-hand rule: Rule for the direction of the vector describing a cross product, a torque, or an angular momentum. | {"url":"https://en.citizendium.org/wiki/Partial_derivative/Related_Articles","timestamp":"2024-11-14T05:13:47Z","content_type":"text/html","content_length":"44551","record_id":"<urn:uuid:07e5aca1-9c4d-4726-a31e-26a9f07e1ab2>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00753.warc.gz"} |
Ordered Probit (or Logit) Estimation
What is one to do when the dependent variable under investigation is categorical? Well if these categories are ordered, then an ordered probit (or logit) estimation technique is a sensible means for
estimation. An example where ordered probit estimation should be used is for an integer index ranking of physician quality between one and five. On the other hand, if the dependent variable is
the number of surgeries a patient has, a Poisson estimation methodology would be best since ‘y’ is a count variable.
Let us continue with the physician ranking example. Suppose there are three ranking categories: excellent (2), average (1), and poor (0). We assume there is a latent variable y* which is a function
of a vector of covariates (‘x‘). The latent variable determines which category the physician falls into.
• y* = xβ + ε; ε|x~N(0,1)
• y=0 if y*<α_1
• y=1 if α_1 ≤ y* ≤ α_2
• y=2 if y*>α_2
Now we can calculate the probabilities that a physician will fall into each category.
• P(y=0|x) = P(xβ + ε < α_1) = P(ε < α_1 − xβ) = Φ(α_1 − xβ)
• P(y=1|x) = P(xβ + ε < α_2) − P(xβ + ε < α_1) = Φ(α_2 − xβ) − Φ(α_1 − xβ)
• P(y=2|x) = P(xβ + ε > α_2) = 1 − Φ(α_2 − xβ)
Using maximum likelihood estimation, we can now derive the α and β parameter vectors. The log-likelihood function becomes:
• l(α,β) = Σ_i ( 1{y_i=0}log[Φ(α_1−x_iβ)] + 1{y_i=1}log[Φ(α_2−x_iβ) − Φ(α_1−x_iβ)] + 1{y_i=2}log[1−Φ(α_2−x_iβ)] )
If we instead assume that the cdf of ε|x is the logistic function Λ(z) = exp(z)/[1+exp(z)], then we can use the ordered logit model instead.
Another statistic of interest is the marginal effect of a covariate x_k on P(y=j|x). This can be calculated as follows:
• ∂p_0(x)/∂x_k= -β_kφ(α_1-Xβ)
• ∂p_1(x)/∂x_k= β_k[φ(α_1-Xβ)-φ(α_2-Xβ)]
• ∂p_2(x)/∂x_k= β_k[φ(α_2-Xβ)]
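These probabilities are easy to evaluate numerically. The sketch below is my own illustration, not from the original post; the index value and the cutpoints α_1, α_2 are made up. It implements the three probability formulas above using the standard normal CDF written in terms of the error function:

```python
import math

def norm_cdf(z):
    # standard normal CDF, Phi(z), via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, a1, a2):
    """P(y=0), P(y=1), P(y=2) for one observation with index xb = x*beta
    and cutpoints a1 < a2, following the three formulas above."""
    p0 = norm_cdf(a1 - xb)
    p1 = norm_cdf(a2 - xb) - norm_cdf(a1 - xb)
    p2 = 1.0 - norm_cdf(a2 - xb)
    return p0, p1, p2

p0, p1, p2 = ordered_probit_probs(xb=0.3, a1=-0.5, a2=0.8)
print(round(p0 + p1 + p2, 6))  # the three probabilities sum to 1
```

By construction the three terms telescope, so the probabilities always sum to one regardless of the cutpoints.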
For more information on ordered probits, see the Tokyo Climate Center’s ordered probit explanation as well as the treatment in Econometric Analysis of Cross Section and Panel Data (pp. 504-509) by | {"url":"https://www.healthcare-economist.com/2006/10/19/ordered-probit-or-logit-estimation/","timestamp":"2024-11-03T17:10:20Z","content_type":"text/html","content_length":"48099","record_id":"<urn:uuid:47b08143-6f6c-438d-85bc-5c029c146ad7>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00245.warc.gz"} |
Pearson's Product-Moment Correlation Coefficient
The Pearson product-moment correlation coefficient, often called Pearson's r, is a measure of the linear dependence between two random variables. Pearson's r is defined as the covariance between two random variables divided by the product of the standard deviations of the random variables. Thus the population correlation coefficient is defined as

ρ = Cov(X, Y) / (σ_X σ_Y)

The sample correlation coefficient can then be calculated by

r = Σ(x_i − x̄)(y_i − ȳ) / √[ Σ(x_i − x̄)² · Σ(y_i − ȳ)² ]

The Pearson product-moment correlation coefficient will always be between -1 and 1. The closer the value is to either -1 or 1, the more highly correlated the two variables. A correlation coefficient equal to either -1 or 1 indicates a perfect linear relationship between the two variables. A correlation coefficient close to 0 simply indicates that the two variables are not linearly related; however, they still may be highly correlated in a nonlinear sense.

For example, suppose that X takes on the integer values between -50 and 50 with equal probability. Furthermore, let Y be equal to the square of X. Then the Pearson product-moment correlation coefficient will be equal to 0. Oftentimes people take this to mean that X and Y are not correlated. This conclusion is clearly not true, as we know that Y is completely determined by X. Thus we can only say that X and Y are not linearly correlated. This rather vague conclusion is one of the disadvantages of using the Pearson
correlation coefficient. -- ErinEsp - 01 Jan 2011 | {"url":"https://ctspedia.org/ctspedia/pearsoncorrelation","timestamp":"2024-11-11T19:55:39Z","content_type":"text/html","content_length":"5162","record_id":"<urn:uuid:0bf26292-fc98-4c36-be76-b1c5fb71bf97>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00120.warc.gz"} |
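The Y = X² example can be verified numerically. The following sketch (illustrative; not part of the original page) computes the sample correlation coefficient directly from its definition:

```python
import math

def pearson_r(a, b):
    # sample correlation: covariance over the product of standard deviations
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

xs = list(range(-50, 51))   # integers -50..50, equally likely
ys = [x * x for x in xs]    # Y is completely determined by X
print(pearson_r(xs, ys))    # 0.0: no *linear* relationship
```

The symmetry of the range makes every cross term cancel, so r comes out exactly zero even though Y is a deterministic function of X.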
Graham's Biggest Little Hexagon
The largest possible (not necessarily regular) Hexagon for which no two of the corners are more than unit distance apart. In the above figure, the heavy lines are all of unit length. The Area of the hexagon is A ≈ 0.674981, where A is the second-largest real Root of a degree-10 polynomial (see Graham 1975).
See also Calabi's Triangle
Conway, J. H. and Guy, R. K. ``Graham's Biggest Little Hexagon.'' In The Book of Numbers. New York: Springer-Verlag, pp. 206-207, 1996.
Graham, R. L. ``The Largest Small Hexagon.'' J. Combin. Th. Ser. A 18, 165-170, 1975.
© 1996-9 Eric W. Weisstein | {"url":"http://drhuang.com/science/mathematics/math%20word/math/g/g235.htm","timestamp":"2024-11-13T09:42:40Z","content_type":"text/html","content_length":"4390","record_id":"<urn:uuid:c1802533-6294-4068-999c-3ecd080ed20c>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00447.warc.gz"} |
Elementary Games
A strategy game with a difference.
Turn all the lights out, if you can! Changing a light also changes the lights next to it.
Twist the letters back into shape by tapping on their joints and rotating any extended arms.
Shuffle the numbers into the right order
Find the original among the forgeries before the time is up.
Complete as many balloon popping challenges as you can before the time runs out. But you must follow the instructions!
Sink the enemy ships before they sink you! (Drag to place, drag outside to rotate.)
Pop the blocks before they reach your level. Lots of game options to play with!
Break the bottles in the order given, but be careful not to tap the wrong one!
A hybrid of Mastermind and the Android pattern lock. A game you will love to hate. Empty circle means correct...
Pop the bubbles before they reach your level.
Pop the bubbles before they reach your level. Lots of game options to play with!
Chocolate is nice ... so try to choose the larger block!
You can make color patterns on the number chart.
Five different connect-the-dot puzzles
Divide the blocks exactly.
Click on the missing numbers and choose the correct answer.
Find all of the hidden bugs. Gets tougher as you go.
Light the rockets ... but only in the right order!
A memory game where you match 3 symbols not just a pair. Also try
Math Match Game
Follow the bearings (000° style) and click on the destination. You get points for being close, too. Click...
Follow the North, South, East and West directions and click on the destination. You get points for being close,...
Also called Five in a Row. Try to get five stones in a row, column or diagonal. It uses a
Try to keep as many apples hanging as you can! Choose mathematics, or dogs, or many other subjects.
Guess if the next card is higher or lower, and earn points! (Note: after each bet it skips cards that are very...
How good are you at getting an exact quantity in a jug?
Drag and drop
the jugs left or right to fill, ...
Can you help make Lightybulb's world brighter by figuring out each puzzle?
Use your arrow keys to make the ball hit the right answer, but avoid the green squares. A wrong answer makes the...
How many ways can you make a Dollar (or Pound or Euro ... and more!) Drag and Drop the Coins.
Try to make the chosen number given 6 other numbers. Can use add, subtract, multiply, divide and parentheses
Erase all the numbers by matching totals
Test your memory AND your math skills, all in one game!
Think of a number between 1 and 63, answer 6 simple questions, and the Mind Reader will reveal your number!
How good are your money handling skills? How fast can you give change?
Make four in a line using multiplication.
Move the number blocks around just as you wish.
Move the blocks to match the shape. You have to get them exactly right.
See if you can match the pattern.
See if you can match the pattern.
See if you can match the pattern.
Rotate the pipes so the plumbing goes just everywhere. Click the pipes.
Rotate the pipes so the plumbing goes just everywhere. Start from the middle. Click the pipes.
Enter the number shown. Except it disappears. Also gets harder.
Roll the ball through the maze
Now you can send secret messages to your friends ... using simple cryptography.
How long a sequence can you remember?
Click the answer on the number line. Gain points.
Move the blocks around, play with equations. Click a block to change it.
Eat the food at the coordinate point, but don't eat yourself!
Click on a nearby tile to spread the happiness.
Practice parking! Steer the car into each space. Up-down keys for speed, left-right for steering, space to brake.
Play Tic-Tac-Toe against another player or the computer. Different board sizes and computer strength!
Play the classic Tic-Tac-Toe game against the computer. The first player to get three squares in a row wins.
Jump one peg over another into an empty spot, removing the jumped peg from the board. The goal is to finish with...
The lines are crossing each other all over the place! Move the vertices (corner points) and bring order.
Classic word search puzzle, with different size and difficulty options. Also number search!
A dice game like Yahtzee. How high can you score?
Collect metals, trade in at station at a unit price less costs.
Copyright © 2023 Rod Pierce | {"url":"http://wegotthenumbers.org/index-elementary.html","timestamp":"2024-11-09T00:24:57Z","content_type":"text/html","content_length":"35423","record_id":"<urn:uuid:17cc35a8-94a5-4f0d-a5f1-1840802683f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00791.warc.gz"} |
Physics Lab | CertainError
Learning about physics is enhanced by experimentation, where measurement of actual events provides a verification of theories. A set of simple physics experiments can help teach us about things such
as length, area and volume measurement, the acceleration of gravity, oscillations, magnetism and light. The experiments can provide quantitative evidence when instruments of length, time, mass, etc,
are used and confirmed by repeated runs of an experiment. Uncertainty analysis is often an important component of introductory physics to cultivate a healthy skepticism. Measurement quality is
justified by how the experiments are designed and conducted. The common approach to uncertainty analysis in freshman-level physics is to use partial differentiation of the formula used to calculate
results from the raw measurements. This creates a problem because the student might not have completed a multi-variable calculus course yet (usually completed by the end of the sophomore year).
The Physics Lab project was adopted to apply the results of duals arithmetic’s automatic error propagation capability. The goal is to provide examples of practical physics experiments where data-
with-error is input to the CertainError APP and error propagated to results. The hope is that these examples demonstrate that students can perform uncertainty analysis in the freshman year physics
course prior to completing the sophomore level multi-variable calculus. This would change the way uncertainty analysis is taught and used.
The project started with a set of example experiments with data. This includes experiments with titles such as length measurement, gravity from a simple pendulum, index of refraction, acceleration
of a cart and electrical resistors in series. These are accompanied by theoretical formula used to calculate results from the measurements. For example, the gravity, g, is calculated from the
pendulum’s measured length, L, and measured period of oscillation, T. The numbers and formula are converted to duals arithmetic and a step-by-step recipe is adopted to use the CertainError APP.
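For reference, the conventional partial-derivative propagation for the pendulum example looks like the following Python sketch. This is the "scientific calculator" approach the project compares against, not the CertainError APP itself, and the measurement values are invented for illustration:

```python
import math

def pendulum_g(L, T):
    # simple-pendulum relation: g = 4 * pi^2 * L / T^2
    return 4.0 * math.pi ** 2 * L / T ** 2

def pendulum_g_error(L, dL, T, dT):
    """First-order error propagation: since g = 4*pi^2 * L * T^-2,
    the relative errors combine as (dg/g)^2 = (dL/L)^2 + (2*dT/T)^2."""
    g = pendulum_g(L, T)
    rel = math.sqrt((dL / L) ** 2 + (2.0 * dT / T) ** 2)
    return g, g * rel

g, dg = pendulum_g_error(L=1.000, dL=0.001, T=2.007, dT=0.002)
print(f"g = {g:.3f} +/- {dg:.3f} m/s^2")  # g = 9.801 +/- 0.022 m/s^2
```

Note the factor of 2 on the period term: because T enters the formula squared, its relative uncertainty counts double, which is exactly the kind of bookkeeping that automatic error propagation spares the student.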
This project has been a success. For example, a video provided on the CertainError Youtube channel, shows the ‘CertainError Calculator vs. Scientific Calculator’ for the pendulum experiment. This
assumes the uncertainty formula that use the Scientific Calculator have already been prepared (the physics student is either capable of working out the uncertainty math beforehand or the course
instructors provide the necessary formula). The CertainError APP is over twice as fast in solving the error propagation problem and does not require any preparation of uncertainty formula. Another
level of success could be achieved when the CertainError APP gains widespread use anywhere a student is asked to perform uncertainty analysis for a physics lab. | {"url":"https://certainerror.com/portfolio-view/physics-lab/","timestamp":"2024-11-07T12:59:52Z","content_type":"text/html","content_length":"37066","record_id":"<urn:uuid:949cb140-d064-4f50-812c-fce8d2864297>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00487.warc.gz"} |
How much does it cost to run a kettle? | Use Calculator
Want to know how much it costs to run a kettle? Enter your details into the electricity cost calculator below to find out.
Electricity rates
To calculate your kettle’s precise cost, you need to know the cost of electricity per unit (kilowatt-hour) from your provider. In the UK this price varies from provider to provider and is currently capped at 29p per kWh.
How much electricity does a kettle use?
The average kettle uses between 1,000 and 3,000 watts of power to boil water. This means that if you were to run a 3,000-watt kettle for one hour per day, it would consume 3 kilowatt-hours (kWh) of electricity.
The higher the wattage, the faster your kettle will boil, but this speed comes at the price of, well, speed in depleting your electricity supply.
Calculating the cost of running a kettle
To calculate the cost, you need to know the energy consumption of your kettle, which can typically be found in the user manual or on the manufacturer’s website.
Next, you need to know the cost of electricity per kWh in your location. This information can usually be found on your electricity bill or by contacting your utility provider.
Here’s a simple formula:
power (kW) × cost per kWh ÷ 60 × minutes
Let’s say your kettle has a power rating of 3 kW, it takes 3 minutes to boil, and the cost of electricity is 29 pence per kWh. The calculation would look like this:
3 × 0.29 ÷ 60 × 3 = £0.043
So, it would cost you approximately 4.3 pence to boil your kettle for 3 minutes.
How much does it cost per month to run an electric kettle?
Using the same figures shown above, it would cost approximately:
• £1.30 to use a 3000 watt kettle for 3 minutes per day for 30 days.
• £2.17 to use a 3000 watt kettle for 5 minutes per day for 30 days.
• £3.48 to use a 3000 watt kettle for 8 minutes per day for 30 days.
• £4.35 to use a 3000 watt kettle for 10 minutes per day for 30 days.
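The formula and the monthly figures above can be reproduced with a few lines of Python (a sketch; the 29p/kWh figure is the article's current cap and may change):

```python
def kettle_cost(power_kw, minutes, price_per_kwh=0.29):
    # cost in pounds: kW x price per kWh / 60 x minutes
    return power_kw * price_per_kwh / 60.0 * minutes

per_boil = kettle_cost(3.0, 3.0)        # ~£0.0435, the worked example
per_month = per_boil * 30               # ~£1.30 for 3 minutes a day
print(round(per_boil, 4), round(per_month, 3))
```

Scaling `minutes` reproduces the other bullets, e.g. 5 minutes a day gives 3 × 0.29 ÷ 60 × 5 × 30 ≈ £2.17.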
Factors influencing the cost
While the average cost of boiling a kettle may seem relatively low, there are factors that can significantly impact this cost.
1. Kettle Efficiency: Some kettles are more energy-efficient than others. Features like automatic shut-off and thermal insulation can help to reduce energy consumption.
2. Water Volume: The more water you boil, the longer it takes, and the more energy it consumes. Only boil the amount of water you need.
3. Electricity Rates: Electricity costs vary by location and provider. Peak hours may also attract higher rates.
4. Frequency of Use: Naturally, the more often you use your kettle, the higher your total energy costs will be.
Tips to reduce kettle energy consumption
Being mindful of how you use your kettle can lead to considerable savings. Here are a few tips to consider:
Fill only what you need
One of the easiest ways to reduce energy usage is to boil just the amount of water you need. Most kettles have volume measurements imprinted inside, so you can fill to your desired level without
overdoing it.
Clean regularly
Over time, limescale buildup can insulate the heating element, making your kettle work harder and longer to achieve the same boil. Regular cleaning will keep your kettle in peak operating condition.
Use properly and efficiently
Make sure to place the kettle on the base correctly, and don’t allow the water level to drop below the minimum level when boiling. Both can lead to inefficient energy use.
Avoid using it during peak hours
During peak hours, the cost of energy is typically higher. You can save money on your electricity bill by avoiding using your kettle during these times.
Invest in a more efficient kettle
If you’re in the market for a new kettle, consider purchasing one with an energy-saving feature or one that has a lower wattage. These kettles will use less energy, ultimately leading to cost | {"url":"https://powercostcalculator.co.uk/cost-to-run-kettle/","timestamp":"2024-11-13T05:58:08Z","content_type":"text/html","content_length":"142846","record_id":"<urn:uuid:ec1a328b-399b-46bb-b86f-98ff4ea5e923>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00171.warc.gz"} |
02-10-14 - Understanding ANS - 9
If you just want to understand the basics of how ANS works, you may skip this post. I'm going to explore some unsolved issues about the sort order.
Some issues about constructing the ANS sort order are still mysterious to me. I'm going to try to attack a few points.
One thing I wrote last time needs some clarification - "Every slot has an equal probability of 1/M."
What is true is that every character of the output string is equiprobable (assuming again that the Fs are the true probabilities). That is, if you have the string S[] with L symbols, each symbol s
occurs Fs times, then you can generate symbols with the correct probability by just drawing S[i] with random i.
The output string S[] also corresponds to the destination state of the encoder in the renormalization range I = [L,2L-1]. What is not true is that all states in I are equally probable.
To explore this I did 10,000 random runs of encoding 10,000 symbols each time. I used L=1024 each time, and gathered stats from all the runs.
This is the actual frequency of the state x having each value in [1024,2047] (scaled so that the average is 1000) :
The lowest most probable states (x=1024) have roughly 2X the frequency of the high least probable states (x=2047).
Note : this data was generated using Duda's "precise initialization" (my "sort by sorting" with 0.5 bias). Different table constructions will create different utilization graphs. In particular the
various heuristics will have some weird bumps. And we'll see what different bias does later on.
This is the same data with 1/X through it :
This probability distribution (1/X) can be reproduced just from doing this :
x = x*b + irandmod(b); // for any base b
while( x >= 2*K ) x >>= 1;
stats_count[x-K] ++;
though I'd still love to see an analytic proof and understand that better.
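The iteration above is easy to try. Here is a minimal Python version of it (my own sketch; as noted in the comments below, b must not be a power of two, or x never changes):

```python
import random

L = 1024
counts = [0] * L
x, b = L, 3            # base b must not be a power of two
random.seed(1)
for _ in range(500_000):
    x = x * b + random.randrange(b)
    while x >= 2 * L:
        x >>= 1
    counts[x - L] += 1

# under a 1/x stationary distribution, states near x = L are visited
# about twice as often as states near x = 2L
low = sum(counts[:32])
high = sum(counts[-32:])
print(round(low / high, 2))  # close to 2
```

Plotting `counts` against x reproduces the 1/x curve shown in the figures above.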
So, the first thing I should correct is : final states (the x' in I) are not equally likely.
How that should be considered in sort construction, I do not know.
The other thing I've been thinking about was why did I find that the + 1.0 bias is better in practice than the + 0.5 bias that Duda suggests ("precise initialization") ?
What the +1 bias does is push low probability symbols further towards the end of the sort order. I've been contemplating why that might be good. The answer is not that the end of the sort order makes
longer codelens, because that kind of issue has already been accounted for.
My suspicion was that the +1 bias was beating the +0.5 bias because of the difference between normalized counts and unnormalized original counts.
Recall that to construct the table we had to make normalized frequences Fs that sum to L. These, however, are not the true symbol frequencies (except in synthetic tests). The true symbol frequencies
had to be scaled to sum to L to make the Fs.
The largest coding error from frequency scaling is on the least probable symbols. In fact the very worst case is symbols that occur only once in a very large file. eg. in a 1 MB file a symbol occurs
once; its true probability is 2^-20 and it should be coded in 20 bits. But we scale the frequencies to sum to 1024 (for example), it still must get a count of 1, so it's coded in 10 bits.
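To put numbers on that worst case (a trivial check, using the same L = 1024 as the examples in this post):

```python
import math

# a symbol seen once in a 1 MB file: true probability 2^-20
true_bits = -math.log2(1.0 / (1 << 20))   # ideal code length: 20 bits
# after scaling counts to sum to L = 1024, its count is clamped to 1
scaled_bits = -math.log2(1.0 / 1024)      # actual code length: 10 bits
print(true_bits, scaled_bits)  # 20.0 10.0
```

So every occurrence of such a symbol is coded 10 bits too short relative to its true information content, which is the coding error the sort-order bias interacts with.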
What the +1 bias does is take the least probable symbols and push them to the end of the table, which maximizes the number of bits they take to code. If the {Fs} were the true frequencies, this would
be bad, and the + 0.5 bias would be better. But the {Fs} are not the true frequencies.
This raises the question - could we make the sort order from the true frequencies instead of the scaled ones? Yes, but you would then have to either transmit the true frequencies to the decoder, or
transmit the sort order. Either way takes many more bits than transmitting the scaled frequencies. (in fact in the real world you may wish to transmit even approximations of the scaled frequencies).
You must ensure the encoder and decoder use the same frequencies so they build the same sort order.
Anyway, I tested this hypothesis by making buffers synthetically by drawing symbols from the {Fs} random distribution. I took my large testset, for each file I counted the real histogram, made the
scaled frequencies {Fs}, then regenerated the buffer from the frequencies {Fs} so that the statistics match the data exactly. I then ran tANS on the synthetic buffers and on the original file data :
synthetic data :
total bytes out : 146068969.00 bias=0.5
total bytes out : 146117818.63 bias=1.0
real data :
total bytes out : 144672103.38 bias=0.5
total bytes out : 144524757.63 bias=1.0
On the synthetic data, bias=0.5 is in fact slightly better. On the real data, bias=1.0 is slightly better. This confirms that the difference between the normalized counts & unnormalized counts is in
fact the origin of 1.0's win in my previous tests, but doesn't necessarily confirm my guess for why.
An idea for an alternative to the bias=1 heuristic is you could use bias=0.5 , but instead of using the Fs for the sort order, use the estimated original count before normalization. That is, for each
Fs you can have a probability model of what the original count was, and select the maximum-likelihood count from that. This is exactly analoguous to restoring to expectation rather than restoring to
middle in a quantizer.
Using bias=1.0 and measuring state occurance counts, we get this :
Which mostly has the same 1/x curve, but with a funny tail at the end. Note that these graphs are generated on synthetic data.
I'm now convinced that the 0.5 bias is "right". It minimizes measured output len on synthetic data where the Fs are the true frequencies. It centers each symbol's occurances in the output string. It
reproduces the 1/x distribution of state frequencies. However there is still the missing piece of how to derive it from first principles.
While I was at it, I gathered the average number of bits output when coding from each state. If you're following along with Yann's blog he's been explaining FSE in terms of this. tANS outputs bits to
get the state x down into the coding range Is for the next symbol. The Is are always lower than I (L), so you have to output some bits to scale down x to reach the Is. x starts in [L,2L) and we have
to output bits to reach [Fs,2Fs) ; the average number of bits required is like log2(L/Fs) which is log2(1/P) which is the code length we want. Because our range is [L,2L) we know the average output
bit count from each state must differ by 1 from the top of the range to the bottom. In fact it looks like this :
Another way to think about it is that at state=L , the state is empty. As state increases, it is holding some fractional bits of information in the state variable. That number of fraction bits goes
from 0 at L up to 1 at 2L.
Ryg just pointed me at a proof of the 1/x distribution in Moffat's "Arithmetic Coding Revisited" (DCC98).
The "x" in ANS has the same properties as the range ("R") in an arithmetic coder.
The bits of information in x is I ~= log( x )
I is in [0,1] and is a uniform random value, Pr(I) ~= 1
if log(x) has Pr ~= 1 , then Pr(x) must be ~= 1/x
The fact that I is uniform is maybe not entirely obvious; Moffat just hand-waves about it. Basically you're accumulating a random variable into I ( -log2(P_sym) ) and then dropping the integer part;
the result is a fractional part that's random and uniform.
6 comments:
Jarek Duda said...
Hi Charles,
I think your "bias 1 works better for the real data" corresponds to the fact that in real data there happen extremely low probable symbols - bias 1/2 approximates them as 1/L, while bias 1 a bit
If we encode p_s sequence with encoder optimal for q_s distribution, the least probable symbols are the most significant:
deltaH ~ sum_s (p_s-q_s)^2/p_s
So for a good initializer, I would first take the extremely low probable symbols (let say < 0.5/L) as a singletons to the end of the table, and then use precise initialization for the remaining
Additionally, as f[s]/L ~ p[s] quantization is not exact, for example the "bias" could be individually chosen: as a bit larger if p[s] < f[s]/L, or smaller otherwise.
... but the details need further work.
I find the 1/x distribution picture very interesting. Maybe there could be a way to use that in calculating more precisely distribution weight.
I tried to reproduce your result, but so far got only a hint that it was the "general direction", not something as clean as your distribution picture.
Some differences :
- I'm using real-world input, not synthetic
- FSE distribution is not the same as yours, and is likely introducing some noise & distortion.
Anyway, it shows that the probability of a state is not necessarily a clean 1/X distribution graph, and will vary a lot depending on symbol distribution over the state table, something you hinted
into your second graph.
Yes, you only get the perfect 1/x when the statistics of the data match the normalized counts exactly (eg. with synthetic data), and when the sort is "Duda's precise".
I still don't entirely understand the 1/x.
I'd like to see a proof that the optimal sort order must lead to a 1/x distribution, or vice-versa that a 1/x distribution must provide minimum code length.
note that you need to make sure that 'b' is not a power of 2. Otherwise, 'x' will be constant.
and, for the record, here is a C-version of the iteration, showing the emerging 1/x stationary distribution:
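The C listing itself didn't survive the page extraction, so as an illustrative stand-in, here is a Python sketch of the same iteration: accumulate -log2(P_sym) into I, drop the integer part, and look at the distribution of the state x = 2^I. The symbol probability 1/3 (so b = 3, not a power of 2) is an arbitrary choice of mine:

```python
import math

# Accumulate I += -log2(p) and keep only the fractional part. With p = 1/b
# and b not a power of two, the step size log2(b) is irrational, so the
# fractional parts equidistribute: I is uniform on [0, 1), and the state
# x = 2^I is log-uniform on [1, 2) -- i.e., density ~ 1/x.
b = 3
step = math.log2(b)          # -log2(1/b)
I = 0.0
fracs = []
for _ in range(100_000):
    I = (I + step) % 1.0     # drop the integer part (whole bits emitted)
    fracs.append(I)

states = [2.0 ** f for f in fracs]

# Uniform I implies mean ~ 0.5; log-uniform x puts half the mass below sqrt(2).
print(sum(fracs) / len(fracs))
print(sum(x < math.sqrt(2) for x in states) / len(states))
```

Note that if b were a power of 2 the step would be an integer, the fractional part would never move, and x would be constant, exactly as the comment above warns.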
Chapter 13: Population Growth
Anastasia Chouvalova and Lisa Limeri
By the end of this section, students will be able to…
• Contrast linear, exponential, and logistic population growth models, including how rates of growth differ across the models and how density-dependent and density-independent factors can influence
the rate of population growth.
• Interpret the graphs, variables, and terms in the linear, exponential, and logistic population growth models to describe and predict how population size and growth rate will change over time.
Population Growth
Population ecologists make use of a variety of methods to model population dynamics mathematically. These more precise models can then be used to accurately describe changes occurring in a population
and better predict future changes. Certain long-accepted models are now being modified or even abandoned due to their lack of predictive ability, and scholars strive to create effective new models.
The simplest models of population growth use deterministic equations (equations that do not account for random events) to describe the rate of change in the size of a population over time. The first
of these models, exponential growth, describes populations that increase in numbers without any limits to their growth. The second model, logistic growth, introduces limits to reproductive growth
that become more intense as the population size increases. Neither model adequately describes natural populations, but they provide points of comparison.
Exponential Growth
Charles Darwin, in his theory of natural selection, was greatly influenced by the work of Thomas Malthus. Malthus published a book in 1798 stating that populations with unlimited natural resources
grow very rapidly, which represents exponential growth, and then population growth decreases as resources become depleted, indicating logistic growth.
The best example of exponential growth is seen in bacteria. Bacteria reproduce by prokaryotic fission. This division takes about an hour for many bacterial species. If 1,000 bacteria are placed in a
large flask with an unlimited supply of nutrients (so the nutrients will not become depleted), after an hour, there is one round of division and each organism divides, resulting in 2,000 organisms—an
increase of 1,000. In another hour, each of the 2,000 organisms will double, producing 4,000, an increase of 2,000 organisms. After the third hour, there should be 8,000 bacteria in the flask, an
increase of 4,000 organisms. The important concept of exponential growth is the accelerating population growth rate—the number of organisms added in each reproductive generation—that is, it is
increasing at a greater and greater rate. After 1 day and 24 of these cycles, the population would have increased from 1,000 to more than 16 billion. When the population size, N, is plotted over
time, a J-shaped growth curve is produced (Figure 13.1).
Figure 13.1 When resources are unlimited, populations exhibit exponential growth, resulting in a J-shaped curve (left). When resources are limited, populations exhibit logistic growth (right). In
logistic growth, population expansion decreases as resources become scarce, and it levels off when the carrying capacity of the environment is reached, resulting in an S-shaped curve.
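The arithmetic in the bacteria example can be verified directly (this snippet is an illustrative check, not part of the original chapter):

```python
# Doubling 1,000 bacteria once per hour for 24 hours.
population = 1000
for hour in range(24):
    population *= 2
print(population)  # 16,777,216,000 -- "more than 16 billion"
```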
Why is the example about bacterial proliferation, given above, a good example of exponential growth? Select all that apply.
A. There is no limit in nutrient supply.
B. There is a strict limit in nutrient supply.
C. The rate at which the population grows increases over time.
D. The rate at which the population grows is constant over time.
E. The rate at which the population grows decreases over time.
F. Plotting the growth trend over time would result in an S-shaped curve.
G. Plotting the growth trend over time would result in a J-shaped curve.
The bacteria example is a poor representation of the real world, in which resources are limited. Furthermore, some bacteria will die during the experiment and thus not reproduce, lowering the growth rate. Therefore, when calculating the growth rate of a population, the death rate (D) (number of organisms that die during a particular time interval) is subtracted from the birth rate (B) (number of organisms that are born during that interval). This is shown in the following formula:

ΔN/ΔT = B - D
The birth rate is usually expressed on a per capita (for each individual) basis. Thus, B (birth rate) = bN (the per capita birth rate “b” multiplied by the number of individuals “N”) and D (death
rate) = dN (the per capita death rate “d” multiplied by the number of individuals “N”). Additionally, ecologists are interested in the population at a particular point in time, an infinitely small
time interval. For this reason, the terminology of differential calculus is used to obtain the “instantaneous” growth rate, replacing the change in number and time with an instant-specific
measurement of number and time.

dN/dT = bN - dN
Notice that the “d” associated with the first term refers to the derivative (as the term is used in calculus) and is different from the death rate, also called “d.” The difference between birth and death rates is further simplified by substituting the term “r” (intrinsic rate of increase) for the relationship between birth and death rates:

dN/dT = rN

The value of “r” can be positive, meaning the population is increasing in size; or negative, meaning the population is decreasing in size; or zero, where the population’s size is unchanging, a condition known as zero population growth. A further refinement of the formula recognizes that different species have inherent differences in their intrinsic rate of increase (often thought of as the potential for reproduction), even under ideal conditions. Obviously, a bacterium can reproduce more rapidly and have a higher intrinsic rate of growth than a human. The maximal growth rate for a species is its biotic potential, or r[max], thus changing the equation to:

dN/dT = r[max]N
What does the term r represent in exponential population growth equations? r represents the…
A. birth rate of the population
B. death rate of the population
C. number of individuals in the population
D. intrinsic rate of increase in the population
Logistic Growth
Exponential growth is possible only when infinite natural resources are available; this is not the case indefinitely in the real world. Charles Darwin recognized this fact in his description of the
“struggle for existence,” which states that individuals will compete (with members of their own or other species) for limited resources. The successful ones will survive to pass on their own
characteristics and traits (which we know now are transferred by genes) to the next generation at a greater rate (natural selection). To model the reality of limited resources, population ecologists
developed the logistic growth model.
Carrying Capacity and the Logistic Model
In the real world, with its limited resources, exponential growth cannot continue indefinitely. Exponential growth may occur in environments where there are few individuals and plentiful resources,
but when the number of individuals gets large enough, resources will be depleted, slowing the growth rate. Eventually, the growth rate will plateau or level off (Figure 13.1, right). This population
size, which represents the maximum population size that a particular environment can support, is called the carrying capacity, or K.
The formula we use to calculate logistic growth adds the carrying capacity as a moderating force in the growth rate. The expression “K – N” indicates how many individuals could be added to a population at a given stage, and “K – N” divided by “K” is the fraction of the carrying capacity available for further growth. Thus, the exponential growth model is restricted by this factor to generate the logistic growth equation:

dN/dT = r[max]N × (K - N)/K
Notice that when N is very small, (K-N)/K becomes close to K/K or 1, and the right side of the equation reduces to r[max]N, which means the population is growing exponentially and is not influenced
by carrying capacity. On the other hand, when N is large, (K-N)/K comes close to zero, which means that population growth will be slowed greatly or even stopped. Thus, population growth is greatly
slowed in large populations by the carrying capacity, K. This model also allows for the possibility of negative population growth, or a population decline. This occurs when the number of individuals in the population exceeds the carrying capacity (because the value of (K-N)/K is negative).
A graph of this equation yields an S-shaped curve (Figure 13.1, right), and it is a more realistic model of population growth than exponential growth. There are three different sections to an
S-shaped curve. Initially, growth is exponential because there are few individuals and thus ample resources available. Then, as resources begin to become limited, the growth rate decreases. Finally,
growth levels off at the carrying capacity of the environment, with little change in population size over time.
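The contrast between the two models can be sketched numerically. The following simulation uses simple Euler steps; the values of r_max, K, and the starting population are illustrative choices, not data from the text:

```python
# Euler-step comparison of exponential (dN/dt = r*N) and logistic
# (dN/dt = r*N*(K - N)/K) growth. Parameter values are illustrative.
r_max = 0.5            # intrinsic rate of increase
K = 1000.0             # carrying capacity
dt = 0.01              # time step
n_exp = n_log = 10.0   # starting population size

for _ in range(2000):  # simulate 20 time units
    n_exp += r_max * n_exp * dt
    n_log += r_max * n_log * (K - n_log) / K * dt

print(round(n_exp))  # grows without bound (J-shaped curve)
print(round(n_log))  # levels off just below K = 1000 (S-shaped curve)
```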
Role of Intraspecific Competition
The logistic model assumes that every individual within a population will have equal access to resources and, thus, an equal chance for survival. For plants, the amount of water, sunlight, nutrients,
and the space to grow are the important resources, whereas in animals, important resources include food, water, shelter, nesting space, and mates.
In the real world, phenotypic variation among individuals within a population means that some individuals will be better adapted to their environment than others. The resulting competition between
population members of the same species for resources is termed intraspecific competition (intra- = “within”; -specific = “species”). Intraspecific competition for resources may have little effect on populations that are well below their carrying capacity—resources are plentiful and all individuals can obtain what they need. However, as population size increases, this competition intensifies. In
addition, the accumulation of waste products can reduce an environment’s carrying capacity.
Examples of Logistic Growth
Yeast, a microscopic fungus used to make bread and alcoholic beverages, exhibits the classical S-shaped curve when grown in a test tube (Figure 13.2a). Its growth levels off as the population
depletes the nutrients. In the real world, however, there are variations to this idealized curve. Examples in wild populations include sheep and harbor seals (Figure 13.2b). In both examples, the
population size exceeds the carrying capacity for short periods of time and then falls below the carrying capacity afterwards. This fluctuation in population size continues to occur as the population
oscillates around its carrying capacity.
In the logistic population growth equation, what does K represent? Select all correct answers.
A. Rate at which the population changes over time
B. Carrying capacity
C. Biotic potential
D. The greatest population of a species that can be maintained by a given environment
E. The maximum rate at which a species can grow in a given environment.
Figure 13.2
What is the primary difference between exponential and logistic population growth models?
A. Exponential growth models account for resource limitations whereas logistic growth models do not.
B. Logistic growth models account for resource limitations whereas exponential growth models do not.
C. The exponential growth model describes growth in bacteria whereas the logistic growth model describes growth in all other organisms.
D. The exponential growth model uses r while the logistic growth model uses r[max]
Population dynamics and regulation
The logistic model of population growth, while valid in many natural populations and a useful model, is a simplification of real-world population dynamics. Implicit in the model is that the carrying
capacity of the environment does not change, which is not the case. Carrying capacity for naturally occurring populations can vary annually: for example, some summers are hot and dry whereas others
are cold and wet. In many areas, the carrying capacity during the winter is much lower than it is during the summer. Also, natural events such as earthquakes, volcanoes, and fires can alter an
environment and hence its carrying capacity. Additionally, populations do not usually exist in isolation. They engage in interspecific competition: that is, they share the environment with other
species competing for the same resources. These factors are also important to understanding how a specific population will grow.
Population growth is regulated in a variety of ways. These are grouped into density-dependent factors, in which the density of the population at a given time affects growth rate and mortality, and
density-independent factors, which influence mortality in a population regardless of population density. Note that in the former, the effect of the factor on the population depends on the density of
the population at onset. Conservation biologists want to understand both types because this helps them manage populations and prevent extinction or overpopulation.
Density-dependent Regulation
Most density-dependent factors are biological in nature (biotic), and include predation, inter- and intraspecific competition, accumulation of waste, and diseases such as those caused by parasites.
Usually, the denser a population is, the greater its mortality rate. For example, during intra- and interspecific competition, the reproductive rates of the individuals will usually be lower,
reducing their population’s rate of growth. In addition, low prey density increases the mortality of its predator because it has more difficulty locating its food source.
An example of density-dependent regulation is shown in Figure 13.3 with results from a study focusing on the giant intestinal roundworm (Ascaris lumbricoides), a parasite of humans and other mammals.
Denser populations of the parasite exhibited lower fecundity; they contained fewer eggs. One possible explanation for this is that females would be smaller in more dense populations (due to limited
resources) and that smaller females would have fewer eggs. This hypothesis was tested and disproved in a 2009 study which showed that female weight had no influence. The actual cause of the
density-dependence of fecundity in this organism is still unclear and awaiting further investigation.
Figure 13.3 In this population of roundworms, fecundity (number of eggs) decreases with population density.
Density-Independent Regulation and Interaction with Density-Dependent Factors
Many factors, typically abiotic (e.g., physical or chemical), influence the mortality of a population regardless of its density, including weather, natural disasters, and pollution. An individual
deer may be killed in a forest fire regardless of how many deer happen to be in that area. Its chances of survival are the same whether the population density is high or low. The same holds true for
cold winter weather.
In real-life situations, population regulation is very complicated and density-dependent and independent factors can interact. A dense population that is reduced in a density-independent manner by
some environmental factor(s) will be able to recover differently than a sparse population. For example, a population of deer affected by a harsh winter will recover faster if there are more deer
remaining to reproduce.
Why Did the Woolly Mammoth Go Extinct?
Figure 13.4. (a) 1916 mural of a mammoth herd from the American Museum of Natural History, (b) the only stuffed mammoth in the world, from the Museum of Zoology located in St. Petersburg, Russia, and
(c) a one-month-old baby mammoth, named Lyuba, discovered in Siberia in 2007. (credit a: modification of work by Charles R. Knight; credit b: modification of work by “Tanapon”/Flickr; credit c:
modification of work by Matt Howry)
It’s easy to get lost in the discussion about why dinosaurs went extinct 65 million years ago. Was it due to a meteor slamming into Earth near the coast of modern-day Mexico, or was it from some
long-term weather cycle that is not yet understood? One hypothesis that will never be proposed is that humans had something to do with it. Mammals were small, insignificant creatures of the forest 65
million years ago, and no humans existed. Scientists are continually exploring these and other theories.
Woolly mammoths began to go extinct much more recently, about 10,000 years ago, when they shared the Earth with humans who were no different anatomically than humans today (Figure 13.4). Mammoths
survived in isolated island populations as recently as 1700 BC. We know a lot about these animals from carcasses found frozen in the ice of Siberia and other regions of the north. Scientists have
sequenced at least 50 percent of its genome and believe mammoths are between 98 and 99% identical to modern elephants.
It is commonly thought that climate change and human hunting led to their extinction. A 2008 study estimated that climate change reduced the mammoth’s range from 3,000,000 square miles 42,000 years
ago to 310,000 square miles 6,000 years ago (Nogués-Bravo et al. 2008). It is also well documented that humans hunted these animals. A 2012 study showed that no single factor was exclusively
responsible for the extinction of these magnificent creatures. In addition to human hunting, climate change, and reduction of habitat, these scientists demonstrated another important factor in the
mammoth’s extinction was the migration of humans across the Bering Strait to North America during the last ice age 20,000 years ago.
The maintenance of stable populations was and is very complex, with many interacting factors determining the outcome. It is important to remember that humans are also part of nature. We once
contributed to a species’ decline using only primitive hunting technology.
It is thought that climate change contributed to the extinction of Woolly Mammoths. Which of the following describes climate change?
A. An exponential growth factor
B. A logistic growth factor
C. A density-dependent regulator
D. A density-independent regulator
Adapted from
Clark, M.A., Douglas, M., and Choi, J. (2018). Biology 2e. OpenStax. Retrieved from https://openstax.org/books/biology-2e/pages/45-3-environmental-limits-to-population-growth
Fisher, M.R., and Editor. (n.d.) Environmental Biology. PressBooks. Retrieved from https://iu.pressbooks.pub/environmentalbiology/chapter/4-2-population-growth-and-regulation/
In previous articles, I've discussed empirical Bayesian estimation for the beta-binomial model. Empirical Bayesian analysis is useful, but it's only an approximation to the full hierarchical Bayesian analysis. In this post, I'm going to work through the entire process of doing an equivalent full hierarchical Bayesian analysis with MCMC, from looking at the data and picking a model to creating the MCMC to checking the results. There are, of course, great packages and programs out there that will fit the MCMC for you, but I want to give a basic and complete "under the hood" example.
Before I get started, I want to be clear that coding a Bayesian analysis with MCMC from scratch involves many choices and multiple checks at almost all levels. I'm going to hand wave some choices
based on what I know will work well (though I'll try to be clear where and why I'm doing so) and I'm not going to attempt to show every possible way of checking an MCMC procedure in one post - so
statistics such as $\hat{R}$ and effective sample size will not be discussed. For a fuller treatment of Bayesian estimation using MCMC, I recommend Gelman et al.'s Bayesian Data Analysis and/or Carlin and Louis's Bayesian Methods for Data Analysis.
As usual, all my code and data can be found on my github.
The Data and Notation
The goal is to fit a hierarchical model to batting averages in the 2015 season. I'm going to limit my data set to only the batting averages of all MLB hitters (excluding pitchers) who had at least
300 AB, as those who do not meet these qualifications can arguably be said to come from a different "population" of players. This data was collected from fangraphs.com and can be seen in the
histogram below.
For notation, I'm going let $i$ index MLB players in the sample and define $\theta_i$ as a player's "true" batting average in 2015. The goal is to use the observed number of hits $x_i$ in $n_i$
at-bats (AB) to estimate $\theta_i$ for player $i$. I'll assume that I have $N$ total players - in 2015, there were $N = 254$ non-pitchers with at least 300 AB.
I'm also going to use a $\sim$ over a variable to represent the collection of statistics over all players in the sample. For example, $\tilde{x} = \{ x_1, x_2, ..., x_N\}$ and $\tilde{\theta} = \{\theta_1, \theta_2, ..., \theta_N\}$.
Lastly, when we get to the MCMC part, we're going to take samples from the posterior distributions rather than calculating them directly. I'm going to use $\mu^*_j$ to represent the set of samples
from the posterior distribution for $\mu$, where $j$ indexes 1 to however many samples the computer is programmed to obtain (usually a very large number, since computation is relatively cheap these
days), and similarly $\phi^*_j$ and $\theta^*_{i,j}$ for samples from the posterior distribution of $\phi$ and $\theta_i$, respectively.
The Model
First, the model must be specified. I'll assume that for each at-bat, a given player has identical probability $\theta_i$ of getting a hit, independent of other at-bats. The distribution of the
total number of hits in $n_i$ at-bats is then binomial.
$x_i \sim Bin(n_i, \theta_i)$
For the distribution of the batting averages $\theta_i$ themselves, I'm going to use a beta distribution. Looking at the histogram of the data, it looks relatively unimodal and bell-shaped, and batting averages by definition must be between 0 and 1. Keep in mind that the distribution of observed batting averages $x_i/n_i$ is not the same as the distribution of true batting averages $\theta_i$, but even after taking into account the binomial variation around the true batting averages, the distribution of the $\theta_i$ should also be unimodal, roughly bell-shaped, and bounded by 0 and 1. The beta distribution - bounded by 0 and 1 by definition - will be able to take that shape (though others have plausibly argued that a beta is not entirely correct).
Most people are familiar with the beta distribution in terms of $\alpha$ and $\beta$:
$\theta_i \sim Beta(\alpha, \beta)$
There isn't anything wrong with coding an MCMC in this form (and it would almost certainly work well in this scenario), but I know from experience that a different parametrization works better - I'm going to use the beta distribution with parameters $\mu$ and $\phi$:
$\theta_i \sim Beta(\mu, \phi)$
where $\mu$ and $\phi$ are given in terms of $\alpha$ and $\beta$ as
$\mu = \dfrac{\alpha}{\alpha + \beta}$
$\phi = \dfrac{1}{\alpha + \beta + 1}$
In this parametrization, $\mu$ represents the expected value $E[\theta_i]$ of the beta distribution - the true league mean batting average - and $\phi$, known formally as the "dispersion parameter,"
is the correlation between two individual at-bats from the same randomly chosen player - in sabermetric speak, it's how much a hitter's batting average has "stabilized" after a single at-bat. The
value of $\phi$ controls how spread out the $\theta_i$ are around $\mu$.
The advantage of using this parametrization instead of the traditional one is that both $\mu$ and $\phi$ are bounded between 0 and 1 (whereas $\alpha$ and $\beta$ can take any value from 0 to $\infty$) and a closed parameter space makes the process of specifying priors easier and will improve the convergence of the MCMC algorithm later on.
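As a sanity check of this parametrization, here is a small sketch (the helper names are my own) converting between $(\alpha, \beta)$ and $(\mu, \phi)$ and back:

```python
def ab_to_muphi(alpha, beta):
    """Map the traditional beta parameters to the (mean, dispersion) pair."""
    mu = alpha / (alpha + beta)
    phi = 1.0 / (alpha + beta + 1.0)
    return mu, phi

def muphi_to_ab(mu, phi):
    """Invert the map using alpha + beta = (1 - phi) / phi."""
    total = (1.0 - phi) / phi
    return mu * total, (1.0 - mu) * total

mu, phi = ab_to_muphi(2.0, 5.0)
print(mu, phi)               # 2/7 and 1/8
print(muphi_to_ab(mu, phi))  # recovers (2.0, 5.0) up to rounding
```

The identity $\alpha + \beta = (1-\phi)/\phi$ is the same quantity that shows up repeatedly in the conditional distributions below.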
Finally, priors must be chosen for the parameters $\mu$ and $\phi$. I'm going to lazily choose diffuse beta priors for both.
$\mu \sim Beta(0.5,0.5)$
$\phi \sim Beta(0.5,0.5)$
The advantage of choosing beta distributions for both (possible with the parametrization I used!) is that both priors are proper (in the sense of being valid probability density functions), and
proper priors always yield proper posteriors - so that eliminates one potential problem to worry about. These prior distributions are definitely arguable - they put a fair amount of probability at
the ends of the distributions, and I know for a fact that the true league mean batting average isn't actually 0.983 or 0.017, but I wanted to use something that worked well in the MCMC procedure and
wasn't simply a flat uniform prior between 0 and 1.
The Math
Before jumping into the code, we need to do some math. Mass functions and densities of the binomial distribution for the $x_i$, beta distributions for $\theta_i$ (in terms of $\mu$ and $\phi$), and beta priors for $\mu$ and $\phi$ are given by
$p(x_i | n_i, \theta_i) = \displaystyle {n_i \choose x_i} \theta_i^{x_i} (1-\theta_i)^{n_i - x_i}$
$p(\theta_i | \mu, \phi) = \dfrac{\theta_i^{\mu (1-\phi)/\phi - 1} (1-\theta_i)^{(1-\mu) (1-\phi)/\phi - 1}}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)}$
$\pi(\mu) = \dfrac{\mu^{-0.5}(1-\mu)^{-0.5}}{\beta(0.5,0.5)}$
$\pi(\phi) = \dfrac{\phi^{-0.5}(1-\phi)^{-0.5}}{\beta(0.5,0.5)}$
From Bayes' theorem, the joint posterior density of $\mu$, $\phi$, and all $N = 254$ of the $\theta_i$ is given by
$p(\tilde{\theta}, \mu, \phi | \tilde{x}, \tilde{n}) = \dfrac{p( \tilde{x}, \tilde{n}| \tilde{\theta} )p(\tilde{\theta} | \mu, \phi) \pi(\mu) \pi(\phi)}{\int \int ... \int \int p( \tilde{x}, \tilde{n}| \tilde{\theta} )p(\tilde{\theta} | \mu, \phi) \pi(\mu) \pi(\phi) \, d\tilde{\theta} \, d\mu \, d\phi}$
The $...$ in the integrals means that every single one of the $\theta_i$ must be integrated out as well as $\mu$ and $\phi$, so the numerical integration here involves 256 dimensions. This is not
numerically tractable, hence Markov chain Monte Carlo will be used instead.
The goal of Markov chain Monte Carlo is to draw a "chain" of samples $\mu^*_j$, $\phi^*_j$, and $\theta^*_{i,j}$ from the posterior distribution $p(\tilde{\theta}, \mu, \phi | \tilde{x}, \tilde{n})$.
This is going to be accomplished in iterations, where at each iteration $j$ the distribution of the samples depends only on the values at the previous iteration $j-1$ (this is the "Markov" property
of the chain). There are two basic "building block" techniques that are commonly used to do this.
The first technique is called the Gibbs sampler. The full joint posterior $p(\tilde{\theta}, \mu, \phi | \tilde{x}, \tilde{n})$ may not be known, but suppose that given values of the other parameters, the conditional posterior distribution $p(\tilde{\theta} | \mu, \phi, \tilde{x}, \tilde{n})$ is known - if so, it can be used to simulate $\tilde{\theta}$ values from $p(\tilde{\theta} | \mu^*_j, \phi^*_j, \tilde{x}, \tilde{n})$.
Looking at the joint posterior described above, the denominator of the posterior density (after performing all integrations) is just a normalizing constant, so we can focus on the numerator:
$p(\tilde{\theta}, \mu, \phi | \tilde{x}, \tilde{n}) \propto p( \tilde{x}, \tilde{n}| \tilde{\theta} )p(\tilde{\theta} | \mu, \phi) \pi(\mu) \pi(\phi) $
$= \displaystyle \prod_{i = 1}^N \left( {n_i \choose x_i} \dfrac{\theta_i^{x_i + \mu (1-\phi)/\phi - 1} (1-\theta_i)^{n_i - x_i + (1-\mu) (1-\phi)/\phi - 1}}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)} \right) \dfrac{\mu^{-0.5}(1-\mu)^{-0.5}}{\beta(0.5,0.5)} \dfrac{\phi^{-0.5}(1-\phi)^{-0.5}}{\beta(0.5,0.5)}$
From here, we can ignore any of the terms above that do not have a $\phi$, a $\mu$, or a $\theta_i$ in them, since those will either cancel out or remain constants in the full posterior as well:
$\displaystyle \prod_{i = 1}^N \left( \dfrac{\theta_i^{x_i + \mu (1-\phi)/\phi - 1} (1-\theta_i)^{n_i - x_i + (1-\mu) (1-\phi)/\phi - 1}}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)} \right) \mu^{-0.5}(1-\mu)^{-0.5}\phi^{-0.5}(1-\phi)^{-0.5}$
Now we're going to check and see if there are any terms that, when looked at as variables with everything else treated as a constant, take the form of a recognizable distribution. It turns out that the function:
$\theta_i^{x_i + \mu (1-\phi)/\phi - 1} (1-\theta_i)^{n_i - x_i + (1-\mu) (1-\phi)/\phi - 1}$
is the kernel of an un-normalized beta distribution for $\theta_i$ with parameters
$\alpha_i = x_i + \mu \left(\dfrac{1-\phi}{\phi}\right)$
$\beta_i = n_i - x_i + (1-\mu) \left(\dfrac{1-\phi}{\phi}\right) $
since we are assuming $\mu$ and $\phi$ are fixed in the conditional distribution. Hence, we can say that the conditional distribution of the $\theta_i$ given $\mu$, $\phi$, and the data is beta.
This fact can be used in the MCMC to draw an observation $\theta^*_{i,j}$ from the posterior distribution for each $\theta_i$ given draws $\mu^*_j$ and $\phi^*_j$ from the posterior distributions for $\mu$ and $\phi$:
$\theta^*_{i,j} \sim Beta\left(x_i + \mu^*_j \left(\dfrac{1-\phi^*_j}{\phi^*_j}\right), n_i - x_i + (1- \mu^*_j )\left(\dfrac{1-\phi^*_j}{\phi^*_j}\right) \right)$
Note that this formulation uses the traditional $\alpha, \beta$ parametrization. This is a "Gibbs step" for the $\theta_i$.
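In code, this Gibbs step is a single beta draw once the parameters are converted. This sketch uses Python's standard library; the hitter's line (100 hits in 400 AB) and the current draws $\mu^*_j = 0.26$, $\phi^*_j = 0.005$ are hypothetical values of mine:

```python
import random

def gibbs_theta(x, n, mu, phi, rng):
    """One Gibbs draw of theta_i from its conditional posterior:
    Beta(x + mu*(1-phi)/phi, n - x + (1-mu)*(1-phi)/phi)."""
    total = (1.0 - phi) / phi  # alpha + beta of the population-level beta
    return rng.betavariate(x + mu * total, n - x + (1.0 - mu) * total)

rng = random.Random(0)
# Hypothetical hitter: 100 hits in 400 AB, with current posterior draws
# mu* = 0.26 and phi* = 0.005.
draws = [gibbs_theta(100, 400, 0.26, 0.005, rng) for _ in range(50_000)]
print(sum(draws) / len(draws))  # roughly 0.253: the raw 0.250 shrunk toward mu*
```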
Unfortunately, looking at $\mu$ and $\phi$ in isolation doesn't yield a similar outcome - observing just the terms involving $\mu$ and treating everything else as constant, for example, gives the function

$\displaystyle \prod_{i = 1}^N \left( \dfrac{\theta_i^{\mu (1-\phi)/\phi - 1} (1-\theta_i)^{(1-\mu) (1-\phi)/\phi - 1}}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)} \right) \mu^{-0.5}(1-\mu)^{-0.5}$

which is not recognizable as the kernel of any common density. Doing the same thing for $\phi$ gives a nearly identical function. Hence, the Gibbs technique won't be used for $\mu$ and $\phi$.
One advantage, however, of recognizing that the conditional distribution of the $\theta_i$ given all other parameters is beta is that we can integrate the $\theta_i$ out in the likelihood in order to
get at the distributions of $\mu$ and $\phi$ more directly:
$\displaystyle p(x_i, n_i | \mu, \phi) = \int_0^1 p(x_i, n_i | \theta_i) p(\theta_i | \mu, \phi) d\theta_i = \int_0^1 {n_i \choose x_i} \dfrac{\theta_i^{x_i + \mu (1-\phi)/\phi - 1} (1-\theta_i)^{n_i - x_i + (1-\mu) (1-\phi)/\phi - 1}}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)} d\theta_i $

$ = \displaystyle {n_i \choose x_i} \dfrac{\beta(x_i + \mu (1-\phi)/\phi, n_i - x_i + (1-\mu) (1-\phi)/\phi)}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)}$
In fact, we can do this for every single one of the $\theta_i$ in the formula above and rewrite the posterior function just in terms of $\mu$ and $\phi$:
$p(\mu, \phi | \tilde{x}, \tilde{n}) \propto \displaystyle \prod_{i = 1}^N \left( \dfrac{\beta(x_i + \mu (1-\phi)/\phi, n_i - x_i + (1-\mu) (1-\phi)/\phi)}{\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)} \right) \mu^{-0.5}(1-\mu)^{-0.5}\phi^{-0.5}(1-\phi)^{-0.5}$
This leads directly into the second (and more general) technique for obtaining draws from the posterior distribution: the Metropolis-Hastings algorithm. Suppose that instead of the full posterior $p(\mu, \phi | \tilde{x}, \tilde{n})$, you have a function that is proportional to the full posterior (like the numerator above):

$h(\mu, \phi | \tilde{x}, \tilde{n}) \propto p(\mu, \phi | \tilde{x}, \tilde{n})$
It's possible to construct a Markov chain of $\mu^*_j$ samples using the following steps:
1. Simulate a candidate value $\mu^*_c$ from some distribution $G(\mu^*_c | \mu^*_{j-1})$
2. Simulate $u$ from a uniform distribution between 0 and 1.
3. Calculate the ratio
$\dfrac{h(\mu^*_{c}, \phi^*_{j-1} | \tilde{x}, \tilde{n})}{h(\mu^*_{j-1}, \phi^*_{j-1} | \tilde{x}, \tilde{n})}$
If this ratio is larger than $u$, accept the candidate value and declare $\mu^*_j = \mu^*_{c}$.
If this ratio is smaller than $u$, reject the candidate value and declare $\mu^*_j = \mu^*_{j-1}$
A nearly identical step may be used to draw a sample $\phi^*_j$, only using $h(\mu^*_{j-1}, \phi^*_{c} | \tilde{x}, \tilde{n})$ instead. Note that at each Metropolis-Hastings step the value from the previous iteration is used, even if a new value for another parameter was accepted in another step.
In practice, there are two things that are very commonly (but not always) done for Metropolis-Hastings steps: first, calculations are generally performed on the log scale, as the computations become much, much more numerically stable. To do this, we simply need to take the log of the function $h(\mu, \phi | \tilde{x}, \tilde{n})$ above:
$m(\mu, \phi | \tilde{x}, \tilde{n}) = \log[h(\mu, \phi | \tilde{x}, \tilde{n})] = \displaystyle \sum_{i = 1}^N \left[ \log(\beta(x_i + \mu (1-\phi)/\phi, n_i - x_i + (1-\mu) (1-\phi)/\phi))\right]$
$- N \log(\beta(\mu (1-\phi)/\phi, (1-\mu) (1-\phi)/\phi)) - 0.5\log(\mu) - 0.5\log(1-\mu) - 0.5\log(\phi) - 0.5\log(1-\phi)$
This $m$ function is called repeatedly throughout the code. Secondly, for the candidate distribution, a normal distribution is used centered at the previous value of the chain, with some pre-chosen
variance $\sigma^2$, which I will explain how to determine in the next section. Using $\mu$ as an example, the candidate distribution would be
$G(\mu^*_c | \mu^*_{j-1}) \sim N(\mu^*_{j -1}, \sigma^2_{\mu})$
Using these two adjustments, the Metropolis-Hastings step for $\mu$ then becomes
1. Simulate a candidate value from a $N(\mu^*_{j-1}, \sigma^2_{\mu})$ distribution
2. Simulate $u$ from a uniform distribution between 0 and 1.
3. If $m(\mu^*_{c}, \phi^*_{j-1} | \tilde{x}, \tilde{n}) - m(\mu^*_{j-1}, \phi^*_{j-1} | \tilde{x}, \tilde{n}) > \log(u)$, accept the candidate value and declare $\mu^*_j = \mu^*_{c}$. Otherwise,
reject the candidate value and declare $\mu^*_j = \mu^*_{j-1}$
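The adjusted step can be sketched in a few lines. This is a minimal Python illustration (the article's actual code is in R); `log_h` here is a stand-in toy log-density on $(0, 1)$, not the real $m$ function:

```python
import math
import random

random.seed(42)

def log_h(mu):
    # Stand-in for m(mu, phi | x, n): an unnormalized Beta(10, 30) log-kernel
    # with the same (0, 1) support, purely for illustration.
    return 9.0 * math.log(mu) + 29.0 * math.log(1.0 - mu)

def mh_step(mu_prev, sigma):
    cand = random.gauss(mu_prev, sigma)            # 1. candidate ~ N(mu_prev, sigma^2)
    if not (0.0 < cand < 1.0):                     # out-of-support candidates rejected
        return mu_prev, False
    u = random.random()                            # 2. u ~ Uniform(0, 1)
    if log_h(cand) - log_h(mu_prev) > math.log(u): # 3. accept/reject on the log scale
        return cand, True
    return mu_prev, False

mu, accepted = 0.5, 0
for _ in range(5000):
    mu, ok = mh_step(mu, 0.1)
    accepted += ok
print(0 < accepted / 5000 < 1)  # some but not all candidates accepted
```

Note that a rejection leaves the chain at its previous value, so repeated values in the chain are expected and correct.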
With Metropolis-Hastings steps and Gibbs steps, we can create a Markov chain that converges to the posterior distribution.
Choosing Starting Values and Checking Output
Now that we have either the conditional posteriors we need for the Gibbs sampler or a function proportional to them for the Metropolis-Hastings steps, it's time to write code to sample from them.
Each iteration of the MCMC code will perform the following steps:
1. Draw a candidate value $\mu^*_c$ from $N(\mu^*_{j-1}, \sigma^2_{\mu})$
2. Perform a Metropolis-Hastings calculation to determine whether to accept or reject $\mu^*_c$. If accepted, set $\mu^*_j = \mu^*_c$. If rejected, set $\mu^*_j = \mu^*_{j - 1}$
3. Draw a candidate value $\phi^*_c$ from $N(\phi^*_{j-1}, \sigma^2_{\phi})$
4. Perform a Metropolis-Hastings calculation to determine whether to accept or reject $\phi^*_c$. If accepted, set $\phi^*_j = \phi^*_c$. If rejected, set $\phi^*_j = \phi^*_{j - 1}$
5. For each of the $\theta^*_i$, draw a new $\theta^*_{i,j}$ from the conditional beta distribution:
$\theta^*_{i,j} \sim Beta\left(x_i + \mu^*_j \left(\dfrac{1-\phi^*_j}{\phi^*_j}\right), n_i - x_i + (1- \mu^*_j )\left(\dfrac{1-\phi^*_j}{\phi^*_j}\right) \right)$
Again, note that this formulation of the beta distribution uses the traditional $\alpha, \beta$ parametrization.
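Step 5 can be sketched as follows (Python for illustration; standard beta samplers expect the $\alpha, \beta$ parametrization, so the $(\mu, \phi)$ values must be converted first - the numeric values here are illustrative only):

```python
import random

random.seed(1)

def draw_theta(x_i, n_i, mu, phi):
    # Convert the (mu, phi) parametrization to the alpha, beta one that
    # standard beta samplers expect, then take the Gibbs draw.
    scale = (1.0 - phi) / phi
    alpha = x_i + mu * scale
    beta = n_i - x_i + (1.0 - mu) * scale
    return random.betavariate(alpha, beta)

# Illustrative values only (x_i = 172 hits in n_i = 521 AB, as in the article)
theta = draw_theta(172, 521, 0.266, 0.0016)
print(0.0 < theta < 1.0)
```

With these inputs the conditional beta is sharply concentrated near 0.295, which is exactly the shrinkage-toward-the-mean behavior discussed later.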
A problem emerges - we need starting values $\mu^*_1$ and $\phi^*_1$ before we can use the algorithm (starting values for the $\theta^*_{i,1}$ aren't needed - the Gibbs sampler in step 5 above can be
used to simulate them given starting values for the other two parameters). Ideally, you would pick starting values in a high-probability area of the posterior distribution, but if you knew the
posterior distribution you wouldn't be performing MCMC!
You could just pick arbitrary starting points - statistical theory says that no matter what starting values you choose, the distribution of samples from the Markov chain will
converge to the distribution of the posterior you want (assuming certain regularity conditions which I will not go into), but there's no hard and fast rule on how long it will take. If you pick
values extremely far away from the posterior, it could take quite a while for your chain to converge. There's a chance you could have run your code for 10,000 iterations and
not have reached the posterior distribution, and there's no way of knowing since you don't know the posterior to begin with!
Statisticians generally do two things to check that this hasn't occurred:
1. Use multiple starting points to create multiple chains of $\mu^*_j$, $\phi^*_j$, and $\theta^*_{i,j}$ that can be compared (visually or otherwise) to see if they all appear to have converged to
the same area in the parameter space.
2. Use a fixed number of "burn-in" iterations to give the chain a chance to converge to the posterior distribution before taking the "real" draws from the chain.
There is no definite answer on exactly how to pick the different starting points - you could randomly choose points in the parameter space (which is handily confined to between 0 and 1 for the
parametrization I used!), or you could obtain estimates from some frequentist statistical procedure (such as method of moments or marginal maximum likelihood) and use those, or you could pick values
based on your own knowledge of the problem - for example, choosing $\mu^*_1 = 0.265$ based on knowing that the league mean batting average is probably close to that value. No matter how you do it,
starting points should be spread out over the parameter space to make sure the chains aren't all going to the same place just because they started off close to each other.
Two more questions must be answered to perform the Metropolis-Hastings step - how do you choose $\sigma^2_{\mu}$ and $\sigma^2_{\phi}$ in the normal candidate distributions? And how often should you
accept the candidate values?
The answers to these questions are closely tied to each other. For mathematical reasons that I will not go into in this article (and a bit of old habit), I usually aim for an acceptance rate of
roughly around 40%, though the specific value depends on the dimensionality of the problem (see this paper by Gelman, Roberts, and Wilks for more information). In practice, I'm usually not worried if it's 30% or 50% as long as everything else looks okay.
If the acceptance rate is good, then a plot of the value of the chain versus the iteration number (called a "trace plot") should look something like
I've used two chains for $\mu$ here, starting at different points. The "spiky blob" shape is exactly what we're looking for - the values of the chains jump around at a good pace, but still making
large enough jumps to effectively cover the parameter space.
If the acceptance rate is too small or too large, it can be adjusted by changing $\sigma^2$ in the normal candidate distribution. An acceptance rate that is too small means that the chains will not move around the parameter space effectively. If this is the case, a plot of the chain value versus the iteration number looks like
The plot looks nicer visually, but that's not a good thing - sometimes the chains stay at the same value for hundreds of iterations! The solution to this problem is to decrease $\sigma^2$ so that the candidate values are closer to the previous value, and more likely to be accepted.
Conversely, if the acceptance rate is too high then the chains will still explore the parameter space, but much too slowly. A plot of the chain value versus the iteration looks like
In this plot, it looks like the two chains don't converge to the posterior distribution until hundreds of iterations after the initial draws. Furthermore, the chains are jumping to new values at nearly every iteration, but the jumps are so small that it takes an incredibly large number of iterations to explore the parameter space. If this is the case, the solution is to increase $\sigma^2$ so that the candidates are further from the current value, and less likely to be accepted.
The value of $\sigma^2$, then, is often chosen by trial-and-error after the code has been written by manually adjusting the value in multiple runs of the MCMC so that the trace plots have the "spiky
blob" shape and the acceptance rate is reasonable. Through this method, I found that the following candidate distributions for $\mu$ and $\phi$ worked well.
$\mu^*_c \sim N(\mu^*_{j-1}, 0.005^2)$
$\phi^*_c \sim N(\phi^*_{j-1}, 0.001^2)$
The Code
Now that we know the steps the code will take and what inputs are necessary, coding can begin. I typically code in R, and find it useful to write a function that has inputs of data vectors, starting
values for any parameters, and any MCMC tuning parameters I might want to change (such as the number of draws, length of the burn-in period, or the variance of the candidate distributions). In the
code below, I set the burn-in period and number of iterations to default to 1000 and 5000, respectively, and after running the code several times without defaults for candidate variances, I
determined values of $\sigma^2_{\mu}$ and $\sigma^2_{\phi}$ that produced reasonable trace plots and acceptance rates and set those as defaults as well.
For output, I used the list structure in R to return a vector chain of $\mu^*_j$, a vector chain of $\phi^*_j$, a matrix of chains $\theta^*_{i,j}$, and a vector of acceptance rates for the Metropolis-Hastings steps for $\mu$ and $\phi$.
The raw code for the MCMC function is shown below, and annotated code may be found on my Github.
betaBin.mcmc <- function(x, n, mu.start, phi.start, burn.in = 1000, n.draws = 5000, sigma.mu = 0.005, sigma.phi = 0.001) {

  # log of the function proportional to the posterior, m(mu, phi | x, n)
  m = function(mu, phi, x, n) {
    N = length(x)
    l = sum(lbeta(mu*(1-phi)/phi + x, (1-mu)*(1-phi)/phi + n - x)) - N*lbeta(mu*(1-phi)/phi, (1-mu)*(1-phi)/phi)
    p = -0.5*log(mu) - 0.5*log(1-mu) - 0.5*log(phi) - 0.5*log(1-phi)
    return(l + p)
  }

  phi = rep(0, burn.in + n.draws)
  mu = rep(0, burn.in + n.draws)
  theta = matrix(0, length(n), burn.in + n.draws)
  acceptance.mu = 0
  acceptance.phi = 0

  # starting values; the theta chains are initialized with a Gibbs draw
  mu[1] = mu.start
  phi[1] = phi.start
  for (i in 1:length(x)) {
    theta[i, 1] = rbeta(1, mu[1]*(1-phi[1])/phi[1] + x[i], (1-phi[1])/phi[1]*(1-mu[1]) + n[i] - x[i])
  }

  for (j in 2:(burn.in + n.draws)) {
    # carry the previous values forward; overwrite only if a candidate is accepted
    phi[j] = phi[j-1]
    mu[j] = mu[j-1]

    # Metropolis-Hastings step for mu (candidates outside (0, 1) are rejected immediately)
    cand = rnorm(1, mu[j-1], sigma.mu)
    if ((cand > 0) & (cand < 1)) {
      m.old = m(mu[j-1], phi[j-1], x, n)
      m.new = m(cand, phi[j-1], x, n)
      u = runif(1)
      if ((m.new - m.old) > log(u)) {
        mu[j] = cand
        acceptance.mu = acceptance.mu + 1
      }
    }

    # Metropolis-Hastings step for phi
    cand = rnorm(1, phi[j-1], sigma.phi)
    if ((cand > 0) & (cand < 1)) {
      m.old = m(mu[j-1], phi[j-1], x, n)
      m.new = m(mu[j-1], cand, x, n)
      u = runif(1)
      if ((m.new - m.old) > log(u)) {
        phi[j] = cand
        acceptance.phi = acceptance.phi + 1
      }
    }

    # Gibbs step for each theta_i, conditional on the current mu and phi
    for (i in 1:length(n)) {
      theta[i, j] = rbeta(1, (1-phi[j])/phi[j]*mu[j] + x[i], (1-phi[j])/phi[j]*(1-mu[j]) + n[i] - x[i])
    }
  }

  # discard the burn-in draws before returning
  mu <- mu[(burn.in + 1):(burn.in + n.draws)]
  phi <- phi[(burn.in + 1):(burn.in + n.draws)]
  theta <- theta[, (burn.in + 1):(burn.in + n.draws)]

  return(list(mu = mu, phi = phi, theta = theta, acceptance = c(acceptance.mu/(burn.in + n.draws), acceptance.phi/(burn.in + n.draws))))
}
This, of course, is not the only way it may be coded, and I'm sure that others with more practical programming experience could easily improve upon this code. Note that I add an additional wrinkle to
the formulation given in the previous sections to address a practical concern - I immediately reject a candidate value if it is less than 0 or larger than 1. This is not the only possible way to deal
with this potential problem, but works well in my experience, and the acceptance rate and/or starting points can be adjusted if the issue becomes serious.
There is a bit of redundancy in the code - the quantity $m(\mu^*_{j-1}, \phi^*_{j-1} | \tilde{x}, \tilde{n})$ is calculated twice, when it is used identically in both Metropolis-Hastings steps - and I'm inflating the acceptance rate slightly by including the burn-in iterations, but the chains should converge quickly so the effect will be minimal, and more draws can always be taken to minimize the effect.
Though coded in R, the principles should apply no matter which language you use - hopefully you could take this setup and write code in C or python if you wanted to.
The Results
Using the function defined above, I ran three separate chains of 5000 iterations each after a burn-in of 1000 draws. For starting points, I picked values near where I thought the posterior means
would end up, plus values both above and below, to check that all chains converged to the same distributions.
> chain.1 <- betaBin.mcmc(x, n, 0.265, 0.002)
> chain.2 <- betaBin.mcmc(x, n, 0.5, 0.1)
> chain.3 <- betaBin.mcmc(x, n, 0.100, 0.0001)
Checking the acceptance rates for $\mu$ and $\phi$ from each of the three chains, all are reasonable:
> chain.1$acceptance
[1] 0.3780000 0.3613333
> chain.2$acceptance
[1] 0.4043333 0.3845000
> chain.3$acceptance
[1] 0.3698333 0.3768333
(Since the $\theta_i$ were obtained by a Gibbs sampler, they do not have an associated acceptance rate)
Next, plots of the chain value versus iteration for $\mu$, $\phi$, and $\theta_1$ show all three chains appear to have converged to the same distribution, and the trace plots appear to have the
"spiky blob" shape that indicates good mixing:
Hence, we can use our MCMC draws to estimate properties of the posterior. To do this, combine the results of all three chains into one big set of draws for each variable:
mu <- c(chain.1$mu, chain.2$mu, chain.3$mu)
phi <- c(chain.1$phi, chain.2$phi, chain.3$phi)
theta <- cbind(chain.1$theta, chain.2$theta, chain.3$theta)
Statistical theory says that posterior distributions should converge to a normal distribution as the sample size increases. With a sample size of $N = 254$ batting averages, posteriors should be close to normal in the parametrization I used - though normality of the posteriors is in general not a guarantee that everything has worked well, nor is non-normality evidence that something has gone wrong.
First, the posterior distribution for league batting average can be seen just by taking a histogram:
> hist(mu)
The histogram looks almost perfectly normally distributed - about as close to the ideal as is reasonable.
Next, we want to get an estimator for the league mean batting average. There are a few different ways to turn the posterior sample $\mu^*_j$ into an estimator $\hat{\mu}$, but I'll give the simplest
here (and since the posterior distribution looks normal, other methods should give very similar results) - taking the sample average of the $\mu^*_j$ values:
> mean(mu)
[1] 0.2660155
Similarly, we can get an estimate of the standard error for $\hat{\mu}$ and a 95% credible interval for $\mu$ by taking the standard deviation and quantiles from $\mu^*_j$:
> sd(mu)
[1] 0.001679727
> quantile(mu,c(.025,.975))
2.5% 97.5%
0.2626874 0.2693175
For $\phi$, do the same thing - first look at the histogram:
There is one outlier on the high side - which can happen in an MCMC chain simply by chance - and a slight skew to the right, but otherwise, the posterior looks close to normal. The mean, standard
deviation, and a 95% credible interval are given by
> mean(phi)
[1] 0.001567886
> sd(phi)
[1] 0.000332519
> quantile(phi,c(.025,.975))
2.5% 97.5%
0.0009612687 0.0022647623
Furthermore, let's say that instead of $\phi$, I had a particular function of one of the parameters in mind instead - for example, I mentioned at the beginning that $\phi$ is, in sabermetric speak,
the proportion of stabilization after a single at-bat. This can be turned into the general so-called "stabilization point" $M$ by
$M = \dfrac{1-\phi}{\phi}$
and so to get a posterior distribution for $M$, all we need to do is apply this transformation to each draw from $\phi^*_j$. A histogram of $M$ is given by
> hist((1-phi)/phi)
The histogram is skewed clearly to the right, but that's okay since $M$ is not one of the parameters in the model.
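Since $M$ is a deterministic function of $\phi$, a posterior sample for $M$ is obtained simply by applying the transformation to each $\phi^*_j$ draw. A minimal sketch (Python, with made-up draws purely for illustration):

```python
# Transforming posterior draws: each phi draw maps to one M draw, and summaries
# of M come from the transformed sample. The phi values here are made up.
phi_draws = [0.0012, 0.0016, 0.0021, 0.0015, 0.0018]
M_draws = [(1 - p) / p for p in phi_draws]
M_hat = sum(M_draws) / len(M_draws)  # posterior mean estimate of M
print(all(m > 0 for m in M_draws))
```

This "transform the draws, then summarize" pattern works for any function of the parameters, which is one of the main conveniences of having posterior samples.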
An estimate and 95% credible interval for the stabilization point are given by taking the average and quantiles of the transformed values
> mean((1-phi)/phi)
[1] 667.8924
> quantile((1-phi)/phi, c(0.025,0.975))
2.5% 97.5%
440.5474 1039.2918
This estimate is different than the value I gave in my article 2016 Stabilization Points because the calculations in that article used the past six years of data - this calculation only uses one. This is also why the uncertainty is so much larger.
Lastly, we can get at what we really want - estimates of the "true" batting averages $\theta_i$ for each player. I'm going to look at $i = 1$ (the first player in the sample), who happens to be Bryce
Harper, the National League MVP in 2015. His batting average was 0.330 (from $x_1 = 172$ hits in $n_1 = 521$ AB), but the effect of fitting the hierarchical Bayesian analysis is to shrink the
estimate of his "true" batting average $\theta_1$ towards the league mean $\mu$ - and by quite a bit in this case, since Bryce had nearly the largest batting average in the sample. A histogram of the $\theta^*_{1,j}$ shows, again, a roughly normal distribution.
> hist(theta[1,])
and an estimate of his true batting average, standard error of the estimate, and 95% credible interval for the estimate are given by
> mean(theta[1,])
[1] 0.2947706
> sd(theta[1,])
[1] 0.01366782
> quantile(theta[1,], c(0.025,0.975))
2.5% 97.5%
0.2687120 0.3222552
Other functions of the batting averages, functions of the league mean and variance, or posterior predictive calculations can be performed using the posterior samples $\mu^*$, $\phi^*$, and $\theta^*_i$.
Conclusion and Connections
MCMC techniques similar to the ones shown here have become fairly standard in Bayesian estimation, though there are more advanced techniques in use today that build upon these "building block" steps
by, to give one example, adaptively changing the acceptance rate as the code runs rather than guessing-and-checking to find a reasonable value.
The empirical Bayesian techniques from my article Beta-binomial empirical Bayes represent an approximation to this full hierarchical method. In fact, using the empirical Bayesian estimator from that article on the baseball set described in this article gives $\hat{\alpha} =
172.5478$ and $\hat{\beta} = 476.0831$ (equivalent to $\hat{\mu} = 0.266$ and $\hat{\phi} = 0.001539$), and gives Bryce Harper an estimated true batting average of $\theta_1 = 0.2946$, with a 95%
credible interval of $(0.2688, 0.3210)$ - only slightly shorter than the interval from the full hierarchical model.
Lastly, the "regression toward the mean" technique common in sabermetrics also approximates this analysis. Supposing you had a "stabilization point" of around 650 AB for batting averages (650 is
actually way too large, but I'm pulling this number from my calculations above to illustrate a point), then the amount shrunk towards league mean of $\mu \approx 0.266$ is
$\left(\dfrac{521}{521 + 650}\right) \approx 0.4449$
So that the estimate of Harper's batting average is
$0.266 + 0.4449\left(\dfrac{172}{521} - 0.266\right) \approx 0.2945$
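The shrinkage arithmetic above can be checked in a few lines (Python, using the same numbers as in the text):

```python
# Regression-toward-the-mean check: shrink Harper's observed batting average
# toward the league mean using a 650 AB "stabilization point".
ab, hits, league_mean, M = 521, 172, 0.266, 650
weight = ab / (ab + M)                                  # 521 / (521 + 650)
estimate = league_mean + weight * (hits / ab - league_mean)
print(round(weight, 4), round(estimate, 4))  # 0.4449 0.2945
```

The result matches the full hierarchical estimate of 0.2948 and the empirical Bayes estimate of 0.2946 to three decimal places.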
Three methods all going to the same place - all closely related in theory and execution.
Hopefully this helps with understanding MCMC coding. The article ended up much longer than I originally intended, but there were many parts I've gotten used to doing quickly that I realized required a not-so-quick explanation to justify why I'm doing them. As usual, comments and suggestions are appreciated!
These predictions are based on my own silly estimator, which I know can be improved with some effort on my part. There's some work related to this estimator that I'm trying to get published
academically, so I won't talk about the technical details yet (not that they're particularly mind-blowing anyway).
I set the nominal coverage at 95% (meaning the way I calculated it, the intervals should get it right 95% of the time), but based on tests of earlier seasons at this point in the season, the actual coverage is slightly under 94%, with intervals being one game off if and when they are off.
Intervals are inclusive. All win totals assume a 162 game schedule.
\begin{array} {c c c c c c}
\textrm{Team} & \textrm{Lower} & \textrm{Mean} & \textrm{Upper} & \textrm{True Win Total} & \textrm{Current Wins}\\ \hline
ARI & 65 & 79.58 & 94 & 81.81 & 21 \\
ATL & 48 & 61.91 & 77 & 67.95 & 12 \\
BAL & 74 & 89.11 & 104 & 85.19 & 26 \\
BOS & 80 & 94.48 & 109 & 92.65 & 27 \\
CHC & 88 & 102.56 & 117 & 99.31 & 29 \\
CHW & 75 & 89.64 & 104 & 87.36 & 26 \\
CIN & 49 & 63.47 & 78 & 66.52 & 15 \\
CLE & 71 & 85.5 & 100 & 85.03 & 22 \\
COL & 66 & 80.94 & 96 & 80.92 & 21 \\
DET & 66 & 80.55 & 95 & 81.05 & 21 \\
HOU & 57 & 71.08 & 86 & 74.86 & 17 \\
KCR & 64 & 79.12 & 94 & 77.75 & 22 \\
LAA & 62 & 76.87 & 91 & 78.08 & 20 \\
LAD & 68 & 82.33 & 97 & 83.53 & 22 \\
MIA & 67 & 81.46 & 96 & 80.95 & 22 \\
MIL & 57 & 71.51 & 86 & 73.47 & 18 \\
MIN & 47 & 61.06 & 76 & 68.16 & 11 \\
NYM & 72 & 87.09 & 102 & 84.52 & 25 \\
NYY & 63 & 77.99 & 93 & 77.6 & 21 \\
OAK & 58 & 71.82 & 86 & 73.12 & 19 \\
PHI & 67 & 81.6 & 96 & 77.71 & 25 \\
PIT & 69 & 83.7 & 98 & 81.94 & 23 \\
SDP & 59 & 72.83 & 87 & 74.53 & 19 \\
SEA & 76 & 90.55 & 105 & 87.86 & 26 \\
SFG & 72 & 85.92 & 100 & 82.28 & 27 \\
STL & 73 & 87.24 & 102 & 88.18 & 23 \\
TBR & 68 & 82.68 & 98 & 83.92 & 20 \\
TEX & 70 & 84.81 & 99 & 82.1 & 25 \\
TOR & 65 & 79.6 & 94 & 80.45 & 22 \\
WSN & 78 & 92.88 & 107 & 90.45 & 27 \\ \hline\end{array}
As you would expect, it's really, really difficult to predict how many games a team is going to win only a quarter of the way through the season, and intervals are necessarily going to be very wide.
A couple of things stand out, though - at this point we can be confident that the Chicago Cubs will finish above 0.500 and the Minnesota Twins, Cincinnati Reds, and Atlanta Braves will finish below
0.500. For every other team, we just don't have enough information yet.
To explain the difference between "Mean" and "True Win Total" - imagine flipping a fair coin 10 times. The number of heads you expect is 5 - this is what I have called "True Win Total," representing
my best guess at the true ability of the team over 162 games. However, if you pause halfway through and note that in the first 5 flips there were 4 heads, the predicted total number of heads becomes
$4 + 0.5(5) = 6.5$ - this is what I have called "Mean", representing the expected number of wins based on true ability over the remaining schedule added to the current number of wins (from the
beginning of the season until May 22).
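The coin-flip arithmetic above can be made concrete in a couple of lines (Python, just to verify the calculation):

```python
# The coin-flip analogy for "Mean" vs "True Win Total": predicted total heads
# after observing 4 heads in the first 5 of 10 flips of a fair coin.
observed_heads, remaining_flips, p = 4, 5, 0.5
predicted_total = observed_heads + p * remaining_flips
print(predicted_total)  # 6.5
```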
These quantiles are based off of a distribution - I've uploaded a picture of each team's distribution to imgur. The bars in red are the win total values covered by the 95% interval. The blue line
represents my estimate of the team's "True Win Total" based on its performance - so if the blue line is to the left of the peak, the team is predicted to finish "lucky" - more wins than would be
expected based on their talent level - and if the blue line is to the right of the peak, the team is predicted to finish "unlucky" - fewer wins than would be expected based on their talent level. | {"url":"http://www.probabilaball.com/2016/05/","timestamp":"2024-11-06T02:13:06Z","content_type":"text/html","content_length":"105686","record_id":"<urn:uuid:1f1a8fdd-99e3-4c41-a536-3d003a1a2fdf>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00527.warc.gz"}
Discrete Probability
Probability distribution
A probability distribution is a mathematical function that gives the probabilities of different possible outcomes for an experiment. It describes a random phenomenon in terms of its sample space and
the probabilities of events within it. Special distributions are used to compare relative occurrences of many different random values, and can be defined for discrete or continuous variables.
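As a concrete example of a discrete probability distribution, here is a minimal Python sketch of the pmf of a fair six-sided die, checking that the probabilities over the sample space sum to 1 and computing the probability of an event:

```python
from fractions import Fraction

# A discrete probability distribution for a fair six-sided die:
# the pmf assigns 1/6 to each outcome in the sample space {1, ..., 6}.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

assert sum(pmf.values()) == 1                      # probabilities sum to 1
p_even = sum(pmf[k] for k in pmf if k % 2 == 0)    # P(outcome is even)
print(p_even)  # 1/2
```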
2 courses cover this concept
This course dives deep into the role of probability in the realm of computer science, exploring applications such as algorithms, systems, data analysis, machine learning, and more. Prerequisites
include CSE 311, MATH 126, and a grasp of calculus, linear algebra, set theory, and basic proof techniques. Concepts covered range from discrete probability to hypothesis testing and bootstrapping.
CS 70 presents key ideas from discrete mathematics and probability theory with emphasis on their application in Electrical Engineering and Computer Sciences. It addresses a variety of topics such as
logic, induction, modular arithmetic, and probability. Sophomore mathematical maturity and programming experience equivalent to an Advanced Placement Computer Science A exam are prerequisites. | {"url":"https://cogak.com/concept/2762","timestamp":"2024-11-02T17:01:38Z","content_type":"text/html","content_length":"116255","record_id":"<urn:uuid:5542019d-1cad-48cc-ab2c-8abfc840ab9f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00887.warc.gz"} |
Matrix addition - (Computational Mathematics) - Vocab, Definition, Explanations | Fiveable
Matrix addition
from class:
Computational Mathematics
Matrix addition is the operation of adding two matrices by adding their corresponding elements together. This process requires that both matrices have the same dimensions, meaning they must have the
same number of rows and columns. When dealing with sparse matrices, which contain a significant number of zero elements, matrix addition can be optimized to save on memory and computation time, since
only the non-zero elements need to be considered.
5 Must Know Facts For Your Next Test
1. For two matrices A and B to be added together, they must both have the same dimensions; otherwise, the operation is undefined.
2. When performing matrix addition on sparse matrices, it is common to use data structures that only store non-zero elements, which reduces both memory usage and processing time.
3. Matrix addition is commutative; this means that A + B is equal to B + A for any two matrices A and B of the same size.
4. The result of matrix addition is also a matrix of the same size as the original matrices, preserving dimensions during the operation.
5. Matrix addition can be extended to scalar multiplication; if a matrix is multiplied by a scalar before addition, the same properties still hold.
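To illustrate facts 2 and 3 above, here is a minimal Python sketch (not any particular library's API) of sparse matrix addition over a coordinate-style {(row, col): value} representation, where only non-zero entries are stored:

```python
def sparse_add(A, B):
    """Add two same-shaped sparse matrices stored as {(row, col): value}."""
    result = dict(A)
    for key, value in B.items():
        total = result.get(key, 0) + value
        if total != 0:
            result[key] = total
        else:
            result.pop(key, None)  # drop entries that cancel to zero
    return result

A = {(0, 0): 3, (2, 1): 5}
B = {(0, 0): -3, (1, 1): 7}
print(sparse_add(A, B))  # {(2, 1): 5, (1, 1): 7}
```

Only the stored non-zero entries are ever touched, and `sparse_add(A, B) == sparse_add(B, A)` holds, demonstrating commutativity.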
Review Questions
• How does matrix addition apply specifically to sparse matrices, and what benefits does this offer in terms of computation?
□ Matrix addition for sparse matrices focuses on adding only the non-zero elements, which significantly reduces computational overhead and memory usage. By leveraging specialized data
structures like coordinate list (COO) or compressed sparse row (CSR), we can perform additions more efficiently. This optimization is particularly valuable in large-scale applications where
most elements are zeros, allowing for faster processing times while conserving resources.
• Discuss how the commutative property of matrix addition influences operations within computational mathematics.
□ The commutative property of matrix addition states that the order of addition does not affect the result; thus, A + B equals B + A. This property simplifies computations and algorithms in
computational mathematics because it allows for flexibility in order when performing multiple additions. In practice, this means programmers can optimize code without worrying about changing
the final result due to the sequence of operations, making mathematical modeling and simulations more efficient.
• Evaluate how understanding matrix addition contributes to better algorithm design when working with large datasets in sparse matrix scenarios.
□ Understanding matrix addition is crucial for algorithm design involving large datasets characterized by sparse matrices. By recognizing that many entries are zero and can be ignored,
developers can create algorithms that minimize unnecessary calculations and memory usage. This leads to more efficient algorithms that not only speed up computations but also allow for
handling larger datasets than would otherwise be feasible. As such, mastering this concept enables better performance in real-world applications like machine learning and data analysis.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
Mathematical Data Science (B.S.)
At PSU Math, student success is our highest priority.
We are devoted to helping you succeed in college and begin your career. PSU Math delivers high impact education. Math is an ancient and evolving field, and thus covering math content alone is not
enough. We help you learn how to learn and how to question the world around you. We employ modern teaching practices and create many opportunities for exploration and collaboration.
Opportunities and support for math majors include:
• A Bachelor of Science degree in Mathematics or Mathematical Data Science.
• Student teaching and professional development through the Holmes Center.
Mathematical Data Sciences is an interdisciplinary mathematics program that emphasizes computer science, experimentation, and data collection. Mathematics provides students with methods and theory
that live at the heart of problem solving and data analysis in the physical sciences, engineering, and innovative industries. Combining mathematics with computer science gives students the practical
skills necessary to employ their theoretical mathematics knowledge and develop algorithms to address problems in the real world. Students in Mathematical Data Sciences will also complete 16 to 23
credits in an enrichment option of their choice. The enrichment option gives students experience in a particular field where mathematics and computer science can be applied, and the background to
properly implement their skills.
Curriculum & Requirements
Course List
Course Title Credits
Major Requirements
CS 2370 Introduction to Programming 4
CS 2381 Data Structures and Intermediate Programming 4
CS 3221 Algorithm Analysis 4
CS 3600 Database Management Systems 4
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
MA 2560 Calculus II (QRCO) 4
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 3355 Introduction to Mathematical Modeling (TECO) 4
MA 3540 Calculus III 4
MA 3600 Differential Equations with Linear Algebra 4
MA 4510 Introduction to Analysis 3
Complete one course from the following: 3
MA 3280 Regression Analysis
MA 3500 Probability and Statistics for Scientists
Complete one course from the following: 3-4
CS 4520 CyberEthics (DICO,INCO,WRCO)
CJ 3157 Society, Ethics, and the Law (DICO)
General Education 27-36
Option Requirements 30-41
Complete one of the following required options:
Criminal Justice
Physical Meteorology
Weather Analysis
Total Credits 120
Biology Option of BS in Mathematical Data Sciences
Through the Mathematical Data Sciences major with the Biology option, students learn fundamental biology and chemistry, and then focus on genetics and conservation. This degree prepares students for
a career or graduate study in computational bioinformatics, genomics, neurobiology, and other interdisciplinary biology and mathematics fields.
Course List
Course Title Credits
Option Requirements
BI 1110 Biological Science I (TECO) 4
BI 1120 Biological Science II 4
BI 3060 Genetics 4
BI 3240 Conservation (DICO,GACO,INCO) 3
BI 4980 Biology Seminar 2
CH 2335 General Chemistry I (QRCO) 4
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
WECO Wellness Connection 3-4
Elective 14-17
Total Credits 59-70
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Biology Option.
Chemistry Option of BS in Mathematical Data Sciences
Through the Mathematical Data Sciences major with the Chemistry option, students learn general chemistry and organic chemistry. Students then can choose to further study organic chemistry or to
instead focus on instrumentation or quantum mechanics. This degree prepares students for a career or graduate study in analytical chemistry, forensics, and other interdisciplinary chemistry and
mathematics fields.
Course List
Course Title Credits
Option Requirements
CH 1050 Laboratory Safety 1
CH 2335 General Chemistry I (QRCO) 4
CH 2255 Techniques in Laboratory 3
CH 2340 General Chemistry II 4
CH 3370 Organic Chemistry I 4
Choose one course from the following: 4
CH 3550 Instrumental Analysis (TECO,WRCO)
CH 3380 Organic Chemistry II
CH 3465 Physical Chemistry: Quantum Mechanics and Spectroscopy
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
GACO Global Awareness Connection 3-4
WECO Wellness Connection 3-4
Elective 15-18
Total Credits 62-74
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Chemistry Option.
Criminal Justice Option of BS in Mathematical Data Sciences
Criminal Justice is an inherently interdisciplinary field, and the Mathematical Data Sciences major with the Criminal Justice option prepares students for the analytical aspect of Criminal Justice.
Students have a choice of electives that prepare them for a career in law, government agencies, and private industries. Future career possibilities include criminologist, criminal intelligence
analyst, forensic scientist, and criminal investigator.
Course List
Course Title Credits
Option Requirements
CJ 3025 Forensic Science 4
CJ 2090 Criminal Law 4
Choose three courses from the following: 12
CJ 2025 Police and Society
CJ 2080 Crime and Criminals
CJ 3005 Criminal Investigation
CJ 3015 Cybercrime
CJ 3405 Homeland Security
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
GACO Global Awareness Connection 3-4
WECO Wellness Connection 3-4
Elective 14-17
Total Credits 61-73
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Criminal Justice Option.
Physical Meteorology Option of BS in Mathematical Data Sciences
Meteorology is an inherently interdisciplinary field. Through the Mathematical Data Sciences major with the Physical Meteorology option, students learn fundamental physics and atmospheric science.
Students choose an elective that focuses on the physics of either atmospheric motions or precipitation and solar radiation. This degree prepares students for a career or graduate study in
meteorology, physical meteorology, and applied mathematics.
Course List
Course Title Credits
Option Requirements
PH 2510 University Physics I 4
PH 2520 University Physics II 4
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
MT 3230 Atmospheric Thermodynamics 3
Choose one course from the following: 3
MT 4310 Dynamic Meteorology I
MT 4410 Atmospheric Physics
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
WECO Wellness Connection 3-4
Elective 16-19
Total Credits 57-68
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Physical Meteorology Option.
Psychology Option of BS in Mathematical Data Sciences
Through the Mathematical Data Sciences major with the Psychology option, students learn general, cognitive, and learning psychology, and then focus on psychological measurement. This degree prepares
students for a career or graduate study in psychology, quantitative psychology, neuroscience, market research, and other interdisciplinary psychology and mathematics fields.
Course List
Course Title Credits
Option Requirements
PS 2015 Introduction to General Psychology 4
PS 3210 Learning 4
PS 3220 Cognitive Psychology 4
PS 4440 3
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
GACO Global Awareness Connection 3-4
WECO Wellness Connection 3-4
Elective 16-18
Total Credits 58-69
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Psychology Option.
Weather Analysis Option of BS in Mathematical Data Sciences
Meteorology is an inherently interdisciplinary field. Through the Mathematical Data Sciences major with the Weather Analysis option, students learn fundamental physics and atmospheric science.
Students then have a choice of electives that focus on weather and instrumentation. This degree prepares students for a career or graduate study in meteorology, weather analysis, insurance analysis,
and other fields in meteorology and applied mathematics.
Course List
Course Title Credits
Option Requirements
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
MT 2250 Introduction to Weather Analysis and Forecasting 4
MT 3230 Atmospheric Thermodynamics 3
PH 2510 University Physics I 4
MT 3725 Instruments and Observations in Meteorology 3
General Education
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
CTDI Creative Thought Direction 3-4
PPDI Past and Present Direction 3-4
SSDI Self and Society Direction 3-4
Directions (choose from CTDI, PPDI, SSDI) ^1 4-8
WECO Wellness Connection 3-4
Elective 15-22
Total Credits 56-71
^ 1
Directions should total 16 credits because SIDI is waived for BS Mathematical Data Sciences, Weather Analysis Option.
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
To complete the bachelor’s degree in 4 years, you must successfully complete a minimum of 15 credits each semester or have a plan to make up credits over the course of the 4 years. For example, if
you take 14 credits one semester, you need to take 16 credits in another semester. Credits completed must count toward your program requirements (major, option, minor, certificate, general education
or free electives).
Required Options in this Major
Complete One Option
Biology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
IS 1115 Tackling a Wicked Problem 4
EN 1400 Composition 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
WECO Wellness Connection 4
CTDI Creative Thought Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
BI 1110 Biological Science I (TECO) 4
CH 2335 General Chemistry I (QRCO) 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
BI 1120 Biological Science II 4
BI 4980 Biology Seminar 2
Credits 14
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
BI 3060 Genetics 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
CS 3221 Algorithm Analysis 4
PPDI Past and Present Direction 4
Elective 8
Credits 16
Year Four
MA 4510 Introduction to Analysis 3
BI 3240 Conservation (DICO,GACO) 3
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
SSDI Self and Society Direction 3-4
Credits 12-14
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
Elective 11
Credits 14
Total Credits 120
Biology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
SSDI Self and Society Direction 3-4
CTDI Creative Thought Direction 3-4
CTDI Creative Thought Direction 4
Credits 17-19
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
BI 1110 Biological Science I (TECO) 4
CH 2335 General Chemistry I (QRCO) 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
BI 1120 Biological Science II 4
CH 2340 General Chemistry II 4
Credits 16
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
BI 3060 Genetics 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
CAMS Math elective 3
CS 3221 Algorithm Analysis 4
Elective 8
Credits 15
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
BI 3240 Conservation (DICO,GACO) 3
CAMS Ethics course 3-4
Elective 3-4
Credits 13-15
Directions (choose from CTDI, PPDI, SSDI) 4
Electives 6
Credits 10
Total Credits 120
Chemistry Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
SSDI Self and Society Direction 4
CTDI Creative Thought Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PPDI Past and Present Direction 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
GACO Global Awareness Connection 4
WECO Wellness Connection 4
Credits 16
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
CH 2335 General Chemistry I (QRCO) 4
CH 1050 Laboratory Safety 1
Elective 3
Credits 16
MA 3600 Differential Equations with Linear Algebra 4
CH 2340 General Chemistry II 4
CH 2255 Techniques in Laboratory 3
CS 3221 Algorithm Analysis 4
Elective 3
Credits 18
Year Four
MA 4510 Introduction to Analysis 3
CH 3370 Organic Chemistry I 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
Elective 3-4
Credits 13-15
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
CH 3550 Instrumental Analysis (TECO,WRCO)
or CH 3380 Organic Chemistry II 4
or CH 3465 Physical Chemistry: Quantum Mechanics and Spectroscopy
Elective 3-4
Credits 10-11
Total Credits 120
Chemistry Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
WECO Wellness Connection 4
CTDI Creative Thought Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
GACO Global Awareness Connection 3-4
CH 2335 General Chemistry I (QRCO) 4
CH 1050 Laboratory Safety 1
Elective 3
Credits 15-16
MA 3540 Calculus III 4
Directions (choose from CTDI, PPDI, SSDI) 4
CH 2340 General Chemistry II 4
CH 2255 Techniques in Laboratory 3
Elective 1
Credits 16
Year Three
MA 4510 Introduction to Analysis 3
CS 2370 Introduction to Programming 4
PPDI Past and Present Direction 4
SSDI Self and Society Direction 4
Credits 15
MA 3600 Differential Equations with Linear Algebra 4
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
CH 3550 Instrumental Analysis (TECO,WRCO)
or CH 3380 Organic Chemistry II 4
or CH 3465 Physical Chemistry: Quantum Mechanics and Spectroscopy
CS 2381 Data Structures and Intermediate Programming 4
Elective 1
Credits 16
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
CS 3600 Database Management Systems 4
Elective 3-4
Credits 14-16
Elective 9-12
CS 3221 Algorithm Analysis 4
Credits 13-16
Total Credits 120
Criminal Justice Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
WECO Wellness Connection 4
CTDI Creative Thought Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PPDI Past and Present Direction 4
Credits 12
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
SSDI Self and Society Direction 4
Directions (choose from CTDI, PPDI, SSDI) 4-8
Credits 16-20
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
CJ 2090 Criminal Law 4
GACO Global Awareness Connection 4
Credits 16
MA 3600 Differential Equations with Linear Algebra 4
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
CS 3221 Algorithm Analysis 4
Elective 4
Credits 16
Year Four
MA 4510 Introduction to Analysis 3
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
Elective 3-4
Credits 13-15
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
Elective 5-6
Credits 12-13
Total Credits 120
Criminal Justice Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
WECO Wellness Connection 4
CTDI Creative Thought Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PPDI Past and Present Direction 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
SSDI Self and Society Direction 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
Year Three
MA 4510 Introduction to Analysis 3
CS 3600 Database Management Systems 4
CJ 2090 Criminal Law 4
GACO Global Awareness Connection 4
Credits 15
MA 3600 Differential Equations with Linear Algebra 4
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
Elective 6
Credits 17
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
Elective 3-4
Credits 14-16
CJ 2080 Crime and Criminals
or CJ 2025 Police and Society
or CJ 3005 Criminal Investigation 4
or CJ 3015 Cybercrime
or CJ 3025 Forensic Science
or CJ 3405 Homeland Security
CS 3221 Algorithm Analysis 4
Elective 3-4
Credits 11-12
Total Credits 120
Physical Meteorology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PH 2510 University Physics I 4
SSDI Self and Society Direction 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
PH 2520 University Physics II 4
GACO Global Awareness Connection 3-4
Credits 15-16
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
WECO Wellness Connection 3
Credits 14
CS 3221 Algorithm Analysis 4
MT 3230 Atmospheric Thermodynamics 3
MA 3600 Differential Equations with Linear Algebra 4
Elective 4
Credits 15
Year Four
MA 4510 Introduction to Analysis 3
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
MT 4310 Dynamic Meteorology I
or MT 4410 Atmospheric Physics
Elective 4-6
Credits 13-16
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
Directions (choose from CTDI, PPDI, SSDI) 3
Elective 10
Credits 16
Total Credits 120
Physical Meteorology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PH 2510 University Physics I 4
SSDI Self and Society Direction 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
PH 2520 University Physics II 4
GACO Global Awareness Connection 3
Credits 15
Year Three
MA 4510 Introduction to Analysis 3
CS 3600 Database Management Systems 4
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
WECO Wellness Connection 3
Credits 13
CS 3221 Algorithm Analysis 4
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
MT 3230 Atmospheric Thermodynamics 3
MA 3600 Differential Equations with Linear Algebra 4
Elective 3
Credits 17
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
MT 4310 Dynamic Meteorology I
or MT 4410 Atmospheric Physics
Elective 0-2
Elective 3
Credits 13-16
Directions (choose from CTDI, PPDI, SSDI) 3-4
Elective 12
Credits 15-16
Total Credits 120
Psychology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
SSDI Self and Society Direction 4
Directions (choose from CTDI, PPDI, SSDI) 3-4
Credits 15-16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
PS 2015 Introduction to General Psychology 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
PS 3210 Learning 4
GACO Global Awareness Connection 3-4
Credits 15-16
MA 3600 Differential Equations with Linear Algebra 4
PS 3220 Cognitive Psychology 4
CS 3221 Algorithm Analysis 4
Elective 4
Credits 16
Year Four
MA 4510 Introduction to Analysis 3
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
PS 4440 3
Elective 4-6
Credits 13-16
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
WECO Wellness Connection 3
Elective 7-8
Credits 13-14
Total Credits 120
Psychology Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
Directions (choose from CTDI, PPDI, SSDI) 4
SSDI Self and Society Direction 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
PS 2015 Introduction to General Psychology 4
Directions (choose from CTDI, PPDI, SSDI) 4
Credits 16
Year Three
MA 4510 Introduction to Analysis 3
CS 3600 Database Management Systems 4
PS 3210 Learning 4
GACO Global Awareness Connection 4
Credits 15
CS 3221 Algorithm Analysis 4
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
PS 3220 Cognitive Psychology 4
WECO Wellness Connection 4
Credits 15
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
PS 4440 3
Elective 4-6
Credits 14-16
MA 3600 Differential Equations with Linear Algebra 4
Elective 9
Credits 13
Total Credits 120
Weather Analysis Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an odd start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PH 2510 University Physics I 4
SSDI Self and Society Direction 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
Directions (choose from CTDI, PPDI, SSDI) 3-4
GACO Global Awareness Connection 3-4
Credits 14-16
Year Three
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 3600 Database Management Systems 4
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
MT 2250 Introduction to Weather Analysis and Forecasting 4
Elective 3
Credits 18
MA 3600 Differential Equations with Linear Algebra 4
MT 3230 Atmospheric Thermodynamics 3
CS 3221 Algorithm Analysis 4
WECO Wellness Connection 3-4
Elective 3
Credits 17-18
Year Four
MA 4510 Introduction to Analysis 3
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
MT 3725 Instruments and Observations in Meteorology 3
Elective 3-4
Credits 12-13
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
Elective 8
Credits 11
Total Credits 120
Weather Analysis Option of BS in Mathematical Data Sciences
Check all course descriptions for prerequisites before planning course schedule. Course sequence is suggested but not required.
Please use the following sequence for an even start year:
Plan of Study Grid
Year One
Fall Credits
MA 2450 Mathematical Reasoning 4
MA 2550 Calculus I (QRCO) 4
EN 1400 Composition 4
IS 1115 Tackling a Wicked Problem 4
Credits 16
MA 2700 Introduction to Mathematical Proof Writing (WRCO) 3
MA 2560 Calculus II (QRCO) 4
CTDI Creative Thought Direction 4
PPDI Past and Present Direction 4
Credits 15
Year Two
MA 3600 Differential Equations with Linear Algebra 4
CS 2370 Introduction to Programming 4
PH 2510 University Physics I 4
SSDI Self and Society Direction 4
Credits 16
MA 3540 Calculus III 4
CS 2381 Data Structures and Intermediate Programming 4
Directions (choose from CTDI, PPDI, SSDI) 3-4
GACO Global Awareness Connection 3-4
Credits 14-16
Year Three
MA 4510 Introduction to Analysis 3
CS 3600 Database Management Systems 4
MT 2000 Fundamentals of Meteorology and Climatology (GACO) 3
MT 2250 Introduction to Weather Analysis and Forecasting 4
Elective 3
Credits 17
MA 3600 Differential Equations with Linear Algebra 4
MT 3230 Atmospheric Thermodynamics 3
MA 3280 Regression Analysis
or MA 3500 Probability and Statistics for Scientists
WECO Wellness Connection 3-4
Elective 3
Credits 16-17
Year Four
MA 3355 Introduction to Mathematical Modeling (TECO) 4
CS 4520 CyberEthics (DICO,WRCO)
or CJ 3157 Society, Ethics, and the Law (DICO)
MT 3725 Instruments and Observations in Meteorology 3
Elective 3-4
Credits 13-15
CS 3221 Algorithm Analysis 4
Elective 8
Credits 12
Total Credits 120
• An ability to apply acquired knowledge, appropriate to the discipline, to solve problems.
• An ability to function effectively on teams to accomplish a common goal.
• An understanding of professional, ethical, legal, security, and social issues and responsibilities.
• An ability to communicate effectively with a wide range of audiences.
• An ability to apply current theory, practice, and skills in the design of computer-based systems in a way that demonstrates comprehension of the trade-offs involved in design choices.
A major in mathematical data sciences is good preparation for a variety of careers built on the use of data. Plymouth State's Mathematical Data Sciences program provides students with
sufficient background in mathematical theory, computer skills, and an applied discipline to work with the vast quantities of data in the modern business world. Students are prepared for
various types of industry positions, or to pursue graduate work or research.
Sample jobs include, but are not limited to: Mathematical Scientist, Actuary, Game Designer, Supply Chain Analyst, Retirement Plan Designer, Numerical Analyst, Financial Planner, Database Manager,
Cryptologist, Forensic Analyst, Computer Research Scientist, Physician, Information Scientist, Bioinformatician, Quality Control Analyst, Economist, Information Systems Analyst, Robotics Engineer,
Cost Estimator, Epidemiologist, Software Engineer, Risk Analyst, Claims Specialist, Controller, Quantitative Pharmacologist, Forecast Analyst, Environmental Scientist, Data Engineer, Auditor, Budget
Analyst, Systems Modeler, Methods Developer, Scientific Consultant, Underwriter, Geomagnetic Engineer, Forest/Fisheries Scientist, Mathematical Biologist, Modeler
See the U.S. Department of Labor Outlook for a complete list.
Useful Skills for Jobs in the Mathematics Fields:
• Accuracy and attention to detail
• Strong mathematical and computer skills
• Proficiency in analytical reasoning
• Facility with data and large quantities of information
• Strong organization and communication skills
Increasing statistical power in psychological research without increasing sample size
Note: I wrote this ages ago back in 2013, and it lives on the now defunct open science collaboration blog. Storing it here on my own website for archiving purposes. Original link: http://
What is statistical power?
As scientists, we strive to avoid errors. Type I errors are false positives: Finding a relationship when one probably doesn’t exist. Lots has been said about these kinds of errors, and the field of
psychology is sometimes accused of excessive Type I error rates through publication biases, p-hacking, and failing to account for multiple comparisons (Open Science Collaboration, in press). Type II
errors are false negatives: Failing to find a relationship when one probably does exist. Type II errors are related to statistical power. Statistical power is the probability that a test will
reject the null hypothesis when the null hypothesis is false. Many authors suggest a statistical power rate of at least .80. This corresponds to an 80% probability of not committing a Type II error.
Accuracy in parameter estimation (AIPE) is closely related to statistical power, and it refers to the width of the confidence interval for the effect size (Maxwell et al., 2008). The smaller this
width, the more precise your results are. This means that low statistical power not only increases Type II errors, but also Type I errors because underpowered studies have wide confidence intervals.
Simply put, underpowered studies are imprecise and are unlikely to replicate (Button et al., 2013).
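To make the precision point concrete, here's a quick sketch in Python (my own illustration, not part of the original post) of how the confidence interval for a correlation narrows with sample size. It uses the standard Fisher z approximation; the specific sample sizes are just examples:

```python
import numpy as np
from scipy.stats import norm

def r_confidence_interval(r, n, alpha=0.05):
    """Approximate CI for a correlation via the Fisher z transform."""
    z = np.arctanh(r)               # Fisher z of the observed r
    se = 1.0 / np.sqrt(n - 3)       # standard error of z
    crit = norm.ppf(1 - alpha / 2)  # two-tailed critical value
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

for n in (50, 175, 875):
    lo, hi = r_confidence_interval(0.21, n)
    print(f"n={n:4d}: 95% CI for r = ({lo:+.3f}, {hi:+.3f}), width = {hi - lo:.3f}")
```

At n = 50 the interval for r = .21 actually spans zero, i.e., the study can't even distinguish the effect from no effect; by n = 875 the interval is several times narrower.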
Studies in psychology are grossly underpowered
Psychological research has been grossly underpowered for a long time. Fifty years ago, Cohen (1962) estimated the statistical power to detect a medium effect size in abnormal psychology was about
.48. The situation has improved slightly, but it’s still a serious problem today. For instance, one review suggested only 52% of articles in the applied psychology literature achieved .80 power for a
medium effect size (Mone et al., 1996). This is in part because psychologists are studying small effects. One massive review of 322 meta-analyses including 8 million participants suggested that the
average effect size in social psychology is relatively small (r = .21). To put this into perspective, you’d need 175 participants to have .80 power for a simple correlation between two variables at
this effect size. This gets even worse when we’re studying interaction effects. One review suggests that the average effect size for interaction effects is even smaller (f^2 = .009), which means that
sample sizes of around 875 people would be needed to achieve .80 power.
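The sample sizes quoted above can be sanity-checked with the standard Fisher z approximation for a two-tailed correlation test. This is a sketch, not necessarily the exact method the cited reviews used:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, power=0.80, alpha=0.05):
    """Approximate sample size for a two-tailed test of a correlation,
    using the Fisher z transformation (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.21))  # about 176, in line with the ~175 quoted above
```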
What can we do to increase power?
Traditional recommendations for increasing statistical power suggest either (a) Increasing sample size, (b) maximizing effect size or (c) using a more liberal p-value criteria. However, increasing
the effect size has no impact on the width of the confidence interval (Maxwell et al., 2008), and using a more liberal p-value comes at the expense of increased Type I error. Thus, most people assume
that increasing the sample size is the only consistent way to increase statistical power. This isn’t always feasible due to funding limitations, or because researchers are studying rare populations
(e.g., people with autism spectrum disorder). Fortunately, there are other solutions. Below, I list three ways you can increase statistical power without increasing sample size. You might also check
out Hansen and Collins (1994) for a lengthier discussion.
Recommendation 1: Decrease the mean square error
Decreasing the mean square error will have the same impact as increasing sample size (if you want to see the math, check out McClelland, 2000). Okay. You’ve probably heard the term “mean square
error” before, but the definition might be kind of fuzzy. Basically, your model makes a prediction for what the outcome variable (Y) should be, given certain values of the predictor (X). Naturally,
it’s not a perfect prediction because you have measurement error, and because there are other important variables you probably didn’t measure. The mean square error is the difference between what
your model predicts, and what the true values of the data actually are. So, anything that improves the quality of your measurement or accounts for potential confounding variables will reduce the mean
square error, and thus improve statistical power. Let’s make this concrete. Here are three specific techniques you can use:
a. Reduce measurement error by using more reliable measures (i.e., better internal consistency, test-retest reliability, inter-rater reliability, etc.). You’ve probably read that .70 is the
“rule-of-thumb” for acceptable reliability. Okay, sure. That’s publishable. But consider this: Let’s say you want to test a correlation coefficient. Assuming both measures have a reliability of
.70, your observed correlation will be about 1.43 times smaller than the true population parameter (I got this using Spearman’s correlation attenuation formula). Because you have a smaller
observed effect size, you end up with less statistical power. Why do this to yourself? Reduce measurement error.
b. Control for confounding variables. With correlational research, this means including control variables that predict the outcome variable, but are relatively uncorrelated with other predictor
variables. In experimental designs, this means taking great care to control for as many possible confounds as possible. In both cases, this reduces the mean square error and improves the overall
predictive power of the model – and thus, improves statistical power.
c. Use repeated-measures designs. Repeated measures designs reduce the mean square error by partitioning out the variance due to subjects. Depending on the kind of analysis you do, it can also
increase the degrees of freedom for the analysis substantially (e.g., some multi-level models). I’m a big fan of repeated measures designs, because they allow researchers to collect a lot of data
from fewer participants.
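The attenuation effect mentioned in (a) is easy to check numerically with Spearman's formula; the true correlation of .30 below is just an illustrative value:

```python
from math import sqrt

def attenuated(r_true, rel_x, rel_y):
    """Spearman's attenuation formula: the correlation you observe,
    given the true correlation and the reliabilities of both measures."""
    return r_true * sqrt(rel_x * rel_y)

r_true = 0.30
r_obs = attenuated(r_true, 0.70, 0.70)  # both measures at .70 reliability
print(r_obs, r_true / r_obs)            # observed is ~1.43x smaller than the truth
```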
Recommendation 2: Increase the variance of your predictor variable
Another less-known way to increase statistical power is to increase the variance of your predictor variables (X).
a. In correlational research, use more comprehensive continuous measures. That is, there should be a large possible range of values endorsed by participants. However, the measure should also capture
many different aspects of the construct of interest; artificially increasing the range of X by adding redundant items (i.e., simply re-phrasing existing items to ask the same question) will
actually hurt the validity of the analysis. Also, avoid dichotomizing your measures (e.g., median splits), because this reduces the variance and typically reduces power.
b. In experimental research, unequally allocating participants to each condition can improve statistical power. For example, say you were designing an experiment with 3 conditions (let's say low,
medium, or high self-esteem). Most of us would equally assign participants to all three groups, right? Well, as it turns out, that reduces statistical power. The optimal design for a linear
relationship would be 50% low, 50% high, and omit the medium condition. The optimal design for a quadratic relationship would be 25% low, 50% medium, and 25% high. The proportions vary widely
depending on the design and the kind of relationship you expect, but I recommend you check out McClelland (1997) to get more information on efficient experimental designs. You might be surprised.
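For a linear effect, power rises with the variance of the predictor, so the allocation schemes above can be compared directly. The -1/0/+1 codings below are illustrative, not taken from McClelland (1997):

```python
def predictor_variance(levels, weights):
    """Variance of the predictor X for given design points and allocation proportions."""
    mean = sum(w * x for x, w in zip(levels, weights))
    return sum(w * (x - mean) ** 2 for x, w in zip(levels, weights))

# Three equally spaced conditions coded -1 (low), 0 (medium), +1 (high):
equal_thirds = predictor_variance([-1, 0, 1], [1/3, 1/3, 1/3])  # ~0.667
endpoints = predictor_variance([-1, 0, 1], [0.5, 0.0, 0.5])     # 1.0
print(equal_thirds, endpoints)  # the 50/50 endpoint design has 1.5x the variance
```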
Recommendation 3: Make sure predictor variables are uncorrelated with each other
A final way to increase statistical power is to increase the proportion of the variance in X not shared with other variables in the model. When predictor variables are correlated with each other,
this is known as collinearity. For example, depression and anxiety are positively correlated with each other; including both as simultaneous predictors (say, in multiple regression) means that
statistical power will be reduced, especially if one of the two variables actually doesn't predict the outcome variable. Lots of textbooks suggest that we should only be worried about this when
collinearity is extremely high (e.g., correlations above .70). However, studies have shown that even modest intercorrelations among predictor variables will reduce statistical power (Mason &
Perreault, 1991). Bottom line: If you can design a model where your predictor variables are relatively uncorrelated with each other, you can improve statistical power.
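The cost of correlated predictors can be quantified with the textbook variance inflation factor for the two-predictor case, 1/(1 - r²) (a standard formula, not taken from Mason & Perreault):

```python
def variance_inflation_factor(r_between_predictors):
    """With two predictors correlated at r, the sampling variance of each
    regression coefficient is inflated by 1 / (1 - r^2)."""
    return 1.0 / (1.0 - r_between_predictors ** 2)

# Even modest intercorrelations inflate the variance noticeably:
for r in (0.0, 0.3, 0.5, 0.7):
    print(r, round(variance_inflation_factor(r), 2))
```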
Increasing statistical power is one of the rare times where what is good for science, and what is good for your career actually coincides. It increases the accuracy and replicability of results, so
it’s good for science. It also increases your likelihood of finding a statistically significant result (assuming the effect actually exists), making it more likely to get something published. You
don’t need to torture your data with obsessive re-analysis until you get p < .05. Instead, put more thought into research design in order to maximize statistical power. Everyone wins, and you can use
that time you used to spend sweating over p-values to do something more productive. Like volunteering with the Open Science Collaboration.
Button, K. S., Ioannidis, J. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature
Reviews Neuroscience, 14(5), 365-376. doi: 10.1038/nrn3475
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65, 145-153. doi:10.1037/h0045186
Hansen, W. B., & Collins, L. M. (1994). Seven ways to increase power without increasing N. In L. M. Collins & L. A. Seitz (Eds.), Advances in data analysis for prevention intervention research (NIDA
Research Monograph 142, NIH Publication No. 94-3599, pp. 184–195). Rockville, MD: National Institutes of Health.
Mason, C. H., & Perreault, W. D. (1991). Collinearity, power, and interpretation of multiple regression analysis. Journal of Marketing Research, 28, 268-280. doi:10.2307/3172863
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537-563. doi:10.1146/
McClelland, G. H. (1997). Optimal design in psychological research. Psychological Methods, 2, 3-19. doi:10.1037/1082-989X.2.1.3
McClelland, G. H. (2000). Increasing statistical power without increasing sample size. American Psychologist, 55, 963-964. doi:10.1037/0003-066X.55.8.963
Mone, M. A., Mueller, G. C., & Mauland, W. (1996). The perceptions and usage of statistical power in applied psychology and management research. Personnel Psychology, 49, 103-120. doi:10.1111/
Open Science Collaboration. (in press). The Reproducibility Project: A model of large-scale collaboration for empirical research on reproducibility. In V. Stodden, F. Leisch, & R. Peng (Eds.),
Implementing Reproducible Computational Research (A Volume in The R Series). New York, NY: Taylor & Francis. doi:10.2139/ssrn.2195999
This function calculates quantiles of the probability distribution whose probability density has been estimated and stored in the object x. The object x must belong to the class "density", and would
typically have been obtained from a call to the function density.
The probability density is first normalised so that the total probability is equal to 1. A warning is issued if the density estimate was restricted to an interval (i.e. if x was created by a call to
density which included either of the arguments from and to).
Next, the density estimate is numerically integrated to obtain an estimate of the cumulative distribution function \(F(x)\). Then for each desired probability \(p\), the algorithm finds the
corresponding quantile \(q\).
The quantile \(q\) corresponding to probability \(p\) satisfies \(F(q) = p\) up to the resolution of the grid of values contained in x. The quantile is computed from the right, that is, \(q\) is the
smallest available value of \(x\) such that \(F(x) \ge p\).
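The procedure described above (normalise, integrate numerically, then invert "from the right") can be sketched outside R as well. A minimal Python version, assuming the density estimate is given on a grid of x and y values:

```python
from bisect import bisect_left

def quantile_from_density(xs, ys, probs):
    """Quantiles of a density estimate given on a grid (xs, ys)."""
    # Numerically integrate the density (trapezoid rule) to get the CDF.
    F = [0.0]
    for i in range(1, len(xs)):
        F.append(F[-1] + 0.5 * (ys[i - 1] + ys[i]) * (xs[i] - xs[i - 1]))
    total = F[-1]
    F = [f / total for f in F]  # normalise so total probability equals 1
    # Quantile "from the right": smallest grid value x with F(x) >= p.
    return [xs[bisect_left(F, p)] for p in probs]

xs = [i / 100 for i in range(101)]  # grid on [0, 1]
ys = [1.0] * 101                    # flat (uniform) density
print(quantile_from_density(xs, ys, [0.25, 0.5, 0.75]))  # approximately [0.25, 0.5, 0.75]
```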
#include <boost/math/distributions/skew_normal.hpp>
namespace boost{ namespace math{
template <class RealType = double,
class Policy = policies::policy<> >
class skew_normal_distribution;
typedef skew_normal_distribution<> skew_normal;
template <class RealType, class Policy>
class skew_normal_distribution
{
public:
   typedef RealType value_type;
   typedef Policy policy_type;

   // Constructor:
   skew_normal_distribution(RealType location = 0, RealType scale = 1, RealType shape = 0);

   // Accessors:
   RealType location()const; // mean if normal.
   RealType scale()const;    // width, standard deviation if normal.
   RealType shape()const;    // The distribution is right skewed if shape > 0 and is left skewed if shape < 0.
                             // The distribution is normal if shape is zero.
};

}} // namespaces
The skew normal distribution is a variant of the well-known Gaussian (normal) statistical distribution.
The skew normal distribution with shape zero resembles the Normal Distribution, hence the latter can be regarded as a special case of the more generic skew normal distribution.
If the standard (mean = 0, scale = 1) normal distribution probability density function is
φ(x) = e^(−x²/2) / √(2π)
and the cumulative distribution function
Φ(x) = ½ [1 + erf(x/√2)]
then the PDF of the skew normal distribution with shape parameter α, defined by O'Hagan and Leonhard (1976), is
f(x) = 2 φ(x) Φ(αx)
Given location ξ, scale ω, and shape α, it can be transformed to the form:
f(x) = (2/ω) φ((x−ξ)/ω) Φ(α(x−ξ)/ω)
and CDF:
F(x) = Φ((x−ξ)/ω) − 2 T((x−ξ)/ω, α)
where T(h,a) is Owen's T function, and Φ(x) is the normal distribution.
The variation of the PDF and CDF with these parameters is illustrated in the following graphs:
skew_normal_distribution(RealType location = 0, RealType scale = 1, RealType shape = 0);
Constructs a skew_normal distribution with location ξ, scale ω and shape α.
Requires scale > 0, otherwise domain_error is called.
RealType location()const;
returns the location ξ of this distribution,
RealType scale()const;
returns the scale ω of this distribution,
RealType shape()const;
returns the shape α of this distribution.
(Location and scale function match other similar distributions, allowing the functions find_location and find_scale to be used generically).
While the shape parameter may be chosen arbitrarily (finite), the resulting skewness of the distribution is in fact limited to about (-1, 1); strictly, the interval is (-0.9952717, 0.9952717).
A parameter δ is related to the shape α by δ = α / √(1 + α²), and is used in the expression for skewness.
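The quoted bound of ±0.9952717 can be verified numerically from δ = α/√(1 + α²) and the standard skewness formula for the skew normal. This is a quick independent check, not part of the Boost implementation:

```python
from math import pi, sqrt

def skewness(alpha):
    """Skewness of the skew normal as a function of the shape alpha."""
    delta = alpha / sqrt(1 + alpha ** 2)
    b = delta * sqrt(2 / pi)
    return (4 - pi) / 2 * b ** 3 / (1 - b ** 2) ** 1.5

print(skewness(5), skewness(1e6))  # approaches ~0.9952717 as alpha grows
```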
All the usual non-member accessor functions that are generic to all distributions are supported: Cumulative Distribution Function, Probability Density Function, Quantile, Hazard Function, Cumulative
Hazard Function, mean, median, mode, variance, standard deviation, skewness, kurtosis, kurtosis_excess, range and support.
The domain of the random variable is -[max_value], +[max_value]. Infinite values are not supported.
There are no closed-form expressions known for the mode and median, but these are computed as follows:
• mode - by finding the maximum of the PDF.
• median - by computing quantile(1/2).
The maximum of the PDF is sought through searching the root of f'(x)=0.
Both involve iterative methods that will have lower accuracy than other estimates.
The R Project for Statistical Computing using library(sn) described at Skew-Normal Probability Distribution, and at R skew-normal(sn) package.
Package sn provides functions related to the skew-normal (SN) and the skew-t (ST) probability distributions, both for the univariate and for the multivariate case, including regression models.
Wolfram Mathematica was also used to generate some more accurate spot test data.
The skew_normal distribution with shape = zero is implemented as a special case, equivalent to the normal distribution in terms of the error function, and therefore should have excellent accuracy.
The PDF and mean, variance, skewness and kurtosis are also accurately evaluated using analytical expressions. The CDF requires Owen's T function that is evaluated using a Boost C++ Owens T
implementation of the algorithms of M. Patefield and D. Tandy, Journal of Statistical Software, 5(5), 1-25 (2000); the complicated accuracy of this function is discussed in detail at Owens T.
The median and mode are calculated by iterative root finding, and both will be less accurate.
In the following table, ξ is the location of the distribution, and ω is its scale, and α is its shape.
Function Implementation Notes
pdf Using:
where T(h,a) is Owen's T function, and Φ(x) is the normal distribution.
cdf complement Using: complement of normal distribution + 2 * Owens_t
quantile Maximum of the pdf is sought through searching the root of f'(x)=0
quantile from the complement -quantile(SN(-location ξ, scale ω, -shapeα), p)
location location ξ
scale scale ω
shape shape α
median quantile(1/2)
mode Maximum of the pdf is sought through searching the root of f'(x)=0
kurtosis kurtosis excess + 3
kurtosis excess
Mastermind Strategies
• {0,1,2,3,4,5,6,7,8,9} (10 elements: paired with decimal system)
• {1,2,3,4,5,6,7,8,9,0} (10 elements: paired with decimal system)
• {head, tail} (2 elements: binary system)
• {red, yellow, green, blue, white, orange} (6 colors: senary system)
• {ant,bird,cat,dolphin,eagle,falcon} (6 symbols: senary system)
• {red ball, blue ball, red triangle, blue triangle, red square, blue square} (6 symbols: senary system)
• {a,b,c,...x,y,z} (26 character alphabet: hexavigesimal system)
• decimal system: Position 3: Permutation: 0003, 4th position, since "0" is 1st position in numeral set
• binary system: Position 10 (bin.): Permutation: 0010, convert binary to decimal: 3 = 4th position
• hexadec.system: Position B (hex.): Permutation: 000B, convert hexadecimal to decimal: 0011 = 12th position
It can come in handy to have an index number for each code and to be able to calculate the code from an index rather than to examine the codes, e.g. when picking a random code or cover scanned
• Random guess each turn, guess without consistency check (at worst k^n = 6^4=1,296 guesses)
• Random guess, but chose a consistent code only (at worst ?? guesses).
• same, but variations like specified start guess, mix of scanning and random guesses
• Evaluation of all possible permutations using scoreboards
• Incomplete scan (not all permutations are evaluated)
• Shapiro (1983) as discussed in: Kooi, Barteld (2005)
• Knuth, Donald (1976–1977)
• Irving, R. (1978-1979) as discussed in: Kooi, Barteld (2005)
• Neuwerth as discussed in: Kooi, Barteld (2005)
• Koyama, Kenji; Lai, Tony (1993)
• Merelo et al. (1999)
• Kooi, Barteld (2005)
• Venco (2006)
1. The first guess is aabb.
2. Calculate which possibilities (from the 1296) would give the same score of colored and white pegs if they were the answer. Remove all the others.
3. For each possible guess (not necessarily one of the remaining possibilities) and for each possible colored/white score, calculate how many possibilities would be eliminated. The score of the
guess is the least of such values. Play the guess with the highest score (minimax algorithm).
4. Go back to step 2 until you have got it right."(Knuth, Donald (1976–1977)
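The scoring rule and step 2 of Knuth's procedure (pruning inconsistent codes) are straightforward to make concrete. A small Python sketch for the classic 6-color, 4-peg game; the letter alphabet is arbitrary:

```python
from collections import Counter
from itertools import product

def feedback(secret, guess):
    """Return (black, white): exact matches, and right-color-wrong-place pegs."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def consistent(candidates, guess, score):
    """Step 2: keep only codes that would have produced this score for this guess."""
    return [c for c in candidates if feedback(c, guess) == score]

codes = [''.join(p) for p in product('abcdef', repeat=4)]  # all 6^4 = 1296 codes
remaining = consistent(codes, 'aabb', (1, 0))              # prune after guessing aabb
print(len(codes), len(remaining))
```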
Improving and Scaling Evolutionary Approaches to the MasterMind Problem
Juan J. Merelo, Carlos Cotta and Antonio Mora
An experimental study of exhaustive solutions for the Mastermind puzzle
J. J. Merelo, Antonio M. Mora, Carlos Cotta, Thomas P. Runarsson
(Submitted on 5 Jul 2012) http://arxiv.org/abs/1207.1315
My approach was simply to generate guesses compatible with results of previous guesses.
The problem is that I didn't find guarantied way to find this compatible guess in time. So I was looking for guess with as small error as possible.
The error I've used = sum(guesses, 10000*abs(guess.a - current.a)+abs(guess.b - current.b)), where b is number of perfectly matched pegs, and a is number of total matched colors (results[0]+results[1] from nextGuess arguments).
Now, I generate each guess candidate by first generating compatible color counts (first part of error sum), then generating most compatible order.
In both of those steps I first quickly make some reasonable initial state, then optimize it by some number of randomized changes, each taking into account possible reduction of error. To speed up
things a lot I've maintained arrays of error changes for various modification of the state, and look up for the best moves in this array. I have found it's much more efficient to have this array
maintained, than to recalculate errors after each move.
I've managed to generate up to 50000000/(L*L*L) candidates per each guess, using reasonable runtime - up to 15 seconds in worst case.
If any candidate has perfect match (zero error) I return the guess immediately.
Another part of my solution is the code for getting as much as possible information from guesses analytically. The code estimates possible minimum and maximum values for color counts, possible
assignment bits of colors in each place, etc. It allowed me to determine exact numbers of each color in reasonably few guesses - about K.
Initially, I've tried to issue first K-1 unicolor guesses to determine counts, but it turns out it wasn't necessary. This approach gets no information about ordering from the first guesses, and I've
got ~100 score from it (my second submission).
I thought about brute-force solution for small cases, but I've found that my generic algorithm almost always finds compatible guess for lengths up to 20, and larger cases are not brute-forceable
Several coders have mentioned random swaps as the way to find compatible guess.
My code does a bit more. It repeatedly removes 2-8 worst pegs, then adds them back in the best places. In each step if there are several possibilities with the same change in error, I pick one
Similar algorithm was used in the first stage when I don't know color counts yet for finding compatible set of counts.
"venco used the following cost formula (note: hits = pegs with correct color and correct position; hints = pegs with correct color but incorrect position):
cost = sum { 10000 * abs(hits - hits') + abs(hints - hints') }
The 10000 factor is used to give more weight to hits than to hints." (gsais)
Actually it's the other way around. I give more weight to hints to find correct color counts as quick as possible:
cost = sum { 10000 * abs(hints - hints') + abs(hits - hits') }
After all counts are found (it usually happens slightly later than after K guesses) I start to use correct number of different colors, so the first part of cost always yield zero, and algorithm
concentrates on ordering.
"hits = feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position" (gsais)
Here is one more difference. I've used total number of matching colors as value hints in cost formula, no matter if it's they are on correct place or not.
It can be calculated as results[0]+results[1] from nextGuess argument.
This way hints is plain measure of how good is your guess in terms of choosing various colors of pegs, and hits is the measure of order quality.
gsais, 2006
As good and simple as the stepwise strategy is, it has a very important complication: although finding a consistent code for small problems is rather easy, for larger problems it is extremely hard. In
fact, it has already been proved that just deciding if there exists a consistent code (a subset of the problem of finding it) is an NP-complete problem; see lbackstrom's post above. Even for not so
large problems (~ K=10 L=15) you can't actually hope to find a consistent code by simply generating random codes, you really need to use some heuristic to guide the search. However, even with good
heuristics, for slightly larger problems it's almost impossible to find a consistent code (unless you already have enough hints), and for the very large problems the only thing that becomes easy is
understanding the true meaning of "finding the needle in a haystack". This is where the "almost" stepwise strategies come, and they come in basically two flavors:
* Divide-and-conquer: use a background color or an analytical method to divide the problem in smaller problems and apply a stepwise approach for the smallest problems; see the approaches of lyc1977
(background color; discarding the impossible combinations is a way to find the consistent codes) and OvejaCarnivora.
* Least inconsistent code: instead of finding a completely consistent code with all the previous feedbacks, find the code that is the least inconsistent (within a subset of the search space).
Essentially, a cost formula is used that measures how much inconsistent a code is, and any optimizing algorithm is used to minimize the cost. venco used the following cost formula (note: hits =
feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position):
cost = sum { abs(hits - hits') + 10000 * abs((hits + hints) - (hits' + hints')) }
The 10000 factor is used to give more weight in the first moves to the total number of correct colors (hits + hints) than to the number of correct positioned colors (later, once the color frequencies
are discovered the term becomes automatically zero). Now, since both vdave and I started the search already knowing how many pegs of each color were in the answer, the number of hints was completely
irrelevant (because hits + hints = L if the code is a permutation of the right colors), thus the cost formula reduces to:
cost = sum { abs(hits - hits') }
Furthermore, knowing the frequency of each color greatly limited the search space and also simplified the operation of finding the closest neighbours of a code (ie. codes with an approximate cost) to
swaps. But these advantages didn't come without a price: as venco himself has already stated, his 14+ points advantage over vdave (in the tests) are very likely due to his algorithm skipping the
color's frequencies discovery phase - yet he doesn't say that achieving that was far away from being an easy task at all (I can bet I wasn't the only one that unsuccessfully attempted that, so I'm
looking forward to read exactly how he did it in the match editorial - right now I'm clueless edit: ok, now I understand how he accomplished it: it's thanks to the brilliant 10000 * diff(hits +
hints) term in the cost function). On the other hand, vdave's "smart" discovery phase is rather intuitive (after one is told about it, of course...), easy to understand and implement, and it's also
likely what allowed him to take the second spot in the tests.
Our contest problem is also known as the "knapsack" problem, a special case of the bin-packing problem. Everybody knows how to write a brute force search algorithm to find the best fit song list.
However, since brute force search is obviously slow and our scoring system punishes algorithms with long CPU times, exhaustive search is ruled out. This leaves us with two other choices: limited
(heuristic) search and multiple tries.
Limited search means using some heuristic judging criteria to check a selected subset of all the combinations, and choose the ones that pass the criteria to test.
Multiple tries means generating many random combination of the song list, and then choosing the one with the best fit. This can also be considered as a kind of limited search, except there is no
heuristic judging criteria. Simply choose the combination with the shortest gap among all the selected combinations. This approach has a fancy technical name, namely, the Monte-Carlo method.
Monte Carlo Method
Earlier entries mostly used the limited search approach. However, they either had a large gap (due to limitations in their heuristics) or spent too much CPU time. They were soon out run by entries
using the Monte-Carlo method. All of the top 100 entries used the Monte-Carlo method.
The Monte-Carlo method for this problem can be summarized by the following three steps:
I. Generate some random permutations of the song list
II. Find the best packing among these permutations
III. Return the index corresponding to the best packing
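The three steps above can be sketched as follows; the song durations and time limit are invented for illustration:

```python
import random

def best_packing(durations, limit, tries=10000, seed=0):
    """Monte-Carlo: greedily pack random permutations, keep the best fit."""
    rng = random.Random(seed)
    best, best_total = [], 0
    order = list(range(len(durations)))
    for _ in range(tries):
        rng.shuffle(order)                 # step I: a random permutation
        total, picked = 0, []
        for i in order:                    # greedy pack in permutation order
            if total + durations[i] <= limit:
                total += durations[i]
                picked.append(i)
        if total > best_total:             # step II: keep the best packing
            best, best_total = picked, total
            if best_total == limit:        # perfect fit, stop early
                break
    return best, limit - best_total        # step III: best indices and the gap

songs = [183, 241, 197, 365, 122, 251, 144, 308]  # durations in seconds
picked, gap = best_packing(songs, 900)
print(picked, gap)
```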
Given these three steps, many strategies can be applied and many parameters can be adjusted. As one of the MathWorkers here said, the contest is like a genetic algorithm, except that it's the humans
who are doing the mixing and selection. There were many cases where someone found a new strategy to apply to one of the steps, achieving a quantum leap in the score, which was then followed by a
series of marginal improvements as other people tweaked (optimized) the strategy to its full power. This tweaking continued until a new strategy was found and another tuning war began.
Beta function of the non-linear sigma model
1613 views
In chapter 7.1.1 of Tong's notes on String Theory, could someone sketch how I can show the statements that he makes around eq. 7.5?
• That the addition of the counterterm can be absorbed by renormalization the wavefunction and the metric
• How does he conclude from the renormalization $$G_{\mu \nu} \rightarrow G_{\mu \nu} + \dfrac{\alpha '}{\epsilon}\mathcal{R}_{\mu\nu} $$ that the beta function equals $$\beta_{\mu\nu}(G) = \alpha
' \mathcal{R}_{\mu \nu} \quad ? $$
This post imported from StackExchange Physics at 2014-10-01 22:39 (UTC), posted by SE-user Anne O'Nyme
Look up the computation of one-loop beta functions in dimensional regularisation.
This post imported from StackExchange Physics at 2014-10-01 22:39 (UTC), posted by SE-user suresh
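A heuristic sketch of that logic, with signs, scheme conventions and wavefunction-renormalization terms suppressed: in dimensional regularisation a $1/\epsilon$ pole plays the role of a logarithmic divergence, so absorbing the pole into the metric makes the renormalised coupling scale dependent,

$$G_{\mu\nu}^{\text{bare}} = G_{\mu\nu}(\mu) + \dfrac{\alpha'}{\epsilon}\,\mathcal{R}_{\mu\nu} \quad\Longleftrightarrow\quad G_{\mu\nu}(\mu) \simeq G_{\mu\nu} + \alpha'\,\mathcal{R}_{\mu\nu}\,\log\mu + \dots$$

Demanding that the bare coupling is independent of $\mu$ then gives, to one loop,

$$\beta_{\mu\nu}(G) = \mu\,\dfrac{\partial G_{\mu\nu}}{\partial\mu} = \alpha'\,\mathcal{R}_{\mu\nu},$$

up to the sign conventions of the chosen scheme; higher orders in $\alpha'$ correct this.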
The calculation is done in Riemann normal coordinates in GSW, and it is easiest this way. The infinitesimal change in metric under integrating out high-frequency components of the string coordinates
is the definition of the beta function of the nonlinear sigma model, although I found it obscure to do outside of Riemann coordinates. But the link you gave doesn't work, so I can't write an answer,
but the answer is only correct to leading order in the inverse string tension, there are higher order corrections. You can understand everything but the coefficient from a "what else can it be" argument.
If ∂f/∂x = 0, the function f(x, y) has no dependence on the variable x. Select one: True / False
Solution 1
True
Combining Methods
Learn how to use methods within methods.
We'll cover the following
We’ve discussed how to define a method and how to call it. What if one method isn’t enough? What if methods need to do more complicated things?
Calling methods from another method
For example, we could rewrite our add_two method using another add_one method and simply call it twice:
def add_one(number)
  number + 1
end

def add_two(number)
  number = add_one(number)
  add_one(number)
end

puts add_two(3)
This outputs 5 just like our previous examples.
We could also solve this whole problem by simply using the + operator.
However, for the sake of the example, let’s have a look at how we could add a method that does the exact same thing as the + operator:
def sum(number, other)
  number + other
end

puts sum(3, 2)
Which, again, outputs 5.
Note that in this example, our sum method now takes two arguments. When we call it, we also need to pass two numbers.
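As an aside (this example is not part of the original lesson), Ruby raises an ArgumentError when a method is called with the wrong number of arguments:

```ruby
def sum(number, other)
  number + other
end

puts sum(2, 3)  # prints 5

begin
  sum(2)  # too few arguments
rescue ArgumentError => e
  puts e.message  # prints something like "wrong number of arguments (given 1, expected 2)"
end
```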
With this method in place, we can change (refactor) our previous methods to use it:
def sum(number, other)
  number + other
end

def add_one(number)
  sum(number, 1)
end

def add_two(number)
  sum(number, 2)
end
puts add_one(3)
puts add_two(3)
Again, these examples aren’t very realistic because we’d probably just use the + operator in practice.
However, this nicely demonstrates how we can call one method from another and how different methods require different numbers of arguments. | {"url":"https://www.educative.io/courses/learn-ruby-from-scratch/combining-methods","timestamp":"2024-11-08T14:43:38Z","content_type":"text/html","content_length":"813742","record_id":"<urn:uuid:2f7f6d5f-702d-4071-b076-8ad888f42447>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00860.warc.gz"} |
Mikhail Vasil'evich Ostrogradsky
b. 24 September 1801 at Pashennaya (in present day Ukraine)
d. 1 January 1862 at Poltava (in present day Ukraine)
Ostrogradsky and Buniakovsky were among the early probabilists of Russia. Ostrogradsky studied first at Kharkov University, during the years 1817 to 1820, where he became acquainted with the theory of probability.
He departed for Paris in 1822 where he and Buniakovsky both became acquainted with Laplace. Ostrogradsky returned to St. Petersburg in the spring of 1828. Near the end of 1831 he became a member of
the St. Petersburg Academy.
Four papers of interest were written in French and published by the Academy at Saint Petersbourg. These are:
• "Extrait d'un mémoire sur la probabilité des erreurs des tribunaux," Sci. Math. Phys. l'Académie Impériale Sci. Saint-Pétersbourg, Bull. Sci. No. 3 (1834), pp. XIX-XXV. This was published in
The concern here lies in determining the probability of passing a wrong verdict for a tribunal consisting of a given number of judges. The problem is formulated quite poorly. For an
interpretation of the mathematics see the paper of Seneta given below.
• "Mémoire sur le calcul des fonctions géneratrices," L'Académie Impériale Sci. Saint-Petersbourg, Bull. Sci. No. 10 (1836), pp. 73-75.
Ostrogradsky points out an error made by Laplace in his work on generating functions.
• "Sur une question des probabilités. Extrait," Bull. Classe Phys.-Math. l'Acad. Impériale Sci. Saint-Petersbourg, VI, No. 141.142, No. 21.22, (1846) pp. 321-346. Published in 1848.
Ostrogradsky considers sampling from an urn without replacement. The context is one of assessing materials purchased by the military. It is often too time consuming to examine each item
individually. Rather, he proposes using random sampling to identify some portion of the purchases for close study.
• "Sur la probabilité des hypothèses d'après les évènements," Bull. Classe Phys.-Math. l'Acad. Impériale Sci. Saint-Petersbourg, XVII, No. 141.142, No. 21.22, (1846) pp. 321-346. Published in 1848.
Here Ostrogradsky remarks on inverse probability. The original formulation is due to Bayes. However, it is Laplace who seems to have independently established it as an important method of
inference. See his "Mémoire sur la probabilité des causes par les événemens," Savants étrangers 6, 1774, p. 621-656. Oeuvres 8, p. 27-65. It has been translated by Steven Stigler. See "Laplace's
1774 Memoir on Inverse Probability," Statistical Science, Vol. 1, Issue 3 (Aug. 1986) 359-363 and "Memoir on the Probability of the Cause of Events," in the same issue, pp. 364-378.
In addition to these papers, he wrote several popular pieces. He also delivered a course on probability in a sequence of 20 lectures in 1858. The Complete Collected Works of Ostrogradsky has been
published by the Academy of Sciences of the Ukrainian SSR. Most papers are in Russian. Two references may be mentioned, both of which I have found useful.
The paper "M. V. Ostrogradsky as probabilist" by Eugene Seneta appeared in the Ukrainian Mathematical Journal, Vol. 53, No. 8, (2001), pp. 1237-1247.
Gnedenko, writing in Russian, contributed "On Ostrogradsky's works on probability theory." This was published in 1951 in Istoriko.-Matematich. Issledovania., 4, 99-123. However, Oscar Sheynin
translated a portion of this into English. It may be found in his Russian Papers. | {"url":"https://gtfp.net/pulskamp/Ostrogradsky/Ostrogradsky.html","timestamp":"2024-11-05T05:48:32Z","content_type":"text/html","content_length":"6107","record_id":"<urn:uuid:b089776c-902e-4b5f-9dba-8e778f6103b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00035.warc.gz"} |
Spf (E*[[x]]): Your walk through a flower garden
Inspired by the extraordinary expository style of Dr. Kazuya Kato, I’ve started reading parts of a (translated) Japanese children’s book when I’m stuck on a tough paper or concept — revisiting the
concept with such a dreamlike world in mind usually unfolds an illustrative perspective. A misty world which begs to be put into firm ground via prolonged formal and concrete afterthought.
He embraces that teaching can be poetic and tantalizing, providing not a definition but a deep and creative hint that causes an exploratory shift in perspective, allowing you to walk down the path to
the conclusion yourself. I wanted to try to exposit with this philosophy: confusion is expected and encouraged as impetus for reaching understanding. With that in mind, step into your flower garden.
Planted in a line of earth (\(\text{Spec }R\)) there are flowers, \(C\), whose heads are smooth projective genus 1 curves with stems that can retract into the ground, s.t. the flower meets the earth
at one point (a marked point).
Cutting the flowers off at their stems \(C \xrightarrow{p} \text{Spec }R\) \(\Rightarrow\) you’re left with the line of earth (\(\text{Spec }R\))
Cutting into the petals a small ring around their stems (formal disk at marked point)
\(\Rightarrow\) you’re left with the remaining (infinitely-layered) base of the flower sitting on top of \(\text{Spec }R\)
Feeling frustrated that you can’t see clearly, you use your hands to move each disk lying flat on the ground to lay on its side, s.t. these bases are now stacked on top of each other like CDs and
form a loose layered cylinder, centered around the line of earth.
1st layer = \(\text{Spec }R[x]/x^2\), first infinitesimal neighborhood 1st&2nd layer = \(\text{Spec }R[x]/x^3\), second infinitesimal neighborhood;…
You stare at this line of earth, adorned with (infinitely-layered) flower bases on top forming a layered tube around the line of earth. Looking closer, you see how the layers fit together, \(\text
{Spf }R[[x]]\).
(If you’d cut out the disk and forgotten how the layers fit together, you’d find yourself with \(\text{Spec }R[[x]]\) — an awfully boring topology.)
Glance away, toward a different line of earth \(\text{Spec }E^*\) with (infinitely-layered) flower bases on top, \(CP^\infty_E := \text{Spf }E^*(CP^\infty)\).
Someone already cut the flowers down to their bases, before you had a chance to see them!
Flustered, you remember that \(CP^\infty\) is the colimit of \(CP^n\).
You reach into your pocket for your book-keeping device, and use it to look at the connectivity rings of each \(CP^n\).
Content, you label the layers of the flower base:
1st layer is Spec \(E^*(CP^1)\) = Spec \(E^*[x]/x^2\); 1st&2nd layer is Spec \(E^*(CP^2)\) = Spec \(E^*[x]/x^3\), …
Given the flower bases (\(CP^\infty_E\)), can you tell what flowers were over \(\text{Spec }E^*\)?
That is, is there a group object with a map to \(\text{Spec }E^*\) whose formal completion along its 0-section is \(CP^\infty_E\)?
I thought the answer was no, but I think the answer is instead tautological. There is an H-space flower which we can trim to \(CP^\infty_E\). What is it?
The notation for \(CP^\infty_E\) itself is extremely suggestive.
The H-space in question is \(CP^\infty\), and we’ve \(E\)-localized it!
You have enough to precisely decipher this story, each of your loose ends can be tied.
Written on May 27, 2015 | {"url":"https://rin.io/flower-story/","timestamp":"2024-11-05T21:36:44Z","content_type":"text/html","content_length":"8902","record_id":"<urn:uuid:3dc515f5-5fff-4bf3-8227-0f50a9b903bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00894.warc.gz"} |
11.6 Test of a Single Variance - Introductory Statistics | OpenStax
A test of a single variance assumes that the underlying distribution is normal. The null and alternative hypotheses are stated in terms of the population variance (or population standard deviation).
The test statistic is:
$\frac{(n - 1)s^2}{\sigma^2}$
• n = the total number of data
• s^2 = sample variance
• σ^2 = population variance
You may think of s as the random variable in this test. The number of degrees of freedom is df = n - 1. A test of a single variance may be right-tailed, left-tailed, or two-tailed. Example 11.10 will
show you how to set up the null and alternative hypotheses. The null and alternative hypotheses contain statements about the population variance.
Math instructors are not only interested in how their students do on exams, on average, but how the exam scores vary. To many instructors, the variance (or standard deviation) may be more important
than the average.
Suppose a math instructor believes that the standard deviation for his final exam is five points. One of his best students thinks otherwise. The student claims that the standard deviation is more
than five points. If the student were to conduct a hypothesis test, what would the null and alternative hypotheses be?
Even though we are given the population standard deviation, we can set up the test using the population variance as follows.
• H[0]: σ^2 = 5^2
• H[a]: σ^2 > 5^2
A SCUBA instructor wants to record the collective depths each of his students dives during their checkout. He is interested in how the depths vary, even though everyone should have been at the same
depth. He believes the standard deviation is three feet. His assistant thinks the standard deviation is less than three feet. If the instructor were to conduct a test, what would the null and
alternative hypotheses be?
With individual lines at its various windows, a post office finds that the standard deviation for normally distributed waiting times for customers on Friday afternoon is 7.2 minutes. The post office
experiments with a single, main waiting line and finds that for a random sample of 25 customers, the waiting times for customers have a standard deviation of 3.5 minutes.
With a significance level of 5%, test the claim that a single line causes lower variation among waiting times (shorter waiting times) for customers.
Since the claim is that a single line causes less variation, this is a test of a single variance. The parameter is the population variance, σ^2, or the population standard deviation, σ.
Random Variable: The sample standard deviation, s, is the random variable. Let s = standard deviation for the waiting times.
• H[0]: σ^2 = 7.2^2
• H[a]: σ^2 < 7.2^2
The word "less" tells you this is a left-tailed test.
Distribution for the test: $\chi^2_{24}$, where:
• n = the number of customers sampled
• df = n – 1 = 25 – 1 = 24
Calculate the test statistic:
$\chi^2 = \frac{(n - 1)s^2}{\sigma^2} = \frac{(25 - 1)(3.5)^2}{7.2^2} = 5.67$
where n = 25, s = 3.5, and σ = 7.2.
Probability statement: p-value = P ( χ^2 < 5.67) = 0.000042
Compare α and the p-value:
α = 0.05; p-value = 0.000042; α > p-value
Make a decision: Since α > p-value, reject H[0]. This means that you reject σ^2 = 7.2^2. In other words, you do not think the variation in waiting times is 7.2 minutes; you think the variation in
waiting times is less.
Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that a single line causes a lower variation among the waiting times or with a single line, the
customer waiting times vary less than 7.2 minutes.
Using the TI-83, 83+, 84, 84+ Calculator
In 2nd DISTR, use 7:χ2cdf. The syntax is (lower, upper, df) for the parameter list. For Example 11.11, χ2cdf(-1E99,5.67,24). The p-value = 0.000042.
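The left-tailed p-value above can also be reproduced without a calculator. The sketch below (ours, not part of the text) uses the closed-form chi-square CDF that is available when the degrees of freedom are even:

```python
import math

def chi2_cdf_even_df(x, df):
    """Lower-tail chi-square CDF for even df, via the closed-form
    survival function of the Erlang distribution."""
    m = df // 2  # assumes df is even
    term, total = 1.0, 1.0
    for i in range(1, m):
        term *= (x / 2) / i
        total += term
    return 1.0 - math.exp(-x / 2) * total

n, s, sigma = 25, 3.5, 7.2
chi2_stat = (n - 1) * s**2 / sigma**2         # test statistic, about 5.67
p_value = chi2_cdf_even_df(chi2_stat, n - 1)  # left-tailed p-value
print(f"{chi2_stat:.2f} {p_value:.6f}")       # → 5.67 0.000042
```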
The FCC conducts broadband speed tests to measure how much data per second passes between a consumer’s computer and the internet. As of August of 2012, the standard deviation of Internet speeds
across Internet Service Providers (ISPs) was 12.2 percent. Suppose a sample of 15 ISPs is taken, and the standard deviation is 13.2. An analyst claims that the standard deviation of speeds is more
than what was reported. State the null and alternative hypotheses, compute the degrees of freedom, the test statistic, sketch the graph of the p-value, and draw a conclusion. Test at the 1%
significance level. | {"url":"https://openstax.org/books/introductory-statistics/pages/11-6-test-of-a-single-variance","timestamp":"2024-11-03T09:47:18Z","content_type":"text/html","content_length":"372107","record_id":"<urn:uuid:8d205439-377e-4b15-b353-7ddf24125a20>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00117.warc.gz"} |
Coffee Ratio Calculator
Last updated:
Coffee Ratio Calculator
Welcome to the coffee ratio calculator! We'll take you on an exciting journey to the world of coffee measurements and unique methods of coffee preparation.
We'll teach you how to calculate coffee-to-water ratios and show you how much coffee per cup you really need, depending on the given method.
Read on to discover how to use this tool, and visit our other coffee-related tools, such as cold brew ratio calculator and coffee kick calculator, to learn how to prepare your cold brew coffee
concentrate, and find out the level of alertness depending on the time you've slept and the dose of caffeine you've ingested.
How do I calculate coffee ratios?
It's already easy, but we're gonna make it even easier:
1. Choose your desired coffee ratio.
We decided on 9:40 because it seemed trickier than the others.
2. Decide on the final volume of coffee you'd like to make.
We chose 250mL (8.5 US fl oz).
3. Let's calculate!
• Altogether, we need 49 parts of the drink (9:40, 9+40)
• We'll use two equations:
Coffee volume = (Total volume / Total parts) × Coffee parts
Water volume = (Total volume / Total parts) × Water parts
Note: We need to assume that 1 mL = 1 g.
The coffee-to-water ratio is 9:40, so we'll need 9 parts coffee and 40 parts water.
Coffee volume = (250 mL / 49) × 9
Coffee volume = 45.9 mL = 45.9 g
Water volume = (250 mL / 49) × 40
Water volume = 204.1 mL
There you go! We need 45.9 g of coffee and 204.1 mL of water.
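The same arithmetic can be sketched in a few lines of Python (the helper name and the one-decimal rounding are our own choices, not part of the calculator):

```python
def coffee_ratio(total_ml, coffee_parts, water_parts):
    """Split a target drink volume into coffee (g) and water (mL),
    assuming 1 mL of water weighs 1 g."""
    total_parts = coffee_parts + water_parts
    coffee_g = total_ml / total_parts * coffee_parts
    water_ml = total_ml / total_parts * water_parts
    return round(coffee_g, 1), round(water_ml, 1)

print(coffee_ratio(250, 9, 40))  # → (45.9, 204.1)
```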
Note: This calculation doesn't take into account the loss of water during the process. However, our coffee water calculator does take this into consideration.
While you're here, you might like to take some time to look at our other caffeine-filled tools. Assess the amount of coffee you need and find out what a lethal dose of caffeine is by using Omni
coffee calculator and caffeine calculator, respectively.
How do I calculate French press ratio?
The typical French press coffee to water ratio is 1:12. It means that for every 1 g of coffee, we'll need 12 mL of water.
You can simply calculate how much water you'll need for a given amount of coffee using the equation below:
Amount of water you need (mL) = Amount of coffee you have (g) × 12
How do I calculate coffee ratio for espresso?
The coffee ratio for espresso is unique: due to the small amount of water we use, we need to take care of every single drop.
That's why, to achieve a perfect 1:2 coffee grounds to water ratio, we need to use the 1:3.6 ratio instead. (We need to make up for water losses in the coffee machine โ the so-called coffee cake
absorbs about 80% of it!)
Amount of water you need (mL) = Amount of coffee you have (g) × 3.6
Alternative coffee chart
Does anybody need a V60 calculator? Let's break down all the ratios one by one!
Type Ratio Comments
Ristretto 1:1.5 The smallest coffee, also called caffè corto.
Espresso 1:2 Watch out for water loss! To achieve a 1:2 ratio, use 1:3.6 instead.
Lungo 1:3 25 s, 88-92°C, results in 25 mL of coffee.
Cold Brew 9:40 Takes 12-24 h to make!
Moka Pot 1:10 Coffee from the stovetop.
Aeropress 1:11 Making one cup at a time.
French Press 1:12 The easiest to use.
V60 3:50 So-called drip coffee.
Siphon 3:50 Have you ever dreamt of becoming an alchemist?
Chemex 1:17 Blooming coffee in an hourglass.
How do I calculate coffee beans to water ratio?
Our coffee measurements need a few assumptions:
• 1 g = 1 mL
• 1 coffee bean weighs 0.13 g (0.0046 oz)
If your desired ratio is, e.g., 1:6, you'll need 1 portion of coffee and 6 portions of water.
1. For every 1 g of coffee, you'll need 6 mL of water.
2. For every 1 g of coffee, you'll need ~8 coffee beans.
3. All together, you'll need around 8 coffee beans for every 6 mL of water you want to use. | {"url":"https://www.omnicalculator.com/food/coffee-ratio","timestamp":"2024-11-03T23:19:49Z","content_type":"text/html","content_length":"469872","record_id":"<urn:uuid:fe2bb77c-f330-4d7f-b716-b6148b1d956e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00761.warc.gz"} |
Azimuthal (or zenithal) projections are projections onto a plane that is tangent to some reference point on the globe. If the reference point is one of the poles the projections are polar azimuthal
(zenithal) views. If the reference point lies on the equator the projections are termed normal. For all other reference points the projections are oblique. There are five azimuthal projections
considered below: equal area, gnomonic, equidistant, stereographic, and orthographic.
A generalized set of equations may be written for the family of azimuthal projections (Snyder, 1993):
where λ₀ is the longitude of the point of tangency and z is the great circle distance from the center.
One of the important azimuthal projections is the equal area projection developed by J.H. Lambert, for which ρ is defined as ρ(z) = 2 sin(z/2).
The gnomonic projection is a perspective view of the globe as seen by an observer at its center. For this projection ρ is defined by ρ(z) = tan(z).
The equidistant projection has parallels that are the same distance apart and for which ρ is defined by ρ(z) = z.
The stereographic projection is a perspective view of the globe from a point on the surface that lies on the far side from the reference point. For example, if the reference point is the North Pole
then the viewpoint is the South Pole. For this projection ρ is defined by ρ(z) = 2 tan(z/2).
The orthographic projection is a perspective view from infinitely far above the surface of the globe. ρ is defined by ρ(z) = sin(z).
It is interesting to compare the different projections by plotting these functions
The red line is the equal area projection, the blue is the gnomonic, the green is for the equidistant, black represents the stereographic and yellow the orthographic. This illustration tells us that
the gnomonic projection cannot be used to project very large areas (e.g. entire hemispheres). We may also need to exercise some caution with the stereographic projection as well.
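The behaviour described above can be checked numerically. The sketch below (ours, not part of the worksheet) uses the standard radial functions ρ(z) for the five projections on a unit sphere, as given by Snyder (1993):

```python
import math

# Standard radial functions rho(z) for azimuthal projections (unit sphere):
rho = {
    "equal area":    lambda z: 2 * math.sin(z / 2),
    "gnomonic":      lambda z: math.tan(z),
    "equidistant":   lambda z: z,
    "stereographic": lambda z: 2 * math.tan(z / 2),
    "orthographic":  lambda z: math.sin(z),
}

# Near z = 90 degrees the gnomonic radius diverges while the others
# remain bounded, which is why the gnomonic projection cannot show
# an entire hemisphere:
z = math.radians(89.0)
for name, f in rho.items():
    print(f"{name:13s} {f(z):10.3f}")
```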
We make the projections as follows. Note how the computations are optimized to eliminate repeated trigonometric calculations involving the same angles.
> for pname in [`equal area`,gnomonic,equidistant,stereographic,orthographic] do optcalc:=[optimize([`Azimuthal/z`,`Azimuthal/`.pname.`/rho`,x=`Azimuthal/x`,y=`Azimuthal/y`])]; newcalc:=[op(optcalc
[1..nops(optcalc)-2]),map(rhs,optcalc[nops(optcalc)-1..nops(optcalc)])]; mapcoords(`Azimuthal `.pname, input = [alpha, phi], coords = [Phi = '`if`(type(_Phi,name), 0.0, _Phi/180*Pi)', Lambda = '`if
`(type(_Lambda,name), 0.0, _Lambda/180*Pi)', lambda = 'readlib(`Maps/shiftf`)'(alpha - Lambda, Pi), op(newcalc)], params = [r, _Phi,_Lambda], view = [-180..180,-90..90,13,7,-120..120,-120..120]); od:
The code for the equal area projection is shown below for illustrative purposes.
The parameters are, respectively, the latitude and longitude (in degrees) of the point to appear at the centre of the projection. The default value of both angles is zero.
Below we show the equal area projection from a point of view above the North Pole. Note that the equations given above involve a division by zero if we specify the reference latitude as 90 degrees.
We can get around this using a traperror command as we have done elsewhere but in this case we avoid the problem by using a latitude that is just slightly less than 90 degrees. The resulting
projection is not affected by this simple-minded trick. Note, further, that the grid is constructed separately in order to avoid the appearance of some spurious lines that would otherwise appear.
Note how the entire surface of the earth fits into a circle. Viewed from the North Pole as done here we see that the outer "circle" is, in fact, the outline of Antarctica. Below is the equal area
projection from the equator centered on North and South America.
Here is the azimuthal equidistant projection from latitude 0 degrees, longitude 0 degrees. The conversion of the grid leads to the appearance of a few straight lines that should not appear but which
have been retained as they don't significantly detract from the appearance of the projection.
We see here what was hinted at by our earlier comparison of the projections: some points are so far away that the portion of interest is greatly compressed. Use of the view option allows us to display a projection that we find quite appealing.
We now look at the world using the orthographic projection viewed from a point very close to the North pole.
A passing glance at the above might suggest a successful projection. More careful inspection of this image, however, reveals that all is not well. We can see the tip of South America occupying
same space as parts of the USA and Canada; New Zealand is unreasonably close to the Siberian peninsula. We are, in fact, seeing southern hemisphere through the northern. The projection is quite
correct from the mathematical point of view; in practice, however, the earth is not transparent. What we need to do is exclude the land/water bodies that are not visible from infinitely far above the
reference point. For an orthographic projection of the northern (southern) hemisphere we can do this very simply be redefining our coordinate system so that only points with a positive (negative)
latitude are considered. The following, for example, would suffice for a view of the northern hemisphere.
> mapcoords(Az, input = [alpha, phi], coords = [q='`if`(phi<0, 0, RETURN([FAIL,FAIL]))', Phi = '`if`(type(_Phi,name), 0.0, _Phi/180*Pi)', Lambda = '`if`(type(_Lambda,name), 0.0, _Lambda/180*Pi)',
lambda = 'readlib(`Maps/shiftf`)'(alpha - Lambda, Pi), `Azimuthal/z`, `Azimuthal/`.pname.`/rho`, [r*`Azimuthal/x`, r*`Azimuthal/y`]], params = [r, _Phi,_Lambda], view =
Simply change the < to > for views of the southern hemisphere. Only slightly more complicated constructions will suffice for equatorial views. Sadly, such simple modifications will not work for
oblique projections (from points that are not above the poles or equator) and we need something more rigorous.
All points south of the equator are invisible to an observer stationed far above the North pole. Conversely, all points north of the equator are invisible to an observer located far below the south
pole. For an observer stationed high over the Atlantic Ocean just south of the North African bulge (0 degrees latitude, 0 degrees longitude) then all points within +/- 90 degrees east and west are
visible. It should be obvious that any point that is more than 90 degrees removed from the reference point directly below the observer cannot be seen by that observer. It is important to realize that
the 90 degrees must be measured along the great circle that joins the point in question to the reference point. Thus, in order to check the angular separation between two points on the surface of the
reference sphere we need to compute the great circle distance between them. This is a relatively simple problem in spherical or differential geometry with the following result.
cos(z) = sin(φ₀) sin(φ) + cos(φ₀) cos(φ) cos(λ − λ₀)

where z is the great circle distance (in angular units), (λ₀, φ₀) are the longitude and latitude of the reference point and (λ, φ) are the longitude and latitude of the point of interest. We encapsulate this result in a procedure for repeated use in what follows.
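The greatcircleangle procedure itself does not survive in this copy of the worksheet. A Python equivalent of the spherical law of cosines (our reconstruction, not the original Maple) might read:

```python
import math

def great_circle_angle(lam0, phi0, lam, phi):
    """Angular great-circle distance (radians) between the reference
    point (lam0, phi0) and the point (lam, phi), all in radians."""
    c = (math.sin(phi0) * math.sin(phi)
         + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
    # Clamp to guard against round-off pushing |c| just past 1:
    return math.acos(max(-1.0, min(1.0, c)))

# A point on the equator 90 degrees of longitude away is a quarter
# of a great circle from the reference point:
print(great_circle_angle(0.0, 0.0, math.pi / 2, 0.0))  # → 1.5707963...
```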
The next task is to design a procedure to test whether or not a given point is more or less than 90 degrees ( radians) away from the reference point.
> greatcircletest := proc(A::list, Lambda, Phi)
    local gca;
    gca := evalf(greatcircleangle(Lambda*Pi/180, Phi*Pi/180,
                                  A[1]*Pi/180, A[2]*Pi/180));
    if type(A, list(numeric)) and gca > evalf(Pi/2) + 0.0000001 then
      RETURN([FAIL, FAIL])
    else
      RETURN(A)
    fi
  end;
Note that the procedure takes as arguments the coordinates of the point in question in the form of a list of two numbers (this is the form used everywhere else in this material on map projections),
and the longitude and latitude (both in degrees) of the reference point. If the point is more than radians from the reference point then the procedure returns [FAIL, FAIL], otherwise it returns the
coordinates of the point unchanged. Maple's plot routines will ignore points with FAIL coordinates and lines that include points with such coordinates somewhere in the sequence will be broken (which
is exactly the desired behavior here). We now need a procedure that applies greatcircletest to all of the coordinates of our world maps:
> Hidehemisphere := proc(areamap::PLOT, Lambda, Phi)
    local newmap, allcurves, numcurves, datalist, numpoints, i, newpoints;
    newmap := areamap;
    allcurves := select(has, [op(areamap)], CURVES);
    numcurves := nops(allcurves);
    for i to numcurves do
      for datalist in [op(allcurves[i])] do
        if type(datalist, list) then
          newpoints := map(greatcircletest, datalist, Lambda, Phi);
          newmap := subs(datalist = newpoints, newmap);
        fi;
      od;
    od;
    RETURN(newmap)
  end;
Let us test our procedure by developing projections from a point somewhere near the Black Sea. The first step is to recreate the basic map showing only those points of interest.
The simple equirectangular projection of this map is shown below.
We can now display this map in any coordinate system.
The above shows the very large distortion that results from using the gnomonic projection over too great an area. This is another of those projections where we must restrict the view window.
Note the excessive distortion as we move further away from the reference point. Let us try again and restrict the view window still more.
As a second example of application of hiding parts of the world map we present azimuthal projections from above the North Pole.
We leave it as an exercise for readers to create south polar projections and normal equatorial projections of eastern and western hemispheres. | {"url":"https://www.maplesoft.com/applications/Preview.aspx?id=3583","timestamp":"2024-11-05T03:32:25Z","content_type":"application/xhtml+xml","content_length":"140222","record_id":"<urn:uuid:cfe9c1ca-2ce7-4cfd-96c2-2e1b93a3390e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00608.warc.gz"} |
LCM Calculator - Find Least Common Multiples | PineCalculator
Introduction to LCM Calculator
The LCM calculator with steps is a digital tool that finds the lowest common multiple of the numbers you enter in a few seconds. Our tool evaluates two or more numbers and returns the least common multiple, the smallest number that is divisible by each of the given numbers.

You do not need to do complex calculations or try different methods one by one to see which one gives you the least common multiple most quickly. That is why we introduce our least common multiple calculator to solve LCM problems.
What is LCM in Math
The least common multiple (LCM) is the smallest of all the common multiples of the given numbers. It is also known as the lowest common multiple because it is the least number that is divisible by each of the given integers.
There are several techniques like prime factorization, a listing of factors, or a division method that are used to find the least common factors.
The LCM formula takes two integers (a, b), where gcd(a, b) is the greatest common divisor and lcm(a, b) is the least common multiple. In the LCM calculator, the least common multiple is calculated using the following formula,

$$ lcm(a,b) \;=\; \frac{|a \cdot b|}{\gcd(a,b)} $$
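This formula is easy to sketch in a couple of lines of Python (the helper name lcm is our own):

```python
import math

def lcm(a, b):
    """Least common multiple via lcm(a, b) = |a * b| / gcd(a, b)."""
    return abs(a * b) // math.gcd(a, b)

print(lcm(6, 9), lcm(10, 12), lcm(9, 12))  # → 18 60 36
```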
How to Calculate LCM using Least Common Multiple Calculator
The LCM finder uses various methods to find the least common multiple of different sets of integers. Its advanced features give you solutions for calculating the least common multiple quickly and easily. These methods are:
How to Find the Lowest Common Multiple Using Prime Factorization
Let's understand the prime factorization method for finding the least common multiple with the help of an example. Here we have two integers, 6 and 9.

First, the lowest common multiple calculator finds the prime factors of each number, then writes these factors in exponential form. Lastly, it takes the product of the highest power of each prime factor of 6 and 9, as shown in the example below.
Lowest Common Multiple 6 and 9
Using the prime factorization method,
$$ 6 \;=\; 2 \times 3 $$
$$ 9 \;=\; 3 \times 3 \;=\; 3^2 $$
Taking the highest power of each prime factor:
$$ LCM \;=\; 3^2 \times 2 \;=\; 9 \times 2 \;=\; 18 $$
How to Find the Lowest Common Multiple using the Listing Factor
In the listing method, the LCM calculator lists the multiples of the given numbers, 10 and 12. After that, it marks the multiples common to both lists and selects the least of them.

In the example below, 60 is the smallest number that appears in the lists of multiples of both 10 and 12, so the least common multiple of 10 and 12 is 60.
Lowest Common Multiple of 10 and 12
Using the list of multiples method,
$$ Multiples\; of\; 10: $$
$$ 10,\; 20,\; 30,\; 40,\; 50,\; 60,\; 70 $$
$$ Multiples\; of\; 12: $$
$$ 12,\; 24,\; 36,\; 48,\; 60,\; 72,\; 84 $$
$$ LCM(10, 12) \;=\; 60 $$
How to Find the Lowest Common Multiple using the Division Method
To find the LCM by the division method, the LCM calculator finds the smallest prime number that divides at least one of the given numbers (it writes the prime number on the left side, as shown in the example below).

If the prime number divides only one of the numbers, the other number is carried down unchanged. The least common factor calculator continues this process until the entries for both numbers become 1. Here 9 and 12 are divided by the same prime numbers.

Lastly, the LCM calculator writes out all the prime factors and multiplies them to find the least common multiple, which is 36.
Lowest Common Multiple 9 and 12
Use the division method to find the LCM:
\begin{array}{c|rr} 2 & 9 & 12 \\ 2 & 9 & 6 \\ 3 & 9 & 3 \\ 3 & 3 & 1 \\ & 1 & 1 \\ \end{array}
Factors are
$$ LCM \;=\; 2 \times 2 \times 3 \times 3 $$
$$ LCM \;=\; 36 $$
How to Find the LCM in the LCM Calculator
The least multiple calculator has a user-friendly design that enables you to calculate the least common multiple of any set of numbers.

Before using our least common multiple calculator, follow these simple steps so that you do not face any inconvenience during the calculation:
1. Choose the method through which you want to evaluate numbers to get a solution of lcm.
2. Enter the numbers value in the input box.
3. Review your input number before hitting the calculate button to start the evaluation process in the calculator lcm.
4. Click the “Calculate” button to get the result of your given integer problem.
5. Before trying your own numbers, you can use the load-example option to assure yourself that the calculator provides an accurate solution.
6. Click on the “Recalculate” button to get a fresh page for solving more LCM questions.
Final Result of LCM Finder
The LCM calculator with steps gives you the solution of a given integer problem when you add the input into it. It instantly provides a detailed procedure for finding the LCM of the given values. The result may contain:
The result option gives you the solution to the least common multiple problem.
A second option shows the full evaluation process step by step when you click on it.
Benefits of Using Lowest Common Multiple Calculator
The least multiple calculator provides you with multiple benefits whenever you use it to find the least common multiple of integers. These benefits are:
• The LCM calculator is a free-of-cost tool, so you can use it anytime to find the LCM of a given number problem in real time without paying anything.
• It is a handy tool that lets you get the solution through various methods for LCM questions.
• You can try out our least common factor calculator to practice new examples and get a strong hold on the least common multiple concept.
• Our least common multiple calculator saves you the time and effort of doing complex LCM calculations and provides the least common multiple.
• The LCM finder is a reliable tool that provides accurate solutions whenever you use it to evaluate integers for the LCM, without any human error.
• The LCM calculator presents the solution as a complete, step-by-step process, giving you clarity on the lowest common multiple method.
Wolfram|Alpha Examples: Common Core Math: High School Statistics & Probability: Using Probability to Make Decisions
Examples for
Common Core Math: High School Statistics & Probability: Using Probability to Make Decisions
In advanced high school courses, students learn to interpret computed probabilities of random events to anticipate possible outcomes. Students develop probability distributions for random variables.
Based on concepts of probability, students define strategies and compute payoffs for games of chance and other real-world contexts.
Common Core Standards
Get information about Common Core Standards.
Look up a specific standard:
Search for all standards in a domain:
Analyzing Outcomes
Use probability to investigate chance events.
Determine expected payoffs (CCSS.Math.Content.HSS-MD.B.5):
Make random selections (CCSS.Math.Content.HSS-MD.B.6):
Expected Values
Calculate expected values of random events.
Define and visualize probability distributions (CCSS.Math.Content.HSS-MD.A.1):
Compute expected values of random variables (CCSS.Math.Content.HSS-MD.A.2):
Find probabilities of specific outcomes (CCSS.Math.Content.HSS-MD.A.3):
Cloud WordNet Browser
Antonyms of adj nonintersecting
1 sense of nonintersecting
Sense 1
nonintersecting -- ((of lines, planes, or surfaces) never meeting or crossing)
INDIRECT (VIA oblique, parallel) ->
perpendicular -- (intersecting at or forming right angles; "the axes are perpendicular to each other")
INDIRECT (VIA parallel, perpendicular) ->
oblique -- (slanting or inclined in direction or course or position--neither parallel nor perpendicular nor right-angled; "the oblique rays of the winter sun"; "acute and obtuse angles are oblique angles"; "the axis of an oblique cone is not perpendicular to its base")
Similarity of adj nonintersecting
1 sense of nonintersecting
Sense 1
nonintersecting -- ((of lines, planes, or surfaces) never meeting or crossing)
parallel (vs. perpendicular) (vs. oblique) -- (being everywhere equidistant and not intersecting; "parallel lines never converge"; "concentric circles are parallel"; "dancers in two parallel rows")
How Many Work Weeks In a Year? [2023 & 2024]
You might be here wondering, how many work weeks in a year? It’s a question that comes up when we think about how much time we spend working.
Also, whether you’re an employee curious about your annual commitment or an employer seeking to optimize productivity, knowing the number of work weeks can be crucial.
In this comprehensive guide, we will explore the calculation methods and the factors affecting workweek calculations, so that you can make the most of your time.
How Many Work Weeks In a Year?
How many work weeks are in a year? By definition, a work week is a commonly used unit to measure the duration of work in a professional setting.
Typically, a work week consists of five consecutive business days, excluding weekends. However, the exact number of work weeks in a year may vary depending on different factors such as public
holidays and regional variations.
To determine the number of work weeks in a year, we first need to understand how many weeks there are in a year.
On average, a year consists of 52 weeks. However, there are instances when a year may have 53 weeks, which occurs approximately every five to six years.
Although all 52 weeks of the year are workable, hardly anyone works all 52 weeks.
The average number of working weeks per year hovers around 36!
Considering the standard five-day workweek, we can estimate the number of workweeks in a year by subtracting the number of public holidays and vacation days.
How Many Hours a Week Does the Average American Work? (Full-time & Part-time)
Many articles and laws of federal employment legislation, such as the Affordable Care Act or “Obamacare,” use the 40-hour weekly level to determine what constitutes a full-time employee. This
criterion is commonly considered the benchmark for full-time employment.
And, the 2014 Employment and Education survey found that salaried employees put in five more hours per week than full-time hourly employees (49 hours vs. 44 hours, respectively).
The ACA and the IRS
Both the ACA, also known as Obamacare, and the IRS, specify requirements for full-time employment. An employee is regarded as full-time by the IRS/ACA if they:
• Work a total of 130 hours per month, or
• Work a minimum of 30 hours per week.
Employers generally designate an employee as full-time for these purposes for any three- to twelve-month period in which they averaged 30 or more hours per week. By the same criterion, an employee working less than 30 hours per week is considered part-time!
Now, let’s delve deeper into the calculation and explore the factors that influence the final count.
Factors Affecting the Number of Work Weeks
Several factors play a role in determining the exact number of work weeks in a year. These factors may differ based on various aspects, including country, industry, and organization. Here are some of
the key factors:
1. Public Holidays
Public holidays are non-working days that vary from country to country. They include national holidays, cultural observances, and other significant events. The number of public holidays in a year
directly affects the total number of work weeks.
For example, a country with ten public holidays will have ten fewer work weeks in the year.
Below is a list of the 11 US federal holidays that are eligible for paid time off:
1. New Year’s Day – 1st of January.
2. Birthday of Martin Luther King, Jr. – 3rd Monday in the month of January.
3. Washington’s Birthday – 3rd Monday in the month of February.
4. Memorial Day – Last Monday in the month of May.
5. Juneteenth Independence Day – 19th of June.
6. Independence Day – 4th of July.
7. Labor Day – First Monday in the month of September.
8. Columbus Day – Second Monday in the month of October.
9. Veterans Day – 11th of November.
10. Thanksgiving Day – Fourth Thursday in the month of November.
11. Christmas Day – 25th of December.
2. Vacation & Sick Leave
Vacation days are additional days off granted to employees for rest and relaxation. The number of vacation days allocated can vary depending on the employment contract, company policy, and local
labor laws, the average being 12.
These days are typically chosen by employees and are subtracted from the total number of workdays in a year.
3. Part-Time Employment
In some cases, individuals may work part-time or have reduced hours compared to full-time employees. Part-time employees usually work fewer days or hours each week. Consequently, the number of work
weeks for part-time employees will be lower than that of full-time employees.
4. Industry Norms
Certain industries, such as retail and hospitality, may have different work patterns compared to standard office-based jobs. For example, some industries may require employees to work on weekends or
have irregular schedules. The industry in which you work can impact the number of work weeks in a year.
5. Company Policies
Company policies and practices also influence the number of work weeks. Some organizations may have unique policies regarding flexible work arrangements, compressed workweeks, or rotational shifts.
These variations can affect the overall number of work weeks experienced by employees.
Related: How Many Work Hours In a Year?
How Many Work Weeks In a Year?
To arrive at the number of work weeks in a year, we subtract the holidays and weekends from the total number of days in a year and divide by 7 (the number of days in a week).
Considering there are 365 days in a non-leap year, let's get to the math:
The number of workdays = 365 (days in a year) – 11 (public holidays) – 52*2 (104 Saturdays & Sundays) = 250 days.
The total number of work weeks is 250/7 = 35.71
The number of work weeks in a year roughly translates to 35.71!
Note: This doesn’t include vacation or sick leaves (12 days PTO). Taking PTO into consideration, this number drops to 34.
How Many Work Weeks In 2023?
To calculate the number of work weeks in a given year, we subtract the holidays and weekends from the total number of days in the year and divide by 7 (the number of days in a week).
For 2023, the number of work weeks roughly translates to 35.57 (as the total number of workdays is 249 in 2023 and 249/7 = 35.57)
How Many Work Weeks In 2024?
As previously done for 2023, we subtract the weekends and holidays from the total number of days in 2024 and divide it by seven (the number of days in a week) to determine the number of work weeks in
the year.
For the year 2024, the number of work weeks roughly translates to 35.57 (as the total number of workdays is 249 in 2024 and 249/7 = 35.57)
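As a rough cross-check of the arithmetic above, the weekday count for a given year can be computed directly. This sketch subtracts a flat 11 federal holidays and ignores holidays observed on adjacent weekdays, so it only approximates real calendars:

```python
from datetime import date, timedelta

def approx_work_weeks(year, public_holidays=11):
    """Count Mon-Fri dates in the year, subtract a flat number of
    public holidays, and divide by 7 as done in the article."""
    d, end = date(year, 1, 1), date(year, 12, 31)
    weekdays = 0
    while d <= end:
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            weekdays += 1
        d += timedelta(days=1)
    return (weekdays - public_holidays) / 7

print(round(approx_work_weeks(2023), 2))  # → 35.57
```

For 2023 this reproduces the figure quoted above; for other years the flat-holiday assumption can shift the result by a fraction of a week.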
Now that we have an understanding of the factors that influence the number of work weeks in a year, let’s address some frequently asked questions to further clarify this topic.
Related: How Many Weekends In a Year?
How Many Work Weeks In a Year? FAQs
1. How Many Work Weeks are there in a Standard Year?
In a standard year with no additional factors considered, there are approximately 36 work weeks. This assumes a five-day workweek and no public holidays falling on workdays.
2. How Many Work Weeks In a Month?
There are typically 21 (20.8) working days in a month, excluding weekends and holidays. With no additional factors considered, there are approximately 3 work weeks in a month. This assumes a five-day workweek and no public holidays falling on workdays.
3. What Happens When a Year Has 53 Weeks?
When a year has 53 weeks, it means that there is an extra week beyond the usual 52 weeks. This occurs because the 365-day calendar does not perfectly align with the seven-day week. It happens
approximately every five to six years.
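The 53-week pattern can be checked with Python's ISO calendar: December 28 always falls in the last ISO week of its year, so its week number gives the year's week count. This is an illustrative check, not part of the article:

```python
from datetime import date

def iso_week_count(year):
    # Dec 28 is always in the final ISO week of its year,
    # so its ISO week number equals the year's week count.
    return date(year, 12, 28).isocalendar()[1]

long_years = [y for y in range(2020, 2041) if iso_week_count(y) == 53]
print(long_years)  # → [2020, 2026, 2032, 2037]
```

The gaps between those years (6, 6, 5) match the "approximately every five to six years" pattern described above.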
4. Do Work Weeks Vary Across Countries?
Yes, work weeks can vary across countries due to differences in public holidays and cultural practices. Some countries have a six-day workweek, while others may have shorter workweeks, such as a
four-day workweek.
5. How Many Hours Does the Average Person Work a Week?
The Bureau of Labor Statistics monitors the average number of hours each American works weekly. The most current data, collected before the repercussions of the coronavirus pandemic, revealed that the average American worker put in 34.4 hours of work per week. This can vary depending on the job, the region, and other factors affecting the employer.
6. Are Work Weeks the Same for All Industries?
No, work weeks can differ across industries. Certain industries, like healthcare and emergency services, often require employees to work on weekends and public holidays. On the other hand, industries
like banking and finance primarily operate on weekdays.
7. Can Company Policies Affect the Number of Work Weeks?
Absolutely. Company policies regarding flexible work arrangements, remote work, and shifts can influence the number of work weeks experienced by employees. Some organizations may have compressed
workweeks, allowing employees to complete their weekly hours in fewer days.
8. What if I work part-time? How does it affect the number of work weeks?
If you work part-time, the number of work weeks will be fewer compared to full-time employment. Part-time employees work fewer hours or days each week, resulting in a reduced number of work weeks.
Hey there, welcome to my blog!
I’m Swati, a mom, a personal finance enthusiast, and the owner of TheBlissfulBudget. My work has been featured in major publications including Fox 10, Credit Cards, Cheapism, How to Fire, Databox &
Referral Rock.
I help busy budgeters like you save and make money by utilizing simple yet effective methods that can create wonders.
My Mantra: You are entitled to live the life you desire, and financial bliss should be simple to obtain–check out my blog for helpful tips on acquiring wealth easily.
Which of the following is not an advantage of using credit cards?
a. keeps you from having to carry cash or checks
b. makes it easier to carry a low interest balance
c. no interest charges accrue if balance is paid in full every cycle
d. none of the above
Multiply 2 Digit Numbers By 10 Worksheet 2024 - NumbersWorksheets.com
Multiply 2 Digit Numbers By 10 Worksheet
Multiply 2 Digit Numbers By 10 Worksheet – This multiplication worksheet focuses on teaching students how to mentally multiply whole numbers. Students can use customized grids that fit exactly one problem. The worksheets also cover decimals, fractions, and exponents, and there are even multiplication worksheets on the distributive property. These worksheets are a must-have for your math class. They can be used in class to learn how to mentally multiply whole numbers and line them up. Multiply 2 Digit Numbers By 10 Worksheet.
Multiplication of whole amounts
If you want to improve your child’s math skills, you should consider using a multiplication-of-whole-numbers worksheet. These worksheets will help you learn this simple idea. You can choose to use one-digit, two-digit, or three-digit multipliers. Powers of 10 are also an excellent option. These worksheets will help your child practice long multiplication and reading the numbers. They are also a wonderful way to help your child understand the importance of knowing the different kinds of whole numbers.
Multiplication of fractions
Having multiplication of fractions on a worksheet can help teachers plan and prepare lessons effectively. Using fraction worksheets allows teachers to quickly assess students’ understanding of fractions. Students may be asked to finish the worksheet within a certain time and then mark their answers to see where they need more instruction. Students can benefit from word problems that relate math to real-life situations. Some fraction worksheets include examples of comparing and contrasting numbers.
Multiplication of decimals
When you multiply two decimal numbers, make sure to line them up vertically. The product must contain as many decimal places as the two factors contain combined. For example, 0.1 × 11.2 has one decimal place in each factor, so the product 1.12 has two decimal places. The product can then be rounded to the nearest whole number if needed.
Multiplication of exponents
A math worksheet for multiplication of exponents can help you practice multiplying and dividing numbers with exponents. This worksheet will also give problems that require students to multiply two different exponents. By selecting the “All Positive” version, you will be able to view other versions of the worksheet. Besides, you can also enter specific directions on the worksheet itself. When you’re finished, you can just click “Create” and the worksheet will be saved.
Section of exponents
The basic rule for division of exponents with the same base is to subtract the exponent in the denominator from the exponent in the numerator. For example, x^5 divided by x^2 equals x^3. If the bases of the two numbers are not the same, you cannot simply subtract the exponents. Applying the rule carelessly can lead to confusion when working with numbers that are too large or too small.
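The quotient rule can be checked numerically. A small illustrative snippet, with values chosen arbitrarily:

```python
# Quotient rule: x**m / x**n == x**(m - n) for x != 0
base, m, n = 2, 5, 2
assert base**m / base**n == base**(m - n)
print(base**(m - n))  # → 8
```

Here 2^5 / 2^2 = 32 / 4 = 8, which matches 2^(5-2) = 2^3.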
Linear characteristics
You may have seen linear functions in everyday situations: for example, if renting a car costs a fixed fee plus a daily rate, the total cost is a linear function of the number of days. A linear function of this sort has the form f(x) = ax + b, where ‘x’ is the number of days the car was rented and ‘a’ and ‘b’ are real numbers.
Gallery of Multiply 2 Digit Numbers By 10 Worksheet
Multiplying By 10 And 100 Worksheets
Multiplying 2 Digit Numbers By 10 Game Debra Dean s Multiplication
Multiplying By Multiples Of 10 Worksheet
E - Game
E - Game Editorial
Time Limit: 2 sec / Memory Limit: 256 MB
There is a video game. In the game, there are N stages. There are 3 levels of difficulty, and each stage has one of the difficulties 1, 2, 3.
The numbers of stages of each difficulty are N_1,N_2,N_3 (N_1+N_2+N_3=N). For each difficulty, you know that the probability of completing a stage of that difficulty in one trial is P_1,P_2,P_3 (%).
You have to pay cost 1 to play the game. In one gameplay, you can play at least 1 stage and at most 4 stages. The number of stages you can play with cost 1 is determined as follows.
• If you complete the first trial, you can continue to play the second trial. If you couldn't complete the first trial, that's the end of that play.
• If you complete the second trial, you can continue to play the final trial. If you couldn't complete the second trial, that's the end of that play.
• If you complete the final trial and also if the difficulty of that stage was 2 or 3, you can continue to play the extra trial. If not, that's the end of that play.
Before starting each first, second, final, and extra trial, you may choose any stage of any difficulty to play. Also, you may choose the stage you have already completed before.
For example, if you paid cost 1 and started the game, completed the first trial, and failed the second trial, that's the end of the play for that time. You can't play the final trial in that case.
Your aim is to complete all the N stages at least once each. Assuming you follow an optimal strategy to minimize the total cost, calculate the expected value of the cost you must pay to achieve that.
Whenever you choose the stage to play, you can choose any stage using the information about all stages you have tried including the stages you have completed at previous trial.
Input is given in the following format.
N_1 N_2 N_3
P_1 P_2 P_3
• On the first line, you will be given three integers N_1,N_2,N_3 (0 \leq N_1,N_2,N_3 \leq 100), the number of stages of each difficulty separated by space, respectively.
• On the second line, you will be given three integers P_1,P_2,P_3 (1 \leq P_1,P_2,P_3 \leq 100), the probability of completing the stage of each difficulty in percentage, separated by spaces, respectively.
Output one line containing the expected value of cost you must pay to complete all stages when you followed the optimal strategy to minimize the total cost. Your answer is considered correct if it
has an absolute or relative error less than 10^{-7}. Make sure to insert a line break at the end of the line.
Input Example 1
Output Example 1
There are 4 stages. You can surely complete a stage of any difficulty in one trial.
Pay cost 1 to start the gameplay. For each trial, choose the stage as follows, and you can complete all the stages with no additional cost.
• For the first trial, choose the stage of difficulty 1.
• For the second trial, choose the stage of difficulty 1.
• For the final trial, choose the stage of difficulty 3.
• For the extra trial, choose the stage of difficulty 1.
Input Example 2
Input Example 3
Input Example 4
Output Example 4
Input Example 5
Output Example 5
Challenge Exercises for Pre-Algebra
Short Answer Directions: Read each question. Click once in an ANSWER BOX and type in your answer. After you click ENTER, a message will appear in the RESULTS BOX to indicate whether your answer is
correct or incorrect. To start over, click CLEAR.
Multiple Choice Directions: Select your answer by clicking on its button. Feedback to your answer is provided in the RESULTS BOX. If you make a mistake, choose a different button.
1. An agent charges $150 per gig to book a rock band, plus $75 per month for travel expenses. What was his monthly fee if he booked 6 gigs for the band last month?
2. A gardener charges $8 per square foot to lay sod. If a square garden is 7 feet along each side, how much will he charge to lay sod on it?
3. Six people in a club will share the expenses of a party that costs $240. How much will Katie pay for her share of the party if the club owes her $8?
4. Evaluate the following expression:
5. Jesse spends $5 a day on lunch. Which algebraic expression correctly represents the amount of money he will spend on lunch in x days?
6. Which algebraic expression correctly represents this phrase?
The quotient of twelve and seven times a number, decreased by five.
7. Which algebraic equation correctly represents this sentence?
A number increased by eight is nineteen.
8. Which algebraic equation correctly represents this sentence?
Twenty-five is three times a number, decreased by eight.
9. Which sentence correctly represents this algebraic equation?
15 = 16y – 3
10. Which sentence correctly represents the algebraic equation below?
Palle Jorgensen
Some of Jorgensen's Publications in 2010
P. E. T. Jorgensen and M.-S. Song, "Matrix Factorization and Lifting,", submitted, 2010. pdf
Affine Fractals as Boundaries and Their Harmonic Analysis
D. E. Dutkay, P.E.T. Jorgensen, submitted
Functional Analysis
F. Tian, P.E.T. Jorgensen, submitted
Spectral measures and Cuntz algebras
D. E. Dutkay, P.E.T. Jorgensen, submitted
Some of Jorgensen's Publications in 2009
P. E. T. Jorgensen and M.-S. Song, "Scaling, image compression and encoding,", submitted, 2009. pdf
P. E. T. Jorgensen and M.-S. Song, "Spectral Theory of Discrete Processes," Cent. Eur. J. Phys. 8(3):340-363, 2010. pdf
P. E. T. Jorgensen and M.-S. Song, "An Extension of Wiener Integration with the Use of Operator Theory," J. Math. Phys. 50 103502, 2009. pdf
P. E. T. Jorgensen and M.-S. Song, "Analysis of Fractals, Image Compression, Entropy Encoding and Karhunen-Loeve Transforms," Acta Appli candae Mathematica, 108, 5:498-508, Springer, 2009. pdf
Families of spectral sets for Bernoulli convolutions
P.E.T. Jorgensen, K. Kornelson, K. Shuman, submitted
Fourier Duality for Fractals Measures with Affine Scales
D. E. Dutkay, P.E.T. Jorgensen, submitted.
Spectral Reciprocity and Matrix Representations of Unbounded Operators
P.E.T. Jorgensen, E. P. J. Pearse, submitted.
On Common Fundamental Domains
D. E. Dutkay, P.E.T. Jorgensen, D. Han, G. Picioroaga, submitted.
Spectral Operator Theory of Electrical Resistance Networks
P.E.T. Jorgensen, E. P. J. Pearse, submitted.
Some of Jorgensen's Publications in 2008
Spectral Theory for Discrete Laplacians
by Dorin E. Dutkay and Palle E. T. Jorgensen, , submitted.
Essential selfadjointness of the graph-Laplacian
by Palle E. T. Jorgensen, , submitted.
P. E. T. Jorgensen and M.-S. Song, "Optimal Decompositions of Translations of L^2 - Functions," Complex Analysis and Operator Theory, vo l 3, No 2:449-478, Birkhauser, 2008. pdf
P. E. T. Jorgensen and M.-S. Song, "Comparison of Discrete and Continuous Wavelet Transforms," Springer Encyclopedia of Complexity and S ystems Science, Springer, 2008. pdf
Jorgensen's Publications in 2007
C^*-Algebras Generated by Partial Isometries
by Ilwoo Cho and Palle E. T. Jorgensen, Journal of Mathematical Physics, to appear.
by Palle E. T. Jorgensen and Myung-Sin Song, Complex Analysis and Operator Theory Online First (2007).
Quasiperiodic Spectra and Orthogonality for Iterated Function System Measures
by Dorin E. Dutkay and Palle E. T. Jorgensen, Mathematische Zeitschrift, to appear.
Fourier series on fractals: a parallel with wavelet theory
by Dorin E. Dutkay and Palle E. T. Jorgensen, Mathematische Zeitschrift, to appear.
Affine systems: asymptotics at infinity for fractal measures
by Palle E. T. Jorgensen, Keri A. Kornelson and Karen L. Shuman, Acta Appl. Math. 3 (2007), 181--222.
Unitary Representations of Wavelet Groups and Encoding of Iterated Function Systems in Solenoids
by Dorin E. Dutkay, Palle E. T. Jorgensen and Gabriel Picioroaga,, submitted.
Multiresolution wavelet analysis of integer scale Bessel functions
by Sergio Albeverio, Palle E. T. Jorgensen and Anna M. Paolucci J. Math. Phys. 48 (2007), 073516.
A duality approach to representations of Baumslag-Solitar groups
by Dorin E. Dutkay and Palle E. T. Jorgensen, Contemp. Math, to appear.
Orthogonal Exponentials for Bernoulli Iterated Function Systems
by Palle E. T. Jorgensen, Keri Kornelson and Karen Shuman Representations, Wavelets and Frames A Celebration of the Mathematical Work of Lawrence Baggett (Editors Palle E.T. Jorgensen, Kathy D.
Merrill and Judith A. Packer) (2007), 217--238.
The Measure of a Measurement
by Palle E. T. Jorgensen, J. Math. Phys. 48 no. 10 (2007), 103506.
Harmonic analysis of iterated function systems with overlap
by Palle E. T. Jorgensen, Keri A. Kornelson and Karen L. Shuman J. Math. Phys. 48 no. 8 (2007), 083511.
Entropy Encoding, Hilbert Space and Karhunen-Loeve Transforms
by Palle E. T. Jorgensen and Myung-Sin Song, J. Math. Phys. 48 no. 10 (2007), 103503.
Kadison-Singer from mathematical physics: An introduction
by Palle E. T. Jorgensen, AIM Institute (2007).
Some recent trends from research mathematics and their connections to teaching: Case studies inspired by parallel developments in science and technology
by Palle E. T. Jorgensen,Recent Advances in Computational Sciences: Selected Papers from the International Workshop on Computational Sciences and Its Education, World Scientific Publishing Company
Jorgensen's Publications in 2006
Kadison-Singer from mathematical physics: An introduction
by Palle E. T. Jorgensen.
Analysis and Probability: Wavelets, Signals, Fractals
by Palle E. T. Jorgensen, Graduate Texts in Mathematics, vol. 234, Springer, New York, 2006, approx. 320 p., 58 illus., hardcover, ISBN 0-387-29519-4.
Wavelets on fractals
by Dorin E. Dutkay and Palle E. T. Jorgensen, Rev. Mat. Iberoamericana 22 (2006), 131--180.
Oversampling generates super-wavelets
by Dorin E. Dutkay and Palle E. T. Jorgensen, Proc. Amer. Math. Soc., to appear.
A non-MRA C^r-frame wavelet with rapid decay
by L.W. Baggett, Palle E.T. Jorgensen, K.D. Merrill, and J.A. Packer, Acta Appl. Math. 89 (2006), 251--270.
Hilbert spaces built on a similarity and on dynamical renormalization
by Dorin E. Dutkay and Palle E. T. Jorgensen, J. Math. Phys. 47 (2006), no. 5, 20 pp.
Harmonic analysis and dynamics for affine iterated function systems
by Dorin E. Dutkay and Palle E. T. Jorgensen, Houston J. Math., to appear.
Methods from multiscale theory and wavelets applied to non-linear dynamics
by Dorin E. Dutkay and Palle E. T. Jorgensen, Wavelets, Multiscale Systems and Hypercomplex Analysis (D. Alpay, ed.), Oper. Theory Adv. Appl., vol. 167, Birkhäuser, Boston, 2006, pp. 87--126.
Iterated function systems, Ruelle operators, and invariant projective measures
by Dorin E. Dutkay and Palle E. T. Jorgensen, Math. Comp. 75 (2006), 1931--1970.
Disintegration of projective measures
by Dorin E. Dutkay and Palle E. T. Jorgensen, Proc. Amer. Math. Soc., posted online June 22, 2006.
Certain representations of the Cuntz relations, and a question on wavelet decomposition
by Palle E. T. Jorgensen, accepted for Operator Theory, Operator Algebras, and Applications (Deguang Han, Palle Jorgensen, and David R. Larson, eds.), Contemp. Math., American Mathematical Society,
Providence, to appear.
Use of operator algebras in the analysis of measures from wavelets and iterated function systems
by Palle E. T. Jorgensen, accepted for Operator Theory, Operator Algebras, and Applications (Deguang Han, Palle Jorgensen, and David R. Larson, eds.), Contemp. Math., American Mathematical Society,
Providence, to appear.
Martingales, endomorphisms, and covariant systems of operators in Hilbert space
by Dorin E. Dutkay and Palle E. T. Jorgensen, J. Operator Theory, to appear (accepted November 2005, publication expected 2007).
The views and opinions expressed in this page are strictly those of the page author. The contents of this page have not been approved by the Division of Mathematical Sciences, the College of Liberal
Arts or The University of Iowa.
Work displayed on this page was supported in part by the U.S. National Science Foundation under grants DMS-9987777, DMS-0139473(FRG), and DMS-0457581.
This page was last modified on 14 January 2007 by Brian Treadway.
to the Department of Mathematics Faculty Pages | {"url":"http://homepage.divms.uiowa.edu/~jorgen/","timestamp":"2024-11-14T07:18:24Z","content_type":"text/html","content_length":"48198","record_id":"<urn:uuid:dd99fddf-b1d0-4069-a879-3901124b3d64>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00434.warc.gz"} |
Photon Statistics in Disordered Lattices
Propagation of coherent waves through disordered media, whether optical, acoustic, or radio waves, results in a spatially redistributed random intensity pattern known as speckle -- a statistical
phenomenon. The subject of this dissertation is the statistics of monochromatic coherent light traversing disordered photonic lattices and its dependence on the disorder class, the level of
disorder and the excitation configuration at the input. Throughout the dissertation, two disorder classes are considered, namely, diagonal and off-diagonal disorders. The latter exhibits
disorder-immune chiral symmetry -- the appearance of the eigenmodes in skew-symmetric pairs and the corresponding eigenvalues in opposite signs. When a disordered photonic lattice, an array of
evanescently coupled waveguides, is illuminated with an extended coherent optical field, discrete speckle develops. Numerical simulations and analytical modeling reveal that discrete speckle
shows a set of surprising features that are qualitatively indistinguishable in both disorder classes. First, the fingerprint of transverse Anderson localization -- associated with disordered
lattices -- is exhibited in the narrowing of the spatial coherence function. Second, the transverse coherence length (or speckle grain size) freezes upon propagation. Third, the axial coherence
depth is independent of the axial position, thereby resulting in a coherence voxel of fixed volume independently of position. When a single lattice site is coherently excited, I discovered that a
thermalization gap emerges for light propagating in disordered lattices endowed with disorder-immune chiral symmetry. In these systems, the span of sub-thermal photon statistics is inaccessible
to the input coherent light, which -- once the steady state is reached -- always emerges with super-thermal statistics no matter how small the disorder level. An independent constraint of the
input field for the chiral symmetry to be activated and the gap to be observed is formulated. This unique feature enables a new form of photon-statistics interferometry: by exciting two lattice
sites with a variable relative phase, as in a traditional two-path interferometer, the excitation-symmetry of the chiral mode pairs is judiciously broken and interferometric control over the
photon statistics is exercised, spanning sub-thermal and super-thermal regimes. By considering an ensemble of disorder realizations, this phenomenon is demonstrated experimentally: a
deterministic tuning of the intensity fluctuations while the mean intensity remains constant. Finally, I examined the statistics of the emerging light in two different lattice topologies: linear
and ring lattices. I showed that the topology dictates the light statistics in the off-diagonal case: for even-sited ring and linear lattices, the electromagnetic field evolves into a single
quadrature component, so that the field takes discrete phase values and is non-circular in the complex plane. As a consequence, the statistics become super-thermal. For odd-sited ring lattices,
the field becomes random in both quadratures resulting in sub-thermal statistics. However, this effect is suppressed due to the transverse localization of light in lattices with high disorder. In
the diagonal case, the lattice topology does not play a role and the transmitted field always acquires random components in both quadratures, hence the phase distribution is uniform in the steady state.
Title: Photon Statistics in Disordered Lattices.
Kondakci, Hasan, Author
Saleh, Bahaa, Committee Chair
Name(s): Abouraddy, Ayman, Committee Member
Christodoulides, Demetrios, Committee Member
Mucciolo, Eduardo, Committee Member
University of Central Florida, Degree Grantor
Type of text
Date Issued: 2015
Publisher: University of Central Florida
Language(s): English
Abstract/Description: identical to the abstract above.
Identifier: CFE0005968 (IID), ucf:50786 (fedora)
Note(s): Optics and Photonics, Optics and Photonics
This record was generated from author submitted information.
Subject(s): photonic lattices -- waveguide arrays -- coherence photon statistics -- disorder -- disordered lattices -- coupled waveguides -- photon-number distribution -- diagonal disorder --
off-diagonal disorder -- photonic thermalization gap -- discrete Anderson speckle -- Anderson localization -- transverse localization -- topology
Link to This: http://purl.flvc.org/ucf/fd/CFE0005968
Restrictions on Access: campus 2016-12-15
Host: UCF
In Collections | {"url":"https://ucf.digital.flvc.org/islandora/object/ucf:50786","timestamp":"2024-11-02T20:50:45Z","content_type":"text/html","content_length":"41201","record_id":"<urn:uuid:4e380b6e-fcaf-47b1-ae8d-c1082d546c05>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00364.warc.gz"} |
[Solved] 2. A test has two portions: verbal and ma | SolutionInn
2. A test has two portions: verbal and math. The scores for each portion are positively correlated with a correlation coefficient of 0.55. A scatter diagram of the scores is football shaped. Scores
on the verbal portion have an average of 550 points and an SD of 100 points. Scores on the math portion have an average of 525 points and an SD of 120 points. (a) One of the students scores 650 on
the verbal portion and 645 on the math portion. Is her math score less than, greater than, or equal to (circle one) the average math score of all students who got the same verbal score she did? Justify
your choice in terms of where these values fall relative to the SD line and regression line. (b) Approximately what percentage of all students score above 600 points on the math portion of the exam?
(c) Approximately what percentage of all students who score 600 points on the verbal portion score above 600 points on the math portion?
There are 3 Steps involved in it
Step: 1
Given information: the test has two portions, verbal and math. Verbal scores: average (mean) 550 points, standard deviation (SD) 100 points. Math scores: average (mean) 525 points, standard deviation (SD) 120 points.
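As a numeric check (my own sketch, assuming the football-shaped scatter means the scores are roughly bivariate normal), parts (b) and (c) can be estimated with the normal curve and the regression method:

```python
from statistics import NormalDist

# Figures from the problem: verbal mean 550 / SD 100, math mean 525 / SD 120, r = 0.55
verbal_mean, verbal_sd = 550, 100
math_mean, math_sd = 525, 120
r = 0.55

z = NormalDist()  # standard normal curve

# (b) Fraction of all students above 600 on the math portion
p_b = 1 - z.cdf((600 - math_mean) / math_sd)          # z = 0.625 -> about 27%

# (c) Among students with verbal = 600 (z = 0.5), math scores cluster
# around the regression estimate, with the residual (RMS) SD as spread
est_math = math_mean + r * ((600 - verbal_mean) / verbal_sd) * math_sd  # 558
resid_sd = math_sd * (1 - r ** 2) ** 0.5                                # ~100.2
p_c = 1 - z.cdf((600 - est_math) / resid_sd)          # about 34%

print(round(p_b, 3), round(p_c, 3))
```

The variable names are my own; the figures come straight from the problem statement.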
Get Started | {"url":"https://www.solutioninn.com/study-help/questions/2-a-test-has-two-portions-verbal-and-math-the-3322529","timestamp":"2024-11-08T08:26:58Z","content_type":"text/html","content_length":"98059","record_id":"<urn:uuid:8f9c99ec-5369-4f65-8b5e-99105bde8ccb>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00186.warc.gz"} |
Reading Assignment 5
Reading and Implementation on Model Bias
For this assignment, please read the following book chapters and articles:
1. Fairness and Machine Learning, Solon Barocas et al. (ch. 3)
2. Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey, Max Hort et al., ACM Journal on Responsible Computing, 2024
3. Fairness Definitions Explained, Sahil Verma et al., IEEE/ACM FairWare, 2018
4. On the Apparent Conflict Between Individual and Group Fairness, Reuben Binns, ACM FAcct, 2020
5. Practical Fairness, Aileen Nielsen (ch. 3-4)
6. Machine Bias, Julia Angwin et al., ProPublica, 2016 [link]
7. AI Fairness 360 [link]
After completing these readings, please critically answer the following questions.
Based on paper [1], answer the following questions:
Formally demonstrate why Independence, Separation, and Sufficiency cannot all be satisfied together. Your analysis should include:
1. Proof that Independence and Separation are incompatible when base rates differ across groups.
2. Proof that Separation and Sufficiency are incompatible under differing base rates.
3. Proof that Independence and Sufficiency are mutually exclusive.
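For orientation (a hedged sketch of the standard argument, writing R for the score/decision, Y for the outcome, and A for the sensitive attribute — not a substitute for the full proofs), the Independence–Separation conflict reduces to a single identity:

```latex
% Write p_a = P(Y=1 \mid A=a) for the group base rates, and assume
% Separation holds, so TPR and FPR are the same for both groups. Then
P(R=1 \mid A=a) = \mathrm{TPR}\, p_a + \mathrm{FPR}\,(1 - p_a),
% and subtracting the two groups gives
P(R=1 \mid A=0) - P(R=1 \mid A=1) = (\mathrm{TPR} - \mathrm{FPR})(p_0 - p_1).
% If base rates differ (p_0 \neq p_1), Independence forces TPR = FPR,
% i.e. the score carries no information about Y.
```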
Based on paper [2], answer the following questions:
Paper [2] considers multiple approaches for in-processing and post-processing bias mitigation (see Section 4). Please provide a detailed explanation of each of these approaches, focusing on how they
work to mitigate bias in machine learning classifiers. Then, by defining a specific use case (e.g., hiring prediction, credit approval, healthcare diagnosis) explain how each approach could be
applied. Explain the advantages and disadvantages of using each approach in your chosen use case (for example, by considering factors such as effectiveness in bias reduction, impact on data quality,
scalability and complexity, etc.)
Based on paper [3], answer the following questions:
1. The paper argues that no single fairness definition can apply universally across all scenarios. Select a real-world application (e.g., healthcare, hiring, lending) and critically evaluate how one
fairness definition might introduce challenges or limitations in this context.
2. Even when an algorithm satisfies a fairness definition like demographic parity or equal opportunity, it may still inadvertently amplify biases present in the data. How can existing fairness
metrics fail to capture this phenomenon? Provide an example of bias amplification and discuss why it may go undetected when using traditional fairness metrics.
3. Many real-world datasets suffer from class imbalance (e.g., far fewer positive outcomes in one class). How do the fairness definitions discussed in the paper (such as equal opportunity or
demographic parity) handle imbalanced datasets? Are these fairness definitions robust to imbalance, or do they require modifications?
4. The paper discusses fairness in algorithmic models, but does not directly address the distinction between black-box models (e.g., deep learning) and transparent models (e.g., decision trees, linear
models). How does the complexity of a model influence the way fairness is measured and enforced?
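These definitions translate directly into group-wise rates; a minimal sketch with made-up toy numbers (not the COMPAS data) of how demographic-parity and equalized-odds gaps could be computed:

```python
import numpy as np

# Toy decisions for two groups (illustrative numbers only)
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute A
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])  # actual outcome Y
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])  # model decision R

# Independence (statistical parity): P(R=1 | A=a) per group
p0 = y_pred[group == 0].mean()   # 0.75
p1 = y_pred[group == 1].mean()   # 0.25

# Separation (equalized odds): FPR and TPR per group
fpr0 = y_pred[(group == 0) & (y_true == 0)].mean()  # 0.5
fpr1 = y_pred[(group == 1) & (y_true == 0)].mean()  # 0.0
tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()  # 1.0
tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()  # 0.5

print(p0 - p1, fpr0 - fpr1, tpr0 - tpr1)  # the three fairness gaps
```

Here both the parity gap and the equalized-odds gaps are large, so this toy classifier violates Independence and Separation at once.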
Based on paper [4], answer the following questions:
1. Define individual fairness and group fairness as presented in the paper. How does the paper describe the tension between these two concepts, and why are they often considered in conflict in
fairness literature?
2. According to the paper, the apparent conflict between individual and group fairness is context-dependent. Select a real-world application (e.g., hiring, college admissions, criminal justice, or
loan approvals) and discuss how you would navigate the trade-offs between individual and group fairness in this specific context.
Based on papers [5-7], answer the following questions:
In this assignment, you will explore how the COMPAS algorithm performs in relation to three key fairness criteria: Independence, Separation, and Sufficiency. In addition, you will investigate how to
mitigate bias using in-processing and post-processing solutions from the AIF360 toolkit.
1. Independence (Statistical Parity): analyze whether the COMPAS algorithm’s predictions are independent of race. In other words, check if the likelihood of a positive outcome (high-risk
prediction) is the same across different racial groups, regardless of their true recidivism status. How well does the COMPAS algorithm satisfy the Independence criterion?
2. Separation (Equalized Odds): separation ensures that error rates are similar for all groups, so examine whether the COMPAS algorithm satisfies separation by comparing the false positive rates
(FPR) and false negative rates (FNR) across racial groups. Does the COMPAS algorithm have similar FPR and FNR across racial groups?
3. Sufficiency (Predictive Parity): investigate whether the COMPAS algorithm demonstrates predictive parity, meaning that the predicted risk scores are equally accurate for different racial groups.
Does the algorithm meet the Sufficiency criterion in terms of predictive accuracy across racial groups?
4. Ethical Considerations: based on your results, which fairness dimension(s) should be prioritized in redesigning the COMPAS algorithm, and why? Discuss the ethical and social considerations that
should guide these decisions.
5. Bias mitigation: using AIF360 toolkit, explore available in-processing (e.g., adversarial debiasing, prejudice remover, and meta fair classifier) and post-processing (e.g., equalized odds
post-processing and calibrated equalized odds post-processing) bias mitigation algorithms and apply them to the COMPAS dataset. How did applying this method affect the fairness dimensions
(Independence, Separation, Sufficiency) in the COMPAS algorithm? | {"url":"https://co-liberative-computing.github.io/events/courses_df_reading5/","timestamp":"2024-11-05T16:12:24Z","content_type":"text/html","content_length":"11711","record_id":"<urn:uuid:c9d99445-87fa-44fa-847f-b6b63a436eac>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00705.warc.gz"} |
3:4 – Dom Hans van der Laan
7 is divided into 3 and 4.
The difference between 3 and 4 is the minimum difference between two measures, so that one can compare them and name the difference clearly.
The necessary margin between two sizes is 1:4.
Van der Laan named this the PLASTIC NUMBER or GROUND RATIO. 3:4 is an approximation of 1.324718…
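As a numeric aside (my own sketch, not from van der Laan's text): the plastic number is the unique real root of x³ = x + 1, and a simple fixed-point iteration recovers it, showing how close the 3:4 ratio (4/3 ≈ 1.333) comes to it:

```python
# The plastic number is the real root of x**3 = x + 1.
# The iteration x -> (x + 1) ** (1/3) is a contraction near the root.
x = 1.0
for _ in range(60):
    x = (x + 1.0) ** (1.0 / 3.0)

print(x)           # ~1.3247179...
print(4 / 3 - x)   # ~0.0086: the 3:4 ratio approximates the plastic number
```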
Mirror symmetry: in the West we are used to design like this: subdivide a length in equal parts.
This is a static way of approaching design. There is no rhythm or hierarchy, no dynamism or tension. Only counting. | {"url":"https://domhansvanderlaan.nl/theory-practice/theory/the-plastic-number-ratio/","timestamp":"2024-11-11T04:59:46Z","content_type":"text/html","content_length":"38698","record_id":"<urn:uuid:8966fbba-4d96-4af7-a0b3-7bb4462e48f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00224.warc.gz"} |
Unveiling the Power: Cosine Similarity vs Euclidean Distance
In the modern digital age, personalized suggestions are vital for enhancing user interactions. For instance, a music streaming application utilizes your listening habits to recommend new songs that
align with your taste, genre, or mood. However, how do these systems decide which songs are most suitable for you?
The answer lies in transforming these data points into vectors and calculating their similarity using specific metrics. By comparing the vectors representing songs, products, or user behaviours,
algorithms can effectively measure their closely related features. This process is fundamental in the fields of machine learning and artificial intelligence, where similarity metrics enable systems to deliver accurate recommendations, cluster similar data, and identify nearest neighbours, ultimately creating a more personalized and engaging experience for users.
# What Are Similarity Metrics?
Similarity metrics are instruments utilized to determine the level of similarity or dissimilarity between two entities. These objects could encompass text documents, images, or data points within a
dataset. Consider similarity metrics as a tool for assessing the closeness of relationships between items. They play a crucial role in various sectors, such as machine learning, by aiding computers
in recognizing patterns in data, clustering similar items, and providing suggestions. For instance, when attempting to discover films akin to one you enjoy, similarity measurements assist in
determining this by examining the characteristics of diverse films.
• Euclidean Distance: This measures how far apart two points are in space, like measuring the straight line between two locations on a map. It tells you the exact distance between them.
• Cosine Similarity: This checks how similar two lists of numbers (like scores or features) are by looking at the angle between them. If the angle is small, it means the lists are
very similar, even if they are different lengths. It helps you understand how closely related two things are based on their direction.
Now let’s explore both in detail to understand how they work.
# Euclidean Distance
The Euclidean Distance quantifies the distance between two points in a multi-dimensional space, measuring their separation and revealing similarity based on spatial distance. This measurement is
especially valuable in online shopping platforms that suggest products to customers depending on their browsing and buying habits. Each product can be depicted as a point in a multi-dimensional space
where different dimensions represent aspects like price, category, and user ratings.
The system computes the Euclidean Distance between product vectors when a user looks at or buys specific items. When two products are closer in distance, they are considered to be more alike, which
assists the system in suggesting items that closely match the user's preferences.
# Formula:
The Euclidean Distance d between two points A(x1, y1) and B(x2, y2) in two-dimensional space is given by:

d(A, B) = √((x2 − x1)² + (y2 − y1)²)

For points A = (a1, …, an) and B = (b1, …, bn) in n-dimensional space, the formula generalizes to:

d(A, B) = √(Σᵢ₌₁ⁿ (aᵢ − bᵢ)²)
The formula calculates the distance by taking the square root of the sum of the squared differences between each corresponding dimension of the two points. Essentially, it measures how far apart the
two points are in a straight line, making it a straightforward way to evaluate similarity.
# Coding example
Now let’s code an example that generates a graph to calculate Euclidean Distance:
import numpy as np
import matplotlib.pyplot as plt
# Define two points in 2D space
point_A = np.array([1, 2])
point_B = np.array([2, 3])
# Calculate Euclidean Distance
euclidean_distance = np.linalg.norm(point_A - point_B)
# Create a figure and axis
fig, ax = plt.subplots(figsize=(8, 8))
# Plot the points
ax.quiver(0, 0, point_A[0], point_A[1], angles='xy', scale_units='xy', scale=1, color='r', label='Point A (1, 2)')
ax.quiver(0, 0, point_B[0], point_B[1], angles='xy', scale_units='xy', scale=1, color='b', label='Point B (2, 3)')
# Set the limits of the graph
ax.set_xlim(0, 3)
ax.set_ylim(0, 4)
# Add grid
ax.grid(True)
# Add labels
ax.annotate('A', point_A, textcoords="offset points", xytext=(-10,10), ha='center', fontsize=12)
ax.annotate('B', point_B, textcoords="offset points", xytext=(10,-10), ha='center', fontsize=12)
# Draw a line representing Euclidean Distance
ax.plot([point_A[0], point_B[0]], [point_A[1], point_B[1]], 'k--', label='Euclidean Distance')
# Add legend
ax.legend()
# Add title and labels
ax.set_title(f'Euclidean Distance: {euclidean_distance:.2f}')
ax.set_xlabel('x')
ax.set_ylabel('y')
# Show the plot
plt.show()
Upon executing this code, it will generate the following output.
The plot above illustrates the Euclidean Distance between the points A(1,2) and B(2,3). The red vector denotes Point A, the blue vector denotes Point B, and the dashed line indicates the distance,
approximately 1.41. This visualization provides a clear representation of how Euclidean Distance measures the direct path between the two points.
# Cosine Similarity
Cosine Similarity is a metric used to measure how similar two vectors are, regardless of their magnitude. It quantifies the cosine of the angle between two non-zero vectors in an n-dimensional space,
providing insight into their directional similarity. This measurement is particularly useful in recommendation systems, such as those used by content platforms like Netflix or Spotify, where it helps
suggest movies or songs based on user preferences. In this context, each item (e.g., movie or song) can be represented as a vector of features, such as genre, ratings, and user interactions.
When a user interacts with specific items, the system computes the Cosine Similarity between the corresponding item vectors. If the cosine value is close to 1, it indicates a high degree of
similarity, helping the platform recommend items that align with the user’s interests.
# Formula:
The Cosine Similarity S between two vectors A and B is calculated as follows:

S(A, B) = (A · B) / (∥A∥ ∥B∥)

where:
• A⋅B is the dot product of the vectors.
• ∥A∥ and ∥B∥ are the magnitudes (or norms) of the vectors.
This formula computes the cosine of the angle between the two vectors, effectively measuring their similarity based on direction rather than magnitude.
# Coding Example
Now let’s code an example that calculates Cosine Similarity and visualizes the vectors:
import numpy as np
import matplotlib.pyplot as plt
# Define two vectors in 2D space
vector_A = np.array([1, 2])
vector_B = np.array([2, 3])
# Calculate Cosine Similarity
dot_product = np.dot(vector_A, vector_B)
norm_A = np.linalg.norm(vector_A)
norm_B = np.linalg.norm(vector_B)
cosine_similarity = dot_product / (norm_A * norm_B)
# Create a figure and axis
fig, ax = plt.subplots(figsize=(8, 8))
# Plot the vectors
ax.quiver(0, 0, vector_A[0], vector_A[1], angles='xy', scale_units='xy', scale=1, color='r', label='Vector A (1, 2)')
ax.quiver(0, 0, vector_B[0], vector_B[1], angles='xy', scale_units='xy', scale=1, color='b', label='Vector B (2, 3)')
# Draw the angle between vectors
angle_start = np.array([vector_A[0] * 0.7, vector_A[1] * 0.7])
angle_end = np.array([vector_B[0] * 0.7, vector_B[1] * 0.7])
ax.plot([angle_start[0], angle_end[0]], [angle_start[1], angle_end[1]], linestyle='--', color='gray')
# Annotate the angle and cosine similarity
ax.text(0.5, 0.5, f'Cosine Similarity: {cosine_similarity:.2f}', fontsize=12, color='black', ha='center')
# Set the limits of the graph
ax.set_xlim(0, 3)
ax.set_ylim(0, 4)
# Add grid
ax.grid(True)
# Add annotations for the vectors
ax.annotate('A', vector_A, textcoords="offset points", xytext=(-10, 10), ha='center', fontsize=12)
ax.annotate('B', vector_B, textcoords="offset points", xytext=(10, -10), ha='center', fontsize=12)
# Add legend
ax.legend()
# Add title and labels
ax.set_title('Cosine Similarity Visualization')
ax.set_xlabel('x')
ax.set_ylabel('y')
# Show the plot
plt.show()
Upon executing this code, it will generate the following output.
The plot above illustrates the Cosine Similarity between the vectors A(1,2) and B(2,3). The red vector denotes Vector A, and the blue vector denotes Vector B. The dashed line indicates the angle
between the two vectors, with the calculated Cosine Similarity being approximately 0.99. This visualization effectively represents how Cosine Similarity measures the directional relationship between
the two vectors.
# Use of Similarity Metrics in Vector Databases
Vector databases play a crucial role in recommendation engines and AI-driven analytics by transforming unstructured data into high-dimensional vectors for efficient similarity searches. Quantitative
measurements such as Euclidean Distance and Cosine Similarity are used to compare these vectors, allowing systems to suggest appropriate content or identify irregularities. For instance,
recommendation systems pair user likes with item vectors, offering customized recommendations.
MyScale leverages these metrics to power its MSTG (Multi-Scale Tree Graph) algorithm, which combines tree and graph-based structures to perform highly efficient
vector searches, particularly in large, filtered datasets. MSTG is particularly effective in handling filtered searches, outperforming other algorithms like HNSW when the filtering criteria are
strict, allowing for quicker and more precise nearest-neighbour searches.
The metric type in MyScale allows users to switch between Euclidean (L2), Cosine, or Inner Product (IP) distance metrics, depending on the nature of the data and the desired outcome. For example, in
recommendation systems or NLP tasks, Cosine Similarity is frequently used to match vectors, while Euclidean Distance is favoured for tasks requiring spatial proximity like image or object detection.
By incorporating these metrics into its MSTG algorithm, MyScale optimizes vector searches across various data modalities, making it highly suitable for applications that need fast, accurate, and
scalable AI-driven analytics.
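To make the practical difference between the metrics concrete (a minimal self-contained sketch, not tied to MyScale's API), the two metrics can rank the very same candidates differently:

```python
import numpy as np

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

def cosine_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

query = np.array([1.0, 0.0])
a = np.array([10.0, 0.0])   # same direction as the query, but far away
b = np.array([0.0, 1.0])    # nearby in space, but orthogonal in direction

# Euclidean Distance prefers b (it is spatially closer) ...
print(euclidean(query, a), euclidean(query, b))    # 9.0 vs ~1.41
# ... while Cosine Similarity prefers a (same direction).
print(cosine_sim(query, a), cosine_sim(query, b))  # 1.0 vs 0.0
```

This is why the choice of metric type (L2 vs Cosine vs IP) should follow the nature of the data, as described above.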
# Conclusion
To summarize, similarity measurements like Euclidean Distance and Cosine Similarity play a crucial role in machine learning, recommendation systems, and AI applications. Through the comparison of
vectors representing data points, these metrics enable systems to uncover connections between objects, making it possible to provide personalized suggestions or recognize patterns in data. Euclidean
Distance calculates the linear distance between points, whereas Cosine Similarity examines the directional correlation, with each having distinct benefits based on the specific scenario.
MyScale enhances the effectiveness of these similarity metrics through its innovative MSTG algorithm, which optimizes both the speed and accuracy of similarity searches. By integrating tree and graph
structures, MSTG accelerates the search process, even with complex, filtered data, making MyScale a powerful solution for high-performance AI-driven analytics, large-scale data handling, and precise,
efficient vector searches.
| {"url":"https://blog.myscale.com/blog/cosine-similarity-vs-euclidean-distance/","timestamp":"2024-11-15T03:30:40Z","content_type":"text/html","content_length":"96621","record_id":"<urn:uuid:5c1ced29-5f06-4835-b80c-92b51aa5c2d7>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00660.warc.gz"}
Printable Calendars AT A GLANCE
Chemistry Unit Conversion Practice Worksheet
Chemistry Unit Conversion Practice Worksheet - 1.00 km (1000 m / 1 km)(100 cm / 1 m)(1 inch / 2.54 cm) = 3.94 x 10^4 or 39,400 inches. Unit conversions for the gas laws. Web unit conversions
worksheet 1. For conversions within the metric system, you must memorize the conversion (for example: Metric units of mass review (g and kg) metric units of length review (mm, cm, m, & km) metric
units of volume review (l and ml) u.s. Select your preferences below and click 'start' to give it a try! Web unit conversions worksheet: Rounding off numbers calculated using addition and subtraction
sample study sheet 2.3: Write down the following measruements 19 (a) 20 (b) 21 0 cm 1 cm 2 cm 3 cm 4 cm 5 cm a b write the answer in both cm and mm cm mm cm mm e. 1 kg = 2.205 lb.
Temperature conversions work answers 16. 3) how many miles are there in 3.45 x 10^25 cm? For conversions within the metric system, you must memorize the conversion (for example: Web unit conversions
worksheet 1. Web 11) convert 3.4 inches to kilometers. Write down the following measurements 19 (a) 20 (b) 21 0 cm 1 cm 2 cm 3 cm 4 cm 5 cm a b write the answer in both cm and mm cm mm cm mm e. Web
chemical reactions and stoichiometry > stoichiometry.
Web dimensional analysis works because the given unit is always multiplied by a conversion factor that is equal to one. General chemistry i (chem 201) 51documents. ← scientific method and graphing.
For conversions within the metric system, you must memorize the conversion (for example: The conversion factor comes from an equation that relates the given unit to the wanted, or desired, unit.
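The "multiply by a conversion factor equal to one" idea can be sketched in a few lines of Python. The function name and structure below are illustrative, not taken from the worksheet; only the conversion factors (1000 m per km, 100 cm per m, 2.54 cm per inch by definition) come from the text.

```python
# A sketch of dimensional analysis: each conversion factor equals one,
# so multiplying by it changes the units without changing the quantity.
# Converting 1.00 km to inches: km -> m -> cm -> inches.
def km_to_inches(km):
    m = km * 1000          # * (1000 m / 1 km)
    cm = m * 100           # * (100 cm / 1 m)
    inches = cm / 2.54     # * (1 inch / 2.54 cm), an exact definition
    return inches

print(round(km_to_inches(1.00)))  # -> 39370, i.e. about 3.94 x 10^4 inches
```

Rounded to three significant figures this matches the worksheet's 39,400 inches.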
️Algebra Conversions Worksheet Free Download Gmbar.co
1000 ml = 1 l, or 1000 g = 1 kg should be memorized) remember that metric conversions are exact ratios and thus will not limit your significant digits for the answer. Write down the following
measurements 19 (a) 20 (b) 21 0 cm 1 cm 2 cm 3 cm 4 cm 5 cm a b write the answer in.
Chemistry Unit Conversion Worksheet Solved Pressure Unit Conversion
This online quiz is intended to give you extra practice in dimensional analysis (converting between different si prefixes) using a variety of units. Web dimensional analysis works because the given
unit is always multiplied by a conversion factor that is equal to one. General chemistry i (chem 201) 51documents. First start with what you are given. Web derived unit conversions.
15 Best Images of Chemistry Unit 5 Worksheet 1 Chemistry Unit 1
As an example, you may be given a measurement of length in centimeters which must be converted to meters. Although an inch isn’t an si unit, it is still a measurement of length and can be converted
to any other unit of length. Unit conversions for the gas laws. 1000 ml = 1 l, or 1000 g = 1 kg.
13 Best Images of Unit Conversion Worksheet Metric Unit Conversion
Web please use these links for more practice with unit conversions! 3.45 km (1000 m / 1 km)( 100 cm / 1 m)(1 inch / 2.54 cm)(1 ft / 12 in) = 11,300 or 1.13 × 10⁴ feet. 0.50 quarts (946 ml / quart) =
470 ml. Web worksheet chm 130 conversion practice problems. This quiz helps you practice converting.
free grade 4 measuring worksheets measurement worksheets math 20
Web please use these links for more practice with unit conversions! This online quiz is intended to give you extra practice in dimensional analysis (converting between different si prefixes) using a
variety of units. Express 5 mg/ml in kilograms/litre practice problems — derived unit conversions 1. Convert 2.67 g/ml into kg/l. To convert a value reported in one unit to a.
Unit Conversion Worksheet Chemistry Answers
Crystal yau in the chemistry department at community college of baltimore county, has a worksheet that you can download called practice problems on unit conversions (acrobat (pdf) 110kb oct9 07). Web
chemical reactions and stoichiometry > stoichiometry. 2) write the definitions, symbols, and values for the following si unit prefixes: Web sample study sheet 2.1: Express 5 mg/ml in kilograms/litre.
Chemistry Worksheet Category Page 1
Web sample study sheet 2.1: 1 kg = 2.205 lb. Include units on your work, and write your final answers in the tables. Web dimensional analysis works because the given unit is always multiplied by a
conversion factor that is equal to one. Introductory, conceptual, and gob chemistry.
Cheat Sheet Chemistry Conversion Factors and Constants Cheat Sheet
Web dimensional analysis works because the given unit is always multiplied by a conversion factor that is equal to one. Gain knowledge from study, practice techniques, and test yourself using these
resources related to unit conversions used in chemistry. Express 3.4 × 10⁴ mi² using only standard si units or a combination of them. Why has the numerical value remained unchanged?
50 Unit Conversion Worksheet Chemistry
Students shared 51 documents in this course. Rounding off numbers calculated using multiplication and division sample study sheet 2.2: Share your results measurements and conversions chemistry quiz
You bet your megakelvin that this’ll help you learn. Express 5 mg/ml in kilograms/litre practice problems — derived unit conversions 1. Convert 2.67 g/ml into kg/l.
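The derived-unit problems above (5 mg/mL to kg/L, 2.67 g/mL to kg/L) convert both the numerator and the denominator. A small Python sketch, with hypothetical function names, shows why g/mL and kg/L are numerically equal: the two factors of 1000 cancel.

```python
# A sketch of a derived-unit conversion: convert the top unit (mass) and
# the bottom unit (volume) separately, then multiply the factors through.
def g_per_ml_to_kg_per_l(x):
    # (x g / mL) * (1 kg / 1000 g) * (1000 mL / 1 L) -> the 1000s cancel
    return x / 1000 * 1000

def mg_per_ml_to_kg_per_l(x):
    # (x mg / mL) * (1 kg / 1 000 000 mg) * (1000 mL / 1 L)
    return x / 1_000_000 * 1000

print(g_per_ml_to_kg_per_l(2.67))  # -> 2.67 kg/L
print(mg_per_ml_to_kg_per_l(5))    # -> 0.005 kg/L
```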
Chemistry Unit Conversion Practice Worksheet - Unit conversions for the gas laws. Introductory, conceptual, and gob chemistry. Web ch 01 unit conv practice worksheet. Si units and unit conversions:
Web unit conversions worksheet 1. Convert 475 k to oc. General chemistry i (chem 201) 51documents. 0.50 quarts (946 ml / quart) = 470 ml. This online quiz is intended to give you extra practice in
dimensional analysis (converting between different si prefixes) using a variety of units. Web this process is frequently described as unit conversion.
Web sample study sheet 2.1: Temperature conversions work answers 16. Web this quiz aligns with the following ngss standard (s): There are 2.54 cm in 1 inch, 100 cm in 1 meter, and 1,000 meters in 1
kilometer. This quiz helps you practice converting between moles and a variety of units, a fundamental chemistry skill.
Metric units of mass review (g and kg) metric units of length review (mm, cm, m, & km) metric units of volume review (l and ml) u.s. Web please use these links for more practice with unit
conversions! How to convert moles to grams and vice versa. 0.50 quarts (946 ml / quart) = 470 ml.
Express 5 Mg/Ml In Kilograms/Litre Practice Problems — Derived Unit Conversions 1. Convert 2.67 G/Ml Into Kg/L.
Share your results measurements and conversions chemistry quiz Introductory, conceptual, and gob chemistry. Web 11) convert 3.4 inches to kilometers. Students shared 51 documents in this course.
3.45 Km (1000 M / 1 Km)( 100 Cm / 1 M)(1 Inch / 2.54 Cm)(1 Ft / 12 In) = 11,300 Or 1.13 × 10⁴ Feet.
How many moles of salt are in 13.8 g of sodium chloride? Web worksheet chm 130 conversion practice problems. There are 2.54 cm in 1 inch, 100 cm in 1 meter, and 1,000 meters in 1 kilometer. Web
dimensional analysis works because the given unit is always multiplied by a conversion factor that is equal to one.
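The mole question above ("How many moles of salt are in 13.8 g of sodium chloride?") is a single conversion factor: divide by the molar mass. The worksheet does not state the molar mass, so the value of about 58.44 g/mol used below is an assumption (22.99 for Na plus 35.45 for Cl).

```python
# A sketch of a grams-to-moles conversion. The molar mass of NaCl is an
# assumed value (~58.44 g/mol), not taken from the worksheet.
MOLAR_MASS_NACL = 58.44  # g/mol = 22.99 (Na) + 35.45 (Cl)

def grams_to_moles(grams, molar_mass):
    return grams / molar_mass  # * (1 mol / molar_mass g)

print(round(grams_to_moles(13.8, MOLAR_MASS_NACL), 3))  # -> 0.236 mol
```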
This Quiz Helps You Practice Converting Between Moles And A Variety Of Units, A Fundamental Chemistry Skill.
3) how many miles are there in 3.45 × 10²⁵ cm? Include units on your work, and write your final answers in the tables. Web this quiz aligns with the following ngss standard (s): Though I've had good
luck with these resources with my own students, I can't guarantee that they'll work for you.
Write Down The Following Measurements 19 (A) 20 (B) 21 0 Cm 1 Cm 2 Cm 3 Cm 4 Cm 5 Cm A B Write The Answer In Both Cm And Mm Cm Mm Cm Mm E.
Si units and unit conversions: Although an inch isn’t an si unit, it is still a measurement of length and can be converted to any other unit of length. Complete the following tables, showing your
work for each lettered box beside the corresponding letter below. Rounding off numbers calculated using multiplication and division sample study sheet 2.2:
Related Post: | {"url":"https://ataglance.randstad.com/viewer/chemistry-unit-conversion-practice-worksheet.html","timestamp":"2024-11-04T07:38:06Z","content_type":"text/html","content_length":"37918","record_id":"<urn:uuid:87cb58ac-80c4-42bf-a759-da47b59d0afa>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00500.warc.gz"} |
STEM Ready - 3D Shapes & Projections
A 3D shape is a geometric figure that has three dimensions: length, width, and height. 3D shapes are also known as solid shapes. Some examples of 3D shapes include:
• Cube: A cube is a 3D shape with six square faces that are all the same size.
• Sphere: A sphere is a 3D shape that is perfectly round, like a ball. It has no flat faces or corners.
• Cylinder: A cylinder is a 3D shape with two circular faces that are parallel to each other, connected by a curved surface.
• Cone: A cone is a 3D shape with a circular base and a single point at the top.
• Pyramid: A pyramid is a 3D shape with a flat base and triangular faces that meet at a single point.
• Torus: A torus is a 3D shape that is shaped like a donut, with a circular hole in the middle.
These are just a few examples of 3D shapes. There are many other types of 3D shapes, including prisms, polyhedra, and more. 3D shapes are used in many different fields, including engineering,
architecture, and design.
A 3D shape net is a 2D representation of a 3D shape that shows its faces and edges. It is a diagram that can be used to visualize the structure of a 3D shape and to understand how it is made up of
flat faces and straight edges.
To create a 3D shape net, you start by drawing the top and bottom faces of the shape. Then, you add the sides and edges, connecting them to the top and bottom faces. Some 3D shapes, such as a cube or
a pyramid, can be easily represented with a 3D shape net. Curved shapes, such as a sphere or a cone, may be more difficult to represent accurately with a 3D shape net.
3D shape nets can be useful for understanding the properties of 3D shapes, and for visualizing how they can be assembled from flat faces and edges. They can also be used in geometry and engineering
to design and construct 3D objects.
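As a small illustration of how a net encodes a solid's surface, the hypothetical Python sketch below sums the six identical squares of a cube's net to get the cube's surface area. The function name and structure are illustrative, not from the lesson.

```python
# A sketch of computing surface area from a net: a cube's net is six
# identical square faces, so the total surface area is 6 * side^2.
def cube_surface_area_from_net(side):
    faces = [side * side] * 6  # the six square faces in the unfolded net
    return sum(faces)

print(cube_surface_area_from_net(3))  # -> 54
```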
The following website has models of various polyhedra
Explore more about 3D shapes using Mathigon simulator: | {"url":"https://stemready.acads.iiserpune.ac.in/modules/mathematics/3d-shapes-projections","timestamp":"2024-11-08T04:59:57Z","content_type":"text/html","content_length":"117681","record_id":"<urn:uuid:423e4f66-0890-4c8b-bc27-b353859fe31e>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00604.warc.gz"} |
Study documents, essay examples, research papers, course notes and other - studyres.com
Imre Lakatos`s Philosophy of Mathematics
... intuition. In other words, the axioms are logically true statements (or theorems of logic), and their negations are self-contradictions. In this case, infallibility of the axioms is given on the
same basis as the validity of deduction (i.e. the power of logic), and since logic has some intimate rela ...
Imre Lakatos
Imre Lakatos (Hungarian: Lakatos Imre [ˈlɒkɒtoʃ ˈimrɛ]; November 9, 1922 – February 2, 1974) was a Hungarian philosopher of mathematics and science, known for his thesis of the fallibility of
mathematics and its 'methodology of proofs and refutations' in its pre-axiomatic stages of development, and also for introducing the concept of the 'research programme' in his methodology of
scientific research programmes. | {"url":"https://studyres.com/concepts/15728/imre-lakatos","timestamp":"2024-11-11T03:08:04Z","content_type":"text/html","content_length":"42143","record_id":"<urn:uuid:4aa32d38-2aef-4d05-81c2-9e2e043ababe>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00120.warc.gz"} |
7 Addition with Regrouping Strategies & Tools (1, 2, & 3-Digit)
I am a huge advocate for teaching students the WHY before the PROCEDURE when learning a new math concept. I am a firm believer that teachers owe it to their students to teach them multiple ways to
solve math problems. We all learn concepts in different ways and math is no different. With that said, this post is going to explain several different addition with regrouping strategies that you can
teach your students so that they can better understand this concept and easily transfer from 1-digit to 2 and 3-digit addition with regrouping problems.
Understanding place value is crucial before delving into addition with regrouping. Without a solid grasp of place value, progressing further in instruction becomes pointless. Depending on where you
teach, you may be required to teach 2-digit and 3-digit addition a bit differently. So, I am providing you will a variety of strategies that you can try with your students. If you see a strategy that
you cannot use, keep scrolling. I promise there will be something that you will find helpful.
1. Place Value Mats
Before I start my addition with regrouping unit, I spend a month teaching place value. Students must be solid in their place value understanding to truly understand the WHAT and WHY of addition with
The first day of teaching this skill, my students receive unifix cubes and they learn what regrouping looks like. You can also use place value blocks but I like that unifix cubes can be pulled apart
and pushed together. We always start with a two-digit number on top and a one-digit number on the bottom.
Here's how I teach it:
• Materials needed: manipulatives (unifix cubes, place value blocks or even popsicle sticks and beans) and a place value mat
• Build the top number with manipulatives, then add the second number on the same mat.
• When you fill a ten frame, you need to regroup, make a ten.
• Count your tens and ones to find your sum.
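The regrouping step above (filling a ten frame and trading ten ones for a ten) can be sketched in a few lines of Python. The function name is hypothetical; it simply mirrors the lesson's steps for two-digit plus one- or two-digit numbers.

```python
# A sketch of the place-value-mat lesson: combine the ones, and whenever
# ten ones accumulate (a full ten frame), trade them for one ten.
def add_with_regrouping(tens_a, ones_a, tens_b, ones_b):
    ones = ones_a + ones_b
    tens = tens_a + tens_b
    if ones >= 10:   # the ten frame is full: regroup, make a ten
        tens += 1
        ones -= 10
    return tens, ones

# 27 + 5: 2 tens 7 ones plus 0 tens 5 ones
print(add_with_regrouping(2, 7, 0, 5))  # -> (3, 2), i.e. 32
```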
As I am going through this lesson, I have students who seem to be able to grasp the concept, come up and show the class how to do it all while explaining what they are doing. When students explain
their thinking and thought process, it is extremely powerful.
All Access member? Download FREE here.
After a couple of days using the place value mat to teach, I'll transition them to paper and pencil while still giving access to unifix cubes and the mat for problem-solving.
Download the mat!
Make sure to download a place value mat before going any further!
2 Ways to Get This Resource
Quick tip: Have students highlight the ones place for a visual reminder of where to start solving their math problems.
Be watchful of those who just want to take the two-digit number and count on. Although this is a practical way to solve these problems, you must also be thinking about the concept that you are trying
to teach them…regrouping.
Next, we move to two-digit numbers on top and bottom. Of course, we go back to the math manipulatives to help them see and understand the process of regrouping.
We continue to highlight the ones place at this stage. I want to make it a habit for my students to always start with the ones place first. This concept is more complicated for them then we realize.
They are trained to read and write from left to right so starting them on the right side of the problem tends to cause some students to struggle.
As I stated at the top of this post, I am a firm believer in teaching students multiple ways to solve problems because you never know what will connect with them. Here are a few more ways that I have
shown my students how to solve addition with regrouping problems.
2. Post-it Note Method
• Materials needed: paper, pencil, post-it notes and scissors
• Start with the ones place and write the sum on the post-it note
• Cut apart the post-it note between the tens and ones
• Place the tens post-it half on top of the tens row in the addition problem and find the sum
3. Break It Up Method
• Materials Needed: paper and pencil
• Write the addition problem horizontally
• Break apart your numbers into expanded form (write it out)
• Add your tens, then ones to get the sum.
4. Slice & Split Method
• Materials needed: paper and pencil
• Write the problem vertically.
• Draw a line between the tens and ones.
• Add the ones, and split the sum into the tens and ones rows.
• Add the tens to get the sum.
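The "Break It Up" method above can be expressed directly in code: split each addend into tens and ones, add the parts, then recombine. The function below is a hypothetical sketch, not part of the post.

```python
# A sketch of the "Break It Up" method: write each addend in expanded
# form (tens + ones), add the tens, add the ones, then combine.
def break_it_up(a, b):
    tens = (a // 10) * 10 + (b // 10) * 10  # e.g. 38 -> 30, 27 -> 20
    ones = a % 10 + b % 10                  # e.g. 8 + 7 = 15
    return tens + ones

print(break_it_up(38, 27))  # -> 65
```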
Addition With Regrouping Resources for Your Classroom
Once your students find a method or two that they feel comfortable using, it's important to provide plenty of opportunities to practice! Here are some resources that you can start using in your
classroom today.
5. Math Puzzles
Each puzzle focuses on a math skill. There are 10 pieces to each puzzle. Each puzzle piece has a math problem for the student to solve. Then the student assembles the puzzle by putting their pieces
in order from least to greatest. Don't leave without grabbing the six free math puzzles at the top of this post!
6. Math Toothy
Toothy® task kits are highly engaging task card math games or math centers that allow students to practice math skills and answer questions in a fun, motivating way. The answers on the back of the
math task cards make the activity self-paced and self-correcting.
7. 1st & 2nd Grade Math Centers
Keep your students learning and diving deep with their math skills all year through the use of engaging, rigorous, and hands on center activities that your students will be sure to love! Our Lucky to
Learn Math Curriculum also has hands-on, collaborative activities to practice addition with regrouping. Check out this addition strategies mini lesson that features a fun theme – pizza shop!
Hands-on, practical activities will help increase student comprehension. You can check out more addition strategy mini lessons and activities here.
Lucky to Learn Math Activities
Our Lucky to Learn Math curriculum features anchor charts for each addition strategy. You can print them out to use during whole group instruction, and even give each student a copy to add to their
personal math notebooks! And, FYI, these curriculum units are way more than just anchor charts!
Each unit includes:
• Daily teaching slides
• Daily lesson plans
• Independent activities
• Differentiated options
• Partner games
• Daily exit tickets
• Assessments
35 Comments
Angel Honts on January 11, 2016 at 5:39 am
I love these ideas!! Thank you so much!
Angie Olson on January 14, 2016 at 12:11 am
You are welcome, Angel!
Sebrina Burke on January 12, 2016 at 1:20 am
Angie, loved, loved, loved your scope! This is an amazing follow up post. Pinned and shared it so that others can see these great strategies in action. I would love to see a follow up post
involving the strategies you use for subtraction with regrouping! Thank you for being AMAZING!
Burke's Special Kids
Angie Olson on January 14, 2016 at 12:12 am
Thank you for leaving such sweet and thoughtful feedback, Sebrina! I'm so glad you caught my scope too! I plan to bring a subtraction with regrouping post this coming Monday!
Alyssa Duvall on November 2, 2020 at 2:46 pm
By scope do you mean how you started with blocks and then moved to pencil paper strategies? Don’t want to sound silly but want to clarify? Thank you for sharing!
Bailey Jordan on November 3, 2020 at 7:00 pm
Hi! We would love to help you with this question, please email us at customerservice@luckylittlelearners.com and we will do our best to answer it for you! Thanks so much!
Bailey Jordan
Lucky Little Learners
Jo-Anne Sweet on Second on January 12, 2016 at 12:04 pm
You did a great job presenting these strategies! I watched your presentation over my morning coffee and walked away with some great ideas. I've never used the highlighter strategy; love how
visual that is and that it costs nothing to add to our classroom routines. Love your place value mat too! Thanks for sharing your hard work! 🙂
Angie Olson on January 14, 2016 at 12:13 am
You're welcome Jo-Anne! Glad these could be helpful to you! There's nothing better than a free and practical tip, right?!
Karyl Lawrence on January 12, 2016 at 4:36 pm
Thank you so much for all you do and for sharing it! It is so appreciated! ?
Angie Olson on January 14, 2016 at 12:13 am
Thank YOU for taking the time to tell me that, Karyl! I appreciate you leaving me feedback…it keeps me inspired!
Leslie Moore on January 13, 2016 at 4:23 pm
Thank you so very much!!! I will be incorporating these into my unit. We are starting subtraction with regrouping now. I would really LOVE to know if you have some wonderful ideas for that too!
Angie Olson on January 14, 2016 at 12:14 am
Hi Leslie! I have some big plans for subtraction with regrouping and I plan to do a scope on that for Math Motivation Monday this coming Monday! I hope you can tune in!
Jen Bonner on February 23, 2016 at 3:04 am
This is brilliant! Thank you for the mat freebie! I think this is EXACTLY what my struggling learners need, a different visual and it's perfect!
Alice Tan on July 24, 2016 at 10:45 am
Hi Angie, thank you for the great tips! You really make my day. I simply love how you use the manipulatives and make Maths so fun! The manipulative I use most are the unifix cubes but the post-it
idea is simply awesome – so easy and so visual!
Ro on December 27, 2016 at 6:19 pm
I’m a kindergarten teacher and love these ideas. I know my kiddos are slowly getting regrouping but I definitely want to show them some of the cool
Ideas to show place value when adding! 🙂
Kym on March 12, 2017 at 7:29 pm
Wow! I love these ideas. I teach first grade, and the concepts is always so foreign to the students. Thank you so much!
Asha on October 18, 2017 at 11:41 am
excellent.. thanks alot.. i have a struggling child, i hope these works..
Clara on February 5, 2018 at 10:52 am
Omg, I love these idea to teach regrouping, my students are engage in the lesson rather than me the teacher have the load, thanks very much for the idea
Audrey Baker-Santos on March 11, 2018 at 11:54 pm
Thank you so much for putting the videos on there on how to teach regrouping. I am a visual learner and seeing a video of it really really helps!
jwilson9593 on September 21, 2018 at 6:22 am
I’m so glad that I found this! I remember using this in Kindergarten/First Grade and it helped my babies out so much!
I am now in 3rd grade and was wondering if you have a place value mat for 3 digits?
Angie Olson on February 27, 2019 at 8:25 pm
Hi there!
I have a place value set for second graders that may be useful for you. I have placed the link below for you to check out and see if it’s similar to what you are looking for! Thank you!
Angie Olson
Lucky Little Learners
Kandace on October 18, 2018 at 5:18 am
What strategy can you use if a student is adding 2 digit numbers in error. For example if they add 83+39 and get 1112 for the answer.
Angie Olson on October 26, 2018 at 8:54 pm
Hi Kandace. I suggest taking out a place value mat (I have a free template on my website) and showing the student how to solve with hands on manipulatives. Hope that helps!
francis on October 30, 2018 at 6:44 pm
great idea!
Tara on October 8, 2020 at 4:21 pm
I did the post -it technique with my small group today and they loved it! One, they got to use scissors; and two they were able to physically see why the tens had to move over to the tens spot.
Hopefully a few of them will be able to carry over this concept into the general education setting. =)
Jane on April 8, 2021 at 7:32 am
HI Angie,
A few years ago you send an email with a subtraction and addition round up. It was like a slide show with problem after problem. My kids LOVED it. I can’t find it now! Do you recall this? If so
do you know how I might get a copy?
Angie Olson on May 6, 2021 at 4:25 pm
Hi Jane. Thanks for reaching out. I do remember what file you are talking about but I can’t seem to find it anywhere! I’m so sorry!
yashaswini on July 26, 2021 at 10:22 am
thank so much for the brilliant ideas and strategies
Jasneet Kaur on August 14, 2021 at 8:19 am
Wonderful ideas.
Catie Benett on November 15, 2021 at 6:59 pm
Well, this is an awesome post and written very well. Your point of view is very good.
Maria on December 12, 2021 at 9:16 am
I love these ideas. I’m going to try them with my 2nd graders who are struggling. Thanks.
Heather on September 29, 2022 at 7:45 pm
Love, Love, Love
Thank You!
Gloria Boyd on October 16, 2022 at 11:54 am
I have used many of these strategies with my second graders. I am now retired and watching my 4th grade granddaughter doing some sort of crazy stuff that must be Common Core, though South
Carolina supposedly no longer uses Common Core. It is so complicated, and I want to jump in and simplify it for her.
Afsheen Mohammad on November 6, 2022 at 10:09 pm
Hi I love your work it is very helpful in teaching my 2nd grader. I have a question though, his teacher sent a uestion paper and in the end of the word problem ther is a question justify why you
used the strategy as compared to the rest. the base 10 blocks, number line, expanded form are all strategies for regrouping. how do I help him Justify his choice and provide and explanation
Jess Dalrymple on November 7, 2022 at 3:07 pm
Hi there. There are many strategies for regrouping, and some are more efficient for different equations than others. I would ask your child why they selected the strategy they used. It might
be because that strategy is the one the child feels most comfortable with, or the strategy that is easiest for that specific equation.
Submit a Comment | {"url":"https://luckylittlelearners.com/addition-with-regrouping-strategies/","timestamp":"2024-11-03T09:43:54Z","content_type":"text/html","content_length":"786051","record_id":"<urn:uuid:aac2d10d-bb06-440c-971d-b36c5a4d1e20>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00233.warc.gz"} |
What Are the Very basic features which everyone should know about the concept of the equation of a circle? - Daily Life
The equation of a circle represents the circle in its centre form. Recall that a circle is a round boundary on which every point is equidistant from the centre. The equation can be represented in two
main forms, which are explained as follows:
• The standard form
• The general form
The standard form of the equation of a circle is:
(x − a)² + (y − b)² = r²
The general form of the equation of a circle is:
x² + y² + Ax + By + C = 0
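The two forms are connected by completing the square: with general-form coefficients A, B, C, the centre is (a, b) = (−A/2, −B/2) and the radius is r = √(A²/4 + B²/4 − C). The article does not give code, so the Python sketch below is illustrative.

```python
# A sketch of converting x^2 + y^2 + Ax + By + C = 0 into the standard
# form (x - a)^2 + (y - b)^2 = r^2 by completing the square.
import math

def general_to_standard(A, B, C):
    a = -A / 2
    b = -B / 2
    r = math.sqrt(a * a + b * b - C)  # assumes a real circle (r^2 > 0)
    return a, b, r

# x^2 + y^2 - 4x - 6y + 9 = 0  ->  centre (2, 3), radius 2
print(general_to_standard(-4, -6, 9))  # -> (2.0, 3.0, 2.0)
```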
It is important for kids to be clear about this concept because of its practical relevance, and to practise different kinds of questions in this area so that no doubts remain when solving problems.
Apart from this, students also need to know the basic properties of a circle so that they have a good command of the whole topic. Some of these properties are explained as follows:
• The outline of the circle will be equidistant from the centre
• The diameter of the circle will help in dividing it into two equal parts
• Circles which are having equal radius will be congruent to each other
• Circles that will be different in terms of size will be having a similar or different radius
• The diameter of the circle is the largest chord and is always double the radius.
Following are some of the very basic terms associated with the circles which the people need to be clear about so that they can have a good command of the entire thing very easily
• The annulus is the ring-shaped region bounded by two concentric circles
• Arc is the connected curve of the circle
• The sector is the region bounded by two radii and an arc
• The segment is the region bounded by a chord and the arc lying between the chord's endpoints. Note that a segment does not contain the centre
• The Centre of the circle is known as the midpoint of the circle
• The chord of the circle is the line segment whose endpoints will be lying on the circle
• A line which is having both the endpoints of the circle will be known as the largest chord or the diameter of the circle
• The line segment connecting the centre of the circle to any point on the circle is the radius
• A straight line cutting the circle at two points is known as a secant
• A tangent is a coplanar straight line that touches the circle at a single point
Apart from this, people also need to be clear about the formulas associated with the circle: the circumference is 2πr and the area is πr².
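As a quick sanity check of the circumference and area formulas, here is a short Python sketch (function names are illustrative):

```python
# Circumference = 2 * pi * r, area = pi * r^2.
import math

def circumference(r):
    return 2 * math.pi * r

def area(r):
    return math.pi * r ** 2

print(round(circumference(5), 2))  # -> 31.42
print(round(area(5), 2))           # -> 78.54
```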
Hence, depending on platforms like Cuemath is a good way to gain command of the equation of a circle, as well as the equation of a line, so that kids can score well in the mathematics exam. In this
manner, kids get access to the right kind of worksheets, which allow them to develop their skills easily
and ensure that they will be crystal clear about the concept of the equation of a circle | {"url":"https://www.4dailylife.com/what-are-the-very-basic-features-which-everyone-should-know-about-the-concept-of-the-equation-of-a-circle/","timestamp":"2024-11-07T05:49:00Z","content_type":"text/html","content_length":"108271","record_id":"<urn:uuid:1a3ab7bf-6495-46e8-879d-5df9ef13be23>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00751.warc.gz"} |
dD Spatial Searching
Chapter 35
dD Spatial Searching
Hans Tangelder and Andreas Fabri
35.1 Introduction
The spatial searching package implements exact and approximate distance browsing by providing implementations of algorithms supporting
• both nearest and furthest neighbor searching
• both exact and approximate searching
• (approximate) range searching
• (approximate) $k$-nearest and $k$-furthest neighbor searching
• (approximate) incremental nearest and incremental furthest neighbor searching
• query items representing points and spatial objects.
In these searching problems a set $P$ of data points in $d$-dimensional space is given. The points can be represented by Cartesian coordinates or homogeneous coordinates. These points are
preprocessed into a tree data structure, so that given any query item $q$ the points of $P$ can be browsed efficiently. The approximate spatial searching package is designed for data sets that are
small enough to store the search structure in main memory (in contrast to approaches from databases that assume that the data reside in secondary storage).
35.1.1 Neighbor Searching
Spatial searching supports browsing through a collection of $d$-dimensional spatial objects stored in a spatial data structure on the basis of their distances to a query object. The query object may
be a point or an arbitrary spatial object, e.g., a $d$-dimensional sphere. The objects in the spatial data structure are $d$-dimensional points.
Often the number of the neighbors to be computed is not know beforehand, e.g., because the number may depend on some properties of the neighbors (for example when querying for the nearest city to
Paris with population greater than a million) or the distance to the query point. The conventional approach is $k$-nearest neighbor searching that makes use of a $k$-nearest neighbor algorithm, where
$k$ is known prior to the invocation of the algorithm. Hence, the number of nearest neighbors has to be guessed. If the guess is too large redundant computations are performed. If the number is too
small the computation has to be reinvoked for a larger number of neighbors, thereby performing redundant computations. Therefore, Hjaltason and Samet [HS95] introduced incremental nearest neighbor
searching in the sense that having obtained the $k$ nearest neighbors, the $(k+1)$st neighbor can be obtained without having to calculate the $(k+1)$ nearest neighbors from scratch.
Spatial searching typically consists of a preprocessing phase and a searching phase. In the preprocessing phase one builds a search structure and in the searching phase one makes the queries. In the
preprocessing phase the user builds a tree data structure storing the spatial data. In the searching phase the user invokes a searching method to browse the spatial data.
With relatively minor modifications, nearest neighbor searching algorithms can be used to find the furthest object from the query object. Therefore, furthest neighbor searching is also supported by
the spatial searching package.
The execution time for exact neighbor searching can be reduced by relaxing the requirement that the neighbors be computed exactly. If the distances of two objects to the query object are approximately the same, instead of computing the nearest/furthest neighbor exactly, one of these objects may be returned as the approximate nearest/furthest neighbor. That is, given some non-negative constant $\epsilon$, the distance of an object returned as an approximate $k$-nearest neighbor must not be larger than $(1+\epsilon)\,r$, where $r$ denotes the distance to the real $k$th nearest neighbor. Similarly, the distance of an approximate $k$-furthest neighbor must not be smaller than $r/(1+\epsilon)$. Obviously, for $\epsilon = 0$ we get the exact result; the larger $\epsilon$ is, the less exact the result.
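The two acceptance criteria can be stated as a pair of one-line predicates. The following sketch is plain C++ and not part of CGAL (the function names are ours); it checks whether a reported distance satisfies the approximate nearest/furthest bound for a given epsilon:

```cpp
#include <cassert>

// An approximate k-nearest neighbor at distance d is acceptable
// iff d <= (1 + eps) * r, where r is the exact k-th nearest distance.
bool valid_approx_nearest(double d, double r, double eps) {
    return d <= (1.0 + eps) * r;
}

// An approximate k-furthest neighbor at distance d is acceptable
// iff d >= r / (1 + eps), where r is the exact k-th furthest distance.
bool valid_approx_furthest(double d, double r, double eps) {
    return d >= r / (1.0 + eps);
}
```

With eps = 0 both predicates accept only the exact answer; increasing eps widens the band of acceptable results, which is what allows the search to prune more aggressively.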
Neighbor searching is implemented by the following four classes.
The class CGAL::Orthogonal_k_neighbor_search<Traits, OrthogonalDistance, Splitter, SpatialTree> implements the standard search strategy for orthogonal distances like the weighted Minkowski distance.
It requires the use of extended nodes in the spatial tree and supports only $k$ neighbor searching for point queries.
The class CGAL::K_neighbor_search<Traits, GeneralDistance, Splitter, SpatialTree> implements the standard search strategy for general distances like the Manhattan distance for iso-rectangles. It does not require the use of extended nodes in the spatial tree and supports only $k$ neighbor searching for queries defined by points or spatial objects.
The class CGAL::Orthogonal_incremental_neighbor_search<Traits, GeneralDistance, Splitter, SpatialTree> implements the incremental search strategy for orthogonal distances like the weighted Minkowski distance. It requires the use of extended nodes in the spatial tree and supports incremental neighbor searching and distance browsing for point queries.
The class CGAL::Incremental_neighbor_search<Traits, GeneralDistance, Splitter, SpatialTree> implements the incremental search strategy for general distances like the Manhattan distance for iso-rectangles. It does not require the use of extended nodes in the spatial tree and supports incremental neighbor searching and distance browsing for queries defined by points or spatial objects.
35.1.2 Range Searching
Exact range searching and approximate range searching are supported, using exact or fuzzy $d$-dimensional objects enclosing a region. The fuzziness of the query object is specified by a parameter $\epsilon$ denoting a maximal allowed distance to the boundary of the query object. If the distance to the boundary is at least $\epsilon$, points inside the object are always reported and points outside the object are never reported. Points within distance $\epsilon$ of the boundary may or may not be reported. For exact range searching the fuzziness parameter is set to zero.
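The reporting semantics of a fuzzy query can be summarized by a tri-state membership test. The sketch below is our own helper, not a CGAL class; it mirrors the rule above for a fuzzy sphere of radius r and fuzziness eps:

```cpp
#include <cassert>

enum Report { ALWAYS, NEVER, MAYBE };

// Points deeper than eps inside the sphere are always reported,
// points farther than eps outside are never reported, and points
// within eps of the boundary may or may not be reported.
Report fuzzy_sphere_report(double dist_to_center, double r, double eps) {
    if (dist_to_center <= r - eps) return ALWAYS;
    if (dist_to_center >= r + eps) return NEVER;
    return MAYBE;
}
```

Setting eps = 0 collapses the MAYBE band, which gives exact range searching.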
The class Kd_tree implements range searching in the method search, which is a template method with an output iterator and a model of the concept FuzzyQueryItem such as CGAL::Fuzzy_iso_box_d or CGAL::Fuzzy_sphere_d. For range searching of large data sets the user may set the parameter bucket_size used in building the $k$-$d$ tree to a large value (e.g., 100), because in general the query time will then be less than with the default value.
35.2 Splitting Rules
Instead of using the default splitting rule Sliding_midpoint described below, a user may, depending on the data, select one of the following splitting rules, which determine how a separating hyperplane is computed:

Midpoint_of_rectangle. This splitting rule cuts a rectangle through its midpoint orthogonal to the longest side.

Midpoint_of_max_spread. This splitting rule cuts a rectangle through $(\mathrm{Min}_d+\mathrm{Max}_d)/2$ orthogonal to the dimension with the maximum point spread $[\mathrm{Min}_d,\mathrm{Max}_d]$.

Sliding_midpoint. This is a modification of the midpoint of rectangle splitting rule. It first attempts to perform a midpoint of rectangle split as described above. If data points lie on both sides of the separating plane, the sliding midpoint rule computes the same separator as the midpoint of rectangle rule. If the data points lie only on one side, it avoids this by sliding the separator, computed by the midpoint of rectangle rule, to the nearest data point.

Median_of_rectangle. The splitting dimension is the dimension of the longest side of the rectangle. The splitting value is defined by the median of the coordinates of the data points along this dimension.

Median_of_max_spread. The splitting dimension is the dimension with the maximum point spread. The splitting value is defined by the median of the coordinates of the data points along this dimension.

Fair. This splitting rule is a compromise between the median of rectangle splitting rule and the midpoint of rectangle splitting rule. It maintains an upper bound on the maximal allowed ratio of the longest and shortest side of a rectangle (the value of this upper bound is set in the constructor of the fair splitting rule). Among the splits that satisfy this bound, it selects the one in which the points have the largest spread. It then splits the points in the most even manner possible, subject to maintaining the bound on the ratio of the resulting rectangles.

Sliding_fair. This splitting rule is a compromise between the fair splitting rule and the sliding midpoint rule. Sliding fair-split is based on the theory that there are two types of good splits: balanced splits that produce fat rectangles, and unbalanced splits, provided the rectangle with fewer points is fat. Like the fair split, this rule maintains an upper bound on the maximal allowed ratio of the longest and shortest side of a rectangle (the value of this upper bound is set in the constructor of the fair splitting rule). Among the splits that satisfy this bound, it selects the one in which the points have the largest spread. It then considers the most extreme cuts that would be allowed by the aspect ratio bound. This is done by dividing the longest side of the rectangle by the aspect ratio bound. If the median cut lies between these extreme cuts, the median cut is used. If not, the extreme cut that is closer to the median is considered. If all the points lie to one side of this cut, the cut is slid until it hits the first point. This may violate the aspect ratio bound, but will never generate empty cells.
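The default Sliding_midpoint rule can be illustrated for a single splitting dimension. The sketch below is plain C++, not CGAL's implementation: it cuts at the midpoint of the rectangle's side and, if all points fall on one side, slides the cut to the nearest data point so that neither side is empty:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One-dimensional sliding-midpoint split: coords holds the point
// coordinates along the splitting dimension, [lo, hi] is the side of
// the enclosing rectangle along that dimension.
double sliding_midpoint(const std::vector<double>& coords, double lo, double hi) {
    double cut = (lo + hi) / 2.0;
    double min_c = *std::min_element(coords.begin(), coords.end());
    double max_c = *std::max_element(coords.begin(), coords.end());
    if (min_c >= cut)      cut = min_c;  // all points right of the cut: slide right
    else if (max_c < cut)  cut = max_c;  // all points left of the cut: slide left
    return cut;
}
```

Because the cut always lands on a data point when sliding occurs, the rule never generates empty cells, which is exactly the property claimed above.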
35.3 Example Programs
We give six examples. The first example illustrates $k$ nearest neighbor searching, and the second example incremental neighbor searching. The third is an example of approximate furthest neighbor searching using a $d$-dimensional iso-rectangle as a query object. Approximate range searching is illustrated by the fourth example. The fifth example illustrates $k$ neighbor searching for a user defined point class. The last example shows how to select another splitting rule in the $k$-$d$ tree that is used as the search tree.
35.3.1 Example of K Neighbor Searching
The first example illustrates $k$ neighbor searching with the Euclidean distance and 2-dimensional points. The generated random data points are inserted into a search tree. We then initialize the $k$ neighbor search object with the origin as query. Finally, we obtain the result of the computation in the form of an iterator range. The value of the iterator is a pair of a point and its squared distance to the query point. We use squared distances, or transformed distances for other distance classes, as they are computationally cheaper.
// file: examples/Spatial_searching/Nearest_neighbor_searching.C
#include <CGAL/Simple_cartesian.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/Orthogonal_k_neighbor_search.h>
#include <CGAL/Search_traits_2.h>
#include <iostream>
#include <list>
#include <cmath>
typedef CGAL::Simple_cartesian<double> K;
typedef K::Point_2 Point_d;
typedef CGAL::Search_traits_2<K> TreeTraits;
typedef CGAL::Orthogonal_k_neighbor_search<TreeTraits> Neighbor_search;
typedef Neighbor_search::Tree Tree;
int main() {
  const int N = 100;
  // generate N random data points in the unit square and insert them in the tree
  std::list<Point_d> points;
  CGAL::Random_points_in_square_2<Point_d> rpg(1.0);
  for (int i = 0; i < N; i++) points.push_back(*rpg++);
  Tree tree(points.begin(), points.end());
  Point_d query(0,0);
  // Initialize the search structure, and search all N points
  Neighbor_search search(tree, query, N);
  // report the N nearest neighbors and their distance
  // This should sort all N points by increasing distance from origin
  for(Neighbor_search::iterator it = search.begin(); it != search.end(); ++it)
    std::cout << it->first << " " << std::sqrt(it->second) << std::endl;
  return 0;
}
35.3.2 Example of Incremental Searching
This example program illustrates incremental searching for the closest point with a positive first coordinate. We can use the orthogonal incremental neighbor search class, as the query is also a
point and as the distance is the Euclidean distance.
As for the $k$ neighbor search, we first initialize the search tree with the data. We then create the search object, and finally obtain the iterator with the begin() method. Note that the iterator is of the input iterator category, that is, one can make only one pass over the data.
// file: examples/Spatial_searching/Distance_browsing.C
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Orthogonal_incremental_neighbor_search.h>
#include <CGAL/Search_traits_2.h>
#include <CGAL/Filter_iterator.h>
#include <iostream>
typedef CGAL::Simple_cartesian<double> K;
typedef K::Point_2 Point_d;
typedef CGAL::Search_traits_2<K> TreeTraits;
typedef CGAL::Orthogonal_incremental_neighbor_search<TreeTraits> NN_incremental_search;
typedef NN_incremental_search::iterator NN_iterator;
typedef NN_incremental_search::Tree Tree;
// A functor that returns true, iff the x-coordinate of a dD point is not positive
struct X_not_positive {
  bool operator()(const NN_iterator& it) { return ((*it).first)[0]<0; }
};
// An iterator that only enumerates dD points with positive x-coordinate
typedef CGAL::Filter_iterator<NN_iterator, X_not_positive> NN_positive_x_iterator;
int main() {
  Tree tree;
  // insert some sample data points
  tree.insert(Point_d(1,1));
  tree.insert(Point_d(-1,2));
  tree.insert(Point_d(2,-1));
  tree.insert(Point_d(-2,-2));
  tree.insert(Point_d(3,0));
  Point_d query(0,0);
  NN_incremental_search NN(tree, query);
  NN_positive_x_iterator it(NN.end(), X_not_positive(), NN.begin()), end(NN.end(), X_not_positive());
  std::cout << "The first 5 nearest neighbors with positive x-coord are: " << std::endl;
  for (int j=0; (j < 5)&&(it!=end); ++j,++it)
    std::cout << (*it).first << " at squared distance = " << (*it).second << std::endl;
  return 0;
}
35.3.3 Example of General Neighbor Searching
This example program illustrates approximate nearest and furthest neighbor searching using Cartesian coordinates. The ten approximate furthest neighbors of the query rectangle $[0.1,0.2]^2$ are computed. Because the query object is a rectangle, we cannot use the orthogonal neighbor search. As in the previous examples, we first initialize a search tree, create the search object with the query, and obtain the result of the search as an iterator range.
// file: examples/Spatial_searching/General_neighbor_searching.C
#include <CGAL/Cartesian_d.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/Manhattan_distance_iso_box_point.h>
#include <CGAL/K_neighbor_search.h>
#include <iostream>
typedef CGAL::Cartesian_d<double> K;
typedef K::Point_d Point_d;
typedef CGAL::Random_points_in_square_2<Point_d> Random_points_iterator;
typedef K::Iso_box_d Iso_box_d;
typedef K TreeTraits;
typedef CGAL::Manhattan_distance_iso_box_point<TreeTraits> Distance;
typedef CGAL::K_neighbor_search<TreeTraits, Distance> Neighbor_search;
typedef Neighbor_search::Tree Tree;
int main() {
  const int N = 1000;
  const int K = 10;
  Tree tree;
  Random_points_iterator rpg;
  // insert N random data points in the tree
  for(int i = 0; i < N; i++)
    tree.insert(*rpg++);
  Point_d pp(0.1,0.1);
  Point_d qq(0.2,0.2);
  Iso_box_d query(pp,qq);
  Distance tr_dist;
  Neighbor_search N1(tree, query, K, 0.0, false); // eps=0.0, search furthest neighbors
  std::cout << "For query rectangle = [0.1,0.2]^2 " << std::endl
            << "The " << K << " approximate furthest neighbors are: " << std::endl;
  for (Neighbor_search::iterator it = N1.begin(); it != N1.end(); it++)
    std::cout << " Point " << it->first << " at distance = " << tr_dist.inverse_of_transformed_distance(it->second) << std::endl;
  return 0;
}
35.3.4 Example of a Range Query
This example program illustrates approximate range querying for 4-dimensional fuzzy iso-rectangles and spheres using Cartesian coordinates. The range queries are member functions of the $k$-$d$ tree class.
// file: examples/Spatial_searching/Fuzzy_range_query.C
#include <CGAL/Cartesian_d.h>
#include <CGAL/point_generators_d.h>
#include <CGAL/Kd_tree.h>
#include <CGAL/Fuzzy_sphere.h>
#include <CGAL/Fuzzy_iso_box.h>
#include <CGAL/Search_traits_d.h>
#include <iostream>
#include <iterator>
typedef CGAL::Cartesian_d<double> K;
typedef K::Point_d Point_d;
typedef CGAL::Search_traits_d<K> Traits;
typedef CGAL::Random_points_in_iso_box_d<Point_d> Random_points_iterator;
typedef CGAL::Counting_iterator<Random_points_iterator> N_Random_points_iterator;
typedef CGAL::Kd_tree<Traits> Tree;
typedef CGAL::Fuzzy_sphere<Traits> Fuzzy_sphere;
typedef CGAL::Fuzzy_iso_box<Traits> Fuzzy_iso_box;
int main() {
  const int D = 4;
  const int N = 1000;
  // generator for random data points in the 4-dimensional box ( (-1000,...,-1000), (1000,...,1000) )
  Random_points_iterator rpit(4, 1000.0);
  // Insert N points in the tree
  Tree tree(N_Random_points_iterator(rpit,0), N_Random_points_iterator(N));
  // define range query objects
  double pcoord[D] = { 300.0, 300.0, 300.0, 300.0 };
  double qcoord[D] = { 900.0, 900.0, 900.0, 900.0 };
  Point_d p(D, pcoord, pcoord+D);
  Point_d q(D, qcoord, qcoord+D);
  Fuzzy_sphere fs(p, 700.0, 100.0);
  Fuzzy_iso_box fib(p, q, 100.0);
  std::cout << "points approximately in fuzzy range query" << std::endl;
  std::cout << "with center (300.0, 300.0, 300.0, 300.0)" << std::endl;
  std::cout << "and fuzzy radius <600.0,800.0> are:" << std::endl;
  tree.search(std::ostream_iterator<Point_d>(std::cout, "\n"), fs);
  std::cout << "points approximately in fuzzy range query ";
  std::cout << "[<200,400>,<800,1000>]^4 are:" << std::endl;
  tree.search(std::ostream_iterator<Point_d>(std::cout, "\n"), fib);
  return 0;
}
35.3.5 Example Illustrating Use of User Defined Point and Distance Class
The neighbor searching works with all CGAL kernels, as well as with user defined point and distance classes. In this example we assume that the user provides the following 3-dimensional point class:
struct Point {
  double vec[3];
  Point() { vec[0]= vec[1] = vec[2] = 0; }
  Point (double x, double y, double z) { vec[0]=x; vec[1]=y; vec[2]=z; }
  double x() const { return vec[ 0 ]; }
  double y() const { return vec[ 1 ]; }
  double z() const { return vec[ 2 ]; }
  double& x() { return vec[ 0 ]; }
  double& y() { return vec[ 1 ]; }
  double& z() { return vec[ 2 ]; }
  bool operator==(const Point& p) const
  {
    return (x() == p.x()) && (y() == p.y()) && (z() == p.z()) ;
  }
  bool operator!=(const Point& p) const { return ! (*this == p); }
}; //end of class
namespace CGAL {
  template <>
  struct Kernel_traits<Point> {
    struct Kernel {
      typedef double FT;
      typedef double RT;
    };
  };
}
struct Construct_coord_iterator {
  const double* operator()(const Point& p) const
  { return static_cast<const double*>(p.vec); }
  const double* operator()(const Point& p, int) const
  { return static_cast<const double*>(p.vec+3); }
};
We have put the glue layer in this file as well, that is, a class that allows iterating over the Cartesian coordinates of a point, and a class to construct such an iterator for a point. We next need a distance class:
struct Distance {
  typedef Point Query_item;
  double transformed_distance(const Point& p1, const Point& p2) const {
    double distx= p1.x()-p2.x();
    double disty= p1.y()-p2.y();
    double distz= p1.z()-p2.z();
    return distx*distx+disty*disty+distz*distz;
  }
  template <class TreeTraits>
  double min_distance_to_rectangle(const Point& p,
                                   const CGAL::Kd_tree_rectangle<TreeTraits>& b) const {
    double distance(0.0), h = p.x();
    if (h < b.min_coord(0)) distance += (b.min_coord(0)-h)*(b.min_coord(0)-h);
    if (h > b.max_coord(0)) distance += (h-b.max_coord(0))*(h-b.max_coord(0));
    h = p.y();
    if (h < b.min_coord(1)) distance += (b.min_coord(1)-h)*(b.min_coord(1)-h);
    if (h > b.max_coord(1)) distance += (h-b.max_coord(1))*(h-b.max_coord(1));
    h = p.z();
    if (h < b.min_coord(2)) distance += (b.min_coord(2)-h)*(b.min_coord(2)-h);
    if (h > b.max_coord(2)) distance += (h-b.max_coord(2))*(h-b.max_coord(2));
    return distance;
  }
  template <class TreeTraits>
  double max_distance_to_rectangle(const Point& p,
                                   const CGAL::Kd_tree_rectangle<TreeTraits>& b) const {
    double h = p.x();
    double d0 = (h >= (b.min_coord(0)+b.max_coord(0))/2.0) ?
      (h-b.min_coord(0))*(h-b.min_coord(0)) : (b.max_coord(0)-h)*(b.max_coord(0)-h);
    h = p.y();
    double d1 = (h >= (b.min_coord(1)+b.max_coord(1))/2.0) ?
      (h-b.min_coord(1))*(h-b.min_coord(1)) : (b.max_coord(1)-h)*(b.max_coord(1)-h);
    h = p.z();
    double d2 = (h >= (b.min_coord(2)+b.max_coord(2))/2.0) ?
      (h-b.min_coord(2))*(h-b.min_coord(2)) : (b.max_coord(2)-h)*(b.max_coord(2)-h);
    return d0 + d1 + d2;
  }
  double new_distance(double& dist, double old_off, double new_off,
                      int cutting_dimension) const {
    return dist + new_off*new_off - old_off*old_off;
  }
  double transformed_distance(double d) const { return d*d; }
  double inverse_of_transformed_distance(double d) const { return std::sqrt(d); }
}; // end of struct Distance
We are ready to put the pieces together. The class Search_traits<..> which you see in the next file is then a mere wrapper for all these types. The searching itself works exactly as for CGAL kernels.
//file: examples/Spatial_searching/User_defined_point_and_distance.C
#include <CGAL/basic.h>
#include <CGAL/Search_traits.h>
#include <CGAL/point_generators_3.h>
#include <CGAL/Orthogonal_k_neighbor_search.h>
#include <iostream>
#include "Point.h" // defines types Point, Construct_coord_iterator
#include "Distance.h"
typedef CGAL::Random_points_in_cube_3<Point> Random_points_iterator;
typedef CGAL::Counting_iterator<Random_points_iterator> N_Random_points_iterator;
typedef CGAL::Search_traits<double, Point, const double*, Construct_coord_iterator> Traits;
typedef CGAL::Orthogonal_k_neighbor_search<Traits, Distance> K_neighbor_search;
typedef K_neighbor_search::Tree Tree;
int main() {
  const int N = 1000;
  const int K = 5;
  // generator for random data points in the cube ( (-1,-1,-1), (1,1,1) )
  Random_points_iterator rpit( 1.0);
  // Insert N data points in the tree
  Tree tree(N_Random_points_iterator(rpit,0), N_Random_points_iterator(N));
  Point query(0.0, 0.0, 0.0);
  Distance tr_dist;
  // search K nearest neighbors
  K_neighbor_search search(tree, query, K);
  for(K_neighbor_search::iterator it = search.begin(); it != search.end(); it++)
    std::cout << " d(q, nearest neighbor)= "
              << tr_dist.inverse_of_transformed_distance(it->second) << std::endl;
  // search K furthest neighbors, with eps=0.0, search_nearest=false
  K_neighbor_search search2(tree, query, K, 0.0, false);
  for(K_neighbor_search::iterator it = search2.begin(); it != search2.end(); it++)
    std::cout << " d(q, furthest neighbor)= "
              << tr_dist.inverse_of_transformed_distance(it->second) << std::endl;
  return 0;
}
35.3.6 Example of Selecting a Splitting Rule and Setting the Bucket Size
This example program illustrates selecting a splitting rule and setting the maximal allowed bucket size. The only differences from the first example are the declaration of the Fair splitting rule and the construction of a Fair object, which is needed to set the maximal allowed bucket size.
// file: examples/Spatial_searching/Using_fair_splitting_rule.C
#include <CGAL/Simple_cartesian.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/Search_traits_2.h>
#include <CGAL/Orthogonal_k_neighbor_search.h>
#include <iostream>
#include <cmath>
typedef CGAL::Simple_cartesian<double> R;
typedef R::Point_2 Point_d;
typedef CGAL::Random_points_in_square_2<Point_d> Random_points_iterator;
typedef CGAL::Counting_iterator<Random_points_iterator> N_Random_points_iterator;
typedef CGAL::Search_traits_2<R> Traits;
typedef CGAL::Euclidean_distance<Traits> Distance;
typedef CGAL::Fair<Traits> Fair;
typedef CGAL::Orthogonal_k_neighbor_search<Traits,Distance,Fair> Neighbor_search;
typedef Neighbor_search::Tree Tree;
int main() {
  const int N = 1000;
  // generator for random data points in the square ( (-1,-1), (1,1) )
  Random_points_iterator rpit( 1.0);
  Fair fair(5); // bucket size=5
  // Insert N data points in the tree, using the fair splitting rule
  Tree tree(N_Random_points_iterator(rpit,0), N_Random_points_iterator(N), fair);
  Point_d query(0,0);
  // Initialize the search structure, and search all N points
  Neighbor_search search(tree, query, N);
  // report the N nearest neighbors and their distance
  // This should sort all N points by increasing distance from origin
  for(Neighbor_search::iterator it = search.begin(); it != search.end(); ++it)
    std::cout << it->first << " " << std::sqrt(it->second) << std::endl;
  return 0;
}
35.4 Software Design
35.4.1 The $k$-$d$ tree
Bentley [Ben75] introduced the $k$-$d$ tree as a generalization of the binary search tree to higher dimensions. $k$-$d$ trees hierarchically decompose space into a relatively small number of rectangles such that no rectangle contains too many input objects. For our purposes, a rectangle in real $d$-dimensional space, $\mathbb{R}^d$, is the product of $d$ closed intervals on the coordinate axes. $k$-$d$ trees are obtained by partitioning point sets in $\mathbb{R}^d$ using $(d-1)$-dimensional hyperplanes. Each node in the tree is split into two children by one such separating hyperplane. Several splitting rules (see Section 35.2) can be used to compute a separating $(d-1)$-dimensional hyperplane.
Each internal node of the $k$-$d$ tree is associated with a rectangle and a hyperplane orthogonal to one of the coordinate axis, which splits the rectangle into two parts. Therefore, such a
hyperplane, defined by a splitting dimension and a splitting value, is called a separator. These two parts are then associated with the two child nodes in the tree. The process of partitioning space
continues until the number of data points in the rectangle falls below some given threshold. The rectangles associated with the leaf nodes are called buckets, and they define a subdivision of the
space into rectangles. Data points are only stored in the leaf nodes of the tree, not in the internal nodes.
Friedman, Bentley and Finkel [FBF77] described the standard search algorithm to find the $k$th nearest neighbor by searching a $k$-$d$ tree recursively. When encountering a node of the tree, the algorithm first visits the child that is closest to the query point. On return, if the rectangle containing the other child lies within $1/(1+\epsilon)$ times the distance to the $k$th nearest neighbor found so far, then the other child is visited recursively. Priority search [AM93b] visits the nodes in increasing order of distance from the query object with the help of a priority queue. The search stops when the distance of the query point to the nearest node exceeds the distance to the nearest point found by a factor of $1/(1+\epsilon)$. Priority search supports next neighbor search; standard search does not.
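The recursive scheme just described can be condensed into a miniature 2-dimensional $k$-$d$ tree. The sketch below is our own illustration, not CGAL code, and for brevity it stores points in internal nodes rather than in buckets at the leaves; the pruning test in nearest() is the essence of the standard search (with $\epsilon = 0$):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

struct Node {
    double pt[2];
    Node *left = nullptr, *right = nullptr;
};

// Build by splitting at the median along alternating axes.
Node* build(std::vector<std::array<double,2> >& pts, std::size_t lo, std::size_t hi, int axis) {
    if (lo >= hi) return nullptr;
    std::size_t mid = (lo + hi) / 2;
    std::nth_element(pts.begin()+lo, pts.begin()+mid, pts.begin()+hi,
        [axis](const std::array<double,2>& a, const std::array<double,2>& b)
        { return a[axis] < b[axis]; });
    Node* n = new Node;
    n->pt[0] = pts[mid][0]; n->pt[1] = pts[mid][1];
    n->left  = build(pts, lo, mid, 1-axis);
    n->right = build(pts, mid+1, hi, 1-axis);
    return n;
}

double sqdist(const Node* n, double x, double y) {
    double dx = n->pt[0]-x, dy = n->pt[1]-y;
    return dx*dx + dy*dy;
}

// Standard search: visit the child containing the query first; visit
// the other child only if the splitting plane is closer than the best
// squared distance found so far.
void nearest(const Node* n, double x, double y, int axis, double& best) {
    if (!n) return;
    best = std::min(best, sqdist(n, x, y));
    double off = ((axis == 0) ? x : y) - n->pt[axis];
    const Node* near_child = off < 0 ? n->left  : n->right;
    const Node* far_child  = off < 0 ? n->right : n->left;
    nearest(near_child, x, y, 1-axis, best);
    if (off*off < best) nearest(far_child, x, y, 1-axis, best);
}
```

A production version would free the nodes, return the nearest point itself rather than only its squared distance, and relax the pruning test by $1/(1+\epsilon)$ for approximate search.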
In order to speed up the internal distance computations in nearest neighbor searching in high-dimensional space, the approximate searching package supports orthogonal distance computation. Orthogonal distance computation implements the efficient incremental distance computation technique introduced by Arya and Mount [AM93a]. This technique works only for neighbor queries with query items represented as points and with a quadratic form distance, defined by $d_A(x,y)= (x-y)\,A\,(x-y)^T$, where the matrix $A$ is positive definite, i.e., $x A x^T > 0$ for $x \neq 0$. An important class of quadratic form distances are the weighted Minkowski distances. Given a parameter $p>0$ and weights $w_i \geq 0$, $0 \leq i < d$, the weighted Minkowski distance is defined by $l_p(w)(r,q)= \left(\sum_{0 \leq i < d} w_i\,(r_i-q_i)^p\right)^{1/p}$ for $0 < p < \infty$ and by $l_\infty(w)(r,q)= \max \{\, w_i\,|r_i-q_i| : 0 \leq i < d \,\}$ for $p=\infty$. The Manhattan distance ($p=1$, $w_i=1$) and the Euclidean distance ($p=2$, $w_i=1$) are examples of weighted Minkowski metrics.
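The heart of the Arya-Mount technique is an O(1) update of the transformed distance when the search descends along one cutting dimension: only the offset along that dimension changes. A one-line sketch (mirroring the new_distance member of the user-defined Distance class shown earlier; the free function here is our own):

```cpp
#include <cassert>

// Incremental update of a squared distance: replace the contribution
// of the old offset along the cutting dimension by the new offset.
double new_distance(double dist, double old_off, double new_off) {
    return dist + new_off*new_off - old_off*old_off;
}
```

Recomputing the full distance from scratch would cost O(d) per visited node; the incremental update makes the per-node cost independent of the dimension.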
To speed up distance computations, transformed distances are used instead of the distance itself. For instance, for the Euclidean distance, to avoid the expensive computation of square roots, squared distances are used instead of the Euclidean distance itself.
Chiller Design Model - Impact of chiller failure on the short-term temperature variation in the incubation of salmonids

Components: Model Fit for Chiller Failure; Model Fit for Chiller Failure for Pump Failure; Model Fit for Chiller Failure with No Reservoir.

30879 Data Set, Published / External. 30657 EFS (Environmental and Fisheries Sciences) Division Project, Completed 2016-10-24.

In salmon recovery programs it is commonly necessary to chill incubation and early rearing temperatures to match wild development times. The most common failure mode for a chiller system is failure of the chiller or circulating pumps. Following chiller failure, the water temperature can rise from 5-7 C to 10-13 C depending on the well temperatures and ambient air temperatures. The speed and magnitude of the temperature increase depend on how the chillers are designed. The simplest design is a direct-coupled chiller with a chilled gas/process water heat exchanger. Other chiller designs include both chilled glycol and water reservoirs. The addition of these reservoirs serves to reduce the maximum rate of temperature change following chiller failure. Increased deformities have been observed in direct-coupled chiller systems for sockeye salmon following chiller failures. This model can be used to size glycol and water reservoirs to control the rise in temperature following chiller failure.

Subject to Public Access to Research Results (PARR): Yes. Peer Reviewed Publication: "Impact of chiller failure on the short-term temperature variation in the incubation of salmonids", a peer-reviewed article in a fisheries or aquaculture journal. Northwest Fisheries Science Center, Seattle, WA, USA. Data Set; Spreadsheet; Table (digital); Thermistor Chain Platform; Not Applicable; Not Applicable.

35538 Model Fit for Chiller Failure. Published / External; Planned. Chiller Failure Data and Model Results (Spreadsheet; PARR: Yes). All columns are Active; column 2 is type DATE, all others are NUMBER with units C unless noted. All measured and modeled temperature series are the mean of Incubators 1 and 17, Trough 1.
1. Count (#)
2. Actual Time (days and minutes)
3-5. Data_Up_Rep_1..3: Measured temperature, chiller failure, Replicates 1-3
6-8. Data_Down_Rep_1..3: Measured temperature, chiller restart, Replicates 1-3
9-11. Model_Results_Theory_Up_Rep_1..3: Model results, theoretical residence time, chiller failure, Replicates 1-3
12-14. Model_Results_Theory_Down_Rep_1..3: Model results, theoretical residence time, chiller restart, Replicates 1-3
15-17. Model_Results_Computed_Up_Rep_1..3: Model results, computed residence time, chiller failure, Replicates 1-3
18-20. Model_Results_Computed_Down_Rep_1..3: Model results, computed residence time, chiller restart, Replicates 1-3

35539 Model Fit for Chiller Failure for Pump Failure. Published / External; Planned. Pump Failure Data and Model Results (Spreadsheet; PARR: Yes). Columns follow the same layout as above, with "pump failure"/"pump restart" in place of "chiller failure"/"chiller restart":
1. Count (#)
2. Actual Time (days and minutes)
3-5. Data_Up_Rep_1..3: Measured temperature, pump failure, Replicates 1-3
6-8. Data_Down_Rep_1..3: Measured temperature, pump restart, Replicates 1-3
9-11. Model_Results_Theory_Up_Rep_1..3: Model results, theoretical residence time, pump failure, Replicates 1-3
12-14. Model_Results_Theory_Down_Rep_1..3: Model results, theoretical residence time, pump restart, Replicates 1-3
15-17. Model_Results_Computed_Up_Rep_1..3: Model results, computed residence time, pump failure, Replicates 1-3
18-20. Model_Results_Computed_Down_Rep_1..3: Model results, computed residence time, pump restart, Replicates 1-3

35540 Model Fit for Chiller Failure with No Reservoir. Published / External; Planned. Chiller Failure for No Reservoir Data and Model Results (Spreadsheet; PARR: Yes). Columns:
1. Count (#)
2. Actual_Time (days and minutes)
3. Data_Up_Rep_1: Measured temperature, chiller failure for no reservoir, mean of Incubators 1 and 17, Trough 1, Replicate 1 (C)
4. Data_Up_Rep_2: Measured temperature, chiller failure for no reservoir, mean of Incubators 1 and 17, Trough 1, Replicate 2 (C)
5. Data_Up_Rep_3: Measured temperature, chiller failure for no reservoir, mean of Incubators 1 and 17,
Trough 1; Replicant 3. Units for values are C. NUMBER C 6 Data_Down_Rep_1 NUMBER No No Active Measured Temperature; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant
1. Units for values are C. NUMBER C 7 Data_Down_Rep_2 NUMBER No No Active Measured Temperature; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 2. Units for values
are C. NUMBER C 8 Data_Down_Rep_3 NUMBER No No Active Measured Temperature; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 3. Units for values are C. NUMBER C 9
Model_Results_Theory_Up_Rep_1 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Failure for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 1. Units for values are
C. NUMBER C 10 Model_Results_Theory_Up_Rep_2 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Failure for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 2. Units
for values are C. NUMBER C 11 Model_Results_Theory_Up_Rep_3 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Failure for No Reservoir, Mean of Incubators 1 and 17, Trough 1;
Replicant 3. Units for values are C. NUMBER C 12 Model_Results_Theory_Down_Rep_1 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1
and 17, Trough 1; Replicant 1. Units for values are C. NUMBER C 13 Model_Results_Theory_Down_Rep_2 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Restart for No Reservoir,
Mean of Incubators 1 and 17, Trough 1; Replicant 2. Units for values are C. NUMBER C 14 Model_Results_Theory_Down_Rep_3 NUMBER No No Active Model Results, Theoretical Residence Time; Chiller Restart
for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 3. Units for values are C. NUMBER C 15 Model_Results_Computed_Up_Rep_1 NUMBER No No Active Model Results, Computed Residence Time;
Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 1. Units for values are C. NUMBER C 16 Model_Results_Computed_Up_Rep_2 NUMBER No No Active Model Results, Computed
Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 2. Units for values are C. NUMBER C 17 Model_Results_Computed_Up_Rep_3 NUMBER No No Active Model
Results, Computed Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 3. Units for values are C. NUMBER C 18 Model_Results_Computed_Down_Rep_1 NUMBER No
No Active Model Results, Computed Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 1. Units for values are C. NUMBER C 19
Model_Results_Computed_Down_Rep_2 NUMBER No No Active Model Results, Computed Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 2. Units for values
are C. NUMBER C 20 Model_Results_Computed_Down_Rep_3 NUMBER No No Active Model Results, Computed Residence Time; Chiller Restart for No Reservoir, Mean of Incubators 1 and 17, Trough 1; Replicant 3.
Units for values are C. NUMBER C Data Steward 2015-10-01 Person Maynard, Desmond Des.Maynard@noaa.gov 7305 East Beach Drive Manchester WA 98366 360-871-8313 Distributor 2015-10-01 Organization
Northwest Fisheries Science Center NWFSC nmfs.nwfsc.metadata@noaa.gov 2725 Montlake Boulevard East Seattle WA 98112 USA 206-860-3200 http://www.nwfsc.noaa.gov NWFSC Home Online Resource Metadata
Contact 2015-10-01 Organization Northwest Fisheries Science Center NWFSC nmfs.nwfsc.metadata@noaa.gov 2725 Montlake Boulevard East Seattle WA 98112 USA 206-860-3200 http://www.nwfsc.noaa.gov NWFSC
Home Online Resource Originator 2015-10-01 Person Maynard, Desmond Des.Maynard@noaa.gov 7305 East Beach Drive Manchester WA 98366 360-871-8313 Point of Contact 2015-10-01 Person Maynard, Desmond
Des.Maynard@noaa.gov 7305 East Beach Drive Manchester WA 98366 360-871-8313 -122.5547 -122.5547 47.569 47.569 Burley Creek Field Station: Freshwater rearing facility for Red Fish Lake Sockeye Range
2015-01-01 2015-12-01 Unclassified At this time, contact the Data Manager for information on obtaining access to this data set. In the near future, the NWFSC will strive to provide all non-sensitive
data resources as a web service in order to meet the NOAA Data Access Policy Directive (https://nosc.noaa.gov/EDMC/PD.DA.php). NA 2016-10-24 https://www.webapps.nwfsc.noaa.gov/apex/parr/
model_fit_for_chiller_failure/data/page/ 2015-10-01 Organization Northwest Fisheries Science Center Model Fit for Chiller Failure (RESTful) Chiller Failure Data and Model Results. 2016-10-24 https://
www.webapps.nwfsc.noaa.gov/apex/parr/model_fit_for_chiller_failure_for_pump_failure/data/page/ 2015-10-01 Organization Northwest Fisheries Science Center Model Fit for Chiller Failure for Pump
Failure (RE Pump Failure Data and Model Results. 2016-10-24 https://www.webapps.nwfsc.noaa.gov/apex/parr/model_fit_for_chiller_failure_with_no_reservoir/data/page/ 2015-10-01 Organization Northwest
Fisheries Science Center Model Fit for Chiller Failure with No Reservoir (R Chiller Failure for No Reservoir Data and Model Results. 2016-10-24 https://www.webapps.nwfsc.noaa.gov/apex/parrdata/
inventory/tables/table/model_fit_for_chiller_failure 2015-10-01 Organization Northwest Fisheries Science Center Model Fit for Chiller Failure Chiller Failure Data and Model Results. 2016-10-24 https:
//www.webapps.nwfsc.noaa.gov/apex/parrdata/inventory/tables/table/model_fit_for_chiller_failure_for_pump_failure 2015-10-01 Organization Northwest Fisheries Science Center Model Fit for Chiller
Failure for Pump Failure Pump Failure Data and Model Results. 2016-10-24 https://www.webapps.nwfsc.noaa.gov/apex/parrdata/inventory/tables/table/model_fit_for_chiller_failure_with_no_reservoir
2015-10-01 Organization Northwest Fisheries Science Center Model Fit for Chiller Failure with No Reservoir Chiller Failure for No Reservoir Data and Model Results. https://www.webapps.nwfsc.noaa.gov/
apex/parrdata/inventory/datasets/dataset/103608 Chiller Design Model Online Resource Web site NWFSC Dataset Information page. This model can be used to size glycol and water reservoirs to control the
rise in temperature following chiller failure. The replicate temperatures were compared Yes 5 Yes No No 400 Access to raw data delayed until analysis is completed. NCEI-MD 365 The Northwest Fisheries
Science Center facilitates backup and recovery of all data and IT components which are managed by IT Operations through the capture of static (point-in-time) backup data to physical media. Once data
is captured to physical media (every 1-3 days), a duplicate is made and routinely (weekly) transported to an offsite archive facility where it is maintained throughout the data's applicable
life-cycle. The logger data was converted to csv and Excel files. 35538 Entity 35539 Entity 35540 Entity gov.noaa.nmfs.inport:30879 Jeffrey Cowen 2016-02-24T10:06:37 SysAdmin InPortAdmin
2022-08-09T17:11:14 2019-06-04 Northwest Fisheries Science Center NWFSC 2725 Montlake Boulevard East Seattle WA 98112 USA 206-860-3200 http://www.nwfsc.noaa.gov 1001 Public No No 2019-06-04 1 Year | {"url":"https://www.fisheries.noaa.gov/inport/item/30879/inport-xml","timestamp":"2024-11-03T02:48:06Z","content_type":"application/xml","content_length":"56141","record_id":"<urn:uuid:96df4290-1dde-4e4e-9226-6a500bf97a7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00268.warc.gz"} |
Case Study Spreadsheet updates
First of all, thanks very much for a wonderful tool! I’ve only begun to scratch the surface on everything this provides!
I’ve uncovered a slight error on the Personal Finance Toolbox under the “Misc. calcs” tab, cell B165, Marginal tax rate of conversion when funds from a taxable account are used to pay for taxes due
to conversion.
I posted on the Bogleheads forum regarding this topic:
I’ll provide a short description:
To calculate the marginal tax rate, you divide the (change in tax) by (change in income). The formula in B165 correctly calculates the change in tax when selling funds in a taxable account to pay for
the taxes resulting from the tIRA transfer, but it does not account for the additional income resulting from the sale of taxable funds.
I’ll provide a quick example:
In my case, I pay any expenses not covered with interest and dividends by selling funds in my taxable account. Here are my assumptions:
- 24% federal only tax bracket
- NIIT of 3.8% and 50% cost basis/current value in the taxable account so the marginal tax becomes (15% + 3.8%)*50% = 9.4% on any additional fund sales,
- No other income, no SS, no pension, etc.
- $10k tIRA to Roth conversion
If I pay for the conversion with simple withholding, the marginal rate is a simple 24% or 2400/10000.
If I choose to pay the taxes by liquidating a taxable asset with 50% cost basis, I need to sell $2649 of the asset resulting in $2649 additional income and $249 in taxes.
Cell B165 currently calculates the marginal rate at 26.49% using the numbers above when the actual marginal rate should be (2400+249)/(10000+2649) = 20.94%
A formula that yields the correct marginal rate calculation is:
(note, I didn’t take time to algebraically simplify the formula)
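A quick numeric check of the example above (sketched in Julia just to verify the arithmetic; the actual fix would of course be a spreadsheet formula, and the variable names here are mine):

```julia
# 24% bracket on the conversion; (15% + 3.8%) * 50% = 9.4% on extra sales.
conversion = 10_000.0                 # tIRA -> Roth conversion amount
bracket    = 0.24                     # marginal federal rate on the conversion
sale_rate  = (0.15 + 0.038) * 0.50    # tax rate on each extra dollar sold

# The sale must cover the conversion tax plus the tax on the sale itself:
#   sale = bracket*conversion + sale_rate*sale
sale     = bracket * conversion / (1 - sale_rate)
sale_tax = sale_rate * sale

# Marginal rate = (total change in tax) / (total change in income)
marginal = (bracket * conversion + sale_tax) / (conversion + sale)
```

This reproduces the $2649 sale, the $249 of additional tax, and the 20.94% marginal rate.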
Thanks again for the spreadsheet and attention to this error. | {"url":"https://forum.mrmoneymustache.com/forum-information-faqs/case-study-spreadsheet-updates/msg3200052/","timestamp":"2024-11-05T22:33:40Z","content_type":"application/xhtml+xml","content_length":"223685","record_id":"<urn:uuid:4e8ba9d4-1be8-4f32-8321-438f97bce551>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00078.warc.gz"} |
Xuhua He (University of Maryland), Geometric Methods in Representation Theory - Department of Mathematics
March 24, 2017 @ 4:00 pm - 5:00 pm
Title: Cocenters and representations of p-adic groups
Abstract: It is known that the number of conjugacy classes of a finite group equals the number of irreducible representations (over the complex numbers). The conjugacy classes of a finite group give a natural basis of the cocenter of its group algebra. Thus the above equality can be reformulated as a duality between the cocenter (i.e. the group algebra modulo its commutator) and the finite dimensional representations.
Now let us move from finite groups to $p$-adic groups. In this case, one needs to replace the group algebra by the Hecke algebra. The work of Bernstein, Deligne and Kazhdan in the 80s established the duality between the cocenter of the Hecke algebra and the complex representations. It is an interesting, yet challenging, problem to fully understand the structure of the cocenter of the Hecke algebra.
In this talk, I will discuss a new discovery on the structure of the cocenter and then some applications to the complex and modular representations of $p$-adic groups, including: a generalization of Howe's conjecture on twisted invariant distributions, the trace Paley-Wiener theorem for smooth admissible representations, and the abstract Selberg Principle for projective representations. This is partially joint work with D. Ciubotaru.
Theoretical Physicists Solve a 50-Year Homework Assignment - Research & Development World
Kids everywhere grumble about homework. But their complaints will hold no water with a group of theoretical physicists who’ve spent almost 50 years solving one homework problem — a calculation of one
type of subatomic particle decay aimed at helping to answer the question of why the early universe ended up with an excess of matter.
Without that excess, the matter and antimatter created in equal amounts in the Big Bang would have completely annihilated one another. Our universe would contain nothing but light — no homework, no
schools…but also no people, or planets, or stars!
Physicists long ago figured out something must have happened to explain the imbalance and our very existence.
“The fact that we have a universe made of matter strongly suggests that there is some violation of symmetry,” said Taku Izubuchi, a theoretical physicist at the U.S. Department of Energy’s (DOE)
Brookhaven National Laboratory.
The physicists call it charge conjugation-parity (CP) violation. Instead of everything in the universe behaving perfectly symmetrically, certain subatomic interactions happen differently if viewed in
a mirror (violating parity) or when particles and their oppositely charged antiparticles swap each other (violating charge conjugation symmetry). Scientists at Brookhaven — James Cronin and Val Fitch
— were the first to find evidence of such a symmetry “switch-up” in experiments conducted in 1964 at the Alternating Gradient Synchrotron, with additional evidence coming from experiments at CERN,
the European Laboratory for Nuclear Research. Cronin and Fitch received the 1980 Nobel Prize in physics for this work.
What was observed was the decay of a subatomic particle known as a kaon into two other particles called pions. Kaons and pions (and many other particles as well) are composed of quarks. Understanding
kaon decay in terms of its quark composition has posed a difficult problem for theoretical physicists.
“That was the homework assignment handed to theoretical physicists: to develop a theory to explain this kaon decay process, a mathematical description we could use to calculate how frequently it
happens and whether or how much it could account for the matter-antimatter imbalance in the universe. Our results will serve as a tough test for our current understanding of particle physics,”
Izubuchi said.
Sophisticated computational tools
The mathematical equations of Quantum Chromodynamics, or QCD — the theory that describes how quarks and gluons interact—have a multitude of variables and possible values for those variables. So the
scientists needed to wait for supercomputing capabilities to evolve before they could actually solve them. The physicists invented the complex algorithms and wrote nifty software packages that some
of the world’s most powerful supercomputers used to describe the quarks’ behavior and solve the problem.
In the physicists’ software, the particles are “placed” on an imaginary four-dimensional space-time lattice consisting of three spatial dimensions plus time. At one end of the time dimension lies the
kaon, made of two kinds of quarks — a “strange” quark and an “anti-down” quark — held together by gluons. At the opposite end, they place the end products, the four quarks that make up the two pions.
Then the supercomputer computes how the kaon transforms into two pions as it flies through space and time. Conducting these computations on the lattice greatly simplifies the problem.
“We use the supercomputers to look at how each quark is flying — its velocity, direction — in other words, the dynamics of the strong QCD interaction,” Izubuchi said.
Somewhere in the middle of this complicated space-time grid, with some degree of probability, the strange quark of the kaon — which the strong force keeps strongly bound with its anti-down quark
partner — suddenly starts to change into a down quark by the so-called electroweak interaction. Since a kaon is heavier than two pions, the energy released creates a new quark/anti-quark pair — an
“up” and an “anti-up” quark — from the vacuum. These quarks then combine with the new down quark and the leftover anti-down quark to make the two pions.
“The experiments showed how frequently these ‘K→ππ’ processes happen, but the part that violates CP symmetry is the strange quark converting into a down quark through the weak interaction,” Izubuchi
said. “That’s the part we really wanted to know more about to understand the strength of this CP violation. That information will give us a hint of why the universe is matter-rich, and/or confirm the
correctness of our current understanding of particle physics.”
The supercomputers crunched tens of billions of numbers into the equation that describes this part of the process to find the result that should reproduce the decaying particle patterns and
frequencies observed by the experiments.
“The result of the calculation tells us how frequently this CP-violating weak interaction occurs and the strength of the CP violation at the quark level,” Izubuchi said. “It’s a kind of
reverse-engineering what experimenters have seen in kaon decays to solve the problem.”
New algorithm, higher precision
After publishing their initial results in 2012, the physicists further improved their calculation to more closely simulate what happens with these particles in Nature. These new calculations allow
them to directly compare their numbers with the experimental results more accurately, but they also increase the computational “cost” considerably—requiring more computing power/time. Even with the
newest supercomputers, the homework would have taken many years if not for a new efficient algorithm developed by the Brookhaven group in late 2012.
“This new algorithm, called all-mode averaging (AMA), divides the whole calculation into a ‘difficult’ but small piece and an ‘easier’ large piece, and devotes more computation time to the latter
part to save the total computation required,” Izubuchi said. “It accelerates the speed of the computations by a factor of ten or more. This very simple idea of dividing the calculation into two
pieces actually helped to reduce the statistical error of the computation by a lot.”
Do the numbers add up?
Is the calculated strength of the weak interaction strong enough to account for the matter-antimatter asymmetry in the early universe?
“That’s the million-dollar question,” said Izubuchi. “So far people think this is not the full answer. We cannot explain why the universe is matter-rich based solely on the amount of CP violation
that this kaon decay accounts for. So there may be other sources of CP violation other than the weak interaction that would be revealed if a discrepancy were found between our calculation and the
experimental results.”
Then Izubuchi confessed that the theorists have only solved half of their homework problem.
“When we say we theoretically understood this process, it is only half true. There are two different ways the two end-result pions can combine with each other (called isospin states), and we’ve only
solved the problem for one combination, the isospin 2 channel.”
The experiments have measurements for both isospin states, so the theorists are working on calculating the second process as well.
“The other, isospin 0, is more challenging, and we are getting there by employing the faster supercomputers and new theoretical ideas and computation algorithms. But, for now, we have finished half
of 50 years’ homework.”
This research is part of DOE’s Scientific Discovery through Advanced Computing (SciDAC-3) program “Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity
and Energy Frontiers,” supported by the DOE Office of Science.
The supercomputing resources used for this research included: QCDCQ, a pre-commercial version of the IBM Blue Gene supercomputers, located at the RIKEN/BNL Research Center — a center funded by the
Japanese RIKEN laboratory in a cooperative agreement with Brookhaven Lab; a Blue Gene/Q supercomputer of the New York State Center for Computational Science, hosted by Brookhaven; half a rack of an
additional Blue Gene/Q funded by DOE through the US based lattice QCD consortium, USQCD; a Blue Gene/Q machine at the Edinburgh Parallel Computing Centre; the large installation of BlueGene/P
(Intrepid) and Blue Gene/Q (Mira) machines at Argonne National Laboratory funded by the DOE Office of Science; and PC cluster machines at Fermi National Accelerator Laboratory and at RIKEN.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
For more information, please visit science.energy.gov | {"url":"https://www.rdworldonline.com/theoretical-physicists-solve-a-50-year-homework-assignment/","timestamp":"2024-11-07T00:12:16Z","content_type":"text/html","content_length":"67920","record_id":"<urn:uuid:50eef931-e501-4825-a9eb-5b45ecd42791>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00112.warc.gz"} |
Global Sensitivity Analysis
Chris Rackauckas
December 12th, 2020
Sensitivity analysis is the measure of how sensitive a model is to changes in parameters, i.e. how much the output changes given a change in the input. Clearly, derivatives are a measure of
sensitivity, but derivatives are local sensitivity measures because they are only the derivative at a single point. However, the idea of probabilistic programming starts to bring up an alternative
question: how does the output of a model generally change with a change in the input? This kind of question requires an understanding of global sensitivity of a model. While there isn't a single
definition of the concept, there are a few methods that individuals have employed to estimate the global sensitivity.
Reference implementations of these methods can be found in GlobalSensitivity.jl
Setup for Global Sensitivity
In our global sensitivity analysis, we have a model $f$ and want to understand the relationship
\[ y = f(x_1, x_2, \ldots, x_n) \]
Recall $f$ can be a neural network, an ODE solve, etc., where the $x_i$ are items like initial conditions and parameters. What we want to do is understand how much the total changes in $y$ can be
attributed to changes in specific $x_i$.
However, this is not an actionable form since we don't know what valid inputs into $f$ look like. Thus any global sensitivity study at least needs a domain for the $x_i$, at least in terms of bounds.
This is still underdefined: why would $x_i$ be more likely to lie near the lower end of its bounds than the upper end? Thus, for global sensitivity analysis to
be well-defined, $x_i$ must take a distributional form, i.e. be random variables. Thus $f$ is a deterministic program with probabilistic inputs, and we want to determine the effects of the
distributional inputs on the distribution of the output.
Reasons for Global Sensitivity Analysis
What are the things we can learn from doing such a global sensitivity analysis?
1. You can learn what variables would need to be changed to drive the solution in a given direction or control the system. If your model is exact and the parameters are known, the "standard" methods
apply, but if your model is only approximate, a global sensitivity metric may be a better prediction as to how variables cause changes.
2. You can learn if there are any variables which do not have a true effect on the output. Such variables would be practically unidentifiable from data, and the model can be reduced by removing those terms. This is also predictive of robustness properties.
3. You can find ways to automatically sparsify a model by dropping off the components which contribute the least. This matters in automatically generated or automatically detected models, where many
pieces may be spurious and global sensitivities would be a method to detect that in a manner that is not sensitive to the chosen parameters.
Global Sensitivity Analysis Measures
Linear Global Sensitivity Metrics: Correlations and Regressions
The first thing that you can do is approximate the full model with a linear surrogate, i.e.
\[ y = Ax \]
for some matrix $A$. A regression can be done on the outputs of the model in order to find this linear approximation. The best-fitting global linear model then gives coefficients for the global
sensitivities via the individual effects, i.e. for
\[ y = \sum_i \beta_i x_i \]
the $\beta_i$ are the global effect. Just as with any use of a linear model, the same ideas apply. The coefficient of determination ($R^2$) is a measure of how well the model fits. However, one major
change needs to be done in order to ensure that the solutions are comparable between different models. The dependence of the solution on the units can cause the coefficients to be large/small. Thus
we need to normalize the data, i.e. use the transformation
\[ \tilde{x}_i = \frac{x_i-E[x_i]}{\sqrt{V[x_i]}} \]
\[ \tilde{y} = \frac{y-E[y]}{\sqrt{V[y]}} \]
The normalized coefficients are known as the Standardized Regression Coefficients (SRC) and are a measure of the global effects.
Notice that while the $\beta_i$ capture the mean effects, it holds that
\[ V[y] = \sum_i \beta^2_i V[x_i] \]
and thus the variance due to $x_i$ can be measured as:
\[ SRC_i = \beta_i \sqrt{\frac{V[x_i]}{V[y]}} \]
This interpretation is the same as the solution from the normalized variables.
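As a quick sketch of the SRC computation (a toy example of my own, not tied to any particular package): fit the linear surrogate by least squares, then rescale the coefficients.

```julia
# Standardized regression coefficients for the toy linear model
# y = 3x₁ + 0.5x₂ with x₁, x₂ ~ U(0,1).
using Statistics

f(x) = 3x[1] + 0.5x[2]

N = 10_000
X = rand(N, 2)
y = [f(X[i, :]) for i in 1:N]

A = hcat(ones(N), X)          # design matrix with an intercept column
β = A \ y                     # ordinary least squares

# SRC_i = β_i * sqrt(V[x_i] / V[y])
src = [β[i + 1] * sqrt(var(X[:, i]) / var(y)) for i in 1:2]
```

Since this model is exactly linear, the regression recovers the true coefficients, and the squared SRCs sum to approximately 1 (an $R^2$ of 1, up to sampling noise).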
From the same linear model, two other global sensitivity metrics are defined. The Correlation Coefficients (CC) are simply the correlations:
\[ CC_i = \frac{\text{cov}(x_i,y)}{\sqrt{V[x_i]V[y]}} \]
Similarly, the Partial Correlation Coefficient (PCC) is the correlation coefficient with the linear effect of the other variables removed, i.e. for $S_i = \{x_1,x_2,\ldots,x_{i-1},x_{i+1},\ldots,x_n\}$ we have
\[ PCC_i = \frac{\text{cov}(x_i,y|S_i)}{\sqrt{V[x_i|S_i]V[y|S_i]}} \]
Derivative-based Global Sensitivity Measures (DGSM)
To go beyond just a linear model, one might want to do successive linearization. Since derivatives are a form of linearization, one may think to average derivatives. This averaging of derivatives is the DGSM method. If the $x_i$ are random variables with joint CDF $F(x)$, then it holds that:
\[ v_i = \int_{R^d} \left(\frac{\partial f(x)}{\partial x_i}\right)^2 dF(x) = \mathbb{E}\left[\left(\frac{\partial f(x)}{\partial x_i}\right)^2\right], \]
We can also define the mean measure, which is simply:
\[ w_i = \int_{R^d} \frac{\partial f(x)}{\partial x_i} dF(x) = \mathbb{E}\left[\frac{\partial f(x)}{\partial x_i}\right]. \]
Thus a global variance estimate would be $v_i - w_i^2$.
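A brute-force Monte Carlo sketch of these measures for a toy function with a known gradient (illustrative only; this is not the algorithm any particular package uses):

```julia
# Monte Carlo estimate of the DGSM measures v_i = E[(∂f/∂x_i)²] and
# w_i = E[∂f/∂x_i] for f(x) = sin(x₁) + 2x₂², with independent x_i ~ U(0,1).
using Statistics

f(x)  = sin(x[1]) + 2x[2]^2
∇f(x) = [cos(x[1]), 4x[2]]     # analytic gradient of f

N = 100_000
grads = [∇f(rand(2)) for _ in 1:N]

v = [mean(g[i]^2 for g in grads) for i in 1:2]   # E[(∂f/∂x_i)²]
w = [mean(g[i]   for g in grads) for i in 1:2]   # E[∂f/∂x_i]
dgsm_var = v .- w .^ 2                           # variance-style estimate
```

For this $f$ the answers are known in closed form (e.g. $v_2 = \int_0^1 (4x)^2\,dx = 16/3$ and $w_2 = 2$), which makes it easy to check the Monte Carlo estimates.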
ADVI for Global Sensitivity
Note that the previously discussed method for probabilistic programming, ADVI, is a method for producing a Gaussian approximation for a probabilistic program. The resulting mean-field or full
Gaussian approximations are variance index calculations!
The Morris One-At-A-Time (OAT) Method
Instead of using derivatives, one can use finite difference approximations. Normally you want to use small $\Delta x$, but if we are averaging derivatives over a large area, then in reality we don't
really need a small $\Delta x$!
This is where the Morris method comes in. The basic idea is that moving in one direction at a time is a derivative estimate, and if we step large enough then the next derivative estimate may be
sufficiently different to contribute new information to the total approximation. Thus we do the following:
1. Take a random starting point
2. Randomly choose a direction $i$ and make a change $\Delta x_i$ only in that direction.
3. Calculate the derivative approximation from that change. Repeat 2 and 3.
Keep doing this for enough steps, and the average of your derivative approximations becomes a global index. Notice that this reuses every simulation as part of two separate estimates, making it much
more computationally efficient than the other methods. However, it measures average changes and does not necessarily give a value that decomposes the total variance. But its low computational cost makes it attractive for making quick estimates of the global sensitivities.
For practical usage, a few changes have to be made. First of all, notice that positive and negative changes can cancel out. Thus if one wants a measure of the associated variance, one should use absolute
values or squared differences. Also, one needs to make sure that these trajectories get good coverage of the input space. Define the distance between two trajectories as the sum of the geometric
distances between all pairs of points. Generate many more trajectories than necessary and choose the $r$ trajectories with the largest distance. If the model evaluations are expensive, this selection step is comparatively cheap, so it is worth doing.
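A minimal sketch of the trajectory idea, without the distance-based trajectory selection and with made-up step size and trajectory count:

```julia
# Morris-style elementary effects for f(x) = x₁ + 10x₂² on [0,1]².
# Each trajectory starts at a random point and steps Δ in one random
# coordinate at a time, reusing each evaluation for two difference quotients.
using Random, Statistics

f(x) = x[1] + 10x[2]^2
Δ = 0.25                       # deliberately large OAT step
r = 400                        # number of trajectories
d = 2

effects = [Float64[] for _ in 1:d]
for _ in 1:r
    x  = rand(d) .* (1 - Δ)    # keep x .+ Δ inside [0,1]
    y0 = f(x)
    for i in shuffle(1:d)
        x[i] += Δ
        y1 = f(x)
        push!(effects[i], (y1 - y0) / Δ)
        y0 = y1                # reuse the last evaluation
    end
end

μstar = [mean(abs.(e)) for e in effects]  # mean absolute elementary effect
σ     = [std(e) for e in effects]         # spread ⇒ nonlinearity/interaction
```

For this separable $f$, the elementary effect of $x_1$ is exactly 1 everywhere (so $\sigma_1 = 0$), while the effect of $x_2$ varies with position, which is what $\sigma_2$ picks up.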
Sobol's Method (ANOVA)
Sobol's method is a true nonlinear decomposition of variance and it is thus considered one of the gold standards. For Sobol's method, we define the decomposition
\[ f(x) = f_0 + \sum_i f_i(x_i) + \sum_{i,j} f_{ij}(x_i,x_j) + \ldots \]
\[ f_0 = \int_\Omega f(x) dx \]
and orthogonality holds:
\[ \int_0^1 f_{i_1,\ldots,i_s}(x_{i_1},\ldots,x_{i_s}) \, dx_{i_k} = 0 \quad \text{for each } k = 1,\ldots,s \]
by the definitions:
\[ f_i(x_i) = E(y|x_i) - f_0 \]
\[ f_{ij}(x_i,x_j) = E(y|x_i,x_j) - f_0 - f_i(x_i) - f_j(x_j) \]
Assuming that $f(x)$ is $L^2$, it holds that
\[ \int_\Omega f^2(x) \, dx - f_0^2 = \sum_{s=1}^{n} \sum_{i_1 < \cdots < i_s} \int f^2_{i_1,i_2,\ldots,i_s} \, dx_{i_1} \cdots dx_{i_s} \]
and thus
\[ V[y] = \sum_i V_i + \sum_{i<j} V_{ij} + \ldots \]
\[ V_i = V[E_{x_{\sim i}}[y|x_i]] \]
\[ V_{ij} = V[E_{x_{\sim ij}}[y|x_i,x_j]]-V_i - V_j \]
where $x_{\sim i}$ means all of the variables except $x_i$. This means that the total variance can be decomposed into each of these variances.
From there, the fractional contribution to the total variance is thus the index:
\[ S_i = \frac{V_i}{V[y]} \]
and similarly for the second, third, etc. indices.
Additionally, if there are too many variables, one can compute the contribution of $x_i$ including all of its interactions as:
\[ S_{T_i} = \frac{E_{X_{\sim i}}[Var[y|X_{\sim i}]]}{Var[y]} = 1 - \frac{Var_{X_{\sim i}}[E_{X_i}[y|x_{\sim i}]]}{Var[y]} \]
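To make these definitions concrete, the following is a hedged Python sketch (the notes themselves use Julia) of the standard pick-freeze (Saltelli-style) Monte Carlo estimator for the first-order indices; the toy model, function names, and sample size are illustrative choices, not from the notes:

```python
import random

def sobol_first_order(f, d, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model f with d independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(d):
        # A_B^i: copy of each row of A with column i taken from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # V_i ~ E[ f(B) * ( f(A_B^i) - f(A) ) ]
        Vi = sum(yb * (yab - ya) for yb, ya, yab in zip(fB, fA, fABi)) / n
        S.append(Vi / var)
    return S

# additive toy model x1 + 2*x2: analytically S1 = 1/5, S2 = 4/5
S = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2)
```

For this additive model the variances are $V_1 = 1/12$ and $V_2 = 4/12$ against a total of $5/12$, so the estimates should land near $0.2$ and $0.8$.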
Computational Tractability and Quasi-Monte Carlo
Notice that every single expectation has an integral in it, so the variance is defined as integrals of integrals, making this a very challenging calculation. Thus instead of directly calculating the
integrals, in many cases Monte Carlo estimators are used. Instead of a pure Monte Carlo method, one generally uses a low-discrepancy sequence (a form of quasi-Monte Carlo) to effectively sample the
search space.
For example, the following generates a Sobol sequence:
using Sobol, Plots
s = SobolSeq(2)
p = hcat([next!(s) for i = 1:1024]...)'
scatter(p[:,1], p[:,2])
Another common quasi-Monte Carlo sequence is the Latin Hypercube, which is a generalization of the Latin Square where in every row, column, etc. only one point is given, allowing a linear spread over
a high dimensional space.
using LatinHypercubeSampling
p = LHCoptim(120,2,1000)
For a reference library with many different quasi-Monte Carlo samplers, check out QuasiMonteCarlo.jl.
Fourier Amplitude Sensitivity Sampling (FAST) and eFAST
The FAST method is a change to the Sobol method to allow for faster convergence. First transform the variables $x_i$ onto the space $[0,1]$. Then, instead of the linear decomposition, one decomposes
into a Fourier basis:
\[ f(x_1,x_2,\ldots,x_n) = \sum_{m_1 = -\infty}^{\infty} \ldots \sum_{m_n = -\infty}^{\infty} C_{m_1m_2\ldots m_n}\exp\left(2\pi i (m_1 x_1 + \ldots + m_n x_n)\right) \]
\[ C_{m_1m_2\ldots m_n} = \int_0^1 \ldots \int_0^1 f(x_1,x_2,\ldots,x_n) \exp\left(-2\pi i (m_1 x_1 + \ldots + m_n x_n)\right)\, dx_1 \cdots dx_n \]
The ANOVA-like decomposition is thus:
\[ f_0 = C_{0\ldots 0} \]
\[ f_j = \sum_{m_j \neq 0} C_{0\ldots 0 m_j 0 \ldots 0} \exp (2\pi i m_j x_j) \]
\[ f_{jk} = \sum_{m_j \neq 0} \sum_{m_k \neq 0} C_{0\ldots 0 m_j 0 \ldots m_k 0 \ldots 0} \exp \left(2\pi i (m_j x_j + m_k x_k)\right) \]
The first order conditional variance is thus:
\[ V_j = \int_0^1 f_j^2 (x_j) dx_j = \sum_{m_j \neq 0} |C_{0\ldots 0 m_j 0 \ldots 0}|^2 \]
\[ V_j = 2\sum_{m_j = 1}^\infty \left(A_{m_j}^2 + B_{m_j}^2 \right) \]
where $C_{0\ldots 0 m_j 0 \ldots 0} = A_{m_j} + i B_{m_j}$. By Fourier series we know this to be:
\[ A_{m_j} = \int_0^1 \ldots \int_0^1 f(x)\cos(2\pi m_j x_j)dx \]
\[ B_{m_j} = \int_0^1 \ldots \int_0^1 f(x)\sin(2\pi m_j x_j)dx \]
Implementation via the Ergodic Theorem
Instead of evaluating the multidimensional integrals directly, one samples along a one-dimensional search curve through the input space, parameterized by $s$:
\[ X_j(s) = \frac{1}{2\pi} (\omega_j s \mod 2\pi) \]
By the ergodic theorem, if the $\omega_j$ are irrational numbers, then the dynamical system never repeats values and thus the curve is dense in the space (we prove this a bit later). For example,
with $\omega_1 = \pi$ and $\omega_2 = 7$, the curve eventually fills the unit square.
This means that:
\[ A_{m_j} = \lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T}^T f(X(s))\cos(m_j \omega_j s)\, ds \]
\[ B_{m_j} = \lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T}^T f(X(s))\sin(m_j \omega_j s)\, ds \]
i.e. the multidimensional integral can be approximated by the integral over a single line.
One can relax this to get a simpler form for the integral. If the $\omega_i$ are chosen as integers, the curve is periodic in $s$ with period $2\pi$, so integrating over a single period suffices. This would mean
\[ A_{m_j} \approx \frac{1}{2\pi} \int_{-\pi}^\pi f(X(s))\cos(m_j \omega_j s)\, ds \]
\[ B_{m_j} \approx \frac{1}{2\pi} \int_{-\pi}^\pi f(X(s))\sin(m_j \omega_j s)\, ds \]
It's only approximate since with integer frequencies the curve cannot be dense. For example, with $\omega_1 = 11$ and $\omega_2 = 7$, the curve closes on itself and only partially covers the space.
A higher period thus gives a better fill of the space and thus a better approximation, but may require more points. However, this transformation turns the true multidimensional integrals into simple
one-dimensional quadratures which can be computed efficiently.
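Here is a hedged Python sketch of this computation (the notes give no code for FAST; the function name, the sample counts, and the toy model are my own illustrative choices). It samples the sawtooth search curve $X_j(s)$ from above, estimates $A_{m_j}$ and $B_{m_j}$ by discrete sums, and forms $S_j = V_j / V$; truncating at a few harmonics biases the estimates slightly low:

```python
import math

def fast_first_order(f, omegas, n_samples=2048, n_harmonics=4):
    """FAST first-order indices using the sawtooth search curve
    x_j(s) = (omega_j * s mod 2*pi) / (2*pi)."""
    ss = [-math.pi + 2 * math.pi * k / n_samples for k in range(n_samples)]
    ys = []
    for s in ss:
        x = [((w * s) % (2 * math.pi)) / (2 * math.pi) for w in omegas]
        ys.append(f(x))
    mean = sum(ys) / n_samples
    var = sum((y - mean) ** 2 for y in ys) / n_samples
    S = []
    for w in omegas:
        Vj = 0.0
        for p in range(1, n_harmonics + 1):
            m = p * w
            # discrete versions of A_m and B_m along the curve
            A = sum(y * math.cos(m * s) for y, s in zip(ys, ss)) / n_samples
            B = sum(y * math.sin(m * s) for y, s in zip(ys, ss)) / n_samples
            Vj += 2 * (A * A + B * B)
        S.append(Vj / var)
    return S

# additive toy model x1 + 2*x2: analytically S1 = 0.2, S2 = 0.8
S1, S2 = fast_first_order(lambda x: x[0] + 2 * x[1], omegas=(11, 7))
```

The integer frequencies 11 and 7 are chosen so that their low harmonics do not overlap; with shared harmonics, variance from one input would be misattributed to another.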
To get the total index from this method, one can calculate the total contribution of the complementary set, i.e. $V_{c_i} = \sum_{j \neq i} V_j$ and then
\[ S_{T_i} = 1 - S_{c_i} \]
Note that this then is a fast measure for the total contribution of variable $i$, including all higher-order nonlinear interactions, all from one-dimensional integrals! (This extension is called
extended FAST or eFAST)
Proof of the Ergodic Theorem
Look at the map $x_{n+1} = x_n + \alpha (\text{mod} 1)$, where $\alpha$ is irrational. This is the irrational rotation map that corresponds to our problem. We wish to prove that in any interval $I$,
there is a point of our orbit in this interval.
First let's prove a useful result: points of the orbit get arbitrarily close. Assume that for some $\epsilon > 0$ no two points are closer than $\epsilon$ apart. Then the spacings between the points
are at least $\epsilon$, so the orbit contains at most $\frac{1}{\epsilon}$ points (rounded up). A finite orbit must be periodic, so there is a $p$ such that
\[ x_{n+p} = x_n \]
which means that $p \alpha$ is an integer $q$, i.e. $\alpha = \frac{q}{p}$ is rational, which is a contradiction since $\alpha$ is irrational.
Thus for every $\epsilon$ there are two points of the orbit which are less than $\epsilon$ apart. Now take any arbitrary interval $I$, and let $\epsilon < d/2$ where $d$ is the length of $I$. We have
just shown that there are two points $x_{n+k}$ and $x_{n+m}$ which are $<\epsilon$ apart. Assuming WLOG $m>k$, applying $m-k$ rotations takes one from $x_{n+k}$ to $x_{n+m}$, so $m-k$ rotations
amount to a single rotation by less than $\epsilon$. Repeating this $\frac{1}{\epsilon}$ (rounded up) times, we cover the space with intervals of length at most $\epsilon$, each containing one point
of the orbit. Since $\epsilon < d/2$, one of those intervals is completely contained in $I$, which means there is at least one point of our orbit that is in $I$.
Thus every interval contains at least one point of our orbit, proving that the rotation map with irrational $\alpha$ is dense. Note that along the way we essentially also showed that if $\alpha$ is
rational, then the map is periodic, with period given by the denominator of $\alpha$ in reduced form.
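Both conclusions are easy to check numerically; here is a short Python sketch (the helper name, interval choices, and step cap are illustrative assumptions, not from the notes):

```python
import math

def orbit_hits_interval(alpha, a, b, max_steps=100000):
    """Iterate x_{n+1} = x_n + alpha (mod 1) from x_0 = 0 and return the
    first step n at which x_n lands in (a, b), or None if never hit."""
    x = 0.0
    for n in range(max_steps):
        if a < x < b:
            return n
        x = (x + alpha) % 1.0
    return None

# irrational rotation: the orbit is dense, so even a tiny interval is hit
step = orbit_hits_interval(math.sqrt(2) - 1, 0.123, 0.1235)

# rational rotation (alpha = 1/4): only 4 distinct points {0, .25, .5, .75},
# so an interval avoiding all of them is never entered
miss = orbit_hits_interval(0.25, 0.30, 0.40)
```

With $\alpha = \sqrt{2}-1$ the orbit enters even a width-$0.0005$ interval after a few thousand steps, while the rational case cycles through four points forever.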
A Quick Note on Parallelism
Very quick note: all of these methods are embarrassingly parallel, since they perform the same calculation per parameter or trajectory, and each calculation is long. For quasi-Monte Carlo, after
generating "good enough" trajectories, one can evaluate the model at all points in parallel, and then simply compute the GSA index measurement. For FAST, one can do each quadrature in parallel.
Celebratio Mathematica
by Susanna S. Epp and E. Graham Evans, Jr.
We are two of the fifty-five students who completed a doctorate with Kaplansky between 1950 and 1978. This is an astonishing number. Indeed during the years 1964–1969, when we were at the
University of Chicago, Kap oversaw an average of three completed dissertations a year despite serving as department chair from 1962–1967. His secret, we think, was an extraordinary
instinct for productive avenues of research coupled with a generous willingness to spend time working with his students. He also often encouraged students to run a seminar, with
beginning students presenting background material and advanced students presenting parts of their theses.
When Evans worked with him, Kap was teaching the commutative algebra course that was published soon afterward by Allyn and Bacon. As with each course he taught, he filled it with new
thoughts about the subject. For instance, at one memorable point he experimented to see how much he could deduce if he knew only that \( \operatorname{Ext}^1(A,B) \) was zero. He managed to
get pretty far, but eventually the proofs became unpleasantly convoluted. So he abruptly announced that henceforth, he would assume the full structure of \( \operatorname{Ext}^j(A,B) \),
and the next day he resumed lecturing in his usual polished fashion. This episode was atypical in that he first developed and then cut off a line of inquiry. More frequently, after
commenting on new insights of his own, he would interject questions for students to explore and develop. In his lectures he made the role of non-zero divisors, and hence regular
sequences, central in the study of commutative rings. At one point he gave an elegant proof, avoiding the usual filtration argument, that the zero divisors are a finite union of prime
ideals in the case of finitely generated modules over a Noetherian ring. Then he asked Evans to try to determine what kinds of non-Noetherian rings would have the property that the zero
divisors of finitely generated modules would always be a finite union of primes. One of the ideas in Kap’s proof was just what Evans needed to get the work on his thesis started.
The year that Epp worked with Kap, he was not teaching a course but had gone back to a previous and recurring interest in quadratic forms. A quintessential algebraist, he was
interested in exploring and expanding classical results into more abstract settings. Just as in his courses he tossed out questions for further investigation, in private sessions
with his students he suggested various lines of inquiry beyond his own work. In Epp’s case this meant exploring the results Kap had obtained in generalizing and extending H. Brandt’s
work on composition of quaternary quadratic forms and trying to determine how many of these results could be extended to general Cayley algebras.
Kap typically scheduled an early morning weekly meeting with each student under his direction. For some it was much earlier than they would have preferred, but for him it followed a daily
swim. He led our efforts mostly by expressing lively interest in what we had discovered since the week before and following up with question after question. Can you prove a simpler case?
Or a more general one? Can you find a counterexample? When one of us arrived disappointed one day, having discovered that a hoped-for conjecture was false, Kap said not to be
discouraged, that in the search for truth negative results are as important as positive ones. He also counseled persistence in other ways, commenting that he himself had had papers
rejected—a memorable statement because it seemed so improbable. Having made contributions in so many fields and having experienced the benefits of cross-fertilization, he
advised being open to exploring new areas. Some of his students may have taken this advice further than he perhaps intended, ultimately working far from their original topic areas at
the National Security Agency, at the Jet Propulsion Laboratory, and in K–12 mathematics education, for example.
Kap derived a great deal of pleasure from having generated 627 mathematical descendants, perhaps especially from meeting his mathematical grandchildren and great-grandchildren.
When one encountered him at the MSRI bus stop one day and, not knowing what to say, commented on the weather, Kap responded with a smile, “Cut the crap. Let’s talk mathematics.” They did,
and he became one of the many students Kap mentored long after he retired.
THE LIGHT OF NUMBERS - 0 :: Centrumserafin
The Cosmic Story begins with the number 0. It will only make complete sense to us if we gradually acquire all of it. Starting with the 0, then 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 144
0 - UNLIMITED POTENTIAL
How we approach the number 0 through the wisdom of human subconsciousness
* In all cultures, the circle symbolises perfection and potential.
* King Arthur and his Knights made decisions sitting at the Round Table, which symbolises the principle that nobody takes the upper hand.
* Rosette - a circular window above the main entrance of cathedrals. It serves to express the perfection and omnipresence of God. (Rosettes often used the symbolism of numbers to convey other meanings as well.)
* The circle symbolises interconnection: the Olympic rings.
* The circle symbolises interconnection and potential: newlyweds exchange rings.
* The circle returns energy (round buildings, round corners).
* When a road ends in itself (when something determines itself) and does not lead to a specific destination, then we say that we are "going round in circles", or we mention "a vicious circle".
What shape is perfect? It's the circle. The points on the circle are all the same distance from its centre. Each part of the circle's curve is equal to any other. The circle can absorb and embrace
(and therefore contain) 'EVERYTHING' within it. It has no beginning and it has no end. The circle can roll anywhere, any time. It has no fixed position. It's perfectly variable.
We can therefore see the circle as a symbol of the principle that EVERYTHING has its origin in the ABSOLUTE, UNMANIFESTED, VARIABLE SOURCE, in other words, everything has its origin in the ABSOLUTE,
ALL-EMBRACING NOTHINGNESS. We therefore start our symbolic numerical description of the mystery of being with just the circle.
The zero - with no beginning and no end - the absolute SOURCE
The knowledge of the zero and the ways of its usage reflect the intellectual level of the culture using it. Europeans only 'discovered' the zero at some point in the 11th century. The fascinating,
wise Mayans used the zero. Their symbol for this number is amazing. It's commonly said that the Mayan zero symbolises a seashell. This may be so. The Mayan zero can also be viewed as a symbol of an
eye. And eyes are the windows of the soul, as we know. The depicted Mayan 0 also virtually symbolises the principles of the number 3, triality... In essence, it resembles the well-known Eye of
Providence within a triangle.
The Mayan 0 may also symbolize something else, something quite significant. The process of transformation of the unmanifested into the manifested. We will demonstrate it in a moment with the number
The Mayan symbol for the zero
Numerical Symbol: ZERO
The principles of the number 0: ABSOLUTE SOURCE, PERFECTION, ABSOLUTE POTENTIAL, UNMANIFESTED POTENTIAL
The shape of the numerical symbol: CIRCLE
Qualities of the shape: A MOST PERFECT SHAPE, ROTATION, COMPLETENESS, MOVEMENT IN ALL DIRECTIONS
39.1.1 Slotine and Li
Citation: Slotine, J. E., Li, W., "Applied Nonlinear Control", Prentice Hall, 1991.
Fundamentals of Lyapunov theory
Control of Multi-input physical systems
Comments: Directed to mechanical undergraduate students.
39.1.2 VandeVegte
Citation: VandeVegte, John, "Feedback Control Systems", Prentice Hall, Second Edition, 1990.
Transfer function models of physical systems
Modeling of feedback systems and controllers
Transient performance and the s-plane
The performance and dynamic compensation of feedback systems
Digital control system analysis and design
Introduction to state space design
Multivariable systems in the frequency domain
App: Vectors matrices and determinants
App: Computer aids for analysis and design
Comments: Directed to mechanical undergraduate students. Chapters cover a range of topics and systems including motors, pneumatics, fluids, thermo, etc. Chapters 1-8 use classical design techniques.
Chapters 9-10 present z-transform methods for controller analysis; some design issues are presented.
39.1.3 Others
Close, C., Frederick, D., and Newell, J., "Modeling and Analysis of Dynamic Systems", 3rd ed.
- some problems have answers, not solutions
Eronini "System Dynamics and Control"
- delays the use of LaPlace - uses state variables first
- network examples in Matlab/Mathcad
- chapter at end on computer based control
- answers to selected problems
- some Matlab examples in boxes
- example problems with solutions
- not suitable for an introductory course
The Remarkable Identity: (a-b)³ + (b-c)³ + (c-a)³ = 3(a-b)(b-c)(c-a)
This seemingly complex equation holds a fascinating truth in the world of algebra. It's a powerful identity that allows us to simplify expressions and solve problems in a more efficient way. Let's
delve into its origins, proof, and some interesting applications.
Understanding the Identity
The equation states that the sum of the cubes of the differences between three variables (a, b, and c) is equal to three times the product of those differences. This may appear convoluted at first
glance, but the beauty lies in its simplicity and elegance.
Proving the Identity
There are a couple of ways to prove this identity:
• Direct Expansion: We can expand the left-hand side of the equation using the binomial theorem. This involves multiplying out the cubes and simplifying the resulting terms. The process can be
tedious but leads to the desired result.
• Factorization: A more elegant approach involves factoring the left-hand side. We can use the algebraic identity:

x³ + y³ + z³ - 3xyz = (x+y+z)(x² + y² + z² - xy - xz - yz)

Notice that if we set x = a-b, y = b-c, and z = c-a, then x + y + z = 0, and we get:

(a-b)³ + (b-c)³ + (c-a)³ - 3(a-b)(b-c)(c-a) = (a-b+b-c+c-a)((a-b)² + (b-c)² + (c-a)² - (a-b)(b-c) - (a-b)(c-a) - (b-c)(c-a))

The first factor on the right-hand side is zero, leaving us with:

(a-b)³ + (b-c)³ + (c-a)³ = 3(a-b)(b-c)(c-a)
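The identity can also be spot-checked numerically. The following short Python sketch (purely illustrative; the helper names are my own) compares both sides on random triples:

```python
import random

def lhs(a, b, c):
    """Left-hand side: sum of cubes of the pairwise differences."""
    return (a - b) ** 3 + (b - c) ** 3 + (c - a) ** 3

def rhs(a, b, c):
    """Right-hand side: three times the product of the differences."""
    return 3 * (a - b) * (b - c) * (c - a)

# the identity holds for every triple; floats need a tiny tolerance
rng = random.Random(0)
for _ in range(1000):
    a, b, c = (rng.uniform(-10, 10) for _ in range(3))
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-8
```

For an integer example, a = 1, b = 2, c = 4 gives (-1)³ + (-2)³ + 3³ = 18 on the left and 3·(-1)·(-2)·3 = 18 on the right.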
This identity finds its applications in various areas of mathematics:
• Polynomial Simplification: It can be used to simplify expressions involving cubes of differences.
• Solving Equations: The identity can help us solve certain types of cubic equations by factoring.
• Geometric Interpretations: The identity has a geometric interpretation related to volumes of parallelepipeds.
The identity (a-b)³ + (b-c)³ + (c-a)³ = 3(a-b)(b-c)(c-a) stands as a testament to the elegance and interconnectedness of mathematics. While it might appear complex at first, its proof and
applications showcase its power and beauty. This remarkable identity is a valuable tool for mathematicians, students, and anyone seeking a deeper understanding of algebraic expressions and their
intricate relationships.
Algebra Calculators
Algebra calculators with solved examples, worked steps, and step-by-step calculations help you practice and learn equations and expressions with unknown variables. Computing or verifying the results
of quadratic equations, arithmetic between vectors and complex numbers, equation-solving operations, and more is easy with these algebraic calculators. The main objective of these calculators is to
assist students, professionals, and researchers in quickly performing or verifying algebraic calculations to analyze, determine, and solve many complex problems in physics, engineering, and business
math.
An Algebraic Jost-Schroer Theorem for Massive Theories
Jens Mund
December 07, 2010
We consider a purely massive local relativistic quantum theory specified by a family of von Neumann algebras indexed by the space-time regions. We assume that, affiliated with the local algebras,
there are operators which create only single particle states from the vacuum and are well-behaved under the space-time translations. Strengthening a result of Borchers, Buchholz and Schroer, we show
that then the theory is unitarily equivalent to that of a free field for the corresponding particle type. We admit particles with any spin and localization of the charge in space-like cones, thereby
covering the case of string-localized covariant quantum fields.
string localized quantum fields
Proof That Santa Doesn’t Exist – For Nerds!
There are approximately two billion children (persons under 18) in the world. However, since Santa does not visit children of Muslim, Hindu, Jewish, or Buddhist (except maybe in Japan) religions,
this reduces the workload for Christmas night to 15% of the total, or 378 million (according to the population reference bureau). At an average (census) rate of 3.5 children per household, that comes
to 108 million homes, presuming there is at least one good child in each.
Santa has about 31 hours of Christmas to work with, thanks to the different time zones and the rotation of the earth, assuming east to west (which seems logical). This works out to 967.7 visits per
second. This is to say that for each Christian household with a good child, Santa has around 1/1000th of a second to park the sleigh, hop out, jump down the chimney, fill the stockings, distribute
the remaining presents under the tree, eat whatever snacks have been left for him, get back up the chimney, jump into the sleigh and get onto the next house.
Assuming that each of these 108 million stops is evenly distributed around the earth (which, of course, we know to be false, but will accept for the purposes of our calculations), we are now talking
about 0.78 miles per household; a total trip of 75.5 million miles, not counting bathroom stops or breaks.
This means Santa’s sleigh is moving at 650 miles per second – 3,000 times the speed of sound. For purposes of comparison, the fastest man made vehicle, the Ulysses space probe, moves at a pokey 27.4
miles per second, and a conventional reindeer can run (at best) 15 miles per hour. The payload of the sleigh adds another interesting element. Assuming that each child gets nothing more than a medium
sized LEGO set (two pounds), the sleigh is carrying over 500 thousand tons, not counting Santa himself. On land, a conventional reindeer can pull no more than 300 pounds. Even granting that flying
reindeer can pull 10 times the normal amount, the job can’t be done with eight or even nine of them -Santa would need 360,000 of them. This increases the payload, not counting the weight of the
sleigh, another 54,000 tons, or roughly seven times the weight of the Queen Elizabeth (the ship, not the monarch).
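The arithmetic can be rechecked in a few lines of Python (all inputs below are the joke's own assumptions, not independent facts). The visits-per-second figure reproduces exactly; the mileage and speed come out somewhat higher than the quoted 75.5 million miles and 650 miles per second, so those figures evidently trace back to a smaller household count:

```python
# Re-derive the figures from the text's stated assumptions.
children = 378e6                    # claimed "good-list" population
homes = children / 3.5              # census average of 3.5 children/home
seconds = 31 * 3600                 # 31 hours of Christmas night
visits_per_second = homes / seconds # ~967.7, matching the text

miles_per_home = 0.78               # assumed spacing between stops
total_miles = homes * miles_per_home
speed = total_miles / seconds       # miles per second

payload_tons = children * 2 / 2000  # one 2 lb LEGO set per child
```

With 108 million homes the trip is about 84 million miles at roughly 755 miles per second, and the LEGO payload alone is 378,000 tons; either way the reindeer are doomed.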
A mass of nearly 600,000 tons traveling at 650 miles per second creates enormous air resistance – this would heat up the reindeer in the same fashion as a spacecraft re-entering the earth’s
atmosphere. The lead pair of reindeer would absorb 14.3 quintillion joules of energy per second each. In short, they would burst into flames almost instantaneously, exposing the reindeer behind them
and creating deafening sonic booms in their wake. The entire reindeer team would be vaporized within 4.26 thousandths of a second, or right about the time Santa reaches the fifth house on his trip.
Not that it matters, however, since Santa, as a result of accelerating from a dead stop to 650 m.p.s. in .001 seconds, would be subjected to acceleration forces of 17,000 g’s.
A 250 pound Santa (which seems ludicrously slim considering all the high calorie snacks he must have consumed over the years) would be pinned to the back of the sleigh by 4,315,015 pounds of force,
instantly crushing his bones and organs and reducing him to a quivering blob of pink goo. Therefore, if Santa did exist, he's dead now. MERRY CHRISTMAS!!!
How to simplify IT using the system complexity ratio
Big, old, large companies have large, inefficient, and complex IT systems associated with erratic operations, difficulty maintaining them, and difficulty adapting them to new needs. Consequently,
reducing complexity across the entire system portfolio has been a common focus in most large and complex organizations. Conversely, trendy young tech companies are touted as the ideals of system
simplicity and efficiency. Indeed, Netflix can sport an eyewateringly simple architecture, leaving more mature companies and public organizations behind with a perception that their legacy systems
are hopelessly complex.
The issue is that sometimes systems are complex because they handle complex problem spaces. By comparison, Netflix and most SaaS companies build systems to handle kindergarten complexity compared to
most banks or tax authorities, for example. Netflix just needs to be able to deliver a file to you on demand and subsequently invoice you. In contrast, tax authorities need to interpret tax laws and
rulings continuously (and they are rarely made to minimize system complexity). Banks have multiple complex regulatory requirements to guide their system development.
Conceptualizing system complexity
That does not mean legacy systems cannot be made simpler. What we want to know is whether a system is too complex and has the potential to be simplified and, in particular, the degree to which this
potential exists.
To understand that, we first need to distinguish two types of complexity: intrinsic complexity and extrinsic complexity. Intrinsic complexity refers to the complexity of the problem space the system
is intended to manage. In contrast, extrinsic complexity refers to the complexity introduced in handling the problem area, that is, the total system complexity. While there are different meanings of
complexity in other disciplines, system complexity is usually considered to be driven by three factors:
1. Number of elements – all else being equal, a system of three components is simpler than one with 3.000
2. Number of relations between elements – the relations are defined by interactions between the elements of the system. The more different elements interact with each other, the more complex the system.
3. Number of states of elements – the states of the elements drive complexity because the different states usually have consequences for the interactions between elements. This is to be
distinguished from other accidental information about the element.
Intrinsic complexity is, thus, the inherent complexity of the problem space. If we look at a way to gauge this, we can look at it from an information angle. (1) The number of elements would be the
number of entities in a logical model and the number of attributes that describe the problem space. (2) The number of relations are the relationships between entities in this logical model, and (3)
the number of states is defined by the number of different states the attributes of the entities can take on. Note here that we are not talking about information about the entity but only predefined
states. For example, if we have a person entity, the name is not a state, but the gender is. Only the state is likely to drive complexity since rules are likely to be built on it, not the name. This
measure of complexity is computable not only in principle but also in practice.
We can compare this with the extrinsic complexity, which is the actual complexity of the resulting implementation of the system handling the problem space. The same approach we took to the logical
model can be applied to the system implementation. Here, we would calculate the complexity of the physical data model of the system implementation designed to handle the problem area. Again, we would
count the number of entities, attributes, relations, and states.
The system complexity ratio
These two concepts, intrinsic and extrinsic complexity, provide an interesting tool with which to measure and conceptualize complexity and simplification potential. The critical ratio, which we can
call the system complexity ratio, is extrinsic to intrinsic complexity, that is, how complex the system is compared to the complexity of the problem it is made to handle. If it is one, the system
perfectly contains the problem space, and no further simplification potential exists. If it is below one, the resulting system does not capture the problem space adequately, i.e., it is too
simplistic. If the system complexity ratio is above one, the system is too complex and could meaningfully be simplified.
The system complexity ratio would be expected to be above one because it is necessary to create artifacts other than those in the problem space to render the system usable, such as managing users,
their roles, and different technical artifacts. It should, however, not be much greater than one. If the system complexity ratio approaches five, something is terribly wrong.
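As a sketch of how such a measurement might look in practice (the article does not prescribe a formula; the unweighted sum below and all counts are illustrative assumptions), one could score a model by the three complexity drivers and take the ratio:

```python
def complexity_score(entities, relations, states):
    """Toy complexity score: a plain count of the three drivers named
    in the text (elements, relations, element states). A real metric
    might weight these differently."""
    return entities + relations + states

def system_complexity_ratio(intrinsic, extrinsic):
    """extrinsic / intrinsic: ~1 is ideal, < 1 means the system is too
    simplistic for its problem space, > 1 means simplification potential."""
    return extrinsic / intrinsic

# hypothetical counts: the logical model of the problem space vs the
# physical data model of the implemented system
intrinsic = complexity_score(entities=40, relations=55, states=120)
extrinsic = complexity_score(entities=90, relations=160, states=400)
ratio = system_complexity_ratio(intrinsic, extrinsic)
```

Here the ratio comes out to roughly 3, which by the article's rule of thumb signals substantial simplification potential well before the "terribly wrong" threshold of five.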
The boundary of simplicity
Calls for simplification are justified, but there is a boundary of complexity that cannot be passed unless the system’s ability to deal with the problem space is compromised. If the system complexity
ratio is below one and we make the system simpler than the problem space it is supposed to handle, it will necessarily be deficient in one way or another.
Einstein is often attributed the aphorism: “A theory should be as simple as possible but no simpler.” This is a concise statement of the general gist of this thinking. But there seems to be no
evidence that he actually did say that. Instead, he said the following:
“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a
single datum of experience.” (‘On the Method of Theoretical Physics’, lecture delivered at Oxford, 10 June 1933)
While not as quotable, it is more precise and even better as a way to think about complexity. Paraphrasing it, we could say that the supreme goal of a solution is to reduce the extrinsic complexity
of the system as much as possible but not beyond the intrinsic complexity of the problem space or, in other words, to reduce the system complexity ratio towards one but never beneath one.
1999 Coin Proof Set Value: A Numismatic Treasure
What is the 1999 coin proof set value? This is a question that many coin collectors and enthusiasts have asked themselves. The 1999 coin proof set is a special set of coins that was released by the
United States Mint in 1999. The set includes all of the circulating coins that were produced that year, including the penny, nickel, dime, quarter, and half dollar. These coins were struck on special
planchets and have a mirror-like finish.
Editor’s Notes: The 1999 coin proof set value is a popular topic among coin collectors. This is because the set is relatively scarce and has a high collector value. In this guide, we will provide you
with all of the information that you need to know about the 1999 coin proof set value.
We have done some analysis and digging, and we have put together this guide to help you make the right decision. In this guide, we will cover the following topics:
• What is the 1999 coin proof set?
• How much is the 1999 coin proof set worth?
• Factors that affect the value of the 1999 coin proof set
• How to buy and sell the 1999 coin proof set
So, whether you are a seasoned coin collector or just starting, this guide has something for you. Let’s get started!
1999 coin proof set value
The 1999 coin proof set value is a topic of interest to many coin collectors and enthusiasts. This set of coins was released by the United States Mint in 1999 and includes all of the circulating
coins that were produced that year. These coins were struck on special planchets and have a mirror-like finish. The value of the set can vary depending on a number of factors, including the condition
of the coins, the rarity of the set, and the overall demand for the set.
• Mintage: The mintage of the 1999 coin proof set was 791,382.
• Composition: The coins in the set are made of a clad composition, which is a mixture of copper and nickel.
• Condition: The condition of the coins in the set can affect the value of the set. Coins that are in mint condition will be worth more than coins that are damaged or worn.
• Rarity: The rarity of the set can also affect the value of the set. The 1999 coin proof set is not a particularly rare set, but it is not as common as some other proof sets.
• Demand: The overall demand for the set can also affect the value of the set. If there is a high demand for the set, then the value of the set will be higher.
• Strike: The strike of the coins in the set can affect the value of the set. Coins that have a strong strike will be worth more than coins that have a weak strike.
• Luster: The luster of the coins in the set can affect the value of the set. Coins that have a bright luster will be worth more than coins that have a dull luster.
These are just some of the factors that can affect the value of the 1999 coin proof set. It is important to note that the value of the set can vary depending on the individual circumstances of the
sale. If you are interested in buying or selling a 1999 coin proof set, it is important to do your research and to consult with a professional coin dealer.
The mintage of a coin refers to the number of coins that were produced. The mintage of the 1999 coin proof set was 791,382. This means that there are 791,382 of these sets in existence. The mintage
of a coin can affect its value. Coins that have a lower mintage are generally worth more than coins that have a higher mintage. This is because coins with a lower mintage are more rare.
• Rarity: The mintage of a coin can affect its rarity. Coins that have a lower mintage are generally more rare than coins that have a higher mintage. This is because there are fewer of them in existence.
• Value: The mintage of a coin can affect its value. Coins that have a lower mintage are generally worth more than coins that have a higher mintage. This is because they are more rare.
• Demand: The mintage of a coin can affect the demand for the coin. Coins that have a lower mintage are generally in higher demand than coins that have a higher mintage. This is because there are
fewer of them available.
The mintage of the 1999 coin proof set is a factor that can affect its value. The mintage of this set is relatively low, which means that it is more rare than some other proof sets. This can make it
more valuable to collectors.
The composition of a coin can affect its value. Coins that are made of precious metals, such as gold or silver, are generally worth more than coins that are made of base metals, such as copper or
nickel. However, the composition of a coin is not the only factor that affects its value. Other factors, such as the rarity of the coin and the demand for the coin, can also affect its value.
• Rarity: The composition of a coin can affect its rarity. Coins that are made of precious metals are generally more rare than coins that are made of base metals. This is because precious metals
are more valuable, and therefore, there are fewer of them in circulation.
• Value: The composition of a coin can affect its value. Coins that are made of precious metals are generally worth more than coins that are made of base metals. This is because precious metals are
more valuable, and therefore, there is more demand for them.
• Demand: The composition of a coin can affect the demand for the coin. Coins that are made of precious metals are generally in higher demand than coins that are made of base metals. This is
because precious metals are more valuable, and therefore, more people want them.
The composition of the 1999 coin proof set is a factor that can affect its value. The coins in this set are made of a clad composition, which is a mixture of copper and nickel. This composition is
not as valuable as gold or silver, but it is still more valuable than base metals. This can make the set more valuable to collectors.
The condition of the coins in the 1999 coin proof set is a major factor in determining its value. Coins that are in mint condition, meaning that they have no scratches, dings, or other damage, will
be worth more than coins that are damaged or worn. This is because mint condition coins are more rare and desirable to collectors.
• Facet 1: Appearance
The appearance of the coins in the set is a key factor in determining their condition. Coins that have a bright, shiny surface and no scratches or other damage will be considered to be in mint
condition. Coins that have been damaged or worn will have a duller surface and may have scratches or other imperfections.
• Facet 2: Strike
The strike of the coins in the set is another important factor in determining their condition. Coins that have a strong strike will have sharp, well-defined details. Coins that have a weak strike
will have soft, mushy details.
• Facet 3: Luster
The luster of the coins in the set is also a factor in determining their condition. Coins that have a bright, reflective luster will be considered to be in mint condition. Coins that have a dull
luster may have been damaged or worn.
• Facet 4: Color
The color of the coins in the set can also be a factor in determining their condition. Coins that have a bright, even color will be considered to be in mint condition. Coins that have a dull or
uneven color may have been damaged or worn.
These are just a few of the factors that can affect the condition of the coins in the 1999 coin proof set. By understanding these factors, you can better assess the condition of the coins in your set
and determine its value.
Rarity is a key factor in determining the value of a coin or coin set. The rarer a set is, the more valuable it will be. This is because rare coins are more difficult to find, and therefore, there is
more demand for them.
• Facet 1: Mintage
The mintage of a coin or coin set refers to the number of pieces that were produced. The mintage of the 1999 coin proof set was 791,382. This is a relatively low mintage, which makes the set more
rare and valuable than sets with a higher mintage.
• Facet 2: Survival rate
The survival rate of a coin or coin set refers to the number of pieces that have survived to the present day. The survival rate of the 1999 coin proof set is not known, but it is estimated to be
relatively high. This is because proof sets are often kept by collectors, which helps to protect them from damage and wear.
• Facet 3: Demand
The demand for a coin or coin set refers to the number of people who want to own it. The demand for the 1999 coin proof set is relatively high, as it is a popular set among collectors. This high
demand helps to support the value of the set.
Overall, the rarity of the 1999 coin proof set is a factor that contributes to its value. The set is not particularly rare, but it is not as common as some other proof sets. This makes it a desirable
set for collectors, and it helps to support its value.
The demand for a coin or coin set is a key factor in determining its value. The demand for a set is influenced by a number of factors, including the rarity of the set, the condition of the set, and
the overall popularity of the set. The 1999 coin proof set is a popular set among collectors, and this has helped to support its value. This is because proof sets are often seen as a good investment.
There are a number of reasons why the demand for the 1999 coin proof set is high. First, the set is relatively rare. The mintage of the set was only 791,382, which is lower than the mintage of many
other proof sets. Second, the coins in the set are typically in excellent condition: they are struck on special planchets and have a mirror-like finish, which makes them very attractive to collectors. Third, the
set is popular because it contains all of the circulating coins that were produced in 1999. This makes it a great way to commemorate the year.
The high demand for the 1999 coin proof set has helped to support its value. The set is currently worth around $100, which is a significant increase over its original issue price of $12.95. The value
of the set is likely to continue to increase in the future, as it is a popular set among collectors.
Factor Effect on demand
Rarity A rare set will be in higher demand than a common set.
Condition A set in good condition will be in higher demand than a set in poor condition.
Popularity A popular set will be in higher demand than an unpopular set.
The strike of a coin refers to the force with which the coin was struck by the dies during the minting process. A strong strike will produce a coin with sharp, well-defined details, while a weak
strike will produce a coin with soft, mushy details. The strike of a coin can be affected by a number of factors, including the condition of the dies, the amount of pressure applied during the
striking process, and the type of metal used to make the coin.
• Facet 1: Die condition
The condition of the dies used to strike the coins can affect the strength of the strike. Dies that are in good condition will produce coins with a strong strike, while dies that are worn or
damaged will produce coins with a weak strike.
• Facet 2: Striking pressure
The amount of pressure applied during the striking process can also affect the strength of the strike. Coins that are struck with a high amount of pressure will have a strong strike, while coins
that are struck with a low amount of pressure will have a weak strike.
• Facet 3: Coin metal
The type of metal used to make the coin can also affect the strength of the strike. Coins that are made of harder metals, such as gold and silver, will have a stronger strike than coins that are
made of softer metals, such as copper and nickel.
The strike of the coins in the 1999 coin proof set is a factor that can affect the value of the set. Coins that have a strong strike will be worth more than coins that have a weak strike. This is
because coins with a strong strike are more attractive to collectors. Collectors prefer coins that have sharp, well-defined details, and coins with a weak strike will often have soft, mushy details.
The luster of a coin refers to its shine or brilliance. It is caused by the way light interacts with the surface of the coin. Coins with a bright luster are more attractive to collectors and,
therefore, more valuable. This is because a bright luster indicates that the coin has been well-preserved and has not been damaged or worn.
The luster of the coins in the 1999 coin proof set is a factor that can affect the value of the set. Coins in a proof set are struck on special planchets and have a mirror-like finish. This gives
them a bright and lustrous appearance. Coins that have been well-preserved will have a bright luster, while coins that have been damaged or worn will have a dull luster.
The following table shows the relationship between the luster of the coins in the 1999 coin proof set and the value of the set:
Luster Value
Bright $100 or more
Dull $50-$75
As you can see, the luster of the coins in the 1999 coin proof set can have a significant impact on the value of the set. If you are considering buying or selling a 1999 coin proof set, it is
important to take the luster of the coins into account.
FAQs about 1999 coin proof set value
The 1999 coin proof set is a popular collector’s item, and its value can vary depending on a number of factors. Here are some frequently asked questions about the 1999 coin proof set value:
Question 1: How much is the 1999 coin proof set worth?
The value of the 1999 coin proof set can vary depending on a number of factors, including the condition of the coins, the rarity of the set, and the overall demand for the set. However, the average
value of a 1999 coin proof set is around $100.
Question 2: What factors affect the value of the 1999 coin proof set?
The following factors can affect the value of the 1999 coin proof set:
• Condition
• Rarity
• Demand
• Strike
• Luster
Question 3: How can I tell if my 1999 coin proof set is valuable?
There are a few things you can look for to determine if your 1999 coin proof set is valuable:
• Condition: The condition of the coins in the set is a major factor in determining its value. Coins that are in mint condition will be worth more than coins that are damaged or worn.
• Rarity: The rarity of the set can also affect its value. The 1999 coin proof set is not a particularly rare set, but it is not as common as some other proof sets.
• Demand: The overall demand for the set can also affect its value. The 1999 coin proof set is a popular set among collectors, and this has helped to support its value.
Question 4: Where can I buy or sell a 1999 coin proof set?
There are a number of places where you can buy or sell a 1999 coin proof set. You can find these sets for sale at coin dealers, online auction sites, and at coin shows.
Question 5: How can I protect my 1999 coin proof set?
There are a few things you can do to protect your 1999 coin proof set:
• Store the set in a cool, dry place.
• Handle the coins with care.
• Do not clean the coins.
The 1999 coin proof set is a valuable collector’s item. The value of the set can vary depending on a number of factors, but the average value is around $100. If you are thinking about buying or
selling a 1999 coin proof set, it is important to do your research and to consult with a professional coin dealer.
Next steps
If you are interested in learning more about the 1999 coin proof set, there are a number of resources available online. You can also visit a local coin dealer to get more information about the set.
Tips for Determining the Value of a 1999 Coin Proof Set
Determining the value of a 1999 coin proof set requires careful consideration of several factors. Here are some tips to guide you in accurately assessing its worth:
Tip 1: Inspect the Condition of the Coins
The condition of the coins is a primary determinant of value. Examine the coins for any scratches, dents, or other signs of wear and tear. Coins in pristine condition, with sharp details and a
lustrous finish, will command a higher price.
Tip 2: Determine the Rarity of the Set
The mintage of the 1999 coin proof set was 791,382, which is relatively low compared to other proof sets. However, the survival rate of these sets is unknown, so the actual rarity may vary.
Tip 3: Assess the Strike Quality
The strike quality refers to the sharpness and precision of the coin’s design. A strong strike will result in well-defined details, while a weak strike may produce a blurry or indistinct appearance.
Coins with a strong strike are more desirable to collectors.
Tip 4: Evaluate the Luster
Luster refers to the shine or brilliance of the coin’s surface. Coins with a bright, mirror-like luster are more attractive and valuable than those with a dull or hazy appearance. Proper storage and
handling are crucial for preserving the luster.
Tip 5: Consider the Demand and Market Value
The demand for the 1999 coin proof set influences its value. Proof sets are generally popular among collectors, but the specific demand for this set may fluctuate based on market conditions.
Researching recent sales and consulting with reputable coin dealers can provide insights into the current market value.
By following these tips, you can effectively determine the value of a 1999 coin proof set. Carefully assessing the condition, rarity, strike quality, luster, and market demand will help you make an
informed decision when buying or selling this valuable collectible.
Next Steps
If you have a 1999 coin proof set and are considering its value, consult with a professional coin dealer or numismatic expert for a detailed evaluation and appraisal.
The value of the 1999 coin proof set is influenced by a multitude of factors, including its condition, rarity, strike quality, luster, and market demand. Collectors and investors alike recognize the
significance of these sets, making them highly sought-after items.
Understanding the factors that contribute to the value of a 1999 coin proof set empowers collectors to make informed decisions when buying, selling, or valuing these treasured pieces. By carefully
assessing each aspect and considering the market trends, collectors can ensure the preservation and appreciation of these valuable numismatic artifacts.
A slope field (also called a direction field^[1]) is a graphical representation of the solutions to a first-order differential equation^[2] of a scalar function. Solutions to a slope field are
functions drawn as solid curves. A slope field shows the slope of a differential equation at certain vertical and horizontal intervals on the x-y plane, and can be used to determine the approximate
tangent slope at a point on a curve, where the curve is some solution to the differential equation.
The slope field of \( \frac{dy}{dx} = x^2 - x - 2 \), with the blue, red, and turquoise lines being \( \frac{x^3}{3} - \frac{x^2}{2} - 2x + 4 \), \( \frac{x^3}{3} - \frac{x^2}{2} - 2x \), and \( \frac{x^3}{3} - \frac{x^2}{2} - 2x - 4 \), respectively.
Standard case
The slope field can be defined for the following type of differential equation:
\[ y' = f(x, y), \]
which can be interpreted geometrically as giving the slope of the tangent to the graph of the differential equation's solution (integral curve) at each point (x, y) as a function of the point coordinates.
It can be viewed as a creative way to plot a real-valued function of two real variables \( f(x, y) \) as a planar picture. Specifically, for a given pair \( (x, y) \), a vector with the components \( [1, f(x, y)] \) is drawn at the point \( (x, y) \) on the \( (x, y) \)-plane. Sometimes, the vector \( [1, f(x, y)] \) is normalized to make the plot better looking for a human eye. A set of pairs \( (x, y) \) making a rectangular grid is typically used for the drawing.
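As a sketch of this construction (independent of any particular plotting package), the normalized slope marks for an example equation can be computed on a grid like this; the choice of `f` and the grid spacing are illustrative:

```python
# Compute normalized slope-mark vectors [1, f(x, y)] / |[1, f(x, y)]|
# on a rectangular grid, for the example equation y' = x**2 - x - 2.
import math

def f(x, y):
    return x**2 - x - 2  # right-hand side of y' = f(x, y)

def slope_marks(xs, ys):
    """Map each grid point to a unit direction vector for its slope mark."""
    marks = {}
    for x in xs:
        for y in ys:
            s = f(x, y)
            norm = math.hypot(1.0, s)        # length of the vector [1, s]
            marks[(x, y)] = (1.0 / norm, s / norm)
    return marks

grid = [i * 0.5 for i in range(-4, 5)]       # -2.0 .. 2.0 in steps of 0.5
marks = slope_marks(grid, grid)
```

Because every stored vector is a unit vector, all slope marks come out the same length, matching the normalization mentioned above.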
An isocline (a series of lines with the same slope) is often used to supplement the slope field. In an equation of the form \( y' = f(x, y) \), the isocline is a line in the \( (x, y) \)-plane obtained by setting \( f(x, y) \) equal to a constant.
General case of a system of differential equations
Given a system of differential equations,
\[
\begin{aligned}
\frac{dx_1}{dt} &= f_1(t, x_1, x_2, \ldots, x_n) \\
\frac{dx_2}{dt} &= f_2(t, x_1, x_2, \ldots, x_n) \\
&\;\;\vdots \\
\frac{dx_n}{dt} &= f_n(t, x_1, x_2, \ldots, x_n)
\end{aligned}
\]
the slope field is an array of slope marks in the phase space (in any number of dimensions, depending on the number of relevant variables; for example, two in the case of a first-order linear ODE, as seen to the right). Each slope mark is centered at a point \( (t, x_1, x_2, \ldots, x_n) \) and is parallel to the vector
\[
\begin{pmatrix} 1 \\ f_1(t, x_1, x_2, \ldots, x_n) \\ f_2(t, x_1, x_2, \ldots, x_n) \\ \vdots \\ f_n(t, x_1, x_2, \ldots, x_n) \end{pmatrix}.
\]
The number, position, and length of the slope marks can be arbitrary. The positions are usually chosen such that the points \( (t, x_1, x_2, \ldots, x_n) \) make a uniform grid. The standard case, described above, represents \( n = 1 \). The general case of the slope field for systems of differential equations is not easy to visualize for \( n > 2 \).
General application
With computers, complicated slope fields can be made quickly and without tedium, so a practical application, only recently feasible, is to use them to get a feel for what a solution should look like before an explicit general solution is sought. Of course, a computer can also simply solve for one, if it exists.
If there is no explicit general solution, computers can use slope fields (even if they aren’t shown) to numerically find graphical solutions. Examples of such routines are Euler's method, or better,
the Runge–Kutta methods.
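As a minimal sketch (not tied to any particular package), Euler's method simply follows the slope field one small step at a time; the step size below is an illustrative choice:

```python
# Euler's method: starting from (x0, y0), repeatedly step in the
# direction the slope field prescribes at the current point.
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # follow the local slope f(x, y)
        x += h
    return y

# Example: y' = y with y(0) = 1, whose exact solution is e**x.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)  # approximate y(1)
```

As h shrinks the result converges to e ≈ 2.71828; Runge–Kutta methods reach comparable accuracy with far fewer steps.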
Software for plotting slope fields
Different software packages can plot slope fields.
MATLAB / GNU Octave:
funn = @(x, y)y-x; % function f(x, y) = y-x
[x, y] = meshgrid(-5:0.5:5); % intervals for x and y
slopes = funn(x, y); % matrix of slope values
dy = slopes ./ sqrt(1 + slopes.^2); % normalize the line element...
dx = ones(length(dy)) ./ sqrt(1 + slopes.^2); % ...magnitudes for dy and dx
h = quiver(x, y, dx, dy, 0.5); % plot the direction field
set(h, "maxheadsize", 0.1); % alter head size
Maxima:
/* field for y'=xy (click on a point to get an integral curve). Plotdf requires Xmaxima */
plotdf( x*y, [x,-2,2], [y,-2,2]);
SageMath:
# field for y'=xy
plot_slope_field(x*y, (x,-2,2), (y,-2,2))
• Blanchard, Paul; Devaney, Robert L.; and Hall, Glen R. (2002). Differential Equations (2nd ed.). Brooks/Cole: Thomson Learning. ISBN 0-534-38514-1
Maple bilingual - math in upper-secondary calculus
Doing Calculus & Linear Algebra
with the math package
This page explains how to use the math package in undergraduate calculus. Please use math version 3.04 or higher.
For general help on the math package see: ?math.
> restart:
First, make sure that Maple can find the package by assigning the path where the math package is located to libname. If in Windows you have saved the package to drive C, directory `maple7\math`, enter:
> libname := `c:/maple7/math`, libname;
libname := `c:/maple7/math`, "E:\\maple7/lib"
After that assign short names to the package functions:
> with(math);
reading math ini file: e:/maple7/math/math.ini
math v3.6.4 for Maple 7, current as of September 22, 2001 - 16:06
written by Alexander F. Walz, alexander.f.walz@t-online.de
Warning, the protected name extrema has been redefined and unprotected
[Arclen, END, PSconv, V, _Zval, arclen, assumed, asym, cancel,
cartgridR3, cartprod, colplot, cont, curvature, curveplot,
cutzeros, dec, deg, diffquot, diffquotfn, dim, domain,
domainx, ex, extrema, fnull, fnvals, getindets, getreals,
gridplot, inc, inflection, inter, interpol, interpolplot,
isAntiSymmetric, isCont, isDependent, isDiagonal, isDiff,
isEqual, isFilled, isIdentity, isQuadratic, isSymmetric, jump,
lineangle, load, mainDiagonal, makepoly, mat, mean, names,
nondiff, normale, padzero, pointgridR3, pole, printtree, prop,
rad, rangemembers, realsort, recseq, redefdim, reduce,
removable, retrieve, rootof, rotation, roundf, seqby, seqnest,
seqplot, setdef, singularity, slice, slopefn, sortranges,
sortsols, split, symmetry, tangente, tree, un, unique]
Let f be a function of one real variable:
> f := x -> x^2*exp(-x);
Determine the domain of f with math/domain:
> domain(f(x));
math/symmetry checks for symmetry:
> symmetry(f(x));
The intercepts with the x-axis (i.e., the zeros of f) are computed with math/fnull:
> fnull(f(x), x);
The intercept with the y-axis:
> f(0);
f and its first three derivatives:
> f(x);
> f1 := diff(f(x), x);
> f2 := diff(f(x), x$2);
> f3 := diff(f(x), x$3);
Collecting to e^(-x):
> f1 := un(collect(f1, exp(-x)));
> f2 := un(collect(f2, exp(-x)));
> f3 := un(collect(f3, exp(-x)));
You can find extremas with math/ex:
> ex(f(x), x);
Inflections are calculated with math/inflection:
> inflection(f(x), x);
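The Maple outputs are not reproduced here, but for f(x) = x²e⁻ˣ the extrema lie at x = 0 and x = 2 and the inflections at x = 2 ± √2; a finite-difference cross-check of those textbook values (the step sizes are illustrative):

```python
# Finite-difference cross-check of the critical and inflection points of
# f(x) = x**2 * exp(-x): f'(x) = (2x - x**2) e^(-x) vanishes at 0 and 2,
# and f''(x) = (x**2 - 4x + 2) e^(-x) vanishes at 2 -/+ sqrt(2).
import math

def f(x):
    return x**2 * math.exp(-x)

def d(fn, x, h=1e-6):
    """Central-difference first derivative."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

crit = [0.0, 2.0]
infl = [2 - math.sqrt(2), 2 + math.sqrt(2)]

fp_at_crit = [d(f, x) for x in crit]                    # should be ~0
fpp_at_infl = [d(lambda t: d(f, t), x, h=1e-4) for x in infl]  # should be ~0
```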
The limits of f as x approaches -infinity and infinity:
> limit(f(x), x=-infinity);
> limit(f(x), x=infinity);
> restart:
> with(math):
> f := x -> abs(1/10*x^3+27/10)-2;
Find all zeros, returning floating-point numbers; as opposed to fsolve, you do not need to specify intervals, since the default is -10 .. 10 (you can change this by assigning _MathDomain another range).
fnull then divides this interval into even smaller parts, scanning each for zeros. See ?math,fnull for further information.
> fnull(f(x), x);
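fnull's exact internals are only hinted at by the infolevel trace shown later (it delegates to fsolve); as a rough sketch of the same idea, scanning a default domain in small steps and bisecting wherever the sign changes, one might write the following, with the step size and tolerance as illustrative choices:

```python
# Sketch of an interval-scanning root finder in the spirit of fnull:
# split a default domain into small subintervals and bisect each one
# where the function changes sign.
def scan_roots(f, lo=-10.0, hi=10.0, step=0.25, tol=1e-9):
    roots = []
    x = lo
    while x < hi:
        a, b = x, min(x + step, hi)
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:                  # sign change: bisect [a, b]
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        x += step
    return roots

# The worksheet's example: f(x) = |x**3/10 + 27/10| - 2, whose zeros
# solve x**3 = -7 and x**3 = -47.
roots = scan_roots(lambda x: abs(x**3 / 10 + 27 / 10) - 2)
```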
A plot of f shows that f is not differentiable at x=-3 and has a saddle point at x=0. A graph on coordinate paper (horizontal and vertical grid lines) computes math/gridplot.
> gridplot(f(x), x=-5 .. 3, -3 .. 4);
math/un is an interface to unapply; you do not need to specify the indeterminates.
> f1 := un(diff(f(x), x));
> f2 := un(diff(f(x), x$2));
There is no standard Maple function that reports that a function is not differentiable at a point x, here x=-3. But you may check this by entering f1(-3), which raises an exception generated
by the internal help procedure simpl/abs in this case. solve determines a solution of f'(x) = 0 only at x=0:
> solve(f1(x), x);
> is(f2(0) <>0);
math/ex calculates extrema even at points where a function is not differentiable. Note that P(0, 7/10) is a saddle point, not an extremum.
> ex(f(x), x);
You can search for points of a function not being differentiable using math/nondiff:
> nondiff(f(x), x);
math/inflection also determines saddle points.
> inflection(f(x), x);
With math/tangente we now draw a tangent at x = 0, thus plotting the graph of f along with this tangent. You have more options than student/tangent offers to specify the appearance of this tangent,
especially its length, color, and thickness.
> tangente(f(x), x=0);
> curveplot(f(x), x=0, x=-5 .. 3, y=-3 .. 4, length=4, tangentline=[color=navy, thickness=2]);
> restart:
> with(math):
> f := x -> sqrt((4-x)/(2+x));
As you have seen above, math/domain determines the domain of a function in one real variable. Points that do not belong to this domain are denoted with a call to Open.
> domain(f(x));
> symmetry(f(x));
> fnull(f(x), x);
> ex(f(x), x);
> inflection(f(x), x);
> gridplot(f(x), x=-3 .. 5, -1 .. 2, step=[1, 0.5]);
math/cont or math/isCont check whether a function is continuous at a given point. f is continuous at x=4,
> cont(f(x), x=4);
true, left
because the limit that exists at x=4 from the left side
> limit(f(x), x=4, left);
is equal to the value of f at this point:
> f(4);
> restart:
> with(math):
> f := x -> (x^2-3*x+2)/(x^2+2*x-3);
math/singularity is more precise than discont (which it actually uses): it checks whether the function is defined at the points discont returns.
> singularity(f(x), x);
You can analyse these singularities with cont:
> cont(f(x), x=-3);
> cont(f(x), x=1);
This means that the singularity is removable at x=1 (with simplify(f(x)) the zero at -1 in the denominator has vanished).
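That the singularity at x = 1 is removable can also be seen numerically: after cancelling the common factor (x - 1), f behaves like (x - 2)/(x + 3) near 1, approaching -1/4. A quick check (independent of Maple):

```python
# Numeric illustration that the singularity of
# f(x) = (x**2 - 3x + 2) / (x**2 + 2x - 3) at x = 1 is removable:
# values of f near 1 approach the finite limit (1 - 2)/(1 + 3) = -1/4.
def f(x):
    return (x**2 - 3*x + 2) / (x**2 + 2*x - 3)

near = [f(1 + h) for h in (1e-3, 1e-5, -1e-3, -1e-5)]
```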
> fnull(f(x), x);
The result is incorrect (see above)
> domain(f(x), singularity);
since 1 is not part of the domain of f. To see why fnull returns a wrong answer, first delete the remember table of fnull and then set infolevel[fnull] to a value > 0 to see how this function
determines the result:
> infolevel[fnull] := 1: readlib(forget)(fnull);
> fnull(f(x), x);
fnull: using default domain (_MathDomain): -10 .. 10
fnull: Fraction found, now proceeding with numerator: x^2-3*x+2
fnull: using fsolve to determine roots
fnull: Searching for roots in expression x^2-3*x+2
fnull: Searching for roots in derivative 2*x-3
fnull: Roots found in original function: 1.000000000, 2.000000000
fnull: Possible roots found in derivative: 1.500000000
The second line shows that fnull checks whether the function passed is a quotient and then by default only processes its numerator. To suppress this behavior pass the option numerator=false.
> fnull(f(x), x, numerator=false);
fnull: using default domain (_MathDomain): -10 .. 10
fnull: using fsolve to determine roots
fnull: Searching for roots in expression (x^2-3*x+2)/(x^2+2*x-3)
fnull: Searching for roots in derivative (2*x-3)/(x^2+2*x-3)-(x^2-3*x+2)/(x^2+2*x-3)^2*(2*x+2)
fnull: Roots found in original function: 2.000000000
fnull: Possible roots found in derivative: none
Reset infolevel[fnull]:
> infolevel[fnull] := 0:
Now we will compute the asymptote with math/asym:
> asym(f(x), x);
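Because the numerator and denominator have equal degree, the horizontal asymptote is y = 1, the ratio of the leading coefficients; a quick numeric check (independent of Maple):

```python
# The degrees of numerator and denominator match, so f approaches the
# ratio of the leading coefficients, 1/1 = 1, as |x| grows.
def f(x):
    return (x**2 - 3*x + 2) / (x**2 + 2*x - 3)

tail = [f(x) for x in (1e3, 1e6, -1e3, -1e6)]
```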
The slope of f at x=2 using math/slopefn:
> slopefn(f(x), x=2);
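The value slopefn returns here is not shown, but away from x = 1 the function simplifies to (x - 2)/(x + 3), whose derivative 5/(x + 3)² equals 1/5 at x = 2; a finite-difference check:

```python
# Cross-check of the slope of f at x = 2 with a central difference;
# symbolically the derivative of (x - 2)/(x + 3) is 5/(x + 3)**2,
# which equals 1/5 at x = 2.
def f(x):
    return (x**2 - 3*x + 2) / (x**2 + 2*x - 3)

h = 1e-6
slope_at_2 = (f(2 + h) - f(2 - h)) / (2 * h)
```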
The arc length of the curve over the interval [2, 6] with math/arclen:
> arclen(f(x), x=2 .. 6);
> evalf(%);
You can delete the small imaginary part with math/cancel:
> cancel(%, eps=1e-5);
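As a package-independent sanity check of this arc-length value, the integral of sqrt(1 + f'(x)²) over [2, 6] can be approximated by quadrature; the step count and difference step below are illustrative:

```python
# Numeric arc length of f over [2, 6]: sum sqrt(1 + f'(x)**2) dx with a
# midpoint rule. Any curve is at least as long as the chord joining its
# endpoints, which gives a sanity bound on the result.
import math

def f(x):
    return (x**2 - 3*x + 2) / (x**2 + 2*x - 3)

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

n = 10000
dx = (6 - 2) / n
arc = sum(math.sqrt(1 + fprime(2 + (i + 0.5) * dx) ** 2) * dx
          for i in range(n))
chord = math.hypot(6 - 2, f(6) - f(2))   # straight-line distance
```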
> restart: libname := `e:/maple7/math`, libname;
libname := e:/maple7/math, "E:\\maple7/lib"
> with(math):
reading math ini file: e:/maple7/math/math.ini
math v3.6.4 for Maple 6 & 7, current as of September 22, 2001 - 15:34
written by Alexander F. Walz, alexander.f.walz@t-online.de
With trigonometric functions, and inverse transcendental functions in general, solve only returns one solution:
> solve(sin(x), x);
By setting _EnvAllSolutions to true, you will receive general solutions:
> _EnvAllSolutions := true:
> solve(sin(x), x);
If you would like to see all solutions within a specified range, use math/_Zval:
> _Zval(%%, -3*Pi .. 3*Pi);
> evalf(%);
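What _Zval does, instantiating the general solution at every integer within a range, can be sketched for sin(x) = 0, whose general solution is x = πk; the epsilon guard against floating-point boundary effects is an implementation choice:

```python
# Enumerate the solutions x = pi*k of sin(x) = 0 inside [lo, hi],
# the analogue of applying math/_Zval to the general solution.
import math

def zvals(lo, hi, eps=1e-9):
    # eps guards against lo/pi or hi/pi landing a hair off an integer
    k_lo = math.ceil(lo / math.pi - eps)
    k_hi = math.floor(hi / math.pi + eps)
    return [math.pi * k for k in range(k_lo, k_hi + 1)]

sols = zvals(-3 * math.pi, 3 * math.pi)   # k = -3 .. 3, seven solutions
```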
> restart: libname := `e:/maple7/math`, libname;
libname := e:/maple7/math, "E:\\maple7/lib"
> with(math):
reading math ini file: e:/maple7/math/math.ini
math v3.6.4 for Maple 7, current as of September 22, 2001 - 16:06
written by Alexander F. Walz, alexander.f.walz@t-online.de
Warning, the protected name extrema has been redefined and unprotected
math also has a series of special tools:
Trailing zeros of a floating point expression can be deleted with math/cutzeros:
> solve((x-4.11)^4, x);
> op({%});
> cutzeros(%);
math/getreals retrieves all real solutions in a sequence:
> solve(x^3-1, x);
> getreals(%);
math/realsort sorts real values in ascending order:
> folge := 1, 0, exp(1), -Pi;
> sort([folge]);
> realsort(folge);
For many other functions available check the online help: ?math
Lesson 5
Describing Trends in Scatter Plots
Let’s look for associations between variables.
5.1: Which One Doesn’t Belong: Scatter Plots
Which one doesn’t belong?
5.2: Fitting Lines
Experiment with finding lines to fit the data. Drag the points to move the line. You can close the expressions list by clicking on the double arrow.
1. Here is a scatter plot. Experiment with different lines to fit the data. Pick the line that you think best fits the data. Compare it with a partner’s.
2. Here is a different scatter plot. Experiment with drawing lines to fit the data. Pick the line that you think best fits the data. Compare it with a partner’s.
3. In your own words, describe what makes a line fit a data set well.
5.3: Good Fit Bad Fit
The scatter plots both show the year and price for the same 17 used cars. However, each scatter plot shows a different model for the relationship between year and price.
1. Look at Diagram A.
1. For how many cars does the model in Diagram A make a good prediction of its price?
2. For how many cars does the model underestimate the price?
3. For how many cars does it overestimate the price?
2. Look at Diagram B.
1. For how many cars does the model in Diagram B make a good prediction of its price?
2. For how many cars does the model underestimate the price?
3. For how many cars does it overestimate the price?
3. For how many cars does the prediction made by the model in Diagram A differ by more than $3,000? What about the model in Diagram B?
4. Which model does a better job of predicting the price of a used car from its year?
5.4: Practice Fitting Lines
1. Is this line a good fit for the data? Explain your reasoning.
2. Draw a line that fits the data better.
3. Is this line a good fit for the data? Explain your reasoning.
4. Draw a line that fits the data better.
These scatter plots were created by multiplying the \(x\)-coordinate by 3 then adding a random number between two values to get the \(y\)-coordinate. The first scatter plot added a random number
between -0.5 and 0.5 to the \(y\)-coordinate. The second scatter plot added a random number between -2 and 2 to the \(y\)-coordinate. The third scatter plot added a random number between -10 and 10
to the \(y\)-coordinate.
1. For each scatter plot, draw a line that fits the data.
2. Explain why some were easier to do than others.
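Such data sets can be generated with a short script. This is a sketch, not part of the lesson: the slope 3 and the noise ranges come from the text, while the x range, point count, and names are my choices.

```python
import random

random.seed(0)

def make_scatter(noise, n=20):
    # y = 3x plus a random number between -noise and noise
    pts = []
    for _ in range(n):
        x = random.uniform(0, 10)
        y = 3 * x + random.uniform(-noise, noise)
        pts.append((x, y))
    return pts

tight  = make_scatter(0.5)   # clusters closely around the line y = 3x
medium = make_scatter(2)
loose  = make_scatter(10)    # hardest to fit a line to by eye

# every point in the tight plot is within 0.5 vertically of y = 3x
assert all(abs(y - 3 * x) <= 0.5 + 1e-9 for x, y in tight)
assert all(abs(y - 3 * x) <= 10 + 1e-9 for x, y in loose)
```

The smaller the noise range, the more tightly the points hug the line, which is why the first plot is the easiest to fit.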
When a linear function fits data well, we say there is a linear association between the variables. For example, the relationship between height and weight for 25 dogs is modeled well by the linear function whose graph is shown in the scatter plot.
Because the model fits the data well and because the slope of the line is positive, we say that there is a positive association between dog height and dog weight.
What do you think the association between the weight of a car and its fuel efficiency is?
Because the slope of a line that fits the data well is negative, we say that there is a negative association between the fuel efficiency and weight of a car.
• negative association
A negative association is a relationship between two quantities where one tends to decrease as the other increases. In a scatter plot, the data points tend to cluster around a line with negative slope.
Different stores across the country sell a book for different prices.
The scatter plot shows that there is a negative association between the price of the book in dollars and the number of books sold at that price.
• outlier
An outlier is a data value that is far from the other values in the data set.
Here is a scatter plot that shows lengths and widths of 20 different left feet. The foot whose length is 24.5 cm and width is 7.8 cm is an outlier.
• positive association
A positive association is a relationship between two quantities where one tends to increase as the other increases. In a scatter plot, the data points tend to cluster around a line with positive slope.
The relationship between height and weight for 25 dogs is shown in the scatter plot. There is a positive association between dog height and dog weight.
[Solved] In a soccer practice session, the football is kept at ... | Filo
In a soccer practice session, the football is kept at the centre of the field 40 yards from the 10 ft high goalposts. A goal is attempted by kicking the football at a speed of 64 ft/s at an angle of
45° to the horizontal. Will the ball reach the goalpost?
We know that the horizontal range is R = u² sin 2θ / g = (64 ft/s)² × sin 90° / (32 ft/s²) = 128 ft, which exceeds the 120 ft (40 yd) to the goal.
The ball covers the 120 ft horizontally in time t = 120 / (64 cos 45°) ≈ 2.65 s.
In time 2.65 s, the ball travels the horizontal distance of 120 ft (40 yd) and rises to a vertical height y = (64 sin 45°)(2.65) − ½(32)(2.65)² ≈ 7.5 ft, which is less than the 10 ft height of the goalpost. The ball will reach the goalpost.
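The projectile computation can be checked numerically. This is my script, assuming g = 32 ft/s² and converting 40 yd to 120 ft:

```python
import math

g = 32.0                  # ft/s^2
v = 64.0                  # launch speed, ft/s
theta = math.radians(45)
d = 120.0                 # 40 yd to the goal, in ft

R = v**2 * math.sin(2 * theta) / g              # horizontal range
t = d / (v * math.cos(theta))                   # time to cover 120 ft horizontally
y = v * math.sin(theta) * t - 0.5 * g * t**2    # height at that instant

assert R > d              # range 128 ft: the ball reaches the goal line
assert abs(t - 2.65) < 0.01
assert 7 < y < 8          # about 7.5 ft, under the 10 ft crossbar
```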
Practice questions from Concepts of Physics (HC Verma Part I)
Updated On Aug 17, 2023
Topic Motion in a Plane
Subject Physics
Class Class 11
Abstract Algebra and Discrete Mathematics, Banach and Hilbert Spaces
There are two ways to look at euclidean space. It is a vector space with lines and planes and scaling factors, and rigid rotations, and other transformations that respect the linear structure of the
space. Or it is a space with distance, and open and closed sets, and continuous functions that respect the underlying topology.
Over the past 200 years much has been written about vector spaces, and metric spaces - and they almost seem like separate branches of mathematics. But what if a space is both a vector space and a
metric space? This is a banach space, and it is closer to R^n than a vector space or a metric space alone. In fact a finite dimensional banach space is equivalent to R^n. (We'll prove this below.)
However, there are infinite dimensional banach spaces that do not resemble euclidean space.
A normed vector space, also called a normed linear space, is a real vector space S with a norm function denoted |x|. The norm has the following properties.
1. The norm maps S into the nonnegative reals.
2. The norm of x is 0 iff x = 0.
3. For any real number c, |cx| = |c|×|x|.
4. |x+y| ≤ |x|+|y|.
Sometimes the norm is derived from a dot product, but it need not be.
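The four properties can be verified numerically for a familiar norm. A minimal sketch (my code, using the sup norm on R^3; helper names are mine):

```python
import random

def sup_norm(x):
    # |x| = max |x_i|, the sup norm on R^n
    return max(abs(c) for c in x)

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(c, x):
    return tuple(c * a for a in x)

random.seed(1)
for _ in range(1000):
    x = tuple(random.uniform(-5, 5) for _ in range(3))
    y = tuple(random.uniform(-5, 5) for _ in range(3))
    c = random.uniform(-5, 5)
    assert sup_norm(x) >= 0                                          # property 1
    assert abs(sup_norm(scale(c, x)) - abs(c) * sup_norm(x)) < 1e-9  # property 3
    assert sup_norm(add(x, y)) <= sup_norm(x) + sup_norm(y) + 1e-9   # property 4
assert sup_norm((0.0, 0.0, 0.0)) == 0                                # property 2
```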
This norm can be turned into a metric, thus turning our normed space into a metric space. Let the distance d(x,y) be the norm of x-y. Since x-y is -1 times y-x, property 3 above tells us this is well defined and symmetric: d(x,y) = d(y,x).
By property 2, d(x,y) = 0 iff x = y.
Let a triangle have vertices z, z+x, and z+x+y. Subtract z and apply property 4. This establishes the triangular inequality. Thus d becomes a distance metric, and S is a metric space, with the open
ball topology.
A banach space is a normed vector space that forms a complete metric space. Every cauchy sequence in S converges to a limit point in S.
At this point the word subspace has become ambiguous. If W is a subspace of S, is it an arbitrary subset of S that inherits the open ball topology, or is it a vector space contained in S? Sometimes
you have to infer the correct definition from context. The term linear subspace refers to a sub vector space. This is also called a linear manifold, and that's unfortunate, since a manifold is also a
space that is locally homeomorphic to R^n. I'll stick with linear subspace.
A finite dimensional (finitely generated) linear subspace is closed in S. For example, a line is a closed set in the plane. The proof is a technical exercise in real analysis.
Choose a basis b[1] through b[n] for this subspace, and build a box around the origin of dimensions d[1] through d[n]. The box consists of linear combinations of b[1] through b[n] using coefficients
bounded by d[1] through d[n] in absolute value. The origin is at the center of this box, i.e. when all coefficients are 0. Draw a segment from the origin to the edge of the box, in any direction. The
distance starts out 0 at the origin and advances linearly, as the coefficients grow, until one of those coefficients reaches its limit. The distance to the edge of the box is a function of direction.
The directions form a sphere of dimension n-1, and that is compact. A tiny change in direction changes the coefficients at the edge of the box only slightly. This adds something small to the previous
location, hence it changes the distance only slightly. In other words, the distance to the edge of the box is a continuous function of direction.
Continuous on a compact set means there is a minimum distance that is attained at a particular direction. Only the origin has a distance of 0, hence this distance is nonzero. Each box admits such a
distance, and a larger box admits a larger minimum distance. Start at the point on the outer box that exhibits the minimum distance, and retrace the ray back to the origin, and find a point on the
inner box with a smaller distance. Every point on the outer box is farther from the origin than the minimum distance of the inner box.
Translate these boxes to any point u in n space. The distance from u to u+z is the same as the distance from z to the origin. The boxes centered at u present the same minimum distances from u.
Now let p be a point in the closure, so that every open ball containing p intersects our n dimensional space. Shrink these balls to zero, and let q[i] be a sequence of points in the subspace that
monotonically approaches p. Consider the coefficients on b that build the points in the sequence q. Suppose the coefficients on b[1] through b[3] are cauchy, and approach the real values c[1] through
c[3], while the coefficients on b[4] through b[6] are not cauchy. These sequences present differences of 2d[4] through 2d[6] infinitely often. That defines a 3 dimensional box with some minimum
distance ε. This implies a minimum distance of 2ε from one side of the box through the center and to the other side of the box. Move out in the sequence q so that all points are strictly within ε/2
of p. They are then within ε of each other. Further move out so that the coefficients on b[1] through b[3] are close enough to c[1] through c[3] so that the difference between a[1,i]b[1] + a[2,i]b[2] + a[3,i]b[3] and c[1]b[1] + c[2]b[2] + c[3]b[3] is less than ε/2. Focus on b[4], somewhat arbitrarily, and find two coefficients on b[4] that differ by at least 2d[4]. This can be done since the
sequence is not cauchy. This establishes two points of q, which lie on opposite sides of the box, presenting a distance of at least 2ε. Add in the distance introduced by the first three coordinates,
and the first point moves by less than ε/2, and the second point moves by less than ε/2. The points are still at least ε apart. Yet their distance is less than ε. This is a contradiction, hence the
sequence of coefficients, on each of the basis vectors, is cauchy.
Let c[j] be the limit of the cauchy sequence of coefficients on b[j]. Let r be the sum over c[j]b[j]. The distance from r to p has to be 0, else the sequence q is bounded close to r and away from p.
Therefore r = p, and p is part of our subspace. The subspace is closed.
If S is not complete then complete it, and find r as above. r = p, and p was part of our original space S, so p belongs to the n dimensional subspace, and the subspace is closed.
Let u and v be points in S, and put a ball of radius ε around u+v. Points within ½ε of u, plus points within ½ε of v, wind up within ε of u+v. The preimage of the open ball about u+v is covered by
open sets in S cross S. Plus is a continuous operator, and it turns S into a continuous group.
How about scaling by c? Keep points within ε/|c| of u, and the image is within ε of cu. (Treat c = 0 as a special case.) Thus scaling by c is a continuous function from S onto S, (or from S onto 0 if c
= 0), and S is a continuous R module.
If c is nonzero, then keep the scaling factor close to c, and choose δ small, so that the open ball of radius δ about p maps entirely inside an open ball containing q, where cp = q. Scaling is a
continuous function from R cross S onto S, except where c = 0.
The translate, or translation, or shift, of a set W in S, by a vector x, is the set x+y for all y in W. We're just sliding the set W along in S.
Translation is a continuous function from S into S. The distance from a to b is the same as the distance from a+x to b+x. Open balls correspond to open balls.
Since translation is a bijection, a set W is homeomorphic to any of its translates, using the subspace topology. A plane has the same open and closed sets as a shifted copy of that plane in space.
Two norms f(S) and g(S) are equivalent if there are positive constants b and c such that bf ≤ g ≤ cf.
Divide by b, or c, and obtain g/c ≤ f ≤ g/b. The relation is symmetric. Set b = c = 1 to show the relation is reflexive. If bf ≤ g and dg ≤ h, then bdf ≤ h. The relation is transitive, giving an
equivalence relation on norms. Norms clump together in equivalence classes, as they should, since they are called "equivalent norms".
Let's cover an open set in f with open balls in g. A point p in our open set is a certain distance d from the nearest edge, as measured by f. The points within bd of p, measured by g, are all within
d of p, measured by f. So p is contained in an open g ball inside our open set. Open sets in f remain open in g, and by symmetry, open sets in g are open in f. The topologies are the same.
The identity map on S, from f to g, is uniformly bicontinuous.
As we move from f to g, cauchy sequences remain cauchy, and the limit point of our sequence becomes the limit point of the same sequence under g. If S is complete under f, it is complete under g,
i.e. still a banach space.
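For a concrete pair of equivalent norms (my example, not from the text): on R^n, let f be the sup norm and g the 1-norm. Then f ≤ g ≤ n·f, so b = 1 and c = n work.

```python
import random

def sup_norm(x):
    return max(abs(c) for c in x)

def one_norm(x):
    return sum(abs(c) for c in x)

random.seed(2)
n = 4
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    f, g = sup_norm(x), one_norm(x)
    # b*f <= g <= c*f with b = 1 and c = n
    assert f <= g <= n * f + 1e-9
```

Since the two norms bound each other by constants, they define the same open sets and the same cauchy sequences, as proved above.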
A pseudo norm, without property 2, produces a pseudo metric. Collapse points that are 0 distance apart to build a new metric space. But this time S is a vector space, so there's more to the story.
If x and y are 0 distance apart, then u+x and u+y are 0 distance apart. Addition on equivalence classes is well defined. Also, addition remains commutative, and associative, and continuous, so we
still have a topological group. Make similar observations for scaling, and the quotient space is a continuous module. Merge the inseparable points of a pseudo normed vector space and get a normed
vector space.
Picture the z axis in 3 space. A linear transformation squashes the z axis to 0, and the result is the xy plane. We can generalize this to a normed vector space.
Let S be a normed vector space and let U be a linear subspace. Build a pseudo norm on S as the distance from x to U. Technically, this is the greatest lower bound of the distances from x to all the
points of U. We need to show this is a pseudo norm.
If q is the point in U with minimum distance to p, scale p and q by c and the distance is multiplied by c. Yet the distance to every other point in U is also multiplied by c. (Everything in U is c
times something else in U.) Thus q remains the closest point, and distance is scaled by c.
If there is no minimum q, let q[i] be a sequence of points whose distance from p approaches the lower bound. All distances are multiplied by c, and the lower bound is multiplied by c, as the sequence
q[i]*c illustrates.
The triangular inequality is inherited from S. Let x and y be any two points in S, and let p and q be points in U that hold the distances |x,p| and |y,q| to within ½ε of their true distances from U.
The distance from p+q to x+y is now bounded by the sum of the true distances to U, + ε.
|(x+y) - (p+q)| = |(x-p) + (y-q)| ≤ |x-p| + |y-q| ≤ (d(x)+½ε) + (d(y)+½ε) = d(x) + d(y) + ε
Let ε approach 0, and the sequence of points p+q proves the norm of x+y is no larger than the norm of x plus the norm of y. We have satisfied the properties of a pseudo norm.
If p is in the closure of U then a sequence q approaches p, and p is 0 distance from U. Extend U to the closure of U; hence U is a closed set in S. Verify that this does not change the distance from
x to U. If |x,p| attains the minimum distance then a sequence of points approaching p gives that same distance as a lower bound.
With U closed, a point not in U is a positive distance from U. Otherwise a sequence of points in U approaches x, and x would be included in U. Thus U, and only U, has norm 0.
If y is in U, add y to x. This does not change the set of distances from x to the points of U. The distance to p has become the distance to p+y, and so on. Thus the shifted subspace x+U is a fixed
distance from U, and has a well defined norm.
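A concrete instance of this pseudo norm (my example, using the Euclidean norm on R^3): let U be the line spanned by a vector u. The distance from x to U is found by subtracting the projection of x onto u, and it is unchanged when an element of U is added to x.

```python
import math, random

def dot(x, y): return sum(a * b for a, b in zip(x, y))
def norm(x): return math.sqrt(dot(x, x))
def sub(x, y): return [a - b for a, b in zip(x, y)]
def scale(c, x): return [c * a for a in x]

u = [1.0, 2.0, 2.0]          # U = span(u), a closed line through the origin

def dist_to_U(x):
    # distance from x to U: subtract the projection of x onto u
    p = scale(dot(x, u) / dot(u, u), u)
    return norm(sub(x, p))

x = [3.0, 0.0, 1.0]
random.seed(3)
for _ in range(100):
    t = random.uniform(-5, 5)
    # adding any element of U leaves the coset norm unchanged
    assert abs(dist_to_U(sub(x, scale(t, u))) - dist_to_U(x)) < 1e-9
# the pseudo norm respects scaling: dist(c*x) = |c| * dist(x)
assert abs(dist_to_U(scale(-2.0, x)) - 2.0 * dist_to_U(x)) < 1e-9
```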
Collapse the cosets of U down to single points, giving a quotient space S/U. Like U, each coset of U is 0 distance from itself, and a positive distance from everything else. Thus we are also
collapsing the inseparable points, and turning the pseudo metric into a true metric. The result is both a topological quotient space and a linear quotient space. No ambiguity here; S/U is a quotient
If S is complete, is S/U complete? For starters, U is closed by assumption, and a closed subspace of a complete metric space remains complete. If a cauchy sequence in U converges to p, then p is in
the closure of U, and is in U.
Let q[n] be a sequence in S that becomes cauchy in S/U. Find a point b[n] in U, so that |q[n]-b[n]| in S is bounded by |q[n]| in S/U + 1/n. After a time, the points of q in S/U never differ by more
than ε, and moving farther out, 1/n is less than ε, hence the norms of q-b in S never differ by more than 3ε. That makes q-b cauchy in S, with a limit point r. The difference sequence q-b comes
arbitrarily close to r-0. Pass to the quotient space, where b doesn't matter, and q comes arbitrarily close to r. Each cauchy sequence has a limit, and S/U is complete. If S is banach then S/U is banach.
A linear operator is a map between vector spaces that respects addition and scaling. Put another way, a linear operator is a module homomorphism.
Assume the domain and range are normed vector spaces. The operator f is bounded if there is some constant k such that |f(x)| ≤ k×|x|. The function does not grow faster than linear.
Note that f(0) has to be 0, but this is the case for any linear operator.
Move to a point v and find the same bound relative to v.
|f((x+v) - v)| ≤ k×|x|.
If f is bounded it has a norm, denoted |f|, which is the lower bound of all the constants k that make f a bounded operator. This is also called the Lipschitz constant. Can we home in on |f|?
If x satisfies our constraint for a fixed k, then so does cx. To see if k is valid, there is no need to test the multiples of x. It is enough to test x/|x|, the unit vector in the direction of x.
Consider all the points x on the unit sphere and evaluate |f(x)|. (I'm calling it a sphere, but I really mean all the points that are a distance 1 from the origin. This could be the surface of a
cube, or almost any other shape, depending on the norm.) Let k be the least upper bound, and f is a k bounded linear operator. Lower values of k will not do, thus |f| = k.
In R^n, when f is implemented as a matrix, you might think |f| is the largest eigen value, but this need not be the case. Let f be the 2×2 upper triangular matrix [1,1|0,1]. The eigen values are 1,
but run 1,0 through the matrix and get 1,1 with length sqrt(2).
If f is a normal matrix, e.g. a symmetric matrix, then its eigen vectors are orthogonal, and |f| is indeed the largest eigen value. Of course we have to take the norms of the eigen values, so that -4
is bigger than 3, forcing k = 4.
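The claim about [1,1|0,1] can be checked numerically. A sketch (my code; it applies the matrix to column vectors, which gives the same operator norm as the row convention): sample the unit circle under the Euclidean norm and take the largest image length. The true value is the largest singular value, the golden ratio ≈ 1.618, strictly bigger than the eigen value 1.

```python
import math

M = [[1, 1],
     [0, 1]]          # the upper triangular matrix from the text

def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def norm(x):
    return math.hypot(x[0], x[1])

# sample the unit circle and take the largest image length
samples = 200000
est = max(norm(apply(M, [math.cos(2 * math.pi * k / samples),
                         math.sin(2 * math.pi * k / samples)]))
          for k in range(samples))

phi = (1 + math.sqrt(5)) / 2   # the true operator norm (largest singular value)
assert est > math.sqrt(2)      # bigger than the image of a basis vector
assert abs(est - phi) < 1e-4
```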
If f is the quotient map S mod a closed subspace U, which is a linear map, as described in the previous section, the bound on f is 1. By definition the distance from x to U has to be |x-0| or less.
If x remains nonzero in S/U then let q[i] be a sequence x-b[i] whose distances approach the distance d from x to U. |q[i]| is arbitrarily close to d, with |f(q[i])| = d, hence |f| = 1.
Continuity of f is demonstrated at 0. Assume a ball of radius ε in the range includes the image of a ball of radius δ in the domain. Move these balls to v and f(v). With |b| < δ, f(v+b) = f(v) + f(b)
which is within ε of f(v).
Though it is not linear, norm is continuous from S into the reals. Place an interval around |v| of radius ε. Keep |b| below ε, and by the triangular inequality |v+b| lies inside our open interval.
Assume f is continuous, and the unit sphere is compact. Norm is continuous, as described above, so |f(x)| is a continuous function on the unit sphere. This is a continuous function from a compact set
into the reals. The image is compact, hence closed and bounded. The linear operator f is bounded.
Apply the above when the domain is R^n. The unit sphere is closed and bounded in R^n, hence compact. We only need show continuity. Focus on one of the n coordinates. Our linear operator is continuous
on R; in fact it scales R by a fixed amount and embeds it in the range. A linear operator on R^n is the sum of n linear operators on R, and is continuous. Thus we have a continuous function on a
compact set, and every linear operator on R^n is bounded.
In fact f is continuous iff it is bounded. If f is bounded, f is continuous at 0, hence continuous. In fact f is uniformly continuous. Distance is magnified by at most k, everywhere. Conversely
assume f is continuous. Select an r so that |x| < r implies |f(x)| < 1. The norm of the image of the sphere of radius r is at most 1, hence 1/r acts as a bound for f.
It's easy to build a linear function that is not bounded, and not continuous. Let b[1] b[2] b[3] etc form a basis for an infinite dimensional vector space, with each b[j] a unit vector (e.g. the finitely supported sequences under the sup norm). Let f map b[1] to b[1], b[2] to 2b[2], b[3] to 3b[3], b[4] to 4b[4], and so on. The image of the j^th unit vector has length j, and f is unbounded.
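A sketch of this unbounded operator (my representation: finitely supported sequences modeled as Python dicts, under the sup norm):

```python
def f(x):
    # f scales the j-th basis coefficient by j (keys are basis indices, 1-based)
    return {j: j * c for j, c in x.items()}

def sup_norm(x):
    return max((abs(c) for c in x.values()), default=0.0)

# the j-th unit vector has norm 1, but its image has norm j
for j in (1, 5, 100):
    e_j = {j: 1.0}
    assert sup_norm(e_j) == 1.0
    assert sup_norm(f(e_j)) == j   # grows without bound, so f is not bounded
```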
If A and B are vector spaces then the linear operators from A into B form another vector space. This because linear functions can be added and scaled.
Linear functions from A into B are sometimes denoted hom(A,B). The word "hom" is short for homomorphism, because these functions are actually module homomorphisms from A into B. This concept is
generalized here.
Let A and B be normed spaces, and note that the bounded operators from A into B form a vector space. Scale a transformation f and you scale its norm |f|. The norm of f+g is the maximum of the image
of the unit sphere under f+g, which is no larger than the maximum of |f(x)|+|g(x)|, which is no larger than |f| + |g|. The set of bounded homomorphisms from A into B is denoted boundhom(A,B). This
structure is another normed vector space, via |f|. We just proved the triangular inequality and scaling. If |f| = 0 then f is identically 0, so we are done.
If B is complete then so is boundhom(A,B). Let f[1] f[2] f[3] etc be a cauchy sequence of bounded linear operators. Define a new function g as follows. Let g(0) = 0. For x on the unit sphere,
consider the sequence f[n](x) in B. The difference between two functions, on the unit sphere, is bounded by the norm of their difference, which is the "distance" between the two functions. In a
cauchy sequence this distance shrinks to 0. For any ε, we can move down the sequence, and keep |f[i]-f[j]| below ε. This keeps f[i](x)-f[j](x) below ε. The sequence of images of x is cauchy, and
converges to some limit in B, which becomes g(x).
Is g a linear function? Consider x and y in S. Since each f is linear, the sequence f[n](x+y) is the sum of the individual sequences, and the limit of f[n](x+y), also known as g(x+y), is the sum of
the limits, or g(x)+g(y). In short, the limit of the sum is the sum of the limits in a metric space. Similar reasoning shows g respects scaling by c, hence g is linear.
Let's show g is the limit of our sequence f[n]. Remember, |g-f[n]| is the distance metric. For each x on the unit sphere, g(x) is the limit of f[n](x). Find an n so that functions beyond n are within
ε of each other, all over the unit sphere. g(x) could be a distance ε from f[n](x), as the functions approach their limit, but no more. This holds for all x on the sphere, and keeps the norm of g-f
[j] ≤ ε for j beyond n. This holds for each ε, hence f converges to g.
Set ε to 1, and g is within 1 of some bounded function f[n]. This makes g a bounded function. Every cauchy sequence converges, and boundhom(A,B) is complete.
Equivalent norms on B lead to equivalent norms on the space of bounded functions from A into B.
A functional f is a linear map (respecting scaling and addition) from a real vector space S into the reals. If you think of S as an R module, then a functional on S belongs to the dual of S. The
following theorem extends a functional from a subspace T up to a larger space S.
Let S be a normed vector space and let T be a linear subspace of S. If f is a linear functional from T into the reals satisfying f(x) ≤ |x|, then f can be extended to all of S, with the same
constraint f(x) ≤ |x|.
By Zorn's lemma, let U be a maximal subspace of S to which f extends with f(x) ≤ |x|. Suppose U is not all of S, so that y is a point in S-U. For any two points u[1] and u[2] in U:
f(u[1]) + f(u[2]) = f(u[1]+u[2]) ≤
|u[1]+u[2]| = |u[1]-y + u[2]+y| ≤
|u[1]-y| + |u[2]+y|
Put this all together and get this.
f(u[1]) + f(u[2]) ≤ |u[1]-y| + |u[2]+y|
f(u[1]) - |u[1]-y| ≤ |u[2]+y| - f(u[2])
View the left side as a function of U, and the right side as a function of U, and consider the respective images of U in R. The image of the left function is below, or at worst shares a boundary
point with, the image of the right function. Choose a real number e between these images. If the two images share a boundary point, let e be this boundary point.
Set f(y) = e. By linearity, this defines f on the span of U and y.
Let's check our constraint. In the following, x is any point in U, and c is positive. Verify x+cy first, then x-cy.
f(cy+x) =
ce+f(x) =
c × (e + f(x/c)) ≤
c × (|x/c+y| - f(x/c) + f(x/c)) = { substituting for e }
c × |x/c+y| = |x+cy|
f(x+cy) ≤ |x+cy|
f(x-cy) =
-ce+f(x) =
c × (-e + f(x/c)) ≤
c × (|x/c-y| - f(x/c) + f(x/c)) = { substituting for -e }
c × |x/c-y| = |x-cy|
f(x-cy) ≤ |x-cy|
We have extended f to a subspace beyond U, and this contradicts the maximality of U. Therefore f extends to all of S, and f(x) ≤ |x| everywhere.
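A small numeric illustration of choosing e (my example, not from the text): in R² with the sup norm, let T be the x axis, f(t,0) = t, and y = (0,1). Sampling points u of T shows the left image never exceeds the right; here the two images meet at 0, so e = 0 is forced, giving the extension f(a,b) = a.

```python
def sup_norm(a, b):
    return max(abs(a), abs(b))

def f(t):        # the functional on T: f(t, 0) = t, with f(x) <= |x|
    return t

ts = [k / 100 for k in range(-300, 301)]    # sample points (t, 0) along T
left  = max(f(t) - sup_norm(t - 0, 0 - 1) for t in ts)   # f(u1) - |u1 - y|
right = min(sup_norm(t + 0, 0 + 1) - f(t) for t in ts)   # |u2 + y| - f(u2)
assert left <= right                 # a valid e always exists between the images
assert left == 0.0 and right == 0.0  # here the choice e = 0 is forced
```

With e = 0, the extension is f(a,b) = a, and indeed |a| ≤ max(|a|,|b|), so the constraint f(x) ≤ |x| holds on all of R².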
In the above, the selection of e might be forced, if the two images of U have a point in common, but more often there is a gap between the two images, providing some wiggle room. We can generalize
this theorem by adding more requirements to f. This may close the gap, but it is still possible to select e, and extend f to all of S.
Let S, T, and f(T) be as above. In addition, assume there is an abelian monoid of bounded linear operators from S into itself. Don't worry about the word monoid; it just means operators can be
composed in the usual way. Follow one operator with another and the resulting linear operator is bounded. If the first bound is k and the second is l, then distance is multiplied by at most k, and
then at most l, thus a bound of kl. Since the monoid is abelian, two operators can be composed in either order and the result is the same. If they are implemented by matrices, for example, the
matrices must commute. That's not typical, but there are sets of matrices that do commute, such as the symmetric matrices. Multiply any two symmetric matrices, in either order, and get the same
result, which happens to be another symmetric matrix.
Assume these commuting operators are bounded by 1, i.e. they never expand distance in S. Since the composition of two such operators is still bounded by 1, we're all right.
The operators also map T into itself, and preserve the value of f. If a is one of our operators, write the constraints this way.
|a(x)| ≤ |x|
x ∈ T → a(x) ∈ T
x ∈ T → f(a(x)) = f(x)
There is an extension of f to all of S, with f(x) ≤ |x|, and f(a(x)) = f(x), for every x in S and all the operators a in our monoid.
Wow - that was just the statement of the theorem - now for the proof.
Consider all finite sets of operators taken from the monoid. Given a set of n operators, find the n images of x, add them up, take the norm in S, and divide by n. Let q(x) be the lower bound of this
"average", across all finite sets of operators. Since norm is at least 0, the average is at least 0, and the greatest lower bound is well defined.
The monoid includes the identity map. Select this single operator, and the average norm is simply |x|. Therefore q(x) ≤ |x|.
We will see that q(x) has most of the properties of a norm. Since q is derived from the norm in S, q(cx) = |c|×q(x). Showing the triangular inequality requires more work.
Choose a finite set of n operators that keeps the average of x below q(x)+ε. Find a second set of m operators that keeps the average of y below q(y)+ε. Now build a set of m×n operators that is the
cross product of these, an operator from the first set composed with an operator from the second. This is a finite set from our monoid, so compute the sum of each applied to x+y, then take the norm,
then divide by mn.
The operators are linear, so replace each a[i]b[j](x+y) with a[i]b[j](x) + a[i]b[j](y). Instead of taking the norm of the entire sum, take the norm of the first sum over x, then take the norm of the
second sum over y, then add these norms together, then divide by mn. Thanks to the triangular inequality, this can only make things bigger.
The monoid is abelian, so we can apply the operators in either order. When the sum is applied to x, apply the n operators from the first set and add up the images of x. Call this intermediate result
w. By assumption, |w|/n < q(x)+ε. Now each operator from the second set is applied to w, and the images are added. Once again, things only get bigger if we run w through each operator in turn, take
norms, and add up those norms. Since the operators are bounded by 1, the norm is not increased by any operator in the second set. We can just skip that step. Thus the norm of the sum over x is below mn×(q(x)+ε).
Similar reasoning shows the norm of the sum over y is below mn×(q(y)+ε).
Add these together and divide by mn, and the result is below q(x)+q(y)+2ε. Let ε approach 0, and q(x+y) ≤ q(x)+q(y).
Watch what happens when x lies in T. Consider a finite set of n operators. Let w be the sum of the images of x under these operators. Since the operators do not change the value of f, f(x) = f(w)/n.
We also know that f(w) ≤ |w|. Therefore f(x) ≤ |w|/n. This holds for all finite sets of operators from the monoid, hence f(x) ≤ q(x).
Go back up to the top of this section and apply the Hahn Banach theorem, using q(x) in place of the norm of x in S. You'll see that q has all the properties we need: the triangular inequality,
scaling by positive constants, and f(x) ≤ q(x) on T. Therefore f extends to a linear function on S with f(x) ≤ q(x).
Since q(x) ≤ |x|, we have f(x) ≤ |x|. We only need show f is preserved by the operators in our monoid, as was the case for T.
Let a() be any operator, and x any element of S. Select the finite set of n operators 1, a, a^2, a^3, … a^n-1. Apply these operators to x, add up the images, take the norm, and divide by n. This
gives the "average", and it bounds q(x).
q(x) ≤ |x + a(x) + a^2(x) + …| / n
This holds for any x, so apply the inequality to a(x)-x. The left side becomes q(a(x)-x). The right side telescopes down to two terms, and looks like this.
q(a(x)-x) ≤ |a^n(x) - x| / n
The right side only gets bigger if we replace the norm with |a^n(x)| + |x|.
Since a is bounded by 1, |a^n(x)| is no larger than |x|. This gives 2×|x|/n, which approaches 0 for large n. Therefore q(a(x)-x) ≤ 0.
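Here is a minimal numeric sketch of the telescoping bound, using the monoid generated by a single rotation of the plane (viewed as the complex numbers); rotation preserves the norm, so it is bounded by 1. The angle and the starting vector are arbitrary illustrative choices.

```python
import cmath

# A norm-preserving linear operator on the plane (viewed as C):
# rotation by a fixed angle.  It is bounded by 1, as the proof requires.
theta = 0.7
a = lambda z: cmath.exp(1j * theta) * z

x = 3 + 4j          # arbitrary starting vector, |x| = 5
n = 1000

# Average the images of a(x) - x under 1, a, a^2, ..., a^(n-1).
v = a(x) - x
total = 0
w = v
for _ in range(n):
    total += w
    w = a(w)
avg = total / n

# The sum telescopes to a^n(x) - x, so the average is (a^n(x) - x)/n,
# bounded by 2|x|/n -- it shrinks to 0 as n grows.
an_x = x * cmath.exp(1j * theta * n)
assert abs(avg - (an_x - x) / n) < 1e-9
assert abs(avg) <= 2 * abs(x) / n
```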
Since f is bounded by q, f(a(x)-x) ≤ 0. The same holds with -x in place of x, and since all functions are linear, we can pull -1 out, giving -f(a(x)-x) ≤ 0. This means f(a(x)-x) ≥ 0. Combine these results and f(a(x)-x) = 0. This implies f(a(x)) = f(x), and that completes the proof.
A continuous linear map from one banach space onto another is bicontinuous.
The word onto is important here. Embed the x axis into the plane, and the open interval (0,1) maps to a set that is neither open nor closed in the plane. This is not a bicontinuous map.
If f is bicontinuous then an open ball centered at the origin maps to an open set containing the origin. This means the image encloses the origin, i.e. it contains a ball about the origin.
Conversely, assume every open ball centered at the origin has an image that encloses the origin. The image of an open ball at x is, by linearity, f(x) plus the image of the same open ball at 0. Thus
the image of the open ball at x includes an open ball about f(x). Apply this to every x in an open set O in the domain, choosing a ball about x that lies in O, and considering just the open ball
about f(x). The image of O is a union of these open balls in the range and is open. This makes f bicontinuous. We only need prove the open ball property at 0.
Cover the domain with open balls centered at the origin having radius k, for all positive integers k. Let W be the image of the open unit ball in the range. Let kW be the image of the ball of radius
k. Note that kW is in fact all the points of W multiplied by k, which sort of justifies my notation.
The images kW, for all k, cover the range. This is because f maps onto the range.
If W′ is the closure of W, verify that k times W′ is the closure of kW. Since multiplication by k implements a homeomorphism, a point p is in an open set missing W iff kp is in an open set missing
kW, hence k×W′ = (kW)′. Since there is no ambiguity, I'll just write kW′.
Suppose W is a nowhere dense set. Every open ball misses W, or contains a smaller open ball that misses W.
Given an integer k, consider kW, and an open ball. Contract everything by k, and the open ball pulls back to a smaller open ball nearer the origin. This open ball contains some open ball that misses
W, and when we expand by k again we find an open ball that misses kW. Therefore kW is also a nowhere dense set.
The range is now the countable union of nowhere dense sets, and that makes it first category. However, a complete metric space is second category. This is a contradiction, hence W is not a nowhere
dense set.
If W′ is the closure of W, then W′ contains an open ball of radius r, centered at c. If c is not in W then c is in the closure, and that means points of W approach c. Pick a point in W that is close
to c, inside the ball of radius r, and relabel this point as c, the center of a new ball with a smaller radius r, that lies in W′. Now we know c is in W, and c has some preimage d in the ball of
radius 1. When f is applied to the unit ball - d, a translate of the unit ball, the result is a translate of W that carries c to 0. Translates are homeomorphic, hence the closure of W-c contains a
ball of radius r about 0. When subtracting d from the open ball of radius 1 in the domain, the result lies in the open ball of radius 2. Therefore 2W′ contains an open ball of radius r about the
origin. Scale this by any real number, and the closure of the image of every open ball at 0 includes an open ball about 0.
For notational convenience, let W′ contain an open ball of radius r about 0. Thus kW′ contains an open ball of radius kr about 0.
Let y[0] be any point in the range with |y[0]| < r. Thus y[0] lies in the ball of radius r about 0, which is contained in W′. We're going to build a series of vectors x[i] in the domain, as i runs from 1 to infinity, summing to x[0], and their images y[i] in the range will sum to y[0]. This is where continuity finally comes into play; f(x[0]) = y[0]. Here we go.
Remember that points of W approach y[0]. Let y[1] be a point in W that is within r/2 of y[0], and let x[1] be a preimage of y[1]. Thus |x[1]| < 1.
Let d[1] = y[0]-y[1]. The point d[1] is in ½W′. Since d[1] is in the closure of ½W, find y[2] within r/4 of d[1], and let f(x[2]) = y[2]. If you add y[2] to y[1], d[1] takes you back to y[0], and
then there is the extra piece, the difference between d[1] and y[2], which is bounded by r/4.
Let d[2] = d[1]-y[2]. This is the extra piece I talked about. Since d[2] is in ¼W′, find y[3] within r/8 of d[2], and f(x[3]) = y[3]. Add y[3] to (y[1] + y[2]) and retrace d[2] back to y[0], and then
the extra piece bounded by r/8. This extra piece is the difference between d[2] and y[3].
Set d[3] = d[2]-y[3], find y[4] and x[4], and so on. Continue this process, building a sequence x in the domain and y in the range. The x sequence approaches 0 geometrically. The partial sums of x
form a cauchy sequence, with a limit that I will call x[0]. Meanwhile the partial sums of the y sequence approaches y[0]. Since f is continuous, f(x[0]) = y[0].
How far is x[0] from the origin? Since x[1] is in the preimage of W, it has norm at most 1. Similarly, x[2] has norm at most ½, x[3] has norm at most ¼, and so on. Thus |x[0]| < 2. This holds for all y[0] with |y[0]| < r, hence the open ball of radius r is contained in 2W. Scaling, the open ball of radius r/2 is contained in W. This proves f is bicontinuous.
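The open-ball property is easy to see concretely in finite dimensions. Below is a small sketch using a hypothetical surjective linear map f(x,y) = 3x + 4y from R^2 onto R; the image of the open unit ball is the interval (-5, 5), an open ball about 0 in the range, exactly as the theorem promises.

```python
import math, random

# A surjective linear map f : R^2 -> R.  The open mapping theorem says
# the image of the open unit ball contains an open ball about 0 -- here
# the interval (-5, 5), since |gradient| = 5.
def f(x, y):
    return 3 * x + 4 * y

grad_norm = math.hypot(3, 4)   # 5; the radius of the guaranteed ball

random.seed(1)
for _ in range(1000):
    t = random.uniform(-0.99 * grad_norm, 0.99 * grad_norm)
    # Least-norm preimage of t lies along the gradient direction.
    x, y = t * 3 / grad_norm**2, t * 4 / grad_norm**2
    assert abs(f(x, y) - t) < 1e-9
    assert math.hypot(x, y) < 1   # preimage inside the open unit ball
```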
If f is also injective it is a homeomorphism.
If the vector space S is complete with respect to two different norms f and g, and f(x) ≤ c×g(x) for some constant c, then the norms are equivalent. A ball of radius ε via f wholly contains a ball of
radius ε/c via g. The identity map from g to f is injective, continuous, and onto, hence a homeomorphism by the above theorem. The inverse map is continuous, hence bounded. The bound gives a constant
b satisfying g(x) ≤ b×f(x), and as per an earlier theorem, the norms are equivalent.
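A finite dimensional sketch of equivalent norms: the euclidean and max norms on R^2 (both complete). One bound with constant 1 is immediate, and the theorem guarantees a reverse bound, here sqrt(2).

```python
import math, random

# Two complete norms on R^2: euclidean and max.  Each is bounded by a
# constant times the other, so they are equivalent.
def euclid(v): return math.hypot(*v)
def maxnorm(v): return max(abs(c) for c in v)

random.seed(2)
for _ in range(1000):
    v = (random.uniform(-9, 9), random.uniform(-9, 9))
    assert maxnorm(v) <= euclid(v) + 1e-12                  # g(x) <= 1*f(x)
    assert euclid(v) <= math.sqrt(2) * maxnorm(v) + 1e-12   # reverse bound
```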
Let f be a linear map from one banach space S into another banach space T. Given any convergent sequence x[n] in S, with f(x[n]) a convergent sequence in T, assume f(x) = y, where x and y are the
respective limits. Then f is continuous.
Define a new norm r on S as r(x) = sqrt(|x|^2 + |f(x)|^2). This is basically the euclidean formula for distance in two dimensions, so it satisfies the properties of a norm.
Start with a cauchy sequence under the norm r, and select ε and n so that terms beyond x[n] differ by no more than ε. Since distance under |S| or |T| is never larger than distance under r(), the
terms beyond x[n] differ by no more than ε under |S|, and their images differ by no more than ε under |T|. Both x[n] and f(x[n]) are cauchy, and converge to x and y. By assumption y = f(x), so x is
the limit of the sequence under the metric r(). In other words, S is still a complete banach space.
Apply the identity map on S, from the norm r to the norm |x|. The latter is always bounded by the former, so by the previous theorem, the map is a homeomorphism, continuous in both directions, and there is a reverse bound b satisfying r(x) ≤ b×|x|. Yet the norm of f(x) in T is bounded by r(x), so distance in T (via f) is bounded by b times distance in S, and f is continuous.
A topological vector space is a vector space with a topology, such that addition and scaling are continuous. A normed vector space is a topological vector space, deriving its topology from the
metric. But there could be other topological vector spaces that are not metric spaces.
A topological vector space satisfies certain criteria, which will be presented below. As you might imagine, these criteria deal with open sets. There is no metric, no notion of distance, so that only
leaves open and closed sets and the operations of the vector space.
In a metric space, the translate of an open ball is an open ball, since the distance between two points does not change; but in a topological group, we have to use the properties of continuity to
prove translation preserves open sets. If S is our topological space, S+S onto S is continuous. Let U be an open set and let V = U-x. Thus V is the preimage of U under translation. The preimage of U
under addition is open, and is covered with base open sets. Remember that base open sets in the product are open sets cross open sets. For any y in V, x cross y is covered by an open set cross an
open set. The open set that contains y has to lie in V, else its translate by x will not lie in U. Therefore V is covered by open sets and is open. Translation by x is continuous. Translation by -x
is also continuous, hence translation is a homeomorphism on S.
This works even if S is a nonabelian group. Translation by x, on either side, implements a homeomorphism on S, and a subgroup of S with the subspace topology is homeomorphic to all its translates.
When S is a topological module, a nearly identical proof shows that scaling by the units of the base ring is a homeomorphism. Of course our ring is a field, and every nonzero real number is a unit,
so scaling by a nonzero constant carries open sets to open sets. Scaling by 0 is continuous, but certainly not a homeomorphism.
Let S be a topological group with a local base at 0. If an open set O contains x, O-x encloses 0, and contains an open set U from our local base. Thus O contains U+x contains x, and the translate of
the local base at 0 gives a local base at x. This holds for all x, hence the translates of the local base at 0 give a base for the topology.
What's wrong with the discrete topology, where every set is open? Nothing, if S is a group, but watch what happens when S is a vector space. Fix a nonzero x and consider the multiples of x by the reals in [0,1]. This is the smear of x by a closed interval. The set is open of course, and by continuity its preimage under scaling is open. Let y be a real number in [0,1]; then y cross x lies in an open interval cross an open set inside the preimage. Every c in that interval carries x into the smear, and since x is nonzero, the interval lies in [0,1]. So [0,1] is covered by open sets and is open in R; yet it is closed, a contradiction. The topology of S is restricted by the topology of R.
The following properties turn a vector space into a topological vector space. They deal with the local base at 0, which is sufficient to describe the entire topology. The variables U V and W
represent base open sets at 0.
1. Every open set containing 0 contains some U.
2. If x is a point in an open set then there is some U with x+U in that open set. (Use x*U and V*x for a nonabelian group.)
3. There is some V with V+V in U.
4. For any point x and any U, there is a nonzero constant c with cx in U.
5. If 0 < |c| ≤ 1 then c×U lies in U and is a member of the base.
Condition 1 builds a local base at 0, and condition 2 moves that local base to any point x, thus building a base for the topology. Conversely, if S is a topological group, then the open sets
containing 0 form a local base, and this or any other local base can be translated to any x.
Condition 3 implies, and is implied by, the continuity of S+S onto S. Assume the latter, and let U contain 0. Look at the preimage of U under addition and find V cross W containing 0,0, with V+W inside U. The intersection of these
base open sets is another open set containing 0, which contains a base open set. Relabel this as V, and V+V is in U.
Conversely, assume condition 3, and consider the preimage of x+y+U in a larger open set that contains x+y. Find V such that V+V lies in U. Now x+V + y+V lies in x+y+U, x cross y is contained in an
open set, the preimage is open, and addition is continuous. Thus, for an abelian group, continuity of addition is equivalent to conditions 1, 2, and 3.
Assume scaling by R is continuous, hence continuous at 0. The preimage of the base open set U is open. An open interval about 0, cross an open set about x, lies in U. This means a nonzero constant
times x lies in U.
A local base need not satisfy condition 5, but it's possible to find a new local base that does. Start with W, a set in the local base. Continuity of scaling at 0 means an open interval (-e,e) times
an open set V winds up in W. If e is 1 or greater, ratchet e down to any number below 1. Now c×V lies in W whenever |c| < e. Let U be the union of c×V for all c with |c| < e. Now U is an open subset
of W. Replace W with U, and do the same for every other open set in the local base. Shrinking the open sets in a local base preserves the property of being a local base. Furthermore, each such set,
multiplied by a constant below 1, produces a subset of the original. We are simply taking the union of fewer instances of c×V. Thus a topological vector space implies all 5 conditions.
If you don't want to mess with the axiom of choice then do this. Consider every pair (open interval cross base open set) wherein the product lies in W. For each such pair, reduce e down to a power of
2 below 1, such as ½, ¼, etc. Take the union of c*V for c in (-e,e) as we did above, then take the union across all pairs. The result, call it U, lives in W, and is the new base open set that takes
the place of W. All base sets are replaced at once. Each is smaller than its predecessor, thus building a new base, and each satisfies condition 5.
Finally assume all 5 conditions hold. We already said 1 2 and 3 produce a topological group; we only need show scaling by R is continuous. Consider a base open set about cx, namely cx+U. Let V be a
base open set such that V+V+V lies in U. Choose a real number d such that dx lies in V. If d > 1 let d = 1. (We can do this by property 5, and every e < d has ex in V.) Multiply the interval
(c-d,c+d) by the open set x+V/c, or by x+V if |c| < 1. This is the union of cx + ex + (c+e)(v/c) for e in (-d,d). The result lies in cx+U, and contains cx, hence multiplication by R is continuous.
What goes wrong if S is discrete? We already said it can't be a topological vector space with scaling by R. The local base at 0 has to include the open set {0}, to cover the open set {0}. And this is
all we need for the local base, and the base. Conditions 1 2 and 3 are satisfied, but 4 is not.
Review the separation axioms, and assume S is T[0]. Select U so that x+U misses y. Thus U does not contain y-x. Remember that -U is an open set, hence y-U is an open set about y that misses x. If S
is T[0] it is T[1].
A space need not be T[0]. Within R^2, let vertical stripes, x = (-e,e), form a local base about 0. Verify that all 5 properties are satisfied. Yet points along the y axis cannot be separated.
If points are inseparable they can be clumped together to produce a quotient space. Does this disturb the structure of the group? If inseparable points become separated after translation, translate
back and an open set contains one and not the other. Similar reasoning holds for scaling. Thus the quotient space is also a quotient module, in this case a vector space. In the above example the
kernel is the y axis, and the quotient space is the x axis, which is indeed a topological vector space.
Henceforth we will assume inseparable points have been clumped together, and our topological vector space is T[1].
If x+U misses y, let V+V lie inside U, and suppose x+V and y-V intersect. Now x plus something in V yields y minus something in V, hence x plus something in U yields y, which is a contradiction.
Therefore our topological vector space is T[2], or hausdorff.
In a metric space, a map is uniformly continuous if every ε has its δ. In a normed vector space, it is enough to look at balls about the origin, since the local base defines the base. Every ball of
radius ε pulls back to a ball of radius δ. Of course this is required for continuity at 0, and we already showed that continuity at a point is the same as uniform continuity everywhere. All this can
be generalized to topological vector spaces.
Let V be an open set about 0 in the range and pull back, by continuity, to U, an open set about 0 in the domain. Here V plays the role of ε and U plays the role of δ. Add x to the domain and x+U maps
into f(x)+V. Given V, one U applies across the domain, and the function is uniformly continuous.
Every finite dimensional topological vector space S is homeomorphic to R^n. Remember, we're assuming S is hausdorff.
A normed vector space is hausdorff, so every finite dimensional normed vector space is homeomorphic to R^n. But we'll prove the more general assertion, regarding topological spaces.
Select a basis for S and build a linear map f from R^n onto S. If f is bicontinuous, then the spaces are indeed homeomorphic.
Let U be an open set about 0 in S. Choose V so that V+V+V… n times lies in U. (U and V are part of the local base at 0, as described in the previous section.) Let x be the image of a coordinate unit
vector of R^n in S, and let cx lie in V. Thus the image of (-c,c) lies in V. Find such a constant for each dimension and build an open rectangular box in R^n. This maps into V+V+V…, or U. Thus f is
continuous at 0, and f is continuous everywhere.
Now for the converse. Let Q be the unit sphere in R^n, and let B be the unit open ball in R^n. The image of Q is a compact subspace of a hausdorff space, hence it is closed. The complement is open,
so let U be a base open set about 0 that lies in the complement. Remember, our base open sets shrink when multiplied by a constant less than 1. Suppose y lies in U, where y = f(x), yet x is not in B.
Scale x down to a unit vector, and y remains in U, even though it is now in the image of Q. This is a contradiction, hence all of U maps into B via f inverse. This holds for a ball of any radius,
thus the inverse of f is continuous, and f is bicontinuous, and S is homeomorphic to R^n.
We showed earlier that a finite dimensional subspace of a banach space is closed - how about a topological vector space? Let S be a topological vector space and let T be a finite dimensional
subspace. Of course T looks just like R^n.
For any x not in T, T and x span a subspace of dimension n+1. This also looks like euclidean space, hence x can be placed in an open set that misses T. This comes from an open set in S that misses T,
thus x is not in the closure of T, and T is closed.
Every open set of S includes an n dimensional ball. As usual we'll prove this for a base set about 0, whence it applies to every open set by translation. Assume S supports at least n independent
vectors. These combine to build euclidean space. Find a base set V such that V+V+V… n times lies in U. Scale each of the n unit vectors so that it lives in V. In other words, c[i]b[i] lives in V,
where b[i] builds the basis for n space. Add V to itself n times, and all linear combinations of basis vectors, with coefficients from -c[i] to c[i], live in U. This is a box, and of course it
contains an open ball.
If S is locally compact then it is homeomorphic to R^n. Prove S is finite dimensional and apply the earlier result.
Select an open set about 0 whose closure is compact. This is the definition of locally compact. Let U be a base set in this open set. Now the closure of U is closed in a compact set, hence it is also compact.
Let U contain the nonzero point x. Remember that c×U lies in U for |c| ≤ 1. Thus U contains the smear of x from -1 to 1. But suppose each c×U contains x. Thus U contains x/c, and U contains all the
multiples of x, a line in our topological vector space. Being hausdorff, some V separates 0 from x. Let V be in the local base, so that V intersects the line of x in (-e,e)×x, where e < 1. The
translates of V cover U closure. A finite subcover will do, yet a finite subcover cannot cover all the multiples of x. This is a contradiction. Therefore we can always scale U down to exclude any
nonzero point x.
Let the translates of U/3 cover U closure. Note that we only need translate by elements of U. If p is in U closure then p-U/3 has to intersect U in some point Q, and Q+U/3 brings in p, so
translations by elements of U will suffice.
Let x[1] x[2] x[3] … x[n] define the translates of U/3 that form the finite subcover.
Let z be any point in U and assume, without loss of generality, that z is in the translate x[1]+U/3. The difference z-x[1] lies in U/3, hence 3 times this difference still lies in U. Let 3(z-x[1])
lie in some translate x[2]+U/3. Now the difference between 3(z-x[1]) and x[2] lies in U/3, so place 3 times this difference in x[3]+U/3. Continue this process, and z = x[1] + x[2]/3 + x[3]/9 + x[4]/27 … with the error term in U/3^k.
If you don't stumble upon equality early in the game, elements of x will repeat. That's ok; just group them together. So x[2]/3 + x[2]/243 becomes x[2] times 82/243, and so on. This is the compressed
sum. Each of the n coefficients adds some of the terms of a geometric series, with powers of 3 in the denominator, hence each coefficient converges absolutely to a real number. Let c[i] be this real
number, the limit of the sum of the coefficients on x[i], and let W be the sum c[1]x[1] + c[2]x[2] + c[3]x[3] + … c[n]x[n].
How close is the k^th approximation to z, and to W? We already said the approximation is within U/3^k of z. At worst, the difference between W and the k^th approximation is a linear combination of x[1] through x[n], where the coefficients are bounded by the tail of a geometric series. These tails go to 0, and eventually the multiples of x[i] are all in V, and the resulting linear combinations all lie in U. In other words, the error term lies in U. If the coefficients are cut in half, the result lies in ½U. As the coefficients approach 0, the error term fits into c×U, where c approaches 0.
Put this all together and W-z lies in every multiple of U, no matter how small. However, if W-z is nonzero then some scale multiple of U excludes W-z. This is a contradiction, hence W = z.
Since z was arbitrary, x[1] through x[n] span all of U.
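The base-3 expansion can be played out concretely in R^2, with U the open unit ball and a finite grid of centers chosen so the translates x[i]+U/3 cover U. The grid spacing and the target z below are arbitrary illustrative choices.

```python
import math

# Covering argument in R^2: finitely many grid points serve as the x[i],
# each point of the unit ball U lies within 1/3 of some center, so the
# translates x[i] + U/3 cover U.  Any z in U then expands as
# z = x[a1] + x[a2]/3 + x[a3]/9 + ... with error in U/3^k at step k.
step = 0.25            # grid spacing; worst gap is step*sqrt(2)/2 < 1/3
centers = [(i * step, j * step)
           for i in range(-4, 5) for j in range(-4, 5)]

def nearest(p):
    return min(centers, key=lambda c: math.dist(c, p))

z = (0.3137, -0.62)    # arbitrary point of the open unit ball
approx = (0.0, 0.0)
residual = z
scale = 1.0
for _ in range(12):
    c = nearest(residual)
    assert math.dist(c, residual) < 1 / 3   # residual is in some x[i]+U/3
    approx = (approx[0] + scale * c[0], approx[1] + scale * c[1])
    # tripling the leftover keeps it inside U for the next round
    residual = (3 * (residual[0] - c[0]), 3 * (residual[1] - c[1]))
    scale /= 3

assert math.dist(approx, z) < 3.0 ** -11    # geometric convergence
```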
Let T be the span of x[1] through x[n]. Thus T contains U. Since T is finite dimensional it looks like R^n. It is closed in S, and contains U closure. Suppose T does not contain some y in S. Let T′
be the span of T and y. Remember that T′ looks like R^n+1. U is an open set in S, hence open in T, and in T′. However, a set stuck in n dimensions cannot be an open set in n+1 dimensions. The
interior of a square may be open in the plane, but it is not open in 3 space. Therefore T is all of S, and S is finite dimensional, and euclidean.
A hilbert space is a banach space with a dot product. The definition of dot product, given below, is consistent with the euclidean definition in R^n.
The dot product is a binary operator whose operands are vectors in a banach space S. The result is a real number. If x and y are vectors in S, the dot product is indicated by a literal dot, as in
x.y, hence the name dot product. This is also called an inner product.
The dot product respects scaling, so that c×(x.y) = cx.y = x.cy for any real constant c. Also, the dot product respects addition in either component. As a corollary, x.0 = x.(y-y) = x.y-x.y = 0, and similarly, 0.x = 0.
Symmetry is another requirement: x.y = y.x. (This does not hold for complex numbers; I'll get to that below.)
Finally, x.x = |x|^2, where |x| is the norm of x in the banach space S. The norm is tied to the dot product.
Notice that the traditional dot product in euclidean space satisfies all these properties, thus R^n is a hilbert space.
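A quick check of the axioms for the traditional dot product on R^3; the vectors and the constant are arbitrary.

```python
import math, random

# The euclidean dot product on R^3, checked against the defining
# properties: symmetry, linearity in each slot, and x.x = |x|^2.
def dot(x, y): return sum(a * b for a, b in zip(x, y))
def norm(x): return math.sqrt(dot(x, x))

random.seed(3)
x = [random.uniform(-5, 5) for _ in range(3)]
y = [random.uniform(-5, 5) for _ in range(3)]
z = [random.uniform(-5, 5) for _ in range(3)]
c = 2.7

assert abs(dot(x, y) - dot(y, x)) < 1e-9                      # symmetry
assert abs(dot([c*a for a in x], y) - c * dot(x, y)) < 1e-9   # scaling
xpz = [a + b for a, b in zip(x, z)]
assert abs(dot(xpz, y) - (dot(x, y) + dot(z, y))) < 1e-9      # additivity
assert abs(dot(x, x) - norm(x) ** 2) < 1e-9                   # x.x = |x|^2
```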
A finite dimensional banach space is homeomorphic to R^n, as demonstrated earlier. Hence a finite dimensional banach space can be viewed as a hilbert space by applying the euclidean norm and dot product.
Although we have skirted this topic thus far, a complex banach space is essentially a real banach space with some extra bells and whistles. It is a complex vector space with a norm that obeys the
triangular inequality, thus a metric space. Addition is the same whether viewed as a complex space or a real space, hence addition is continuous. As with real scalars, the norm of c×x is the absolute
value of c times the norm of x. Multiply x by anything on the unit circle and the norm does not change. Use this to show scaling by complex numbers is continuous, giving a topological vector space.
With this in mind, view S as a real vector space and it becomes a traditional banach space. The norm, only scaled by real numbers, is our old familiar norm once again.
The dot product changes slightly when S is viewed as a complex banach space. For starters, the dot product produces a complex number, not just a real number. A constant is conjugated when applied to the second operand, so that cx.y = c×(x.y), while x.cy = (conjugate of c)×(x.y). Also, x.y and y.x are conjugates of each other. This is consistent with the definition of dot product in n dimensional complex space, where
the second vector is conjugated, then corresponding components are multiplied, and the pairwise products are added together. This seems like a trick, but it facilitates x.x = |x|^2. Concentrate on
one component in an n dimensional vector, say a+bi. This is multiplied by a-bi, giving a^2 + b^2, which contributes to the norm of the vector just as it would in real space, where a and b are
separate real components. The complex dot product in n dimensional complex space satisfies our properties, and C^n is a complex hilbert space.
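A small sketch of these complex conventions in C^4, with the second operand conjugated; the vectors and the constant are arbitrary.

```python
import random

# The complex dot product: the second vector is conjugated componentwise,
# so x.x comes out real and equals the squared norm, and x.y is the
# conjugate of y.x.
def cdot(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

random.seed(4)
x = [complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]
y = [complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]

assert abs(cdot(x, x).imag) < 1e-9                            # x.x is real
assert abs(cdot(x, x).real - sum(abs(a) ** 2 for a in x)) < 1e-9
assert abs(cdot(x, y) - cdot(y, x).conjugate()) < 1e-9        # conjugate symmetry

c = complex(1.2, -0.8)
# scalar rules: cx.y = c*(x.y), while x.cy = conj(c)*(x.y)
assert abs(cdot([c * a for a in x], y) - c * cdot(x, y)) < 1e-9
assert abs(cdot(x, [c * a for a in y]) - c.conjugate() * cdot(x, y)) < 1e-9
```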
Let M be the space of continuous real valued functions on [0,1]. (You can use complex functions if you like; it's not much different.) This is a real vector space, and the norm |f| = sqrt(∫ f^2)
makes it a normed vector space. (If f is complex then replace f^2 with f times its conjugate.) Actually we should stop and prove this is a norm. Because f is continuous, it cannot stray from 0 at a single point; it
must leave 0 over a subinterval, giving a nonzero integral. Thus f = 0 iff |f| = 0. Scaling by a constant c multiplies the norm by |c|. Finally, |f+g| ≤ |f| + |g|, because the same is true of the
riemann step functions that approach f and g. These step functions can be represented as vectors in n space when dividing [0,1] into n subintervals, and in that context the triangular inequality
holds. It holds in the limit as n approaches infinity, giving the integrals that define the norms. The functions of M form a metric space.
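The norm and the triangular inequality can be spot-checked with riemann sums; the two continuous functions below are arbitrary choices.

```python
import math

# The norm |f| = sqrt(integral of f^2) on continuous functions over
# [0,1], approximated by a midpoint riemann sum, with a spot check of
# the triangular inequality |f+g| <= |f| + |g|.
N = 100000

def l2norm(f):
    h = 1.0 / N
    return math.sqrt(sum(f((k + 0.5) * h) ** 2 for k in range(N)) * h)

f = lambda x: math.sin(7 * x)
g = lambda x: x * x - 0.4

nf, ng = l2norm(f), l2norm(g)
nfg = l2norm(lambda x: f(x) + g(x))
assert nfg <= nf + ng + 1e-9
```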
Complete this metric space to build a banach space S. The completion includes every function that is approached by continuous functions. This includes piecewise continuous functions. Let f[n] = 0
from 0 to ½-1/n, then slope up to 1 at x = ½+1/n, then remain at 1 across the rest of the unit interval. The limit is the discontinuous function that is 0 on [0,½), ½ at ½, and 1 on (½,1].
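A numeric sketch of this example: the L2 distance from the ramp function f[n] to the limiting step function works out to sqrt(1/(6n)), which goes to 0, so the sequence converges in the completion. The riemann-sum resolution below is an arbitrary choice.

```python
import math

# The ramp functions: 0 up to 1/2 - 1/n, a straight line up to 1 at
# 1/2 + 1/n, then 1 across the rest of the unit interval.
def f(n, x):
    lo, hi = 0.5 - 1.0 / n, 0.5 + 1.0 / n
    if x <= lo: return 0.0
    if x >= hi: return 1.0
    return (x - lo) / (hi - lo)

def step(x):
    return 0.0 if x < 0.5 else (0.5 if x == 0.5 else 1.0)

def l2dist(u, v, N=200000):
    h = 1.0 / N
    return math.sqrt(sum((u((k + 0.5) * h) - v((k + 0.5) * h)) ** 2
                         for k in range(N)) * h)

# distance to the limit is sqrt(1/(6n)), shrinking to 0
for n in (4, 16, 64):
    d = l2dist(lambda x: f(n, x), step)
    assert abs(d - math.sqrt(1 / (6 * n))) < 1e-3
```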
S may include functions that are not integrable. In fact the elements of S may not be functions at all. Tweak the above example, so that when n is odd the sloping line segment runs from ½-1/n,0 up to
½,1, and when n is even the segment runs from ½,0 up to ½+1/n,1. The limit function is 0 on [0,½) and 1 on (½,1], but is not defined at x = ½. Even more bizarre examples are possible. But as with any
metric space, the completion consists of cauchy sequences, whether those sequences have a convenient representation or not.
You might wonder about the distance metric in S, where functions, and their integrals, are not well defined. Remember, distance in S ultimately comes from distance in M, which is always well defined.
If f and g are functions in S, |f,g| is the limit of the distances between the functions that approach f and the functions that approach g. In any metric space, this limit exists, hence distance is
well defined in S, and makes S a metric space. In fact S is a complete metric space, since the completion is always complete.
With |f| defined on M, and on S, S becomes a complete normed vector space, or a banach space. Let's turn it into a hilbert space.
Let f.g be the integral of f×g, or of f times the conjugate of g if functions are complex. Since f×g is continuous, this is well defined, and when f = g, the result is the square of the norm. Verify the properties of linearity and symmetry, and ∫ f×g becomes a dot product for M.
Extend this to all of S. Let f[n] and g[n] be cauchy sequences in M, defining elements of S. By ignoring leading terms, we can assume that all the terms in f[n] are within 1 of each other, i.e. ∫ (f[i]-f[j])^2 < 1. This is the property of being cauchy, with ε set to 1. Similarly, assume the terms of g are within 1 of each other. If |f[n]| exceeds |f[1]|+1, then the distance from f[1] to f[n] exceeds
1, which is a contradiction. Tied to their first terms, all the terms of f, and all the terms of g, have norms below some constant w.
For a small ε, smaller than 1, go out in both sequences so that functions beyond f[n] are within ε of each other, and similarly for g. Consider pairs of functions f[i] and f[j], and g[i] and g[j].
The difference between the two dot products is the integral of f[i]g[i] - f[j]g[j]. Can this be bounded by some constant times ε?
For notational convenience let u = f[i] and let u+a = f[j]. Let v = g[i] and let v+b = g[j]. (Remember that u, v, a, and b are continuous functions on [0,1].) The integrand becomes ub + va + ab.
Consider the last term first. The integral of ab, squared, is no larger than the integral of a^2 times the integral of b^2. How do we know? Divide the unit interval into n subintervals and let
riemann step functions approach the integrals. The left side is the sum of a[i]b[i], squared, while the right side is the product of the sum of a[i]^2 times the sum of b[i]^2. The first is bounded by
the second by the cauchy schwarz inequality. The same is true in the limit. Since a is the gap between f[i] and f[j], a.a < ε^2. Similarly, b.b < ε^2. The square of a.b is less than ε^4, hence a.b < ε^2.
Apply cauchy schwarz to the integral of ub, and bound it below the square root of the integral of u^2 times the integral of b^2. This is |u|×|b|, or wε. Similarly, v.a is bounded below wε. Put this
all together and the difference between the dot products in positions i and j is bounded by (2w+1)ε. The sequence of dot products is cauchy, and converges to a real number. This is the dot product of
the two sequences f[n] and g[n] in S.
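Collecting the three estimates above, with |a|, |b| < ε < 1 and |u|, |v| < w:

```latex
\Bigl|\int f_i g_i - f_j g_j\Bigr| = \Bigl|\int ub + va + ab\Bigr|
  \le |u|\,|b| + |v|\,|a| + |a|\,|b|
  < w\varepsilon + w\varepsilon + \varepsilon^2
  \le (2w+1)\varepsilon
```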
Is this well defined? Let another sequence of continuous functions h[n] represent the same element in S as g[n]. In other words, their difference, e[n], converges to 0. Consider the limit of f[n].e[n] as n approaches infinity. Each term is bounded below |f[n]| × |e[n]| (again using the cauchy schwarz inequality), and |e[n]| approaches 0, while |f[n]| is bounded below w. The dot products f[n].g[n] and f[n].h[n] converge to the same real number, and dot product is well defined in S.
The properties of linearity and symmetry are straightforward, so consider the last property, the dot product of a sequence f[n] with itself. Replace the terms with |f[n]|^2. Since |f[n]| approaches
the norm of the entire sequence by definition, and the limit of the squares is the square of the limit, the sequence dotted with itself approaches the norm of the sequence, squared. That completes
the proof.
In summary, the completion of the continuous functions on [0,1] is a banach space, and a hilbert space, using integration and limits to define the norm and the dot product. Complex functions on [0,1]
build a complex hilbert space.
The dot product in finite dimensional real or complex space is a mathematical function using multiplication and addition, and is continuous - but how about a generic Hilbert space S? Is the inner product continuous from S×S into R or C?
The dot product is tied to the norm, and the norm satisfies certain properties, such as the triangle inequality. This can be used to prove Cauchy-Schwarz in an arbitrary Hilbert space. Use the properties of norm and dot product to write the following.
0 ≤ |x-ly|^2 = (x-ly).(x-ly) =
x.x - 2l(x.y) + l^2(y.y)
The inequality becomes equality iff x = ly.
Setting l = 0 or y = 0 gives 0 ≤ x.x, which isn't very interesting, so assume l > 0 and y ≠ 0, and write the following inequality.
2x.y ≤ x.x/l + ly.y
Set l = |x|/|y| and find x.y ≤ |x| × |y|. If x or y is 0 we have equality, and if x = ly we began with an equation that produces equality. This is Cauchy-Schwarz. The dot product is bounded by the product of the norms, with equality iff one vector is a linear multiple of the other. Remember that 0 is technically a linear multiple of x, and sure enough, 0.x = |0| times |x|.
Now for continuity. Fix a vector v and consider the linear map x ↦ v.x from S into R. Concentrate on the unit sphere in S, the vectors in S with norm 1. v.x is bounded by |v|×|x|. Thus x ↦ v.x is a bounded operator, hence continuous into R.
Let T be S cross S with the product topology. This is another Banach space. The dot product now maps T into R. Select x and y from S such that the ordered pair x,y in T has norm 1. Thus |x| and |y| are at most 1 in S. The dot product x.y is bounded by 1 on this set. A bilinear map that is bounded on the unit sphere is continuous, so the dot product is a continuous map from S cross S into the reals.
Two nonzero vectors in a Hilbert space S are orthogonal if their dot product is 0. Two vectors are orthonormal if they are orthogonal and have norm one. Orthogonal vectors can always be scaled to become orthonormal. These agree with the traditional definitions.
Let x and y be orthonormal and consider the distance from x to y. This is the square root of (x-y).(x-y). Expand, and replace x.y with 0, to get x.x+y.y, or 2. Orthonormal vectors are sqrt(2)
distance apart.
A set of vectors (possibly infinite) forms an orthogonal system if every pair of vectors is orthogonal. The vectors in an orthonormal system are orthogonal, with norm 1.
Suppose a linear combination of orthogonal vectors yields 0. Take the dot product with any of these vectors and find that the coefficient on that vector has to be 0. This holds across the board,
hence the vectors in an orthogonal system are linearly independent.
Suppose S is a separable Hilbert space with an uncountable orthogonal system. Convert to an orthonormal system and find uncountably many points that are all sqrt(2) distance from each other. Place an open ball of radius ½ about each point. These are disjoint open sets, hence any dense set has to be uncountable, and S is not separable. A separable Hilbert space can only support a countable orthogonal system.
If S is finite dimensional, use the Gram-Schmidt process to convert any basis into an orthonormal system. In theory this works for a countable basis as well, but then S is not complete. Build a sequence by adding a scaled version of each coordinate in turn, the i^th coordinate scaled by 1/2^i. We'll see below that this sequence is Cauchy, but it cannot converge to any finite linear combination of these independent vectors. These orthogonal vectors do not form a basis for S, but they do form a hyperbasis, as we'll see below.
Assume S is separable, and let b[1] b[2] b[3] etc be a countable orthonormal system inside S. If v is any vector in S, let a[j] = v.b[j]. Let the n^th approximation of v be the partial sum over a[j]b
[j], as j runs from 1 to n. What is the distance from v to its n^th approximation? Answer this by looking at the square of the distance, which is v - the n^th approximation, dotted with itself. Let's
illustrate with a[1] and a[2].
(v-a[1]b[1]-a[2]b[2]) . (v-a[1]b[1]-a[2]b[2]) =
v.v - 2×(v.a[1]b[1]+v.a[2]b[2]) + (a[1]b[1]+a[2]b[2]).(a[1]b[1]+a[2]b[2]) =
v.v - 2×(a[1]^2+a[2]^2) + (a[1]b[1]+a[2]b[2]).(a[1]b[1]+a[2]b[2]) =
v.v - 2×(a[1]^2+a[2]^2) + a[1]^2|b[1]|^2 + a[2]^2|b[2]|^2 =
v.v - 2×(a[1]^2+a[2]^2) + a[1]^2 + a[2]^2 =
v.v - (a[1]^2+a[2]^2)
This value is nonnegative for all n, thus giving Bessel's inequality:
∑ {1,∞} a[i]^2 ≤ v.v
The squares of the coefficients of v are nonnegative numbers, and together they build a monotone series that is bounded by v.v. The terms approach 0, and the series converges absolutely to a real
number between 0 and v.v.
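Bessel's inequality can be illustrated in finite dimensions, where a partial orthonormal system never captures more than the full norm. A small sketch (the system and the vector v are made up for the example):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# A two-element orthonormal system in R^3 and a hypothetical vector v.
r = 1.0 / math.sqrt(2.0)
b1 = [r, r, 0.0]
b2 = [r, -r, 0.0]
v = [2.0, -1.0, 5.0]

# Bessel: the sum of squared coefficients never exceeds v.v.
coeff_sq = dot(v, b1) ** 2 + dot(v, b2) ** 2
assert coeff_sq <= dot(v, v) + 1e-12
# Here it is strictly less, because v has a component outside the span of b1, b2.
```

Equality would hold only if the system spanned the whole space, which is the Parseval case of the maximal systems discussed below.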
The square of the norm of the difference between the i^th approximation and the j^th approximation is a section of the above series from a[i]^2 to a[j]^2. As we move out in the series, these slices approach 0. The approximations define a sequence that is Cauchy, and since S is complete, the approximations approach an element that I will call u.
Remember that dot product is a continuous map from S cross S into R. If u is the limit of a Cauchy sequence in S, then u.b[j] is the limit of the Cauchy sequence dotted with b[j]. The approximations form our Cauchy sequence, and when dotted with b[j], they produce 0 for a while, and then a[j] thereafter. Therefore u.b[j] = a[j]. In other words, u and v produce the same coefficients. Every v generates a series of coefficients according to our orthonormal set b[1] b[2] b[3] etc, and these coefficients determine an element u in S.
If v ≠ u, the coefficients of v-u are all 0. Thus v-u is a new orthogonal vector, and the system is not maximal. If the orthonormal system is maximal, then every vector v is uniquely represented by
its coefficients, such that the sequence of approximations approaches v. A maximal orthonormal system is also called complete, or total.
Apply Zorn's lemma, and S contains a maximal orthonormal system, which acts as a hyperbasis for S. Note the difference in terminology; a basis spans using finite linear combinations, but a hyperbasis spans using infinite sums, i.e. infinitely many basis vectors could participate, as long as the squares of the coefficients sum to a real number. This is required by Bessel's inequality, and if it holds, the approximations are Cauchy, and the sequence converges to something in S. Therefore, the points of S are uniquely represented by square summable sequences, according to the designated hyperbasis.
If S is a finite dimensional R vector space, every orthonormal system is finite. A maximal system in n dimensional space contains n elements. It is a basis for the Hilbert space.
If S is infinite dimensional, a finite orthonormal system will not span, hence a maximal orthonormal system is infinite. If S is separable, each such system is countable, hence the orthonormal basis can be designated b[1] b[2] b[3] etc, out to infinity. The finite dimensional Hilbert space is equivalent to R^n, and the separable infinite dimensional Hilbert space is equivalent to L[2], the square summable sequences. This equivalence is more than a homeomorphism; the norm and dot product are also determined. There is but one Hilbert space for each nonnegative integer, and one infinite, separable Hilbert space, up to isomorphism.
We've demonstrated uniqueness, but what about existence? R^n is a Hilbert space, using the Euclidean norm and dot product. Let's prove L[2] is a separable Hilbert space.
Let S = L[2], the set of square summable sequences. If f is such a sequence, let |f| be the square root of the sum of its squares. This is 0 only when f is 0.
Scale a square summable sequence by c and find another square summable sequence. Furthermore, |cf| = |c|×|f|.
If f and g are two sequences, consider the first n terms of f+g. The triangle inequality is valid in n space. Thus the square root of the sum of the squares of the first n terms of f+g is no larger than the square root of the sum of the squares of the first n terms of f, plus the square root of the sum of the squares of the first n terms of g. This holds for all n, so take limits as n approaches infinity. Given any of these square summable sequences, what is the limit of the norm of the first n terms, as n approaches infinity? Since square root is a continuous function from R into R, applying square root to the partial sums of a convergent series is the same as applying square root to the limit. In other words, the limit of the norms of the partial sums is simply the norm of the entire sequence. Therefore, |f+g| ≤ |f| + |g|. This proves the triangle inequality, and it also proves f+g belongs to S, which was not obvious at the outset. Therefore S is a normed vector space.
To show S is complete, let u[1] u[2] u[3] etc be a Cauchy sequence in S. Strip off leading terms, so that all terms are within 1 of each other. With this in mind, all the norms are bounded, no more than 1 away from the norm of u[1]. Project this sequence onto the j^th coordinate. In other words, look at the j^th term in u[1] u[2] u[3] etc. If u[n] is the n^th row in an infinite matrix, we are moving down the j^th column. The distance between any two elements u[m] and u[n] is at least the difference in their j^th coordinates. Therefore, the projection onto the j^th coordinate defines a Cauchy sequence in R. This converges to a real number that I will call c[j]. This defines a new sequence c[1] c[2] c[3] etc, which is the bottom row of our infinite matrix.
Suppose c is not square summable. In other words, the sum over c[i]^2 is unbounded. For some finite index j, the norm of the first j terms of c, a vector in j dimensional space, exceeds the norms of
all the elements u[n] in our cauchy sequence. Restrict to j dimensional space, the first j columns of our matrix, and within this space the norm of c is the limit of |u[n]|, as n runs to infinity.
This limit is a real number that cannot rise above the largest |u[n]|, yet c lies above all these norms. This is a contradiction, hence c is square summable, and belongs to S.
Subtract c from each u[n]. This does not change the distance between u[m] and u[n], hence the resulting sequence is still Cauchy in S. It converges to 0 iff the original sequence converges to c. We have a Cauchy sequence u[n], and the components all converge to 0, and we want to show that u converges to 0.
A sequence approaches 0 iff its norms approach 0. Move down to u[m], so that u[n] beyond u[m] is never more than ε/3 away from u[m]. Suppose |u[m]| is at least 2ε/3. Its norm moves above ε/3 after finitely many terms. Once again we are in j dimensional space, where the Cauchy sequence converges to 0. Some u[n] is very close to 0, at least in j space, and the norm of u[m]-u[n] is above ε/3. This can only get worse as we go beyond j. This is a contradiction, hence |u[m]| ≤ 2ε/3. Each u[n] beyond this point has norm bounded by 2ε/3 + ε/3, or ε. The norms converge to 0, the sequence converges to 0, the original sequence converges to c, and S is complete. This makes S a Banach space.
Let the dot product of f and g be the sum over f[i]g[i]. After n terms, the truncated dot product, squared, is bounded by the sum of the squares of the first n terms of f, times the sum of the squares of the first n terms of g. This is another application of Cauchy-Schwarz. Taking square roots, the absolute value of the truncated dot product is bounded by the product of the two partial norms of f and g. This applies for all n, hence it applies in the limit. Therefore the dot product, as a sum of products, is absolutely convergent, and is well defined.
Note that all the properties are satisfied, including f.f = |f|^2. This makes S a Hilbert space.
Finally, prove S is separable. Let a sequence of coefficients lie in the dense set D if finitely many are rational, and the rest are 0. This is a countable set. Let h be a base open set centered at
v, with radius ε. Represent v as the square summable series a[j]. Choose n so that the n^th approximation is within ½ε of v. Then move the first n coefficients to nearby rational values. These
rational numbers can be arbitrarily close to the real coefficients of v. The result is a point in D that is within ε of v. The countable set D is dense, and S is separable.
As a Hilbert space, L[2] has a countable dimension, corresponding to its complete orthogonal system. However, as a vector space, the dimension of L[2] is uncountable. I hinted at this earlier.
Suppose L[2] has a countable basis, as an R vector space. Write the basis as an infinite matrix, where the i^th row holds the square summable sequence that is the i^th basis element. Use lower Gaussian elimination to make the matrix upper triangular. Then use upper Gaussian elimination to make the matrix diagonal. (If it doesn't come out diagonal then the matrix does not span, and is not a basis.) The span of this basis is now the direct sum of the coordinates, and the sequence 1, 1/2, 1/4, 1/8, 1/16, …, which is square summable, is not accessible.
Let S be a separable Hilbert space with a complete orthonormal basis b[1] b[2] b[3] etc. Let f be a linear function from S into R. f is continuous iff f = u.S for some vector u. Every continuous function is really a dot product.
Let c[i] = f(b[i]). Let v[n] be the sum of c[i]b[i], as i runs from 1 to n. Use the linear properties of f to show f(v[n]) = the sum of c[i]^2, as i runs from 1 to n.
Remember that continuous is the same as bounded. Since f is bounded, let k be a bound on f. Now |f(v[n])| is no larger than k×|v[n]|. Replace |v[n]| with the square root of the sum of c[i]^2. At the
same time, f(v[n]) is the sum of c[i]^2. The norm of v[n] is the square root of f(v[n]).
f(v[n]) ≤ k × sqrt(f(v[n]))
sqrt(f(v[n])) ≤ k
f(v[n]) ≤ k^2
This applies in the limit, hence c[i] forms a square summable sequence bounded by k^2. Apply this to the hyperbasis and find a point in S. Let u be the infinite sum of c[i]b[i].
Let a[i]b[i] be the representation of an arbitrary element x in S. Since f is continuous, f(x) becomes the limit of f applied to the partial sums. This becomes the sum over a[i]c[i]. The same formula
appears if you evaluate the dot product of x and u. Therefore f(S) = S.u.
To find the bound k, consider the points on the unit sphere in S. Let x have norm 1, and apply Cauchy-Schwarz. The square of x.u is bounded by the sum of squares of x times the sum of squares of u. Thus |f(x)| is bounded by |u|. Now f(u) = u.u = |u|^2, hence the bound on f is precisely |u|.
Suppose S.u, a continuous linear map, equals S.v for some other vector v. Apply u-v to S and get 0. In other words, u-v dot all of S yields 0. However, (u-v).(u-v) is nonzero, hence each linear
function from S into R is S.u for a unique u. The bound on such a function is |u|, and is realized when u is dotted with itself.
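This Riesz-style representation has a transparent finite dimensional analogue: recover u from the images of the basis vectors, and f becomes the dot product with u. A hedged sketch (the functional f is hypothetical):

```python
import math

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

# A hypothetical continuous linear functional on R^3.
def f(x):
    return 2.0 * x[0] - x[1] + 0.5 * x[2]

# Recover u from the images of the orthonormal basis: u[i] = f(e[i]).
e = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
u = [f(ei) for ei in e]

# f is the dot product with u.
x = [0.3, -1.2, 4.0]
assert abs(f(x) - dot(x, u)) < 1e-12

# The operator bound is |u|, attained at x = u/|u|.
nu = math.sqrt(dot(u, u))
assert abs(f([ui / nu for ui in u]) - nu) < 1e-12
```

In the infinite dimensional case the same recipe works, with the sum over c[i]b[i] converging because the coefficients are square summable, as proved above.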
If b is an orthonormal system in a possibly inseparable Hilbert space S, then every vector v has countably many nonzero coefficients with respect to b, even though b might be uncountable.
Suppose v has uncountably many nonzero coefficients in its representation. Let ℵ[1] be the first uncountable ordinal. Assign an ordinal below ℵ[1] to each of these nonzero coefficients, so they
appear in order.
Consider a finite sum y = a[1]b[1] + a[2]b[2] + a[3]b[3], the start of our series. Let z = v - y. Now z.b[1] = a[1]-a[1] = 0. By linearity, z.y = 0. Expand (y+z).(y+z) and get |y|^2 + |z|^2. This
equals |v|^2. Thus a[1]^2 + a[2]^2 + a[3]^2 ≤ |v|^2. The same reasoning holds for any finite collection of terms drawn from v.
Let d be a countable limit ordinal, and assume each set of coefficients bounded strictly below d is square summable with a cap of |v|^2. Arrange all the coefficients below d in a possibly different
order, since they are countable, so they simply proceed 1 2 3 … to infinity. Each partial sum of squares is bounded by |v|^2, and so is the limit. This is an absolutely convergent series, and every
subseries is bounded by the same limit. Each set of coefficients below d is square summable with a cap of |v|^2, including all the coefficients below d. Square summable extends to the limit ordinal.
Let y be a countable linear combination of a[i]b[i], the start of our uncountable series, up to but not including an ordinal d. Assume the coefficients are square summable, thus y is well defined as
a point in S. Using the continuity of the dot product, y.b[i] is a series 0+0+0+…+a[i]+0+0+…, giving a[i]. Set z = v - y as before. Thus z.b[i] = 0, z.y is an infinite sum of zeros, and z.y = 0.
Expand (y+z).(y+z), and |y|^2 ≤ |v|^2. Put a[d]b[d] at the start of the sum, and the same relationship holds. Square summable extends to the successor ordinal.
By transfinite induction, every countable partial sum of our uncountable series of coefficients is square summable with a cap of |v|^2.
Let d be an ordinal below ℵ[1]. Let e[d] be the sum of the squares of the coefficients below d. This sum is bounded by |v|^2. Since all coefficients are nonzero, each e[d] is strictly larger than all the values of e that came before. Each e[d] jumps over a new rational number. That would require uncountably many distinct rational numbers between 0 and |v|^2, yet the rationals are countable, and that is a contradiction. Therefore v only uses a countable subset of b.
Given v, restrict attention to those hyperbasis elements with v.b[i] nonzero. As shown above, the coefficients a[i] form a square summable sequence, the partial sums over a[i]b[i] are Cauchy, the limit u exists in the complete metric space S, u and v produce the same coefficients, and v-u is orthogonal to each b[j]. If b is a total orthonormal system, then v-u = 0, and v is faithfully represented by a countable set of nonzero coefficients applied to the hyperbasis b.
Now consider a continuous linear operator f from S into R, bounded by k. Let c[i] = f(b[i]), and since b comprises unit vectors, each c[i] is bounded by k. Suppose uncountably many hyperbasis
elements b[i] have nonzero images in R. Associate these elements with the ordinals below ℵ[1]. I'm going to rehash some material from the previous section. Let v be the finite sum over c[i]b[i] as i
runs from 1 to n. Now f(v) is the sum of c[i]^2 as i runs from 1 to n. |f(v)| is no larger than k×|v|. Replace |v| with the square root of the sum of c[i]^2. At the same time, f(v) is the sum of c[i]
^2. The norm of v is the square root of f(v).
f(v) ≤ k × sqrt(f(v))
sqrt(f(v)) ≤ k
f(v) ≤ k^2
∑ c[i]^2 ≤ k^2
Let d be a countable limit ordinal, and assume each set of hyperbasis elements bounded strictly below d has square summable images with a cap of k^2. Arrange all the elements below d in a possibly
different order, since they are countable, so they simply proceed 1 2 3 … to infinity. Map these basis elements over to R and square the images. Each partial sum of squares is bounded by k^2, and so
is the limit. This is an absolutely convergent series, and every subseries is bounded by the same limit. Each set of hyperbasis elements below d has square summable images with a cap of k^2,
including all the elements below d. Square summable extends to the limit ordinal.
Consider a countable set b[i], the start of our uncountable series, up to but not including an ordinal d. Assume the images are square summable, thus y, the sum of c[i]b[i], is well defined as a
point in S. Think of y as the limit of a countable sequence v[n], using the finite linear combinations v as above. Remember that each f(v[n]) is bounded by k^2. By the continuity of f, f(y) ≤ k^2.
The sum of c[i]^2 is bounded by k^2. Put b[d] at the start of the series and rebuild y, and the same relationship holds. Square summable extends to the successor ordinal.
By transfinite induction, every countable partial sum of our uncountable series in R is square summable with a cap of k^2.
Let d be an ordinal below ℵ[1]. Let e[d] be the sum of the squares of the images below d. This sum is bounded by k^2. Since all images are nonzero, each e[d] is strictly larger than all the values of e that came before. Each e[d] jumps over a new rational number. That would require uncountably many distinct rational numbers between 0 and k^2, yet the rationals are countable, and that is a contradiction. Therefore f is nonzero on a countable subset of b, and is 0 elsewhere.
Concentrate on the countable subsequence b[i] where c[i] is nonzero. Let u be the infinite (or perhaps finite) sum over c[i]b[i]. The previous section applies. A linear bounded/continuous operator is
equivalent to S.u, and the bound is |u|, realized by f(u).
If the cardinality of b is g, then we need at least g elements in any dense set. This is because the elements of b are all sqrt(2) units apart, and can be enclosed in disjoint open balls of radius ½.
Conversely, we can produce a dense set with g elements by taking all finite linear combinations of b with rational coefficients. Each v is a countable linear combination drawn from b, and the finite
linear combinations approach v. Find a finite partial sum within ½ε of v, then adjust the coefficients to nearby rational numbers, so that the adjustment doesn't stray by more than ½ε. The result is
inside the open ball about v with radius ε.
Beyond R^n, the dimension of S, as a Hilbert space, is equal to the size of the smallest dense set in S. The latter is a function of the topology. Thus the dimension of a Hilbert space is well defined. If you want to change the dimension, you have to change the topology of S, or find a new set altogether.
Two Hilbert spaces are isomorphic iff they have the same dimension. The Hilbert space of dimension g has an orthonormal basis of size g, and all countable linear combinations thereof such that the coefficients are square summable. To prove this construction is in fact a Hilbert space, review the earlier theorem for L[2]. Not much has changed, as long as you remember that the union of countable sets remains countable. The sum of two elements in S still draws on a countable subset of b. Even the infinite countable union of countable sets is countable. This is used to prove S is complete. The Cauchy sequence is countable, and each element in this sequence uses a countable subset of b, hence the entire Cauchy sequence lives in a separable Hilbert subspace of S, and approaches its limit.
In summary, there is one Hilbert space, up to isomorphism, for each nonzero cardinal, and the space is completely characterized as the countable linear combinations of basis elements with square summable coefficients.
Header file for implicit nonlinear solver with the option of a pseudotransient (see Numerical Utilities within Cantera and class solveProb). More...
Go to the source code of this file.
class solveProb
Method to solve a pseudo steady state of a nonlinear problem. More...
const int SOLVEPROB_INITIALIZE = 1
Solution Methods. More...
const int SOLVEPROB_RESIDUAL = 2
const int SOLVEPROB_JACOBIAN = 3
const int SOLVEPROB_TRANSIENT = 4
Header file for implicit nonlinear solver with the option of a pseudotransient (see Numerical Utilities within Cantera and class solveProb).
Definition in file solveProb.h.
const int SOLVEPROB_INITIALIZE = 1
Solution Methods.
Flag to specify the solution method
1: SOLVEPROB_INITIALIZE = This assumes that the initial guess supplied to the routine is far from the correct one. Substantial work plus transient time-stepping is to be expected to find a solution.
2: SOLVEPROB_RESIDUAL = Need to solve the surface problem in order to calculate the surface fluxes of gas-phase species. (Can expect a moderate change in the solution vector -> try to solve the system by direct methods with no damping first -> then, try time-stepping if the first method fails.) A "time_scale" supplied here is used in the algorithm to determine when to shut off time-stepping.
3: SOLVEPROB_JACOBIAN = Calculation of the surface problem is due to the need for a numerical jacobian for the gas-problem. The solution is expected to be very close to the initial guess, and accuracy is needed.
4: SOLVEPROB_TRANSIENT = The transient calculation is performed here for an amount of time specified by "time_scale". It is not guaranteed to be time-accurate - just stable and fairly fast. The solution after del_t time is returned, whether it's converged to a steady state or not.
Definition at line 52 of file solveProb.h.
Referenced by solveProb::print_header(), and solveProb::solve().
Groundwater. Notes on geostatistics
Monica Riva, Alberto Guadagnini, Politecnico di Milano, Italy
Key reference: de Marsily, G. (1986), Quantitative Hydrogeology. Academic Press, New York, 440 pp.
Modelling flow and transport in heterogeneous media: motivation and general idea
Understanding the role of heterogeneity
Jan 2000 editorial "It's the Heterogeneity!
(Wood, W.W., Its the Heterogeneity!, Editorial,
Ground Water, 38(1), 1, 2000) heterogeneity of
chemical, biological, and flow conditions should
be a major concern in any remediation
scenario. Many in the groundwater community
either failed to "get" the message or were forced
by political considerations to provide rapid,
untested, site-specific active remediation
technology. "It's the heterogeneity," and it is
the Editor's guess that the natural system is so
complex that it will be many years before one can
effectively deal with heterogeneity on societally
important scales. Panel of experts
(DOE/RL-97-49, April 1997) As flow and transport
are poorly understood, previous and ongoing
computer modelling efforts are inadequate and
based on unrealistic and sometimes optimistic
assumptions, which render their output unreliable.
Flow and Transport in Multiscale Fields
Field- and laboratory-derived conductivities and dispersivities appear to vary continuously with the scale of observation (conductivity support, plume travel distance). Anomalous transport.
Recent theories attempt to link such scale-dependence to the multiscale structure of Y = ln K. Predict observed effect of domain size on apparent variance and integral scale of Y. Predict observed supra-linear growth rate of dispersivity with mean travel distance (time).
Major challenge: develop more powerful/general stochastic theories/models for multiscale random media, and back them with lab/field observation.
Neuman S.P., On advective transport in fractal permeability and velocity fields, Water Resour. Res., 31(6), 1455-1460, 1995.
Shed some light: conceptual difficulty. Data deduced by means of deterministic Fickian models from laboratory and field tracer tests in a variety of porous and fractured media, under varied flow and transport regimes.
Linear regression: apparent longitudinal dispersivity αL ≈ 0.017 s^1.5 (supra-linear growth).
Natural Variability. Geostatistics revisited
• Introduction. Few field findings about spatial variability
• Regionalized variables
• Interpolation methods
• Simulation methods
AVRA VALLEY Clifton and Neuman, 1982 Clifton,
P.M., and S.P. Neuman, Effects of Kriging and
Inverse Modeling on Conditional Simulation of the
Avra Valley Aquifer in southern Arizona, Water
Resour. Res., 18(4), 1215-1234, 1982. Regional
Columbus Air Force Adams and Gelhar,
1992 Aquifer Scale
Mt. Simon aquifer Bakr, 1976 Local Scale
• Summary Variability is present at all scales
• But, what happens if we ignore it? We will see in
this class that this would lead to interpretation
problems in both groundwater flow and solute
transport phenomena
• Examples in transport - Scale effects in
• - New processes arising
• Heterogeneous parameters ALL (T, K, , S, v
(q), BC, ...)
• Most relevant one T (2D), or K (3D), as they
have been shown to vary orders of magnitude in an
apparently homogeneous aquifer
Variability in T and/or K
Summary of data from many different places in the
world. Careful though! Data are not always
obtained with rigorous procedures, and moreover,
as we will see throughout the course, data depend
on interpretation method and scale of
regularization Data given in terms of mean and
variance (dispersion around the mean value)
Variability in T and/or K. Almost always σlnT (or σlnK) < 2 (and in most cases < 1). This can be questioned, but OK by now. Correlation scales (very important concept later!!)
• But, what is the correct treatment for natural variability?
• First of all, what do we know?
• - real data at (few) selected points
• - Statistical parameters
• - A huge uncertainty related to the lack of
data in most part of the aquifer. If parameter
continuous (of course they are), then the number
of locations without data is infinity
• Note: The value of K at any point DOES EXIST. The problem is we do not know it (we could if we measured it, but we could never be exhaustive).
• Stochastic approach: K at any given point is RANDOM, coming from a predefined (maybe known, maybe not) pdf, and spatially correlated -> Regionalized Variables
• T(x,ω) is a Spatial Random Function iff:
• If ω = ω0 then T(x,ω0) is a spatial function (continuity? differentiability?)
• If x = x0 then T(x0) (actually T(x0,ω)) is a random function
• Thus, as a random function, T(x0) has a univariate distribution (log-normal according to Law, 1944; Freeze, 1975)
Hoeksema and Kitanidis, 1985: log-T normal, log-K normal, in both consolidated and unconsolidated media
Now we look at T(x), so we are interested in the multivariate distribution of T(x1), T(x2), ..., T(xn). Most frequent hypothesis: Y = (Y(x1), Y(x2), ..., Y(xn)) = (ln T(x1), ln T(x2), ..., ln T(xn)) is multinormal (characterized by its mean vector and covariance matrix). But most important: NO INDEPENDENCE.
What if independent? Then we are in classical statistics. But here we are not, so we need some way to characterize the dependency of a variable at some point with the SAME variable at a DIFFERENT point. This is the concept of the SEMIVARIOGRAM (or VARIOGRAM).
Classification of SRF
• Second order stationary
• E[Z(x)] = const
• C(x, y) is not a function of location (only of separation distance, h)
• Particular case: isotropic RSF, C(h) = C(|h|)
• Anisotropic covariance: different correlation scales along different directions
• Most important property: if multinormal distribution, first and second order moments are enough to fully characterize the SRF multivariate distribution
Relaxing the stationary assumption
1. The assumption of second-order stationarity
with finite variance, C(0), might not be satisfied
(The experimental variance tends to increase
with domain size)
2. Less stringent assumption: INTRINSIC HYPOTHESIS
The variance of the first-order increments is
finite AND these increments are themselves
second-order stationary. Very simple example
hydraulic heads ARE non intrinsic SRF
E[Y(x + h) - Y(x)] = m(h); var[Y(x + h) - Y(x)] is independent of x, a function of h only.
Usually m(h) = 0; if not, just define a new function, Y(x) - m(x), which satisfies this.
Definition of variogram, γ(h):
E[Y(x + h) - Y(x)] = 0
γ(h) = (1/2) var[Y(x + h) - Y(x)] = (1/2) E[(Y(x + h) - Y(x))^2]
Variogram v. Covariance
1. The variogram is half the mean quadratic increment of Y between two points separated by h.
2. Compare the INTRINSIC HYPOTHESIS with second-order stationarity:
E[Y(x)] = m = constant
γ(h) = (1/2) E[(Y(x + h) - Y(x))^2]
= (1/2) (E[Y(x + h)^2] + E[Y(x)^2] - 2 E[Y(x + h) Y(x)])
= (1/2) ((C(0) + m^2) + (C(0) + m^2) - 2 (C(h) + m^2))
= C(0) - C(h)
The variogram
The definition of the Semi-Variogram is usually given by the following probabilistic formula:
γ(h) = (1/2) E[(Z(x + h) - Z(x))^2]
When dealing with real data the semi-variogram is estimated by the Experimental Semi-Variogram. For a given separation vector, h, there is a set of observation pairs that are approximately separated by this distance. Let the number of pairs in this set be N(h). The experimental semi-variogram is given by
γ(h) = (1 / (2 N(h))) Σ over the N(h) pairs of (Z(x + h) - Z(x))^2
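The experimental semivariogram translates directly into code. A minimal sketch for data on a regular 1-D grid with unit spacing (the sample values are invented for illustration):

```python
def experimental_semivariogram(z, lags):
    """z: values sampled on a regular 1-D grid (unit spacing).
    Returns {h: gamma_hat(h)} using all pairs separated by lag h."""
    gamma = {}
    for h in lags:
        pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
        gamma[h] = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    return gamma

z = [1.0, 2.0, 4.0, 3.0, 5.0, 4.0]
g = experimental_semivariogram(z, [1, 2])
# lag 1: squared differences 1, 4, 1, 4, 1 sum to 11, over 2 * 5 pairs
assert abs(g[1] - 1.1) < 1e-12
```

Real data on irregular networks need a distance tolerance when grouping pairs into lag classes; the regular grid here sidesteps that.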
Some comments on the variogram
If Z(x) and Z(x + h) are totally independent, then γ(h) = C(0) = σ^2.
If Z(x) and Z(x + h) are totally dependent, then γ(h) = 0. One particular case is when x = x + h; therefore, by definition, γ(0) = 0.
In the stationary case, γ(h) = C(0) - C(h).
Variogram Models
• DEFINITIONS
• Nugget
• Sill
• Range
• Integral distance or correlation scale
• Models
• Pure Nugget
• Spherical
• Exponential
• Gaussian
• Power
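The standard models listed above can be written down in a few lines. A sketch using the common "practical range" convention (the factor 3 in the exponential and Gaussian models is that convention, not the only choice):

```python
import math

# Common variogram models with a nugget, a partial sill, and a range a.

def spherical(h, nugget, sill, a):
    if h == 0:
        return 0.0
    if h >= a:
        return nugget + sill
    return nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, nugget, sill, a):
    return 0.0 if h == 0 else nugget + sill * (1.0 - math.exp(-3.0 * h / a))

def gaussian(h, nugget, sill, a):
    return 0.0 if h == 0 else nugget + sill * (1.0 - math.exp(-3.0 * (h / a) ** 2))

# The spherical model reaches its sill exactly at the range.
assert abs(spherical(50.0, 0.1, 0.9, 50.0) - 1.0) < 1e-12
# All models vanish at h = 0 (no-nugget Gaussian case shown).
assert gaussian(0.0, 0.0, 1.0, 50.0) == 0.0
```

The nugget is modeled here as a jump for h > 0, matching the definition that γ(0) = 0 by construction.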
• Correlation scales: larger in T than in K; larger in horizontal than in vertical; a fraction of the domain of interest
Additional comments
• Second-order stationary:
• E[Z(x)] = constant
• γ(h) is not a function of location
• Particular case: isotropic SRF, γ(h) = γ(|h|)
• Anisotropic variograms: two types of anisotropy,
depending on correlation scale or sill value
• Important property: γ(h) = σ² − C(h)
• Most important property: if the distribution is multinormal,
first- and second-order moments are
enough to fully characterize the SRF multivariate distribution
Estimation vs. Simulation
• Problem: few data available; maybe we know the mean,
variance, and variogram
• Alternatives:
• (1) Estimation (interpolation) problems: KRIGING
• Kriging = BLUE (Best Linear Unbiased Estimator)
• Extremely smooth
• Many possible krigings. Alternative: cokriging
The kriging equations - 1
We want to predict the value, Z(x0), at an
unsampled location, x0, using a weighted average
of the observed values at N neighboring
locations, Z(x1), Z(x2), ..., Z(xN). Let
Z*(x0) represent the predicted value; the weighted-average
estimator can be written as
Z*(x0) = Σᵢ λᵢ Z(xᵢ), i = 1, ..., N
The associated estimation error is Z*(x0) − Z(x0).
In general, we do not know the (constant) mean,
m, in the intrinsic hypothesis. We impose the
additional condition of equivalence between the
mathematical expectations of Z(x0) and Z*(x0).
The kriging equations - 2
The mathematical expectation of the process Z is unknown.
This condition allows obtaining an unbiased
estimator: Σᵢ λᵢ = 1.
The kriging equations - 3
We wish to determine the set of weights. IMPOSE
the condition that the variance of the estimation error be minimal.
The kriging equations - 4
We then use the definition of the variogram,
which I will substitute into the expression for the estimation variance.
The kriging equations - 5
By substitution
Noting that
We finally obtain
The kriging equations - 6
This is a constrained optimization problem. To
solve it we use the method of Lagrange
multipliers from the calculus of variations. The
Lagrangian objective function is
To minimize this we must take the partial
derivative of the Lagrangian with respect to each
of the weights and with respect to the Lagrange
multiplier, and set the resulting expressions
equal to zero, yielding a system of linear equations.
The kriging equations - 7
Minimize this
and get (N+1) linear equations with (N+1) unknowns.
The kriging equations - 8
The complete system can be written as A λ = b
The kriging equations - 9
We finally get the Variance of the Estimation
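The (N+1) x (N+1) system on slides 6 through 9 can be sketched numerically. This is an illustrative 1-D ordinary-kriging solver; the spherical-model parameters, data values, and locations are made up for the example, not taken from the slides:

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rng=10.0, nugget=0.0):
    """Spherical model, one of the variogram models listed earlier."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, nugget + sill, g)

def ordinary_kriging(xs, zs, x0, variogram):
    """Build and solve the (N+1) x (N+1) system A lam = b for the N weights
    plus the Lagrange multiplier; return the estimate and its variance."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(np.abs(np.subtract.outer(xs, xs)))
    A[n, n] = 0.0                       # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = variogram(np.abs(xs - x0))
    lam = np.linalg.solve(A, b)
    z0 = float(lam[:n] @ zs)            # weighted average of the data
    var = float(lam @ b)                # sum(lam_i * gamma_i0) + mu
    return z0, var

# Three observations on a 1-D transect; estimate at x0 = 4
z0, var = ordinary_kriging([0.0, 3.0, 7.0], [1.2, 2.0, 1.5], 4.0,
                           spherical_variogram)
```

The last row of A enforces the unbiasedness condition (the weights sum to 1), and the returned variance is the estimation variance from slide 9.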
Estimation vs. Simulation (ii)
• (2) Simulations: try to reproduce the look of
the heterogeneous variable
• Important when extreme values are important
• Many (actually infinite) solutions, all of them
equally likely (and each with probability 0 of being
the true one)
• For each potential application we are interested
in one or the other
Estimation. 1–5
AVRA VALLEY. Regional Scale - Clifton, P.M., and
S.P. Neuman, Effects of Kriging and Inverse
Modeling on Conditional Simulation of the Avra
Valley Aquifer in southern Arizona, Water Resour.
Res., 18(4), 1215-1234, 1982.
Monte Carlo approach
Statistical CONDITIONAL moments, first and second order
2000 simulations
• Evaluation of key statistics of medium parameters (K, porosity, ...)
• Synthetic generation of an ensemble of equally likely fields
• Solution of flow/transport problems on each one of these
• Ensemble statistics
Pros: simple to understand; applicable to a wide
range of linear and nonlinear problems; high
heterogeneities; conditioning
Cons: heavy calculations; fine computational grids;
reliable convergence criteria (?)
Problems: reliable assessment of convergence
(Ballio and Guadagnini, 2004)
[Figure: hydraulic head variance vs. number of Monte Carlo simulations]
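A toy version of this workflow can make the convergence issue concrete. The model and statistics below are purely illustrative stand-ins, not the Ballio and Guadagnini study:

```python
import statistics
from random import Random

# Toy Monte Carlo loop mirroring the workflow above: draw an ensemble of
# equally likely parameter values, evaluate a (trivial, made-up) model on
# each, and track a running ensemble statistic to judge convergence.
rng = Random(7)

def model(log_k):
    # placeholder "hydraulic head" response; a real study would solve a
    # flow/transport problem on a generated K field here
    return -log_k

heads = [model(rng.gauss(0.0, 1.0)) for _ in range(2000)]
var_500 = statistics.variance(heads[:500])
var_full = statistics.variance(heads)
# a simple (not rigorous) convergence check: the running estimate should
# move little as the number of simulations grows
drift = abs(var_full - var_500) / var_full
```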
Calculate high-resolution isotope mass distribution and density function
[MD, Info, DF] = isotopicdist(SeqAA)
[MD, Info, DF] = isotopicdist(Compound)
[MD, Info, DF] = isotopicdist(Formula)
isotopicdist(..., 'NTerminal', NTerminalValue, ...)
isotopicdist(..., 'CTerminal', CTerminalValue, ...)
isotopicdist(..., 'Resolution', ResolutionValue, ...)
isotopicdist(..., 'FFTResolution', FFTResolutionValue, ...)
isotopicdist(..., 'FFTRange', FFTRangeValue, ...)
isotopicdist(..., 'FFTLocation', FFTLocationValue, ...)
isotopicdist(..., 'NoiseThreshold', NoiseThresholdValue, ...)
isotopicdist(..., 'ShowPlot', ShowPlotValue, ...)
[MD, Info, DF] = isotopicdist(SeqAA) analyzes a peptide sequence and returns a matrix containing the expected mass distribution; a structure containing the monoisotopic mass, average mass, most
abundant mass, nominal mass, and empirical formula; and a matrix containing the expected density function.
[MD, Info, DF] = isotopicdist(Compound) analyzes a compound specified by a numeric vector or matrix.
[MD, Info, DF] = isotopicdist(Formula) analyzes a compound specified by an empirical chemical formula represented by the structure Formula. The field names in Formula must be valid element symbols
and are case sensitive. The respective values in Formula are the number of atoms for each element. Formula can also be an array of structures that specifies multiple formulas. The field names can be
in any order within a structure. However, if there are multiple structures, the order must be the same in each.
isotopicdist(..., 'PropertyName', PropertyValue, ...) calls isotopicdist with optional properties that use property name/property value pairs. You can specify one or more properties in any order.
Enclose each PropertyName in single quotation marks. Each PropertyName is case insensitive. These property name/property value pairs are as follows:
isotopicdist(..., 'NTerminal', NTerminalValue, ...) modifies the N-terminal of the peptide.
isotopicdist(..., 'CTerminal', CTerminalValue, ...) modifies the C-terminal of the peptide.
isotopicdist(..., 'Resolution', ResolutionValue, ...) specifies the approximate resolution of the instrument, given as the Gaussian width (in daltons) at full width at half height (FWHH).
isotopicdist(..., 'FFTResolution', FFTResolutionValue, ...) specifies the number of data points per dalton, to compute the fast Fourier transform (FFT) algorithm.
isotopicdist(..., 'FFTRange', FFTRangeValue, ...) specifies the absolute range (window size) in daltons for the FFT algorithm and output density function.
isotopicdist(..., 'FFTLocation', FFTLocationValue, ...) specifies the location of the FFT range (window) defined by FFTRangeValue. It specifies this location by setting the location of the lower
limit of the range, relative to the location of the monoisotopic peak, which is computed by isotopicdist.
isotopicdist(..., 'NoiseThreshold', NoiseThresholdValue, ...) removes points in the mass distribution that are smaller than 1/NoiseThresholdValue times the most abundant mass.
isotopicdist(..., 'ShowPlot', ShowPlotValue, ...) controls the display of a plot of the mass distribution.
Input Arguments
SeqAA Peptide sequence specified by either a:
• Character vector or string of single-letter codes
• Cell array of character vectors or string vector that specifies multiple peptide sequences
You can use the getgenpept and genpeptread functions to retrieve peptide sequences from the GenPept database or a GenPept-formatted file. You can then use the cleave function to
perform an in silico digestion on a peptide sequence. The cleave function creates a cell array of character vectors representing peptide fragments, which you can submit to the
isotopicdist function.
Compound Compound specified by either a:
• Numeric vector of form [C H N O S], where C, H, N, O, and S are nonnegative numbers that represent the number of atoms of carbon, hydrogen, nitrogen, oxygen, and sulfur
respectively in a compound.
• M-by-5 numeric matrix that specifies multiple compounds, with each row corresponding to a compound and each column corresponding to an atom.
Formula Chemical formula specified by either a:
• Structure whose field names are valid element symbols and case sensitive. Their respective values are the number of atoms for each element.
• Array of structures that specifies multiple formulas.
If Formula is a single structure, the order of the fields does not matter. If Formula is an array of structures, then the order of the fields must be the same in each structure.
NTerminalValue Modification for the N-terminal of the peptide, specified by either:
• One of 'none', 'amine' (default), 'formyl', or 'acetyl'
• Custom modification specified by an empirical formula, represented by a structure. The structure must have field names that are valid element symbols and case sensitive. Their
respective values are the number of atoms for each element.
CTerminalValue Modification for the C-terminal of the peptide, specified by either:
• One of 'none', 'freeacid' (default), or 'amide'
• Custom modification specified by an empirical formula, represented by a structure. The structure must have field names that are valid element symbols and case sensitive. Their
respective values are the number of atoms for each element.
ResolutionValue Value in daltons specifying the approximate resolution of the instrument, given as the Gaussian width at full width half height (FWHH).
Default: 1/8 Da
FFTResolutionValue Value specifying the number of data points per dalton, used to compute the FFT algorithm.
Default: 1000
FFTRangeValue Value specifying the absolute range (window size) in daltons for the FFT algorithm and output density function. By default, this value is automatically estimated based on the
weight of the molecule. The actual FFT range used internally by isotopicdist is further increased such that FFTRangeValue * FFTResolutionValue is a power of two.
Increase the FFTRangeValue if the signal represented by the DF output appears to be truncated.
Ultrahigh resolution allows you to resolve micropeaks that have the same nominal mass, but slightly different exact masses. To achieve ultrahigh resolution, increase
FFTResolutionValue and reduce ResolutionValue, but ensure that FFTRangeValue * FFTResolutionValue is within the available memory.
FFTLocationValue Fraction that specifies the location of the FFT range (window) defined by FFTRangeValue. It specifies this location by setting the location of the lower limit of the FFT range,
relative to the location of the monoisotopic peak, which is computed by isotopicdist. The location of the lower limit of the FFT range is set to the mass of the monoisotopic peak minus
(FFTLocationValue * FFTRangeValue).
You may need to shift the FFT range to the left in rare cases where a compound contains an element, such as Iron or Argon, whose most abundant isotope is not the lightest one.
Default: 1/16
NoiseThresholdValue Value that removes points in the mass distribution that are smaller than 1/NoiseThresholdValue times the most abundant mass.
Default: 1e6
ShowPlotValue Controls the display of a plot of the isotopic mass distribution. Choices are true, false, or I, which is an integer specifying a compound. If set to true, the first compound is
plotted. Default is:
• false — When you specify return values.
• true — When you do not specify return values.
Output Arguments
MD Mass distribution represented by a two-column matrix in which each row corresponds to an isotope. The first column lists the isotopic mass, and the second column lists the probability for that
isotope.
Info Structure containing mass information for the peptide sequence or compound in the following fields:
• NominalMass
• MonoisotopicMass
• ObservedAverageMass — Estimated from the DF signal output, using instrument resolution specified by the 'Resolution' property.
• CalculatedAverageMass — Calculated directly from the input formula, assuming perfect instrument resolution.
• MostAbundantMass
• Formula — Structure containing the number of atoms of each element.
DF Density function represented by a two-column matrix in which each row corresponds to an m/z value. The first column lists the mass, and the second column lists the relative intensity of the
signal at that mass.
Calculate and display the isotopic mass distribution of the peptide sequence MATLAP with an Acetyl N-terminal and an Amide C-terminal:
MD = isotopicdist('MATLAP','nterm','Acetyl','cterm','Amide', ...
                  'showplot', true)
MD =
643.3363 0.6676
644.3388 0.2306
645.3378 0.0797
646.3386 0.0181
647.3396 0.0033
648.3409 0.0005
649.3423 0.0001
650.3439 0.0000
651.3455 0.0000
Calculate and display the isotopic mass distribution of glutamine (C5H10N2O3):
MD = isotopicdist([5 10 2 3 0],'showplot',true)
MD =
146.0691 0.9328
147.0715 0.0595
148.0733 0.0074
149.0755 0.0004
150.0774 0.0000
Display the isotopic mass distribution of the "averagine" model, whose molecular formula represents the statistical occurrences of amino acids from all known proteins:
isotopicdist([4.9384 7.7583 1.3577 1.4773 0.0417])
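For readers outside MATLAB, the underlying calculation can be sketched in Python. This is not the isotopicdist implementation (which uses an FFT-based method per the references below); it is a direct convolution of per-atom isotope distributions, with isotope masses and abundances taken from standard tables:

```python
from collections import defaultdict

# Isotope masses (Da) and natural abundances from standard tables.
ISOTOPES = {
    "C": [(12.0, 0.9893), (13.003355, 0.0107)],
    "H": [(1.007825, 0.999885), (2.014102, 0.000115)],
    "N": [(14.003074, 0.99636), (15.000109, 0.00364)],
    "O": [(15.994915, 0.99757), (16.999132, 0.00038), (17.999160, 0.00205)],
    "S": [(31.972071, 0.9499), (32.971459, 0.0075),
          (33.967867, 0.0425), (35.967081, 0.0001)],
}

def isotopicdist_sketch(formula, prune=1e-9):
    """Convolve per-atom isotope distributions one atom at a time,
    then bin the resulting peaks by nominal (integer) mass."""
    dist = {0.0: 1.0}  # exact mass -> probability
    for element, count in formula.items():
        for _ in range(count):
            nxt = defaultdict(float)
            for m0, p0 in dist.items():
                for m, p in ISOTOPES[element]:
                    nxt[round(m0 + m, 6)] += p0 * p
            # drop negligible peaks to keep the state small
            dist = {m: p for m, p in nxt.items() if p > prune}
    bins = defaultdict(lambda: [0.0, 0.0])  # nominal -> [prob, prob*mass]
    for m, p in dist.items():
        bins[round(m)][0] += p
        bins[round(m)][1] += p * m
    return sorted((wm / p, p) for p, wm in bins.values())

# Glutamine, C5 H10 N2 O3 (compare the MATLAB output above)
md = isotopicdist_sketch({"C": 5, "H": 10, "N": 2, "O": 3})
```

With these abundance values the first two peaks come out close to the documented MATLAB result (146.0691 at 0.9328, 147.07 at about 0.0595).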
More About
Average Mass
Sum of the average atomic masses of the constituent elements in a molecule.
Monoisotopic Mass
Sum of the masses of the atoms in a molecule using the unbound, ground-state, rest mass of the principal (most abundant) isotope for each element instead of the isotopic average mass.
Most Abundant Mass
Mass of the molecule with the most-highly represented isotope distribution, based on the natural abundance of the isotopes.
Nominal Mass
Sum of the integer masses (ignoring the mass defect) of the most abundant isotope of each element in a molecule.
[1] Rockwood, A. L., Van Orden, S. L., and Smith, R. D. (1995). Rapid Calculation of Isotope Distributions. Anal. Chem. 67:15, 2699–2704.
[2] Rockwood, A. L., Van Orden, S. L., and Smith, R. D. (1996). Ultrahigh Resolution Isotope Distribution Calculations. Rapid Commun. Mass Spectrom. 10, 54–59.
[3] Senko, M.W., Beu, S. C., and McLafferty, F. W. (1995). Automated assignment of charge states from resolved isotopic peaks for multiply charged ions. J. Am. Soc. Mass Spectrom. 6, 52–56.
[4] Senko, M.W., Beu, S. C., and McLafferty, F. W. (1995). Determination of monoisotopic masses and ion populations for large biomolecules from resolved isotopic distributions. J. Am. Soc. Mass
Spectrom. 6, 229–233.
Version History
Introduced in R2009b
Egyptian Proportions
Archaeologists confine their attention to broken pots and effaced inscriptions, their austere discipline being enlivened from time to time by the discovery of a hoard of gold. But for architecture,
they have neither eyes nor time. - An Experiment in Rural Egypt 1969
A quote that yields two conclusions: 1. Hassan followed and read the publications on ancient Egypt and searched for architecture. 2. Hassan was concerned with the revival of Egyptian architecture and
looked for guidelines. H.F.'s importance lies in being the first voice inside Egypt to call for the revival of the architecture of the indigenous: the Egyptian architecture.
Egyptian Proportions in Hassan Fathy's Work
Fathy began factoring Pi (3.14) and Phi (1.61) into lyrical spaces, and multiples of the Pharaonic cubit (46 cm) into the intervals used in the plans of rooms, the heights of walls and doors, and the
depths of squinch zones, in order to infuse each spatial unit with a consistently well-centered, uplifted dynamic.
A source of inspiration for Fathy's formalized proportional system came from archaeologists such as R.A. Schwaller de Lubicz and his research. Fathy's understanding of the Egyptian vault was
definitely enriched by Egyptian Egyptologists such as Dr. Abdel Moneim Abu Bakr and Dr. Alexander Badawy.
Hassan cited the work of Dr. A.M. in his article "La Voûte dans l'architecture Égyptienne," in La Revue du Caire, May 1951. For more information, please see Hassan Fathy and Egyptian Architecture.
Fathy met de Lubicz while working in Luxor on the New Gourna project. At that time, de Lubicz was developing a body of research on the Temple of Luxor that indicated the Pharaohs knowingly related
human proportions to plan-form. Fathy became enamored of a possible architectural theory whereby mathematical functions relating dimensions could introduce human scale in architecture while relating
all elements into an overall harmonic unity. [4]
A quick Google search on de Lubicz will turn up words such as "alternative Egyptology" and "pseudoarchaeology." If we put aside the claimer and study the claim on its own, we find some thoughts worth
mentioning: before relating the design of the Luxor Temple to the human body, we need to understand how the Egyptians described the temple as a whole and the shrine of the holiest. Did the Egyptians
relate the brain or the heart to the holiest chamber? Did the other temples follow the same design? Is it a single case? Are we twisting a narrative to fit our beliefs?
To what extent H.F. applied what he learned from the latter three is not clear to me so far.
However, one thing is clear: Fathy applied the concept and design of the Malqaf. For more details, please read
Egyptian Proportions in Alexander Badawy's work
In 1968, one year before H.F.'s book, A.B. published the third volume of A History of Egyptian Architecture.
'It is apparent that the Egyptian architecture was designed according to a harmonic system based upon the use of square and triangles... There is also sufficient evidence about the occurrence of the
numbers from the Fibonacci Series... ' - A study of Harmonic system, A.B.
"Most beautiful kind of triangle, because they liken it to the nature of the universe, and Plato seems to employ this figure in his "Republic," when drawing up his Marriage scheme. The triangle,
too, has this property: three the right angle, and four the base, and five the hypotenuse, being of equal value with the lines containing it" [6]
The simple design of Egyptian doorways conforms to a harmonic system with varying proportions which can be expressed by formulas. The analysis of a few examples from different periods shows that the
module was the width of the aperture, l, in terms of which the whole width including both doorjambs, L, the height of the aperture, h, and the full height, H, were designed, either graphically with
squares and 8:5 triangles or by simple calculation with φ ≈ 8:5.
The resulting schemes further prove the architect's versatility even in the commonest details.
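As an illustration only, one possible reading of the 8:5 scheme is to derive a doorway's heights from the aperture width by repeated harmonic steps. The specific chain below (h = φ·l, then H = φ·h) is my assumption, not Badawy's worked example:

```python
# Illustrative sizing of a doorway from the aperture width l, using the
# 8:5 harmonic ratio described above. The step sequence is hypothetical.
phi = 8 / 5          # the graphical 8:5 ratio, approximating ~1.618
l = 1.0              # aperture width: the module
h = phi * l          # aperture height, one harmonic step
H = phi * h          # full height, one further harmonic step (assumption)
```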
Double False-Door Temple Palace
This large and very elaborate false door exemplifies the use of harmonic design in architectural elements. As usual in scenes, the plinth represents the ground level, above which the design is set.
The constructional diagram is a square topped with an 8:5 triangle. Two panels are set symmetrically on either side of a central pillar, emphasizing the vertical axis.
Each panel is again based on a square topped with a triangle. An 8:5 lozenge lattice forms the smaller panels above the cornice, and the scheme extends even to the minutest details, such as the
inclination in the posture of the arms of a kneeling personage.
Capitals of Columns
(triangles, Papyriform, Lotiform)
The design of capitals is based on 1:2, 1:4, and 8:5 Triangles.
''The plan of the workmen's city at El Lahun is based on a module that forms the unit of the grid. A similar grid seems to have been used to set the constructional diagram of the plan or elevation of
monumental buildings. As a rule, this constructional diagram is symmetrical and is formed of a square with one or more 8:5 (base : height) isosceles triangles abutting it axially. As the 8:5 triangle
embodies the harmonic ratio of the Golden Number (1.618), any diagram comprising such units as this triangle and one or more squares set axially forms a harmonic framework, into which the plan or
elevation may be set. It seems, at least in some examples, that the actual basic dimensions that determine the significant points of the plan were chosen from the consecutive numbers of a summation
series of Fibonacci: 3, 5, 8, 13, 21, 34, 55.''
In addition to reed pens, papyrus, leather, or stuccoed tablets, the draftsman had to use a ruler, a square, and triangles.
The second phase of the architectural plan consisted of laying out the plan in the field, using primarily a cord knotted at twelve equal intervals.
Such a knotted cord was probably rolled up, as was that of the surveyors shown in various tomb scenes and statues. The officials in charge were known as "rope fasteners" [39] or "rope stretchers," a
title which is strongly reminiscent of the operation "to stretch the line" known from the Egyptian texts. (A.B., p. 44)
With a twelve-knot cord one can lay out a right-angle triangle the sides of which are proportional to 3, 4, and 5, or draw the curve of the so-called Egyptian or catenary vault.
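Two of the numerical claims above are easy to verify directly (this check is my own illustration, not from Badawy):

```python
import math

# 1. The 8:5 ratio and successive Fibonacci ratios approximate the
#    golden number phi ~ 1.618 quoted above.
phi = (1 + math.sqrt(5)) / 2
fib = [3, 5, 8, 13, 21, 34, 55]          # the summation series quoted above
ratios = [b / a for a, b in zip(fib, fib[1:])]
closest = ratios[-1]                     # 55/34, the best approximation here

# 2. A cord knotted at twelve equal intervals closes into a 3-4-5
#    right triangle (3 + 4 + 5 = 12, and 3^2 + 4^2 = 5^2).
is_right = (3**2 + 4**2 == 5**2)
uses_twelve_knots = (3 + 4 + 5 == 12)
```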
Theory: The rope was very important in Egyptian architecture because it was the tool with which the proportions were marked out on the building grounds. The proportions, in turn, are sacred,
especially in the construction of the houses of life. Therefore, this rope was not just a rope; it was often knotted in the proportions in which the houses of life were built. The ratio of the Osiris
triangle, for example, was applied in construction with this rope.
This explains the importance of the rope, and why the Egyptian was keen on depicting himself with his ropes. The rope was the proof of the man's good deeds when he stood before the Egyptian Neter
Osiris in the court of judgment.
They knew by heart the proportions of the various rooms and, given the height of a dome or vault, could tell immediately where to begin the springing. In fact, they would even watch me while I was
drawing, and tell me not to bother with these dimensions. - H.F., Architecture for the Poor
Brick Dimensions
The ratio between brick length, width, and height remains, in theory, constant throughout the ages: one for length, one half of the length for width, and 1/3 of the length for height (Spencer 1979,
pp. 147 ff.)
The masons asked us to make them the special kind of bricks they used for vaults. These were made with more straw than usual, for lightness. They measured 25 cm X 15 cm X 5 cm (10 in. X 6 in. X 2
in.) and were marked with two parallel diagonal grooves, drawn with the fingers from corner to corner of the largest face.
(Fathy, The Nubian Masons at Work—First successes)
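Comparing the two sources is straightforward arithmetic. This small check (my own illustration) sets the quoted theoretical ratio against the vault-brick dimensions Fathy reports:

```python
# Theoretical ratio per Spencer: length : width : height = 1 : 1/2 : 1/3.
# Fathy's Nubian vault bricks measure 25 x 15 x 5 cm.
L, W, H = 25, 15, 5
theoretical = (L, L / 2, L / 3)   # (25, 12.5, 8.33...) under the 1:1/2:1/3 rule
actual = (1, W / L, H / L)        # (1, 0.6, 0.2): the vault bricks deviate,
                                  # being wider and much flatter, for lightness
```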
Roik, E. (1993). Das Längenmaßsystem im alten Ägypten. Hamburg: Christian Rosenkreutz.
[4] Darl Rastorfer, Hassan Fathy: The Man and His Work, p. 48.
[7] The element of harmonic design, A.B.
[8] Ein Kapitel zur Geschichte des Pflanzenornaments, Ludwig Borchardt.
39 Cantor, Vorlesungen über Geschichte der Mathematik, pp. 55-57. Hambidge, Dynamic Symmetry, p. 130. Lauer, Pyramides, p. 200, n. 2.
Ancient Egyptian Cubits – Origin and Evolution by Antoine Pierre Hirsch
Architectural sketch, 3000-2700 B.C. Found 1925 near Step Pyramid, Saqqara
Report: Upfront cost of home electrification
This report is based on data as of March 2024.
Estimating the upfront cost of home electrification is complicated. All homes and projects are unique, there is limited public data on prices paid in the market for heat pump installation and related
projects, and prices can vary by thousands of dollars for similar jobs. Reputable sources ranging from regulatory bodies to home improvement websites cite highly varied upfront costs. Finally, the
comparison to the alternative cost of replacing existing appliances is often missing from estimates of the upfront costs of electrification—for example, if a heat pump is installed near the end of
life of the existing HVAC system, a household can avoid the costs of a new air conditioner and fossil fuel furnace or boiler.
To estimate upfront costs of electrification, we analyze cost data sets obtained from Massachusetts and California, two states leading the way on heat pump installations. We validate our model
against smaller data sets from New York and Maine. The full methodology is described below, and we welcome feedback on how to improve the utility of this report in the future. You can send
suggestions and recommendations for additional data sources to us at upfrontcosts@rewiringamerica.org.
Single-zone heat pump: A single “mini-split” heat pump installation consisting of one outdoor unit and one indoor unit. Can heat a large room, a couple of connected rooms, or an open-floor-plan
apartment up to about 1,000 square feet.
Hybrid heat pump: A heat pump installation that provides some but not all of the heating for a home, used in conjunction with a fossil or electric resistance backup system. This can be a centrally
ducted heat pump, a single mini split, or a mini split with multiple indoor units.
Whole-home heat pump: A heat pump installation that provides all of the heating and cooling for a home. This can be a centrally ducted heat pump or a ductless system (mini split typically with
multiple indoor units, and sometimes multiple outdoor units).
Data sources
Massachusetts has a relatively cold climate. The Massachusetts Clean Energy Center (MassCEC) ran a Residential Air-Source Heat Pump Program from November 2014 through March 2019, which offered
incentives ranging from about $500 to $3,000 depending on heat pump size. MassCEC has provided a detailed dataset of around 21,000 projects, which we have disaggregated into single-zone (8,000
projects), hybrid (8,500 projects), and whole-home heat pumps (3,500 projects) for use in our model. The median total project cost across all installation types was around $8,300 before incentives.
MassCEC also ran a Whole-Home Heat Pump Pilot from May 2019 through June 2021, and that detailed dataset has 158 projects. For these projects, Mass Save, the state’s energy efficiency program, offers
a whole-home rebate of $10,000. The median total project cost was around $18,300 before incentives, and ranged from $5,000 to $58,000. At the beginning of the pilot program, backup heat was
encouraged, but by the end they removed that recommendation, “reflecting growing acceptance of the ability of cold-climate heat pumps to serve as a stand-alone heating solution.”
The TECH Clean California program has provided rebates on heat pumps and heat pump water heaters (HPWHs) since 2021. The TECH Clean incentive is given to contractors, and depends on the size of the
system. These incentives were usually $3,000, and the funding ran out in a few months. With new funding for 2023, they have reduced the incentive to $1,000. They collect data about every install and
release public updates each month. As of March 2024, the median total project cost for the 21,000 heat pump projects was around $19,000 before incentives, and ranged from $2,000 to $70,000. All of
these installs were whole-home, as they all either decommissioned previous infrastructure (86% of projects), or left it in place to run for emergency use only.
Maine is one of the coldest states in the country, but air source heat pumps are being adopted by households at twice the rate of the rest of the US. The statewide rebate program, run by Efficiency
Maine, offers rebates up to $2,000 for Low and Moderate Income (LMI) residents, and up to $1,200 for everyone else. Efficiency Maine states that the installed cost of a heat pump is $4,600. While
they don’t specify what this covers, we assume it is for a single-zone heat pump. Efficiency Maine also pays for heat pump water heaters to be installed in low-income households at a fixed price of
$2,500 for electric resistance replacement, and $2,900 for fossil replacement (equipment plus installation).
Despite the great uptake of heat pumps, Efficiency Maine has not been publishing data on the equipment or installations themselves. One small dataset from South Portland, ME, includes 36 projects
completed in fall of 2022. The total project cost ranged from $3,000 to $33,000 before incentives, with the lower end for a single-zone heat pump, and the upper end for 4 to 5 indoor units and
sometimes a second outdoor unit. The average cost per zone (total cost divided by total number of indoor units) from this dataset is around $5,200.
New York
New York is also in a cold climate. As part of a 2017 to 2019 Air-Source Heat Pump Program, the New York State Energy Research and Development Authority (NYSERDA) offered rebates of $1,500 - $4,500
to households who installed heat pumps, depending on heat pump size. NYSERDA published an analysis of their program including pricing data for 386 projects. The total project cost ranged from $10,000
to $30,000 before incentives, with an average cost of $16,300.
Cost estimates
Heat pump HVAC: whole-home
The data plotted below represent total project costs (equipment plus installation costs) for a whole-home heat pump installation, derived from the TECH Clean California dataset and the two
Massachusetts datasets. These costs have been adjusted to represent present-day national averages by correcting for inflation (since installations took place over the past decade) and
location-specific materials and labor costs (since Massachusetts and California are relatively expensive markets).
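As an illustration of that adjustment, a historical local price can be corrected for inflation and then for the local cost index. The formula and index values below are assumptions for the sketch, not Rewiring America's actual methodology:

```python
# Hypothetical normalization of a past, local project cost to a
# present-day national figure. All numbers in the example are made up.
def normalize_cost(cost, cpi_then, cpi_now, local_cost_index):
    inflation_adjusted = cost * (cpi_now / cpi_then)
    return inflation_adjusted / local_cost_index  # index = local / national

# A $15,000 job done when CPI was 250 (vs. 310 today), in a market
# priced 20% above the national average:
national_today = normalize_cost(15_000, 250.0, 310.0, 1.20)
```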
Our modeled cost estimates for the 20th to 80th percentile of whole-home air source heat pumps are as follows:
Whole-home heat pump
□ <1,000 square foot home: see single-zone estimates below
□ 1,500 to 2,500 square foot home: $17,000 - $23,000 (median $19,500)
□ 2,500 to 5,500 square foot home: $22,500 - $28,000 (median $25,000)
□ 5,500+ square foot home: $26,000 - $30,000 (median $29,000)
Note: Range represents the 20th to 80th percentile.
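Encoded as a simple lookup, the brackets above look like this. The helper is hypothetical, and note the report gives no bracket for homes between 1,000 and 1,500 square feet:

```python
# Hypothetical helper encoding the whole-home estimates quoted above.
# Figures are the report's national 20th/50th/80th percentile estimates.
BRACKETS = [
    (1500, 2500, 17_000, 19_500, 23_000),
    (2500, 5500, 22_500, 25_000, 28_000),
    (5500, float("inf"), 26_000, 29_000, 30_000),
]

def whole_home_estimate(sqft):
    if sqft < 1000:
        return "see single-zone estimates"
    for lo, hi, p20, median, p80 in BRACKETS:
        if lo <= sqft <= hi:
            return {"p20": p20, "median": median, "p80": p80}
    return None  # 1,000-1,500 sq ft: not bracketed in the report
```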
These are national estimates, and costs will vary significantly from market to market. Homes in moderate climates or with better insulation require less heating/cooling capacity and are likely to
fall at the lower end of these ranges. Homes in colder climates or in regions with high labor costs are likely to fall at the higher end of these ranges. Hybrid heat pump installations with continued
fossil-fuel backup in any size home and region will cost less than whole-home installations. Other factors, such as supply chain constraints, the familiarity of local HVAC contractors with heat pump
technology, and the degree of price competition within local markets will also affect pricing, but are not possible to model with the available data.
To quantify some of the variability in costs based on climate, location, and home characteristics, we have developed a model that takes into account more information about a home in addition to its
square footage, including whether the home needs new ductwork, heat pump size/capacity, and heat pump efficiency. We’ve used the model to estimate median costs for a variety of home types in
locations around the country. The results are as follows:
Heat Pump HVAC: single-zone
The data plotted below are total project costs (equipment plus installation costs) for a single-zone heat pump installation, derived from the Massachusetts Air-Source Heat Pump dataset. These costs
have been adjusted to represent present-day national averages by correcting for inflation and location-specific materials & labor costs.
In the data plotted to the left, the boxed area represents the 20th to the 80th percentile of project costs. The yellow line in the middle represents the median, or 50th percentile, cost.
Based on this, we would expect the middle range (20th to 80th percentile) for an installed single-zone air source heat pump to be:
Single-zone heat pump
□ $5,400 - $8,500 (median $6,600)
NOTE: Range represents the 20th to 80th percentile.
Heat pump water heaters
The data plotted below are total project costs (equipment plus installation costs) for a heat pump water heater installation, derived from the TECH Clean California dataset. These costs have been
adjusted to represent present-day national averages by correcting for inflation and location-specific materials & labor costs.
In the data plotted to the left, the boxed area represents the 20th to the 80th percentile of project costs. The yellow line in the middle represents the median, or 50th percentile, cost. Costs are
disaggregated between households replacing gas water heaters and households replacing electric resistance water heaters.
For heat pump water heaters, we did not identify a strong correlation between cost and any of the input features (e.g., square footage, water heater size in gallons, or water heater efficiency).
However, if the household was replacing a gas water heater, the additional wiring required for a gas-to-electric swap made the project more expensive than simply replacing an existing electric resistance water heater.
Based on this, we would expect the middle range (20th to 80th percentile) for an installed heat pump water heater to be:
Heat pump water heater
□ Replacing electric resistance: $3,500 - $5,000 (median $4,200)
□ Replacing gas: $4,100 - $6,800 (median $5,400)
NOTE: Range represents the 20th to 80th percentile.
To translate heat pump water heater costs from national averages to location-specific cost estimates, we multiply by location-specific cost factors.
Electric stoves and dryers
To calculate the average cost of efficient electric appliances, we examine Google Shopping results. Unlike the upfront cost of heat pumps or heat pump water heaters, the cost of an electric appliance is not expected to vary significantly across the country. This estimate does not include costs related to wiring when switching from gas to electric (most large electric appliances like
stoves and dryers require a 240 V circuit, while gas appliances require a 120 V circuit). The difference in cost between replacing a gas and an electric resistance water heater is approximately the
cost of installing a new 240 V circuit ($1,200), although this can vary widely depending on location and the distance from the electrical panel to the appliance.
□ Induction range: $1,000+
□ Portable induction cooktop: $65+
□ Electric resistance range: $600+
□ Heat pump dryer: $1,300+
□ Electric resistance dryer: $400+
Note: Prices are for base models and do not include the cost of wiring.
Heat pump HVAC
We use three large datasets of heat pump installations to estimate heat pump installation cost:
• Massachusetts Residential Air-Source Heat Pump Program | 21,000 projects, including 3,500 whole-home heat pumps and 8,000 single-zone heat pumps | 2014-2019
• Massachusetts Whole Home Pilot | 158 projects, all whole-home heat pumps | 2019-2021
• TECH Clean California | 21,000 projects, all whole-home heat pumps | 2021-2024 (more recent data are available, but our model currently incorporates data only through March 2024)
We train a gradient boosting decision tree model on the datasets, using as inputs a Producer Price Index (since installations took place over the past decade) and scaling factors to represent
location-specific material and labor costs using RS Means. Additional characteristics that are used to train the model include:
1. Square footage
2. Heat pump efficiency (as measured by HSPF and SEER)
3. Heat pump size in tons
4. Whether the ductwork needed to be replaced or upgraded
5. Whether a panel upgrade was required
6. Whether the install was in a home with or without ducts
We use the trained model to predict air source heat pump total installed costs for every building model in the National Renewable Energy Laboratory (NREL)’s publicly available ResStock dataset. This
dataset consists of approximately 550,000 simulated residential building models that statistically represent every residential housing unit in the contiguous United States. Each simulated building
includes the characteristics used to train the model and reflects the distributions of these characteristics in the real world. From these predicted costs, we calculate the median, 20th, and 80th
percentiles according to specific bins of home sizes.
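The final step, binning predicted costs and reporting the 20th percentile, median, and 80th percentile per bin, can be sketched in a few lines. The predicted costs and bin edges below are illustrative placeholders, not output from the actual model:

```python
from statistics import median, quantiles

# Hypothetical (square footage, predicted installed cost in USD) pairs.
predicted = [
    (1800, 18200), (2100, 19800), (2400, 21500), (1600, 17000),
    (3000, 24000), (3600, 25500), (4800, 27000), (2700, 23500),
    (6000, 28500), (5800, 29500), (7200, 30500), (2000, 19000),
]

# Size bins matching the ranges quoted above: (lower bound, upper bound) in sq ft.
bins = {"1500-2500": (1500, 2500), "2500-5500": (2500, 5500),
        "5500+": (5500, float("inf"))}

def summarize(costs):
    """Return (20th percentile, median, 80th percentile) of a list of costs."""
    pct = quantiles(costs, n=10)  # nine decile cut points: pct[1]=20th, pct[7]=80th
    return pct[1], median(costs), pct[7]

for label, (lo, hi) in bins.items():
    costs = [c for sqft, c in predicted if lo <= sqft < hi]
    if costs:
        p20, med, p80 = summarize(costs)
        print(f"{label} sq ft: ${p20:,.0f} - ${p80:,.0f} (median ${med:,.0f})")
```

Note that `statistics.quantiles` interpolates between data points, so with small samples the 20th/80th cut points fall between observed costs.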
For single-zone heat pumps, we simply calculate the median and the 20th and 80th percentile costs, since there is not a strong correlation with any of the input features.
Since this methodology is based only on California and Massachusetts data, we validate this methodology using datasets from other states. We will continue to add to this table as more validation data
becomes available.
Heat pump water heaters
We use the TECH Clean California dataset, which has data from over 1,200 heat pump water heater installations, to estimate heat pump water heater costs. For heat pump water heaters, we adjust the
cost of each project to represent a present-day national average by correcting for inflation and location-specific materials & labor costs. The main predictor of heat pump water heater cost is
whether the household switches from a fossil fuel water heater, which likely requires electrical work, or from an electric resistance water heater. We therefore separately calculate the median and
quartile costs for households switching from fossil and for households switching from electric resistance.
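The normalization applied to each project amounts to a pair of ratio corrections: inflate by a price-index ratio, then divide out the local cost premium. The index values and location factor below are hypothetical, not figures from RS Means or the TECH dataset:

```python
def normalize_cost(raw_cost, ppi_at_install, ppi_now, location_factor):
    """Adjust a historical, local project cost to a present-day national average.

    ppi_at_install, ppi_now : producer price index at install time vs. today
    location_factor         : local cost level relative to the national average
                              (e.g., 1.25 for a market 25% above average)
    """
    inflated = raw_cost * (ppi_now / ppi_at_install)  # correct for inflation
    return inflated / location_factor                  # remove the local premium

# Hypothetical example: a $5,000 water heater job in an expensive market.
national_today = normalize_cost(5000, ppi_at_install=240.0, ppi_now=280.0,
                                location_factor=1.25)
print(f"${national_today:,.0f}")
```

The same two factors, applied in reverse, translate the resulting national averages back into location-specific estimates.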
Stoves and dryers
The upfront costs of stoves, dryers, and electric vehicles do not differ by location, so we do not adjust these costs by RS Means factors. We use a Google Shopping web scraper to collect the costs of
the most common models of stoves and dryers, screening for the most common models by the presence of ratings. We then sort by price from lowest to highest, and determine the lowest price (to the nearest
hundred dollars) at which there are at least two or three models from reputable brands. For these appliance upfront costs, we report an approximate lower bound on price, since the consumer has more
power over how much they spend on an appliance purchase relative to a home upgrade like a heat pump installation. The base model of an appliance will be sufficient for most households, while some
households may choose to spend more for a higher-end appliance or vehicle.
Peer Reviewers: Andy Frank and Scott DeAngelo, Sealed; Brian Lamorte, Lamorte Electric; Joe Wachunas, AWHI
Square Formation with Four Charges - Venerable Ventures Ltd
When discussing the square formation with four charges, we are referring to a configuration in physics where four charges are placed at the corners of a square. This setup is commonly used in
introductory physics courses to understand the concept of electric fields and forces. In this article, we will delve into the properties of this configuration, analyze the electric field and
potential energy associated with it, and explore the equilibrium conditions and stability of the system. Let’s break down the key components of this setup.
Understanding the Setup
In the square formation with four charges, we have four point charges, each denoted by q, placed at the four corners of a square. The charges can either be positive or negative, and we label them as
q1, q2, q3, and q4, with their respective positions as shown below:
q1 ----------- q2
 |             |
 |             |
 |             |
q4 ----------- q3
The distance between adjacent charges is denoted as d. It is important to note that the square formation can be oriented in any direction, as long as the charges are positioned at the corners of a square.
Electric Field Analysis
To determine the electric field at the center of the square formation, we first analyze the contributions from each individual charge. The electric field at the center point created by a single
charge is given by Coulomb’s law:
E = k*q / r^2
where k is the electrostatic constant, q is the charge, and r is the distance between the charge and the center point.
By considering the superposition principle, we can calculate the total electric field at the center due to all four charges. The electric fields from opposite charges cancel each other out, while
those from adjacent charges add up. The resultant electric field can be found by summing the horizontal and vertical components of the fields due to each charge.
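The cancellation argument is easy to verify numerically. The sketch below, with hypothetical charge values, sums the field vectors at the center; corners are ordered q1 through q4 as in the figure above:

```python
import math

K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def field_at_center(charges, d):
    """Net electric field (Ex, Ey) at the center of a square of side d.

    `charges` is (q1, q2, q3, q4) at the corners, ordered as in the figure:
    q1 top-left, q2 top-right, q3 bottom-right, q4 bottom-left.
    """
    half = d / 2.0
    corners = [(-half, half), (half, half), (half, -half), (-half, -half)]
    ex = ey = 0.0
    for q, (x, y) in zip(charges, corners):
        r2 = x * x + y * y          # squared distance from corner to center
        r = math.sqrt(r2)
        e = K * q / r2              # field magnitude (signed by q)
        ex += e * (-x) / r          # unit vector points from charge to center
        ey += e * (-y) / r
    return ex, ey

# Four equal positive charges: contributions from opposite corners cancel.
ex, ey = field_at_center((1e-6, 1e-6, 1e-6, 1e-6), d=0.1)
print(ex, ey)  # 0.0 0.0
```

With two positive charges on top and two negative below, the horizontal components still cancel but the vertical components reinforce, giving a net downward field at the center.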
Potential Energy Analysis
The potential energy of the system of charges in the square formation can be determined by considering the work done in assembling the charges. The potential energy of the system is the sum of the
potential energies of pairs of charges. For each pair of charges, the potential energy is given by:
U = kq1q2 / r
where k is the electrostatic constant, q1 and q2 are the charges, and r is the distance between the charges.
By summing the potential energies of all pairs of charges, we can obtain the total potential energy of the system.
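For four equal charges q on a square of side d, the six pairs (four edges at distance d, two diagonals at d√2) give U = (kq²/d)(4 + √2). The sketch below checks the pairwise sum against that closed form; the charge and side length are arbitrary:

```python
import math
from itertools import combinations

K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def potential_energy(charges, d):
    """Total electrostatic PE of four charges at the corners of a square of
    side d, summed over all six distinct pairs."""
    half = d / 2.0
    corners = [(-half, half), (half, half), (half, -half), (-half, -half)]
    u = 0.0
    for (q1, p1), (q2, p2) in combinations(zip(charges, corners), 2):
        r = math.dist(p1, p2)       # pair separation: d (edges) or d*sqrt(2)
        u += K * q1 * q2 / r
    return u

q, d = 1e-6, 0.1
print(potential_energy((q, q, q, q), d))  # equals (K*q**2/d) * (4 + sqrt(2))
```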
Equilibrium Conditions and Stability
In the square formation with four charges, the system is said to be in equilibrium when the net force acting on each charge is zero. This condition ensures that the charges remain in their positions
without experiencing any external forces disturbing the system.
To analyze the equilibrium of the system, we can consider the forces acting on each charge due to the other charges. By applying Newton’s laws of motion and considering the forces along the diagonals
and sides of the square, we can determine the conditions for equilibrium and study the stability of the configuration.
Frequently Asked Questions (FAQs)
1. What is the significance of using a square formation with four charges in physics?
2. The square formation helps illustrate the concepts of electric fields, forces, and potential energy in a simple yet instructive manner.
3. How does the orientation of the square affect the electric field and potential energy of the system?
4. The orientation of the square can influence the magnitude and direction of the electric field at different points within the system.
5. What happens to the equilibrium of the system if one of the charges is moved or altered?
6. Any change in the position or magnitude of the charges can disrupt the equilibrium, leading to a reevaluation of the forces and potential energy in the system.
7. Is it possible to generalize the concept of square formation with four charges to other geometries?
8. Yes, similar principles can be applied to configurations with different numbers of charges in various geometries to study the behavior of electric fields and forces.
9. How can the square formation with four charges be extended to incorporate other properties of electromagnetism?
10. By introducing concepts such as electric dipole moments, polarization, and induced charges, the system’s complexity can be augmented to explore more advanced electromagnetism topics.
In conclusion, the square formation with four charges serves as a fundamental model for understanding the behavior of electric fields and forces in electrostatic systems. By examining the electric
field, potential energy, equilibrium conditions, and stability of the configuration, students and enthusiasts of physics can deepen their comprehension of these core concepts. This setup lays the
groundwork for more intricate studies in electromagnetism and provides a solid foundation for exploring the diverse phenomena in the realm of electric charges and fields.
Time-Varying Across-Shelf Ekman Transport and Vertical Eddy Viscosity on the Inner Shelf
1. Introduction
The mechanisms of across-shelf transport of water masses (and therefore nutrients, pollutants, phytoplankton, and planktonic larvae) in the inner shelf on wind-driven shelves are of significant
scientific and public interest, controlling access between the stratified coastal ocean and the well-mixed surf zone. Recent studies (Lentz 2001; Kirincich et al. 2005) have reported that the
exchange driven by the alongshore wind decreases in relation to the total Ekman transport as the coast is approached, from 100% of full Ekman transport in water depths of 50 m to 25% in water depths
of 15 m. This trend is based on mean results, averaged over seasonal (60–120 days) time periods. In contrast, wind-driven circulation varies over much shorter (2–7 days) time scales, and thus the
factors that control this transport divergence may vary significantly. In this paper, we investigate inner-shelf upwelling dynamics using observations from the central Oregon coast in an attempt to
quantify the variability of across-shelf exchange during the upwelling season.
The central Oregon inner shelf is an ideal location to investigate wind-driven dynamics. The region is forced by upwelling-favorable winds with small offshore wind stress curl during the spring and
summer months (Samelson et al. 2002; Kirincich et al. 2005). Throughout this time, intermittent downwelling wind bursts, occurring on periods of 5–20 days, lead to large variations in local
circulation and hydrographic conditions. During the summer of 2004, the Partnership for Interdisciplinary Studies of Coastal Oceans (PISCO) program maintained moorings at four along-shelf stations in
15 m of water on the Oregon inner shelf (Fig. 1). In a companion work, Kirincich and Barth (2009) use these observations to describe the temporal and spatial development of upwelling circulation.
Summarizing their results, the three stations inshore of an offshore submarine bank—Seal Rock (SR), Yachats Beach (YB), and Strawberry Hill (SH)—were sheltered from the regional upwelling circulation
yet still exposed to the regional wind forcing. In the lee of the bank (Barth et al. 2005), a smaller upwelling circulation formed near station SR and strengthened to the south. In this paper, we
focus on observations made by PISCO at the southernmost station (SH) to describe the effects of intermittent forcing on inner-shelf upwelling circulation.
We define the inner shelf as the region where across-shelf Ekman transport is divergent and coastal upwelling actively occurs. In this region, the thickness and amount of overlap of the surface and
bottom boundary layers control the location of upwelling and the local volume of across-shelf transport realized within each boundary layer. Using Ekman dynamics (Ekman 1905), boundary layer
thickness (d) is related to a vertical eddy viscosity (A) through d = (2A/f)^1/2, where f is the Coriolis parameter. Here, the vertical turbulent diffusion of momentum is parameterized as a function
of the vertical shear of the horizontal velocities and eddy viscosity. By decreasing A, density stratification should act to decrease the thickness of, and amount of overlap between, the theoretical
boundary layers at a given water depth. Thus, variations in stratification at these inner-shelf locations should have direct implications for across-shelf exchange of water masses in the inner shelf
and the ecological processes that depend on it.
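The dependence of boundary layer thickness on A is worth quantifying for a 15-m inner-shelf site. The eddy viscosity values below are arbitrary round numbers spanning strongly stratified to well-mixed conditions:

```python
import math

f = 1e-4  # Coriolis parameter, s^-1 (as used in the text)

def ekman_depth(A):
    """Boundary layer thickness d = (2A/f)^(1/2) for eddy viscosity A (m^2/s)."""
    return math.sqrt(2.0 * A / f)

water_depth = 15.0  # m, inner-shelf station depth
for A in (1e-4, 1e-3, 1e-2):  # illustrative eddy viscosities
    d = ekman_depth(A)
    # Overlap of the surface and bottom layers, each of thickness d.
    overlap = max(0.0, 2 * d - water_depth)
    print(f"A = {A:g} m^2/s -> d = {d:.1f} m, layer overlap = {overlap:.1f} m")
```

At A = 10^-2 m^2/s the two layers nearly fill the 15-m column and overlap strongly, while at A = 10^-4 m^2/s they occupy only a few meters combined, illustrating how stratification-reduced A separates the boundary layers.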
While the physical mechanisms controlling across-shelf exchange are well studied in coastal regions dominated by buoyant plume dynamics (Wiseman and Garvine 1995; Yankovsky et al. 2000; Garvine 2004
), both observational and modeling studies of wind-driven inner-shelf dynamics have had difficulties describing and fully resolving across-shelf exchange variability. Wind-driven across-shelf
velocities in the inner shelf are difficult to observe, being an order of magnitude smaller than wind-driven along-shelf velocities, tidal motions, and wave orbital velocities. Thus, measurement
errors or unresolved spatial scales inhibit proper closure of the momentum balances. If the coastal boundary and bathymetry are locally straight and uniform, analytical models can capture the
majority of the along-shelf variability of wind- or pressure-driven inner-shelf systems during weakly stratified conditions (Lentz and Winant 1986; Lentz 1994). However, across-shelf currents and
stratified conditions are more difficult to properly represent, in part due to an oversimplification of vertical diffusion of momentum in these models.
Recently, two-dimensional (2D) numerical models with higher-order turbulent closure schemes have been used with some success to investigate inner-shelf circulation during constant wind (Austin and
Lentz 2002) and realistic wind (Kuebel Cervantes et al. 2003) forcing. In their parameterizations of the vertical turbulent diffusion of momentum, eddy viscosity is a function of the distance to the
boundary, local stratification, instantaneous velocity, and velocity shear (Mellor and Yamada 1982). These estimates of eddy viscosity and the vertical diffusion of momentum were critical to the
dynamical balances that Austin and Lentz (2002) and Kuebel Cervantes et al. (2003) obtained. Both studies noted a dynamical difference in vertical diffusion on the inner shelf during upwelling and
downwelling conditions. Further, Austin and Lentz (2002) reported that across-shelf transport decreased as stratification decreased and the vertical diffusion of momentum increased. Thus, with weaker
stratification occurring during downwelling conditions, across-shelf exchange at a given water depth during downwelling was reduced relative to that seen during upwelling.
Our analysis of the event-scale (2–7 days) upwelling dynamics would also benefit from estimates of the vertical turbulent diffusion of horizontal momentum, as it should be the ultimate controller of
boundary layer thickness, the amount of overlap, and the efficiency of across-shelf Ekman transport. However, the observations necessary to directly measure the turbulent fluxes are rarely made in
the inner shelf, especially over extended deployment periods encompassing forcing events. Thus, we use a novel approach to estimate this term by inverting the one-dimensional (1D) model of Lentz
(1994) to solve for vertical eddy viscosity and vertical turbulent diffusion given a known surface forcing and the observed velocity profiles. The novelty of this approach is that an optimization is
used to estimate vertically uniform pressure gradients and account for unknown sources (or sinks) of momentum while keeping the form of vertical diffusion intact. Using this estimated eddy viscosity,
we are able to explain the event-scale variability of across-shelf circulation in the study area.
In this paper we describe the short-time-scale (2–7 days) variability of inner-shelf circulation, hydrography, and forcing along the central Oregon coast during summer and document the dependence of
across-shelf transport on an estimate of vertical turbulent diffusion. We begin by describing the moored and shipboard observations of the study area, and detailing the numerical model formulations
used in our analysis (section 2). Results consist of a detailed description of the event-scale circulation variability, the estimated eddy viscosity, depth-dependent momentum balances, and a
quantification of the time-dependent across-shelf exchange efficiency (section 3). The influence of this variable efficiency on inner-shelf circulation is discussed (section 4) before we summarize
our results (section 5).
2. Data and methods
a. Observations
Between 9 July and 7 September 2004 (yeardays 190 and 250), velocity profiles and hydrographic measurements were collected at station SH by moored instruments maintained by PISCO (Fig. 1). Data from
a similar deployment at station SR, located 27 km to the north, are used to estimate along-shelf gradients of momentum at SH. At each station a bottom-mounted, upward-looking acoustic Doppler current
profiler (ADCP) was deployed adjacent to a mooring of temperature and conductivity sensors. The ADCP, an RDI Workhorse 600-kHz unit, collected velocity profiles in 1-m increments from 2.5 m above the
bottom to 1 m below the surface and sampled pressure at the instrument depth (14.5 m). The nearby mooring measured temperature at 1, 4, 9, and 14 m below the surface with Onset Tidbit or XTI loggers
and temperature and conductivity at 8 and 11 m using Sea-Bird 16 or 37 conductivity–temperature (CT) recorders. Wind measurements collected at National Oceanic and Atmospheric Administration’s (NOAA)
Coastal-Marine Automated Network (C-MAN) station NWP03 are used for this analysis. Station NWP03, located 30 km north of SH at Newport, Oregon, has been previously found to be representative of
near-shore winds throughout the study area during summer (Kirincich et al. 2005; Samelson et al. 2002).
From these observations, sampled every 2 to 10 min, we derived hourly averaged time series of along-shelf velocity, across-shelf surface transport, pressure anomaly, and density profiles using the
steps outlined below. The hourly averaged water velocities were rotated into an along- and across-shelf coordinate system defined by the depth-averaged principal axis of flow, found to be 7° (for SH)
and 1° (for SR) east of true north. Time series of across-shelf surface transport were computed from velocity profiles, extrapolated to the surface and bottom assuming a constant velocity (slab)
extrapolation, by subtracting the depth-averaged mean and integrating the profiles from the surface to the first zero crossing, following Kirincich et al. (2005). Pressure anomaly was calculated by
subtracting an estimate of the tidal variability, found using the T_TIDE software package (Pawlowicz et al. 2002), and a mean pressure time series, calculated following Kirincich and Barth (2009).
Density was estimated throughout the water column using the 8-m salinity measurement and each of the five temperature locations, assuming a linear relationship between temperature and salinity (m =
8.4°C psu^−1) derived from nearby conductivity–temperature–depth (CTD) casts. All time series were low-pass filtered using a filter with a 40-h half-power period to isolate the subtidal components.
Correlations between time series were tested for significance using the 95% confidence interval for the level of significance and N*, the effective degrees of freedom, following Chelton (1983).
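The across-shelf surface transport calculation described above (remove the depth-averaged mean, then integrate from the surface to the first zero crossing) can be sketched as follows. The profile used is a synthetic linear shear, not ADCP data:

```python
def surface_transport(u, z):
    """Across-shelf surface transport (m^2/s) from a velocity profile.

    u : across-shelf velocities (m/s), ordered surface -> bottom
    z : corresponding depths (m, negative down), same order
    Subtracts the depth-averaged mean, then trapezoid-integrates the demeaned
    profile from the surface to its first zero crossing.
    """
    mean = sum(u) / len(u)
    up = [ui - mean for ui in u]
    transport = 0.0
    for i in range(len(up) - 1):
        if up[i] == 0.0:
            break
        if up[i] * up[i + 1] < 0.0:  # zero crossing between levels i and i+1
            frac = up[i] / (up[i] - up[i + 1])      # linear interpolation
            zc = z[i] + frac * (z[i + 1] - z[i])    # crossing depth
            transport += 0.5 * up[i] * (z[i] - zc)  # triangle to the crossing
            break
        transport += 0.5 * (up[i] + up[i + 1]) * (z[i] - z[i + 1])
    return transport

# Synthetic profile in 1-m bins: offshore flow at the surface, onshore at depth.
z = [-1.0 - k for k in range(14)]              # -1 m .. -14 m
u = [-0.1 + 0.2 * k / 13 for k in range(14)]   # -0.1 m/s (surface) to +0.1 m/s
print(surface_transport(u, z))
```

For this linear profile the trapezoid rule is exact and the analytic answer is -0.325 m^2/s, i.e., offshore transport confined to the upper half of the column.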
In addition to the moored observations, a high-resolution hydrographic and velocity survey of the area was conducted from the research vessel (R/V) Elakha on 9–11 August (yeardays 222–224). Each day,
at the same phase of the tide, two to four consecutive 1-h transects were made along an east–west line terminating onshore at station SH (Fig. 1). Water depths along the transect line ranged from 15
m onshore to 70 m offshore, 12 km from the coast. Hydrographic measurements were obtained using a Sea Sciences Acrobat, a small, undulating towed body, carrying a Sea-Bird 25 Sealogger CTD. Velocity
estimates were made with a 300-kHz RDI Workhorse Mariner ADCP mounted on the R/V Elakha. Initial processing of these ship-based observations follows the methods described by Kirincich (2003). Density
and velocity fields for each transect were interpolated to a common 2D grid (offshore distance, depth), averaged, and spatially smoothed to reduce the effects of noise and unresolved variability.
b. Model formulations
The observations for station SH described above are compared with the results of two simple dynamical models for horizontal velocity that have been used in previous inner-shelf circulation studies
and one inverse model for eddy viscosity described here. All are based on the horizontal momentum equations, with a parameterization for the vertical turbulent diffusion of momentum:
∂u/∂t + u∂u/∂x + v∂u/∂y + w∂u/∂z − fv = −(1/ρ₀)∂P/∂x + ∂/∂z(A∂u/∂z), and  (1)
∂v/∂t + u∂v/∂x + v∂v/∂y + w∂v/∂z + fu = −(1/ρ₀)∂P/∂y + ∂/∂z(A∂v/∂z),  (2)
where x, y, and z are the across-shelf (positive onshore), along-shelf (positive northward), and vertical (positive upward) coordinates; u, v, and w are the corresponding velocities; P is pressure; A is the vertical eddy viscosity; f = 1 × 10^−4 s^−1 is the Coriolis parameter; and ρ₀ = 1025 kg m^−3 is a reference density. The nonlinear advective terms, terms 2, 3, and 4 in (1) and (2), are included here to aid the discussion that follows but are ignored by the models presented next.
1) Velocity models
The first model, the analytical model of Lentz and Winant (1986), uses the spindown time associated with bottom friction to calculate the vertically uniform, along-shelf currents associated with a given along-shelf wind:
∂v_m/∂t = τ_s^y/(ρ₀h) − r v_m/h,  (3)
where v_m and v are the modeled and observed depth-averaged along-shelf velocities, respectively, h is the bottom depth, τ_s^y is the along-shelf component of wind stress, and r is a linear drag coefficient of 5 × 10^−4 m s^−1 (Lentz and Winant 1986). Using (3), with additional forcing from an along-shelf pressure gradient, Lentz and Winant (1986) were able to successfully predict along-shelf velocities in the southern California bight during unstratified wintertime conditions. However, the model did not accurately represent these velocities when stratification increased during summer.
The second model, the 1D eddy viscosity model originally presented by Lentz (1994), uses a control volume formulation with fully implicit time stepping to calculate the vertical profile of horizontal velocity given a profile of vertical eddy viscosity and surface or body forces. The model combines the two horizontal momentum equations, ignoring the nonlinear terms, into a single equation for the complex horizontal velocity q = u + iv:
∂q/∂t + ifq = −(1/ρ₀)(∂P/∂x + i∂P/∂y) + ∂/∂z(A∂q/∂z),  (4)
where ∂P/∂x and ∂P/∂y are the across- and along-shelf pressure gradients, assumed to be vertically uniform. Boundary conditions at the surface and bottom are
A∂q/∂z = iτ_s^y/ρ₀ at z = 0, and  (5)
q = 0 at z = −h + z₀,  (6)
where τ_s^y is the along-shelf wind stress, the surface condition is applied at the top control volume, and z₀ is a roughness length scale. The coastal boundary condition of no net across-shelf transport,
∫ from −h to 0 of u dz = 0,  (7)
where h is the total water depth, is used to estimate ∂P/∂x with an iterative method (Lentz 1994). In this study, we use the cubic profile formulation of eddy viscosity described first by Signell et al. (1990). This model, used by Lentz (1994) to study the weakly stratified northern California inner shelf, reproduced the temporal variability and vertical structure of the observed along-shelf velocity well. However, across-shelf velocity profiles differed significantly between model and observations and were highly dependent on the form of eddy viscosity used.
2) Eddy viscosity model
In the forward model described above, velocity profiles are estimated given the wind or pressure forcing and an assumed vertical profile of eddy viscosity. Here, we seek an inverse solution that estimates the time-dependent vertical profile of eddy viscosity given vertical profiles of horizontal velocity and wind forcing. Reordering the combined momentum equation to solve for the eddy viscosity A gives
∂/∂z(A∂q/∂z) = ∂q/∂t + ifq + (1/ρ₀)(∂P/∂x + i∂P/∂y),  (8)
a first-order ODE for A requiring knowledge of the horizontal pressure gradients (∂P/∂x and ∂P/∂y), the horizontal velocities, and one boundary condition. We use the surface (wind) forcing to obtain eddy viscosity at the surface.
Of the three inputs to the inverse solution, the pressure gradients are perhaps the most difficult to measure directly. Estimates of the barotropic across-shelf and along-shelf pressure gradients at
the PISCO stations were made by Kirincich and Barth (2009), yielding a geostrophically balanced pressure gradient in the across-shelf direction and variable pressure gradient in the along-shelf
direction (Figs. 5d,e). Additionally, a test of the thermal wind balance, reported in section 3a, found similarities between a vertically sheared Coriolis term and the across-shelf density gradient
offshore of SH during upwelling-favorable winds. However, these pressure gradients were not measured with sufficient accuracy to be used in (8) (Kirincich and Barth 2009) and thus must be treated as
unknowns. To simplify the inverse formulation with these additional unknowns, we assume the majority of the depth-dependent Coriolis term at this shallow-water depth is in an Ekman balance with
vertical diffusion rather than a geostrophic balance with a baroclinic pressure gradient. Thus, the depth-dependent pressure gradients should be much smaller than both the vertically uniform pressure
gradients and the vertical diffusion terms, and they can be ignored. This assumption has been used in previous models (Lentz and Winant 1986; Lentz 1994) and is tested in section 3c.
With these simplifications, given a pair of pressure gradients, we can solve (8) for a vertical profile of eddy viscosity using only the velocity profiles and wind stress as input. However, if the
velocity profiles are not fully explained by the input forcing, the resulting eddy viscosity will be incorrect and possibly complex. The imaginary part of such a result can be understood dynamically
considering the balance of momentum. A residual momentum term will exist if the vertical diffusion term, with its magnitude and vertical structure driven by the unknown eddy viscosity and the
velocity shear, is unable to account for all of the remaining momentum. In the inverse solution, this residual is packaged into an imaginary part of the eddy viscosity, and thus has a component in
each momentum equation (denoted as R[x] and R[y]). We assume that the bulk of this incorrect or additional momentum is due to incorrect vertically uniform pressure gradients. To optimize the
solution, (8) is solved for a matrix of along- and across-shelf pressure gradient pairs, finding the pair that makes the profile of A as positive as possible [as vertical diffusion is a momentum sink
(A > 0), not a source (A < 0)] and minimizes the depth-averaged absolute value of the residual momentum terms R[x] and R[y].
The inverse solution is calculated independently for each time step j, providing time series of the estimated eddy viscosity (A; Fig. 5c), the vertically uniform along- and across-shelf pressure
gradients that optimize the inverse solution for the criteria given above (hereafter “matched” pressure gradients; Figs. 5d,e), and the depth-dependent residual momentum terms R[x] and R[y]. The raw,
hourly velocity profiles used in the calculation were normalized by water depth, interpolated onto a regularly spaced vertical grid, and low-pass filtered to isolate the subtidal variability. The
along-shelf component of wind stress, also low-pass filtered, was used as the surface boundary condition. The matched pressure gradients are assumed to be composed of the vertically uniform pressure
gradient and the depth-averaged mean of any other unrepresented term in the momentum equations (e.g., nonlinear advection), while the residual momentum term accounts for the depth-dependent part of
any unrepresented term.
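Stripped of the pressure-gradient search, the core of such an inversion is an integration of the momentum residual downward from the surface stress, followed by division by the observed shear. The sketch below is schematic, with our own variable names and a fixed pressure-gradient pair rather than the optimized ("matched") one:

```python
# Schematic inverse: recover A(z) from velocity profiles and wind stress.
RHO0, F = 1025.0, 1e-4  # reference density (kg m^-3), Coriolis parameter (s^-1)

def invert_eddy_viscosity(q, dqdt, z, tau_y, dpdx, dpdy):
    """Estimate eddy viscosity A for each layer, surface to bottom.

    q     : complex velocities u + i*v at grid points (surface first)
    dqdt  : complex accelerations at the same points
    z     : depths (m, negative down), surface first
    tau_y : along-shelf wind stress (N m^-2)
    dpdx, dpdy : vertically uniform pressure gradients (Pa m^-1)
    """
    # Momentum not explained by acceleration, Coriolis, or the pressure
    # gradients must be the divergence of the turbulent stress A*dq/dz.
    rhs = [dqdt[k] + 1j * F * q[k] + (dpdx + 1j * dpdy) / RHO0
           for k in range(len(q))]
    stress = 1j * tau_y / RHO0  # surface boundary condition on A*dq/dz
    A = []
    for k in range(len(q) - 1):
        dz = z[k] - z[k + 1]  # positive layer thickness
        stress -= 0.5 * (rhs[k] + rhs[k + 1]) * dz  # integrate rhs downward
        shear = (q[k] - q[k + 1]) / dz
        # Real part -> viscosity; imaginary part -> residual momentum.
        A.append((stress / shear).real if abs(shear) > 5e-3 else float("nan"))
    return A
```

Fed an analytic Ekman spiral generated with constant A, this recovers A to within a few percent away from the bottom; the 5 × 10^−3 s^−1 shear cutoff mirrors the accuracy limit noted below.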
Accuracy testing for the inverse solution was done using the output of several numerical circulation models as described by Kirincich (2007). Summarizing these results, tests indicate that the
inverse calculation was unable to reasonably estimate eddy viscosity when the vertical shear of the horizontal velocity was less than 5 × 10^−3 s^−1. Approximately 4.4% of the observations from
station SH fell below this threshold. Additionally, the inverse did poorly when the true eddy viscosities were less than 5 × 10^−5 m^2 s^−1. Approximately 4.2% of the estimated eddy viscosity values
fell below this threshold. (Instances of poor fit included in these thresholds can also be seen in Fig. 5c as negative values or sharp spikes of A.) Above these levels, tests using the numerical
models imply that rms errors are 20% of the mean eddy viscosity values for upwelling and downwelling if the inverse solution is well formulated. If additional sources or sinks of momentum have been
neglected, potential errors increase with depth from 20% to 70% of the mean values, but approach the mean values themselves in the bottom 2–3 m.
The inverse method described here does not account for measurement errors of the velocity or wind data while estimating vertical diffusion and eddy viscosity. Thus, more complex variational
estimation techniques that do so (Yu and O’Brien 1991; Panchang and Richardson 1993) could be adapted to solve for time-dependent profiles of A. However, we proceed with the basic model described
above to understand how variations in vertical eddy viscosity over the time scales of wind and pressure forcing events can affect across-shelf circulation in the inner shelf. Based on the accuracy
and sensitivity of the method, the inverse calculation formulated here was deemed adequate for this purpose.
c. Calculating momentum balances
The distribution of momentum in the inverse model results can be packaged into four terms for each linear horizontal momentum equation: the measured acceleration, an ageostrophic pressure gradient
formed by the sum of the Coriolis and matched pressure gradient terms, the vertical diffusion of momentum, and a residual momentum term (Figs. 7a–d, 8a–d). Vertical profiles of vertical diffusion and
residual momentum were smoothed (using a three-point boxcar vertical filter) for presentation in Figs. 7 and 8. For comparison with the residual term, estimates of across-shelf and vertical
advection, the second and fourth terms of (1) and (2), are included (Figs. 7e,f, 8e,f). Across-shelf advection uses station SH velocities and an across-shelf velocity gradient between the
measurements at SH and zero at the coastal boundary. Vertical advection was computed by multiplying the vertical gradients of horizontal velocities at station SH by an estimate of the vertical
velocity w. We assumed w had a parabolic vertical structure that was zero at both boundaries and a maximum at middepth, where it equaled the across-shelf surface transport at station SH divided by
the distance to the outside of the surf zone (700 m). In both equations, estimates of the along-shelf momentum flux, the third term in (1) and (2), were an order of magnitude smaller than all other
terms and were neglected hereafter.
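The vertical-advection estimate above can be written out explicitly. The sketch below assumes the parabolic w described in the text (zero at both boundaries, middepth maximum equal to the across-shelf surface transport divided by the 700-m distance to the surf zone); the function names and grid are illustrative only.

```python
import numpy as np

def parabolic_w(z, H, surface_transport, L=700.0):
    """Parabolic vertical velocity: zero at the surface and bottom,
    maximum at middepth equal to the across-shelf surface transport
    (m^2 s^-1) divided by the distance to the surf zone (700 m).
    z is height above the bottom, on [0, H]."""
    w_max = surface_transport / L      # m s^-1
    s = z / H
    return 4.0 * w_max * s * (1.0 - s)

def vertical_advection(u, z, H, surface_transport):
    """Estimate w * du/dz on a vertical grid (central differences)."""
    w = parabolic_w(z, H, surface_transport)
    dudz = np.gradient(u, z)
    return w * dudz
```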
3. Results
a. Observed hydrographic variability
We begin with a description of conditions at station SH during the latter half of the 2004 upwelling season (Fig. 2). In general, the study area was forced by upwelling-favorable winds (τ = −0.05 N m^
−2) that resulted in southward velocities near 0.1 m s^−1 and dense waters (σ[t] = 25.5–26 kg m^−3) in the inner shelf. Periodic wind reversals occurring near days 203, 220–224, and 236–242 (Fig. 2)
caused reversals of along-shelf velocity and reduced inner-shelf density. Water-column temperatures ranged from 8° to 9°C during strong or sustained upwelling, and 14° to 16°C during these current
reversals. Waters tended to be more stratified during periods of weaker winds (e.g., days 195–202 and 227–232), weakly stratified during upwelling conditions (e.g., days 206–215 and 224–227), and
nearly unstratified during peak downwelling events (e.g., days 219 and 238–241) (Fig. 2). This hydrographic variability implies that conditions regularly transition between fully upwelled (when the
upwelling front intersects the surface offshore of the mooring, leaving weakly stratified conditions inshore) and fully downwelled (similar conditions but for a downwelling front) in response to the
local wind forcing.
These rapid wind-driven fluctuations are further illustrated by the across-shelf sections obtained on 10 and 11 August (yeardays 223 and 224) shown in Fig. 3. During this period, winds transition from a short downwelling event (day 222) to strengthening (day 223) and then sustained (day 224) upwelling (Fig. 2a). The across-shelf hydrographic sections (Fig. 3a) show isopycnals transitioning from nearly horizontal on day 223 (dashed lines) to strongly upwelled on day 224 (solid lines). The σ[t] = 25 kg m^−3 isopycnal lies at a depth of 10 m inshore of the 15-m isobath (1.5 km offshore) on day 223, but intersects the surface near the 50-m isobath (6.5 km offshore) one day later. Assuming along-shelf uniformity, a water parcel at this interface would need an average across-shelf velocity of 0.06 m s^−1 to attain this displacement. Comparing the ship-based hydrography and velocity surveys using the thermal wind equation, ∂v/∂z = −(g/(ρ[0]f)) ∂ρ/∂x, where g = 9.81 m s^−2 is the gravitational acceleration, shows that the across-shelf density structure was nearly geostrophically balanced by along-shelf velocities on day 224 (Figs. 3b,c). Using the surface (z = 0) as the reference level and the ADCP-derived near-surface velocity as the reference velocity, density-derived geostrophic velocities were similar to the ADCP-derived velocities offshore of the 20-m isobath in vertical shear, magnitude, and direction. The rms difference between the two sections was 0.037 m s^−1 overall, but 0.025 m s^−1 within the area of the upwelled isopycnals.
The time series of measured across-shelf surface transport at SH was correlated with the theoretical Ekman transport for the same period (0.73 at zero lag), but it was lower in magnitude (Fig. 4a).
Following the method of Kirincich et al. (2005), the fraction of full theoretical Ekman transport measured over the 60-day study period was 25%. However, using this bulk calculation technique over
time scales similar to the 2–7-day wind and stratification events yielded results that were not statistically significant.
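The bulk transport-fraction calculation can be sketched numerically. The sketch below computes the full theoretical Ekman transport as U_Ek = τ/(ρ[0] f) and estimates the fraction as a least-squares slope through the origin; the density, the Coriolis parameter (a mid-latitude value), and the slope estimator are illustrative assumptions, not necessarily the exact method of Kirincich et al. (2005).

```python
import numpy as np

RHO0 = 1025.0          # seawater density, kg m^-3 (assumed)
F_CORIOLIS = 1.037e-4  # Coriolis parameter near 45N, s^-1 (assumed)

def ekman_transport(tau_y):
    """Full theoretical Ekman surface transport (m^2 s^-1) from the
    along-shelf wind stress tau_y (N m^-2): U_Ek = tau_y / (rho0 * f)."""
    return np.asarray(tau_y) / (RHO0 * F_CORIOLIS)

def transport_fraction(measured, tau_y):
    """Fraction of full theoretical Ekman transport, estimated as the
    least-squares slope (through the origin) of measured vs. theory."""
    theory = ekman_transport(tau_y)
    m = np.asarray(measured)
    return np.sum(m * theory) / np.sum(theory * theory)
```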
Given the small volume of water inshore of the 15-m isobath, it is likely that this wind-forced across-shelf transport was able to move water masses into and out of the inner shelf on time scales
similar to the fluctuating winds. Assuming a simple mass conservation balance between local density changes and the advection of a density gradient by the across-shelf circulation, these movements
are illustrated by comparing the measured surface transport to the density change at the 11-m CT (Fig. 4b). Episodes of strong offshore (negative) transport occur with positive density changes (e.g.,
days 203, 220, and 224), suggesting an onshore movement of denser waters during active upwelling. The opposite was true during transitions to downwelling events, with onshore (positive) surface
transport occurring with negative density changes, or lighter waters entering the inner shelf. Stratification (shaded in Fig. 4b) was most variable during these transitions, yet instances existed
where both surface transport and density change were large but stratification was small (days 238–240), and where surface transport was large but both density change and stratification were small
(e.g., days 207–212 and 232–233 in Fig. 4b). These findings appear to disagree with the relationship between stratification and transport seen in previous studies (Austin and Lentz 2002; Tilburg 2003
; Kirincich et al. 2005).
The forward models for along-shelf velocity introduced earlier [section 2b(1)] are able to explain the bulk of the velocity variability found at SH. Time series of depth-averaged along-shelf
velocities estimated for the study period using these models, V[lw] following Lentz and Winant (1986) and V[cub] following Lentz (1994), were generally similar to and correlated with (V[o]/V[lw]:
0.81 at zero lag, V[o]/V[cub]: 0.79 at zero lag) measurements from station SH (Fig. 4c). However, the model results failed to explain the variability of the depth-averaged along-shelf velocity during
periods of increased or variable stratification. These times when the modeled along-shelf velocities differed significantly from observations (days 205–206, 215–220, 226–229, and 233–234) occurred
when stratification was high or rapidly changing (Fig. 4c). Further, in agreement with Lentz (1994), the 1D numerical model had difficulty representing the across-shelf circulation described above
(not shown here). Additional times of poor agreement (days 203–204 and 206–214) indicate that secondary forcings (e.g., pressure gradients) may exist in addition to the dominant wind forcing.
b. Estimated eddy viscosity
Similar to the observed variability described above, the eddy viscosity (A) estimated using the inverse calculation at station SH (Fig. 5) had a strong variability (greater than two orders of
magnitude) superimposed on a mean value of 1.6 × 10^−3 m^2 s^−1. Peak eddy viscosities (>0.01 m^2 s^−1) occurred during times of strong surface forcing and rapidly changing stratification (e.g., days
222 and 237). At these times, eddy viscosity was a maximum near the surface. Additional instances of elevated eddy viscosities occurred when A was a maximum near the bottom (e.g., days 195 and 207).
These occasions were associated with periods of strong positive matched along-shelf pressure gradients and positive wind stress. Depth-averaged eddy viscosity during upwelling ranged from A = 1 × 10^
−4 m^2 s^−1 to 2–3 × 10^−3 m^2 s^−1, with a mean upwelling value of 1.3 × 10^−3 m^2 s^−1. In contrast, eddy viscosities were larger during downwelling, having a mean of 2.1 × 10^−3 m^2 s^−1 with peak
values reaching 7 to 9 × 10^−3 m^2 s^−1. The matched along-shelf pressure gradient had magnitudes similar to estimates from Kirincich and Barth (2009), found using the gradient between pressure
sensors at stations SH and SR (Figs. 5d,e), but with increased short-time-scale variability. The matched across-shelf pressure gradient was similar to an estimate of the gradient inferred from the
depth-averaged along-shelf velocity, assuming that a geostrophic balance exists between pressure and velocity.
An assessment of the quality of the inverse result can be made by comparing the estimated depth-averaged along-shelf vertical diffusion term with its theoretical equivalent, the sum of the wind stress and bottom stress terms of the depth-averaged along-shelf momentum equation, where the bottom stress is quadratic, calculated using the lowest velocity bin of the ADCP (2.5 m above the bottom) and a drag coefficient of 1.5 × 10^−3 (Perlin et al. 2005). In general, the two time series were similar (Fig. 5f) and positively correlated (0.65). They were most similar during upwelling-favorable winds, with a correlation of 0.67, where the values shown in Fig. 5f are mostly positive. The time series differed substantially when the estimated term was negative (Fig. 5f), which generally occurred during downwelling wind events and/or times of positive along-shelf pressure gradients. These discrepancies might be due to the more barotropic flow conditions thought to be present during downwelling or pressure-driven events. Here, the majority of the vertical diffusion of turbulent momentum might occur closer to the bottom than the lowest velocity measurement (Kirincich 2007). Thus, this comparison is not entirely appropriate during these types of conditions.
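The quadratic bottom stress and the resulting theoretical depth-averaged diffusion term can be sketched as below; the function names and the water depth argument are illustrative, with τ[b] = ρ[0] C[d] |u[b]| u[b] evaluated from the lowest ADCP bin.

```python
import numpy as np

RHO0 = 1025.0   # seawater density, kg m^-3 (assumed)
CD = 1.5e-3     # quadratic drag coefficient (Perlin et al. 2005)

def bottom_stress(u_b, v_b):
    """Quadratic bottom stress components (N m^-2) from the lowest
    ADCP velocity bin: tau_b = rho0 * Cd * |u_b| * u_b."""
    speed = np.hypot(u_b, v_b)
    return RHO0 * CD * speed * u_b, RHO0 * CD * speed * v_b

def theoretical_diffusion(tau_sy, u_b, v_b, H):
    """Depth-averaged along-shelf vertical diffusion implied by the
    wind-minus-bottom-stress balance: (tau_sy - tau_by) / (rho0 * H)."""
    _, tau_by = bottom_stress(u_b, v_b)
    return (tau_sy - tau_by) / (RHO0 * H)
```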
Based on these comparisons, the inverse calculation appears to represent eddy viscosity sufficiently well, particularly during upwelling-favorable winds. The discrepancies that exist when A and
velocity shear exceed the thresholds described above were also times when the forward models shown earlier did poorly. These periods appear to be pressure forced, as the matched along-shelf pressure
gradient was large and positive (days 204, 215–219, and 238 in Fig. 5d). However, a portion of the matched gradient might also account for a discrepancy between the forcing and the resulting velocity
profiles, a biased estimate of the mean vertical mixing term or a significant depth-independent nonlinear term.
To analyze the vertical structure of A, we performed an empirical orthogonal function (EOF) analysis of A after removing the depth-averaged mean and normalizing by the standard deviation for each
vertical profile. Of the results, modes one and two account for 50% and 25% of the total variance, while modes three and four account for 10% and 4%. The vertical structure of eddy viscosity at
station SH is well represented by a composite of the first three modes, collectively containing 85% of the variance. Examples of this composite at times of positive (solid profiles) and negative
(dashed profiles) modal amplitudes for modes one and two are similar to A (Figs. 6c,d). Focusing on times when modes one or two are clearly dominant (explaining >55% of the hourly variance), mode one
is positive (surface intensified A; represented by times of darker shading in Fig. 6a) during times of strong downwelling (positive) wind bursts or sustained upwelling (negative) winds (e.g., days
223, 225–227, 232–234, and 236–239). Mode one tends to be dominant and negative (bottom intensified A; light shading in Fig. 6a) during weaker or transitional winds (e.g., days 194, 211, 217, and
228). Mode two is rarely negative and dominant (both surface and bottom intensified; light shading in Fig. 6b). These instances also occur during weak winds or transitions between upwelling and
downwelling events (e.g., days 216 and 218). Times of positive mode 2 dominance, a midwater intensified profile similar in form to the “cubic” A model profiles used here and by Lentz (1994) (dark
shading in Fig. 6b), occurred during short bursts of upwelling-favorable (negative) winds (e.g., days 204, 221, and 250).
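The EOF decomposition described above (remove the depth-averaged mean, normalize each profile by its standard deviation, then decompose) can be sketched via the SVD; this is one common way to compute EOFs and is an assumption about the authors' exact procedure.

```python
import numpy as np

def eof_of_viscosity(A):
    """EOF analysis of eddy-viscosity profiles A (depth x time):
    remove each profile's depth-averaged mean, normalize by its
    standard deviation, then take the SVD of the anomaly matrix.
    Returns vertical modes (columns of U), temporal weights (Vt),
    and the variance fraction explained per mode."""
    anom = A - A.mean(axis=0, keepdims=True)
    std = anom.std(axis=0, keepdims=True)
    std[std == 0] = 1.0                    # guard against flat profiles
    anom = anom / std
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)         # variance explained per mode
    return U, Vt, var_frac
```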
c. Momentum balance analysis
An analysis of the across- and along-shelf equations at station SH provides an additional check of the inverse estimate, as well as an idea of the dominant balances present. All terms are shown (
Figs. 7 and 8) as if they were on the left-hand side of the momentum Eqs. (1) and (2). Thus, while we look for the opposite structure (or colors) for balances among the top four panels, we look for
similar structure when comparing the residual term (Figs. 7d and 8d) to the advective estimates (Figs. 7e,f and 8e,f).
In the across-shelf balance, after the depth-independent geostrophic balance was subtracted, a dominant balance exists between the ageostrophic pressure gradient and vertical diffusion. Resembling an
Ekman balance, vertical diffusion opposes the ageostrophic term throughout most of the water column, including a sign change at 5–7-m depth. A similar vertical structure of these terms exists in the
analytical solution of (1) and (2) when the advective terms are neglected (Ekman 1905). The residual term offsets the ageostrophic pressure gradient during upwelling near the bottom of the measured
area (e.g., days 213–216, 232–234, and 241–252 as marked on Fig. 7e), indicating that the Ekman balance may break down here. These results imply that the assumption of a small across-shelf baroclinic
pressure gradient appears reasonable throughout most of the water column at this shallow depth. Estimates of the two advective terms frequently match the residual. Across-shelf advection, the weaker
of the two, appears to match the residual on days 204, 221, and 224 (Fig. 7e). Vertical advection appears to match the sign and vertical structure of the residual more often (days 204, 207–212, 221,
224, and 232–245 in Fig. 7e), but with a slightly smaller magnitude. Combined, these two terms match the vertical structure of large portions of the across-shelf residual momentum term. The measured
acceleration term is the smallest of those shown in Fig. 7.
A similar balance between ageostrophic pressure and vertical diffusion exists in the along-shelf momentum equation. These terms are more frequently vertically uniform but are occasionally intensified
at middepth (Fig. 8). Acceleration is somewhat important during times of strong or rapidly changing wind forcing. In contrast to the across-shelf equation, neither advective term appears to account
for the residual momentum in the along-shelf equation. Vertical advection is frequently vertically uniform (e.g., days 208–212, 225, and 250–245 in Fig. 8f) and thus may account for a portion of the
ageostrophic pressure during these times. The across-shelf advection term often has a vertical structure similar to that of the residual (e.g., days 215–219 and 241–252 in Fig. 8e), but of opposite
sign. This might occur if the core of the along-shelf velocity jet associated with this small-scale upwelling circulation (Kirincich and Barth 2009) was inshore of station SH, reversing the gradient.
Spikes or sharp sign changes in vertical diffusion occur in both momentum equations and generally coincide with opposing patterns in the residual momentum terms. Occurring most frequently during
times of rapid transitions between upwelling and downwelling (days 198, 200, 206, 227, and 246), these patterns are the largest magnitude features for each term and further indicate the effects of
measurement or calculation errors. In both equations, the residual tends to be highest in the bottom of the measured area where calculation errors are largest (Kirincich 2007). Despite these
discrepancies, the comparisons highlighted above provide additional support of the vertical diffusion and eddy viscosity estimated from the inverse calculation, and they help infer the sources of the
residual momentum terms.
d. Across-shelf exchange efficiency
Previous studies have found links between the level of stratification present and the across-shelf exchange efficiency (Austin and Lentz 2002; Kirincich et al. 2005), but the observations of
event-scale variations shown earlier do not reveal such a pattern. To explore this further, we computed the Ekman transport fraction, following Kirincich et al. (2005), for varying levels of
stratification. Separating the time series of theoretical and measured transports into six equally sized bins, based on the logarithm of stratification, and computing the transport fraction for each
bin leads to statistically significant results (Fig. 9a). The resulting mean Ekman transport fraction is similar for all levels of stratification, and thus no real trend exists that supports a link
between transport fraction and stratification.
In contrast, a similar binned Ekman transport fraction calculation using depth-averaged A, instead of stratification, gives drastically different results. Here, the fraction of full Ekman transport
decreases as the magnitude of A increases (Fig. 9b). Of the significant (shaded) results, Ekman transport ranged from 60% for A = 0.3 × 10^−3 m^2 s^−1 to 15% for A = 3.8 × 10^−3 m^2 s^−1. The
majority of the measurements fell in the band between fractions of 37% and 30% and A = 1 to 2 × 10^−3 m^2 s^−1. The median fraction given here was 10% higher than the time series mean fraction noted
earlier. A third calculation, binned by the surface wind stress, has a trend similar to that of eddy viscosity. The fraction of full Ekman transport decreases as the magnitude of the wind stress
increases (Fig. 9c), from 65% for stresses under 1 × 10^−3 N m^−2 to 25% at stresses of 0.1 N m^−2.
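The binned transport-fraction calculation underlying Figs. 9a–c can be sketched as follows. Details such as the slope-through-origin estimator and the equal-count binning are assumptions for illustration.

```python
import numpy as np

def binned_fraction(measured, theory, key, nbins=6):
    """Transport fraction in equally sized bins sorted by a binning
    variable `key` (e.g., log stratification, depth-averaged A, or
    wind stress magnitude). Returns bin centers (median of key) and
    the transport fraction per bin."""
    order = np.argsort(key)
    m = np.asarray(measured)[order]
    t = np.asarray(theory)[order]
    k = np.asarray(key)[order]
    centers, fracs = [], []
    for cm, ct, ck in zip(np.array_split(m, nbins),
                          np.array_split(t, nbins),
                          np.array_split(k, nbins)):
        # least-squares slope through the origin within each bin
        fracs.append(np.sum(cm * ct) / np.sum(ct**2))
        centers.append(np.median(ck))
    return np.array(centers), np.array(fracs)
```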
The distribution of these transport fractions, binned by A, during the 8-day period starting on yearday 218 illustrates how the fraction of full Ekman transport varies with forcing events. The lowest
fractions occurred during the peak downwelling event of day 219 (Fig. 10), while the highest fractions occurred at times of weaker wind forcing or event transitions (days 218, 222.3, and 223). This
variability can also be thought of as more (high fraction) or less (low fraction) of the Ekman spiral fitting into the water column as conditions vary in time. The progression of high fractions to
low fractions and back over the course of a wind event is illustrated by the upwelling events present in this time period (Fig. 10). This is of particular importance because the fractional transport
can be large in the initial or final stages of a wind event, presumably as the water column is more stratified. It is during these times that the total exchange of the inner-shelf water masses is
likely to occur.
In general, downwelling events tend to have lower transport fractions than upwelling events, or a weaker across-shelf transport relative to wind forcing. An analysis of average upwelling and
downwelling characteristics after event onset revealed two key differences responsible for the lower fractions seen during downwelling. First, downwelling events tend to have stronger peak winds than
upwelling events. Second, and perhaps more importantly, during downwelling the vertical shear of the horizontal velocity was reduced after event onset relative to upwelling. Through the inverse
model, these factors contributed to higher eddy viscosities and thus reduced transport fractions during downwelling. This discrepancy has important implications for across-shelf transport in the
inner shelf. As illustrated in the bottom of Fig. 10, the across-shelf transport accumulated over the 8-day period was more negative than the theoretical transport reduced by the mean transport
fraction (25%). This difference is due to reduced transport during the first downwelling event as well as increased transport during upwelling and event transitions. As a result, twice as much water
was upwelled through the region inshore of the mooring over the 8-day period than that predicted using the mean transport fraction.
4. Discussion
Conditions at station SH were dominated by rapid and short fluctuations between upwelling and downwelling, with upwelling events averaging 40 h in length and downwelling events averaging 30 h in
length. Perhaps because of these short-time-scale variations, the expected relationship between stratification and transport fraction was not seen. Additionally, simple wind-driven velocity models
were not able to accurately represent the measured circulation during times of increased stratification or transitions between forcing events. For these reasons, we have used an inverse model for
eddy viscosity to understand the time variability of across-shelf exchange in the inner shelf. While previous efforts to estimate eddy viscosity from measurements exist (Yu and O’Brien 1991), to our
knowledge, none has allowed for additional unknown terms (e.g., pressure gradients or nonlinear advection) or a time-dependent A. Thus, this method may be useful in future dynamical studies of
shallow locations where velocity profiles are well resolved. As seen from the inverse results, resolving these profiles near the bottom appears necessary to further improve the technique.
The varying magnitudes of eddy viscosity during upwelling and downwelling described here were also present in the inner-shelf model studies of Austin and Lentz (2002) and Kuebel Cervantes et al.
(2003). Using constant winds of 0.1 N m^−2 and constant stratification, Austin and Lentz (2002) found vertical eddy viscosities with cubic vertical structure and maximum values of 7 × 10^−3 m^2 s^−1 during upwelling and 1 × 10^−2 m^2 s^−1 during downwelling in water depths similar to those studied here. With variable forcing conditions, Kuebel Cervantes et al. (2003) found lower mean values of 9 × 10^−4 m^2 s^−1 during upwelling and 3.9 × 10^−3 m^2 s^−1 during downwelling. The lower values of Kuebel Cervantes et al. (2003) were most similar to the estimated eddy viscosities for station
SH reported here. However, both model studies linked this difference in eddy viscosity during upwelling or downwelling to a difference in stratification during upwelling or downwelling. We see no
such relationship in the observations from station SH. In our results, eddy viscosities are higher during downwelling winds because downwelling winds were generally stronger in magnitude with reduced
vertical shear after event onset.
The fraction of full Ekman transport, a relationship between full theoretical Ekman transport and the measured surface transport, can be thought of as a metric of the efficiency of across-shelf
exchange in the inner shelf. Again, variations in this quantity at SH did not correspond to changes in stratification alone, as has been previously suggested (Tilburg 2003; Austin and Lentz 2002;
Kirincich et al. 2005) but were instead linked to variations in estimates of vertical eddy viscosity. When eddy viscosity was small, the inner shelf was fully flushed even during weak wind stresses.
When eddy viscosity was high, across-shelf transport was reduced and residence times increased. Similar results exist for a transport fraction calculation binned by the surface wind stress.
Additionally, this transport fraction pattern was similar to that found in calculations (not shown here) binned by either the level of vertical shear of the horizontal velocity or the gradient
Richardson number, defined as the ratio of the buoyancy frequency squared to the vertical shear squared. A positive relationship with gradient Richardson number and shear but not stratification implies that shear was
the dominant influence on the variability of the gradient Richardson number calculated here.
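The gradient Richardson number used above, Ri = N^2/S^2 with N^2 = −(g/ρ[0]) ∂ρ/∂z and S^2 = (∂u/∂z)^2 + (∂v/∂z)^2, can be evaluated directly from the moored profiles; the sketch below uses central differences on an assumed upward-increasing vertical coordinate.

```python
import numpy as np

G = 9.81       # gravitational acceleration, m s^-2
RHO0 = 1025.0  # reference density, kg m^-3 (assumed)

def gradient_richardson(rho, u, v, z):
    """Gradient Richardson number Ri = N^2 / S^2, where
    N^2 = -(g/rho0) d(rho)/dz and S^2 = (du/dz)^2 + (dv/dz)^2.
    z increases upward; gradients use central differences."""
    N2 = -(G / RHO0) * np.gradient(rho, z)
    S2 = np.gradient(u, z)**2 + np.gradient(v, z)**2
    return N2 / S2
```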
As wind stress and velocity shear are inputs to the inverse model, it is perhaps not surprising that their effect on the efficiency of across-shelf exchange is similar to that of eddy viscosity. It
is of interest that no clear relationship between stratification and the transport fraction was found here. Given that our observations are dominated by rapid variations in conditions driven by
short-time-scale fluctuations of the wind forcing, we infer that wind forcing and velocity shear set the eddy viscosity, and thus the transport fraction, during these transitional periods. The
effects of stratification on across-shelf exchange may be more important after the forcing has been sustained for a number of inertial periods. However, given the uncertainty existing in the inverse
results, more work is needed to fully understand this difference.
Our results illustrate that northward downwelling-favorable wind bursts lead to high eddy viscosities and low amounts of across-shelf transport. Thus, the net across-shelf circulation can be biased
toward the upwelling of colder, nutrient-rich waters into the inner shelf during periods of fluctuating wind forcing. This result has significant implications for the common use of large-scale,
wind-based upwelling indices to estimate coastal upwelling. Although the total magnitude of upwelled waters, integrated across the shelf, might be accurately predicted with such indices, the amount
of upwelled waters seen at a particular across-shelf location in the inner shelf may vary greatly from the index. In particular, these indices will underestimate the net water upwelled inshore of a
given water depth during periods of variable wind forcing. We believe this result is most applicable to inner-shelf areas where the across-shelf transport is tightly correlated with the measured
winds and both upwelling and downwelling events commonly occur. This result may be complicated, or confounded, by additional forcings (e.g., along-shelf pressure gradients) or spatial variations in the forcing.
The analysis of the along- and across-shelf momentum equations pointed to possible sources for the residual momentum terms. The Ekman balance appeared to occasionally break down near the bottom of
the water column as the residual term, and not vertical diffusion, opposed the vertical shear of the horizontal velocity in these areas. The baroclinic pressure gradient, observed offshore of 15 m
both here and in previous inner-shelf studies (Lentz et al. 1999; Garvine 2004) but not included in the inverse formulation, is a likely source for this residual momentum. Additionally, estimates of
along-shelf and vertical advection matched the remaining vertical structure of the residual term in the across-shelf equation. In the along-shelf momentum equation, the potential for these terms to
account for the remaining residual momentum was less clear. The importance of across-shelf advection in balancing vertical diffusion in the along-shelf momentum equation during active upwelling was
shown by Kuebel Cervantes et al. (2003) and Lentz and Chapman (2004). However, our estimate for this term was similar in structure to the residual, but opposite in sign, perhaps suggesting an error
in the assumptions made in its calculation. Despite these similarities, the inclusion of these estimates for the advective terms in the inverse calculation did not significantly alter estimates of A
or reduce R[x] and R[y] (Kirincich 2007). Thus, it appears that better estimates of these terms are necessary in future studies to fully understand the time variability of this system.
The episodes of total water mass exchange and the progression of transport efficiency during wind events identified in this study may have significant implications for inner-shelf ecological
communities. Previous ecological studies in upwelling environments show that increased settlement of larval invertebrates was correlated with episodes of increasing water temperatures (Miller and
Emlet 1997; Farrell et al. 1991; Broitman et al. 2005). Along the Oregon coast, the transition from upwelling to downwelling, with its decreased eddy viscosity and increased across-shelf transport
efficiency, allows for a full flushing of the inner shelf and replacement with warm, fresh surface water. Once this transition has occurred, eddy viscosity increases and across-shelf circulation is
reduced or shut down. This process provides a mechanism for focused, successful across-shelf transport or retention of propagules. Whether this aids recruitment depends on the life cycle
characteristics of the individual species. Larvae released during such an event would tend to stay closer to shore for a longer period of time than if released during normal upwelling conditions.
Additionally, with an overall bias toward onshore transport at depth during upwelling, larvae able to adjust their buoyancy might be able to move onshore in a predictable manner.
5. Conclusions
This analysis has described the short-time-scale (2–7 day) variability of forcing, circulation, and hydrography along the central Oregon coast. Conditions in the study area, sheltered from the
regional circulation by an offshore submarine bank, were highly variable in time. With the local circulation driven by along-shelf wind forcing, rapid transitions occurred between upwelling and
downwelling events and a variety of water masses occupied the inner shelf. To understand this variability, we adapted a simple one-dimensional numerical model to estimate the time-dependent vertical
eddy viscosity, a parameterization of the transfer of momentum due to turbulent eddies, from typical observations of velocity and wind forcing. The novelty of the inverse method was that it estimated
eddy viscosity while allowing for additional unknown sources of momentum.
With the results of this inverse calculation, we were able to quantify the effects of variable forcing on across-shelf exchange. The estimated eddy viscosity varied over time scales similar to
forcing events, averaging 1.3 × 10^−3 m^2 s^−1 during upwelling winds and 2.1 × 10^−3 m^2 s^−1 during downwelling winds. The fraction of full Ekman transport present in the surface layer, a measure
of the efficiency of across-shelf exchange at this water depth, was a strong function of the eddy viscosity and wind forcing but not stratification. Transport fractions ranged from 60% during times
of weak or variable wind forcing and low eddy viscosity, to 10%–20% during times of strong downwelling and high eddy viscosity. The increased eddy viscosity and decreased exchange efficiency found
during downwelling events were linked to reduced vertical shear of the horizontal velocity, not to reductions in stratification. These trends result from the rapid
fluctuations between upwelling and downwelling and the relatively short duration of these events, allowing wind stress and velocity shear to dominate the vertical diffusion. Previous model and
observational results finding stronger links between stratification and exchange efficiency were focused on the effects of constant wind forcing or seasonal mean circulation. The difference in eddy
viscosities between upwelling and downwelling led to varying across-shelf exchange efficiencies and, potentially, increased net upwelling over time.
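For readers wanting to reproduce the transport-fraction diagnostic, the theoretical full Ekman transport is U_ek = τ^y/(ρ0 f), and the fraction compares a measured surface-layer transport against it. A minimal sketch (our code; the numbers are illustrative, not values from this study):

```python
import math

def ekman_transport(tau_y, rho0=1025.0, lat_deg=44.6):
    """Theoretical full Ekman transport U_ek = tau_y / (rho0 * f) in m^2/s.

    f is the Coriolis parameter at the site latitude (Newport, OR is near
    44.6N; the default here is illustrative).
    """
    omega = 7.2921e-5                      # Earth's rotation rate (rad/s)
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return tau_y / (rho0 * f)

# Illustrative: 0.1 N/m^2 of along-shelf wind stress
u_ek = ekman_transport(0.1)
# If the measured surface-layer transport were 0.02 m^2/s, the Ekman
# fraction (the efficiency measure discussed above) would be:
fraction = 0.02 / u_ek
```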
This paper is Contribution Number 310 from PISCO, the Partnership for Interdisciplinary Studies of Coastal Oceans, funded primarily by the Gordon and Betty Moore Foundation and the David and Lucile
Packard Foundation. We thank J. Lubchenco and B. Menge for establishing and maintaining the PISCO observational program at OSU. We also thank Captain P. York, C. Holmes, and S. Holmes for their data
collection efforts, B. Kuebel Cervantes for providing the model output used to test the inverse method, and M. Levine (OSU) and S. Lentz (WHOI) for helpful comments on the manuscript.
• Austin, J., and S. Lentz, 2002: The inner shelf response to wind-driven upwelling and downwelling. J. Phys. Oceanogr., 32, 2171–2193.
• Barth, J., S. Pierce, and T. Cowles, 2005: Mesoscale structure and its seasonal evolution in the northern California Current System. Deep-Sea Res. II, 52, 5–28.
• Broitman, B., C. Blanchette, and S. Gaines, 2005: Recruitment of intertidal invertebrates and oceanographic variability at Santa Cruz Island, California. Limnol. Oceanogr., 50, 1473–1479.
• Chelton, D., 1983: Effects of sampling errors in statistical estimation. Deep-Sea Res., 30, 1083–1101.
• Ekman, V., 1905: On the influence of the Earth's rotation on ocean-currents. Arkiv. Math. Astro. Fys., 2, 1–53.
• Farrell, T., D. Bracher, and J. Roughgarden, 1991: Cross-shelf transport causes recruitment to intertidal populations in central California. Limnol. Oceanogr., 36, 279–288.
• Garvine, R., 2004: The vertical structure and subtidal dynamics of the inner shelf off New Jersey. J. Mar. Res., 62, 337–371.
• Kirincich, A., 2003: The structure and variability of a coastal density front. M.S. thesis, Graduate School of Oceanography, University of Rhode Island, 124 pp.
• Kirincich, A., 2007: Inner-shelf circulation off the central Oregon coast. Ph.D. dissertation, Oregon State University, 179 pp.
• Kirincich, A., and J. Barth, 2009: Alongshelf variability of inner-shelf circulation along the central Oregon coast during summer. J. Phys. Oceanogr., in press.
• Kirincich, A., J. Barth, B. Grantham, B. Menge, and J. Lubchenco, 2005: Wind-driven inner-shelf circulation off central Oregon during summer. J. Geophys. Res., 110, C10S03. doi:10.1029/
• Kuebel Cervantes, B., J. Allen, and R. Samelson, 2003: A modeling study of Eulerian and Lagrangian aspects of shelf circulation off Duck, North Carolina. J. Phys. Oceanogr., 33, 2070–2092.
• Lentz, S., 1994: Current dynamics over the northern California inner shelf. J. Phys. Oceanogr., 24, 2461–2478.
• Lentz, S., 1995: Sensitivity of the inner-shelf circulation to the form of the eddy viscosity profile. J. Phys. Oceanogr., 25, 19–28.
• Lentz, S., 2001: The influence of stratification on the wind-driven cross-shelf circulation over the North Carolina shelf. J. Phys. Oceanogr., 31, 2749–2760.
• Lentz, S., and C. Winant, 1986: Subinertial currents on the southern California shelf. J. Phys. Oceanogr., 16, 1737–1750.
• Lentz, S., and D. Chapman, 2004: The importance of nonlinear cross-shelf momentum flux during wind-driven coastal upwelling. J. Phys. Oceanogr., 34, 2444–2457.
• Lentz, S., R. Guza, S. Elgar, F. Feddersen, and T. Herbers, 1999: Momentum balances on the North Carolina inner shelf. J. Geophys. Res., 104, 18205–18226.
• Mellor, G., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851–875.
• Miller, B., and R. Emlet, 1997: Influence of nearshore hydrodynamics on larval abundance and settlement of sea urchins Strongylocentrotus franciscanus and S. purpuratus in the Oregon upwelling zone. Mar. Ecol. Prog. Ser., 148, 83–94.
• Panchang, V., and J. Richardson, 1993: Inverse adjoint estimation of eddy viscosity for coastal flow models. J. Hydrol. Eng., 119, 506–524.
• Pawlowicz, R., B. Beardsley, and S. Lentz, 2002: Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE. Comput. Geosci., 28, 929–937.
• Perlin, A., J. Moum, J. Klymak, M. Levine, T. Boyd, and P. Kosro, 2005: A modified law-of-the-wall applied to oceanic bottom boundary layers. J. Geophys. Res., 110, C10S10. doi:10.1029/
• Samelson, R., and Coauthors, 2002: Wind stress forcing of the Oregon coastal ocean during the 1999 upwelling season. J. Geophys. Res., 107, 3034. doi:10.1029/2001JC000900.
• Signell, R., R. Beardsley, H. Graber, and A. Capotondi, 1990: Effect of wave-current interaction on wind-driven circulation in narrow, shallow embayments. J. Geophys. Res., 95, 9671–9678.
• Tilburg, C., 2003: Across-shelf transport on a continental shelf: Do across-shelf winds matter? J. Phys. Oceanogr., 33, 2675–2688.
• Wiseman, W., and R. Garvine, 1995: Plumes and coastal currents near large river mouths. Estuaries, 18, 509–517.
• Yankovsky, A., R. Garvine, and A. Munchow, 2000: Mesoscale currents on the inner New Jersey shelf driven by the interaction of buoyancy and wind forcing. J. Phys. Oceanogr., 5, 2214–2230.
• Yu, L., and J. O'Brien, 1991: Variational estimation of the wind stress drag coefficient and oceanic eddy viscosity profile. J. Phys. Oceanogr., 21, 709–719.
Fig. 1.
The central Oregon shelf with the 2004 PISCO stations (dots) and the Newport C-MAN station (triangle) marked. The bold line offshore of SH marks the transect line occupied during the high-resolution
ship-based surveys. Isobaths (thin black lines) are marked in meters.
Citation: Journal of Physical Oceanography 39, 3; 10.1175/2008JPO3969.1
Fig. 2.
(a) Observed along-shelf wind stress at Newport with Acrobat section times marked by vertical lines with (b) depth-averaged along- (bold) and across-shelf (thin) velocities, (c) temperature, and (d)
density at station SH.
Fig. 3.
(a) Observed across-shelf hydrographic structure offshore of SH 15 on 10 August (day 223, dashed lines) and 11 August (day 224, solid lines), (b) the geostrophic velocity [using Eq. (9)] for 11
August, and (c) average northward velocities for 11 August from R/V Elakha's shipboard ADCP. In (b) and (c) southward velocities (dashed) and northward velocities (solid) are marked every 0.025 m s^−1, while the bold, solid line is the 0 m s^−1 contour.
Fig. 4.
(a) Theoretical (bold) and measured (thin) across-shelf transport in 15 m of water at station SH. (b) Density stratification (shaded background), density change at the 11-m CT (δσ[t], bold), and
measured across-shelf transport divided by 10 (U[s]/10, thin). (c) Density stratification (shaded background) and depth-averaged along-shelf velocities from observations (V[o]), the Lentz and Winant
(1986) analytical model (V[lw]), and the Lentz (1994) 1D numerical model with a cubic eddy viscosity profile (V[cub]).
Fig. 5.
Results of the inverse calculation for station SH. The (a) north wind stress and (b) density stratification are shown along with the estimated (c) eddy viscosity A. The matched (from inverse model)
and estimated (from data) (d) along-shelf and (e) across-shelf pressure gradients are included for comparison along with (f) the estimated (thin) and theoretical (bold) depth-averaged vertical
turbulent diffusion of horizontal momentum terms.
Fig. 6.
The vertical structure of the estimated eddy viscosity at station SH. (a), (b) Along-shelf wind (τ^y/H, thick line) and matched along-shelf pressure gradient (dP/dy, thin line) with vertical shading
denoting the occurrence of positive or negative (a) mode-one and (b) mode-two dominant conditions (>55% total variance). Positive (negative) modal conditions are shaded darker (lighter), while no
shading (white) indicates times when the mode was not dominant. (c), (d) Examples of the vertical structure of A during times of positive or negative (c) mode-one and (d) mode-two dominance. Thick
profiles, solid (dashed) during positive (negative) modal amplitudes, are composites of the first three modes, while the thinner profiles show A.
Fig. 7.
Terms in the across-shelf momentum balance (in m s^−2): (a) acceleration, (b) ageostrophic pressure gradient, (c) vertical diffusion, (d) residual momentum R[x] (denoted as R in the figure) and
estimates of the (e) along-shelf and (f) vertical advection terms. Terms are shown as they would appear on the left-hand side of Eq. (1). Solid horizontal lines mark areas of interest described in
the text.
Fig. 8.
As in Fig. 7 [(d) residual momentum R[y] (denoted as R in the figure)], but for the along-shelf momentum balance and the left-hand side of Eq. (2).
Fig. 9.
Ekman fraction calculation for station SH surface transport and theoretical Ekman transport, following Kirincich et al. (2005) but binned by levels of (a) density stratification, (b) depth-averaged
eddy viscosity (A), and (c) wind stress. Bins where the measured and theoretical transports are significantly correlated (at the 95% confidence interval) are shaded. Confidence intervals for the
regression are shown as vertical lines and the number of hours falling in each bin is listed.
Fig. 10.
Temporal variability of eddy viscosity and its effects on circulation during yeardays 218–226. (top) The ranges of A used in the Ekman transport fraction calculation shown in Fig. 9 are distributed
on time series of theoretical Ekman transport (U[tek], large variations) and measured surface transport (U[s], smaller variations). (bottom) The accumulated mean theoretical Ekman (0.25U[tek]; the
mean fraction for this station) and measured surface (U[s]) transports are shown, normalized by the volume of the inner shelf inshore of station SH, to illustrate the effects of eddy viscosity
variability on across-shelf transport.
LendingClub: Achieving 10%+ Returns with LendingClub
Founded in 2006, LendingClub rapidly grew to become the world's largest peer-to-peer lending platform, originating $3bn of loans in 2019.
LendingClub's business model is to match investors looking to earn returns with borrowers. Both borrowers and lenders are able to get better rates than they would from traditional banks.
LendingClub charges borrowers an origination fee on the loans, and a servicing fee on payments made to the lenders. In August 2014, the company raised $1 billion in the largest technology IPO of the year.
Credit Quality on LendingClub's Platform
LendingClub initially had 7 major grades of loan rating, from A to G. As the company's issuance volumes grew in 2013 and 2014, the default rates on the two lowest-quality grades increased
substantially. This led to investor returns for these ratings becoming negative from 2015 on.
This collapse in investor returns occurred in spite of LendingClub aggressively raising rates on its lower grade loans, highlighting the importance of careful loan selection by investors.
Key Business Questions
Given the wide variation in returns on the LendingClub platform, a potential investor faces three key questions:
1. Is it possible to predict which loans will be "good"? Defining a precise metric for "good" is part of the required analysis, but clearly non-defaulting loans are preferable to those that
default. Also, for non-defaulting loans, those with higher interest rates are clearly preferable.
2. How should loans be combined into an optimized portfolio that diversifies away the risk of any single borrower?
3. If we apply the results from our first two questions to a realistic trading strategy, what aggregate returns are possible?
The following sections discuss our analysis of these questions.
Defaults and Random Portfolio Returns
LendingClub makes available all of its historical loan data including 149 pieces of information describing both the borrower (anonymized) and the loan. This loan database spans the entire history of
LendingClub and is published quarterly.
Our specific dataset included all issue dates until December 2018, a total of about 2.3 million loans. We removed about 1 million current loans that had not yet ended either in repayment or
default, as well as a few thousand very old loans issued before LendingClub registered with the SEC.
When the loan originates, LendingClub assigns each loan a grade (A-G) and a subgrade (A1-G5) to reflect the perceived risk of the borrower. All loans eventually end up in one of the 3 categories:
• paid on time
• pre-paid early (no penalty for the borrower)
• default
High default rates are a key challenge for LendingClub and other peer-to-peer lending platforms. For example, the largest subgrade C1 has a default rate of 19%, whereas the traditional consumer
credit default rates in the U.S. (e.g. credit cards) are in the range of only 4-5%.
Because the default rate gets so large for lower grades, it pushes down the actual returns for investors. For the higher grades A and B, random investing can produce close to 4% IRR (annualized
return), but for the middle and lower grades the performance quickly deteriorates, and starting from grade D it's already negative.
You lose money if you invest in grade D and below randomly:
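The mechanism behind that result can be seen with a back-of-the-envelope expected-value calculation (the numbers below are illustrative, not LendingClub's):

```python
def expected_simple_return(rate, default_prob, loss_given_default):
    """One-period expected return: non-defaulters pay the coupon in full,
    defaulters lose a fraction of principal (a deliberately crude model)."""
    return (1 - default_prob) * rate - default_prob * loss_given_default

# A grade-D-like loan: a high coupon is wiped out by a high default rate
low_grade = expected_simple_return(rate=0.18, default_prob=0.30,
                                   loss_given_default=0.70)   # negative
# A grade-A-like loan: a low coupon survives a low default rate
high_grade = expected_simple_return(rate=0.07, default_prob=0.05,
                                    loss_given_default=0.70)  # positive
```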
So given how bad the defaults are for returns, the question is, can we use the information available when the loans originate to detect those that are likely to default?
Predicting Defaults
When building an actual loan portfolio, we should not simply avoid potential defaults at all costs. Such a strategy would produce portfolios consisting only of 4%-bearing A1 loans, and we can do much
better than that, as we will show below. Therefore, it makes no sense to lump all LendingClub loans into a single training set for the classifier. Instead, each risk category (subgrade) should be
analyzed separately.
Predicting potential defaults is a traditional binary classification task, which we approached as follows:
1. Remove all features not known at the time of the issue and engineer new features such as the length of credit history and the flag for having a loan description
2. Train and test a binary classifier for each subgrade using rolling windows. E.g. use all available data up to 2015 to predict defaults for the loans issued in 2016
3. Choose the metrics: precision (lift) for the top predicted probabilities of non-default. We do not care about passing on a good loan (Type II error) because there are about 40,000 new ones issued
every month; we do care about investing in a bad loan (Type I error)
4. Deploy and compare several models including XGBoost and neural networks. The simple 2-layer neural network showed the best performance
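The metric in step 3, precision and lift among the top-scored loans, can be sketched as follows with synthetic data (names and numbers are ours, not from the actual models):

```python
import random

def top_k_precision(scored, frac=0.10):
    """scored: list of (p_good, is_good) pairs. Returns the share of
    actually-good loans among the top `frac` by score, and the lift of
    that share over the base rate."""
    ranked = sorted(scored, key=lambda t: -t[0])
    k = max(1, int(len(ranked) * frac))
    top_good = sum(g for _, g in ranked[:k]) / k
    base = sum(g for _, g in ranked) / len(ranked)
    return top_good, top_good / base

random.seed(0)
loans = []
for _ in range(10_000):
    good = random.random() < 0.82                 # ~82% good, like C1
    score = 0.5 * good + 0.5 * random.random()    # informative, noisy score
    loans.append((score, int(good)))
prec, lift = top_k_precision(loans)
```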
The table below summarizes the neural network's top-10% lifts for a few selected subgrades:
Subgrade C1
Using subgrade C1 as an example, random investment into C1 yields 82% good loans, but the top-10% classified loans would produce 89%!
Such reduction in the number of defaults has a very positive effect on the returns across the subgrades. On the chart below, the black line is the uninformed investment return, and the color lines
represent consecutive levels of selectivity: top-50%, top-10% and top-1% of the loans:
The annualized returns for such selected portfolios reach 10% IRRs and above. For example, if we apply our classifier to subgrade C4, we could achieve 11-12% IRR.
Given that for most subgrades there are 1,500-2,000 new loans issued every month, such classification would be equivalent to investing in the best 15-20 loans monthly (within each subgrade). This
can accommodate both retail demand and some smaller institutional demand such as family offices.
Now that we have shown that we can reduce the share of defaults significantly, we decided to build concrete portfolios of LendingClub loans and backtest them in a realistic simulation framework.
Portfolio Construction & Optimization
With a powerful classifier in hand, we proceeded to build tools to intelligently select the best-performing loans at the time investment decisions are made. We then combined them in their optimal
proportions to yield the highest portfolio-level return given an investor's risk tolerance.
To do this, we first needed to calculate a loan valuation metric that can be used to rank all available loans. To get our feet wet, we started with a simple 1-step approach.
1-step Approach: Present Values
This required calculating the present value (PV) of cash flows for each loan, discounting each cash flow at the appropriate market rate (from Fed Funds up to the 5-year US Treasury rate) for its
horizon, and deriving the associated internal rate of return (IRR). We then trained a regression model (Gradient Boosting Regressor) on the PVs and used the model to predict the
PVs of loans in the test set. The highest performing loans ranked by predicted PVs were run through our trading simulator.
However as seen in the charts below, the distribution of loan PVs is very different between non-defaulting (left) and defaulting (right) loans. This suggests training two separate models, one each
for defaulting and non-defaulting loans, may result in more accurate predictions. This led us to explore a 2-step approach.
2-step Approach: Predicted Expected Returns
In the 2-step approach, instead of using PVs, we adopted expected returns as our loan value metric. We fit 2 models: Model_1 was trained on returns for non-defaulting loans in the train set while
Model_2 was trained on returns for defaulting loans.
Monthly cash flows were estimated in the same manner as in the 1-step approach with the following assumptions:
• fixed cash flows m, re-invested at a monthly rate r
• cash flows received at monthly intervals [0,n], n= last payment month
• total cash flows are re-invested at r till loan term
With these assumptions, the resulting cash flows form a geometric sequence:
with Total cash flows received given by:
Annualized returns for each loan were then calculated using the total cash flows received shown above as the numerator, the funded amount of each loan as the denominator and r as the fixed
re-investment rate of total cash flows received from the last payment date to the term of the loan.
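The two formulas above appear as images in the original; under the stated assumptions they are presumably the future value of an annuity, m·((1+r)^n − 1)/r, reinvested at r until the loan term. A sketch (our naming, illustrative inputs):

```python
def total_cash_at_term(m, r, n, term):
    """Future value at month `term` of n monthly payments of m, each
    reinvested at monthly rate r (the geometric-series sum), with the
    lump sum then reinvested at r until the term."""
    fv_at_n = m * ((1 + r) ** n - 1) / r      # future value of an annuity
    return fv_at_n * (1 + r) ** (term - n)    # reinvest to the loan term

def annualized_return(funded, m, r, n, term):
    """(total cash received / funded amount) ** (12 / term) - 1."""
    return (total_cash_at_term(m, r, n, term) / funded) ** (12 / term) - 1

# Illustrative: a $10,000 loan repaid at $320/month for 36 months,
# cash flows reinvested at 0.3% per month
ann = annualized_return(10_000, 320, 0.003, 36, 36)
```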
We then weight the return predictions of Model_1 and Model_2 by the relative probabilities from our classifier to get the predicted expected return (R') of each loan in the test set.
Formulaically, we have:
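The formula itself is an image in the original; from the description it is presumably the probability-weighted blend of the two models' predictions, which in code reads:

```python
def predicted_expected_return(p_no_default, r_no_default, r_default):
    """R' = p * R_model1 + (1 - p) * R_model2, with p the classifier's
    probability of non-default (our reconstruction of the blend)."""
    return p_no_default * r_no_default + (1 - p_no_default) * r_default

# Illustrative: 85% chance of earning ~11%, 15% chance of a -60% outcome
r_prime = predicted_expected_return(0.85, 0.11, -0.60)
```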
We note, however, from the graphs below that while the regression model (Random Forest Regressor) did well predicting out-of-sample returns for non-defaulting loans, it struggled to capture the extreme
tails associated with defaults in the defaulting set.
Risk Measures
Each incremental unit of return comes with an associated risk. To quantify this risk, we explored 2 risk measures: Standard Deviation and Expected Shortfall with VaR at the 95% Confidence Level.
The training set was grouped by Lending Club's sub-grade buckets and each of the risk measures was calculated using actual returns of loans found within each sub-grade. Each loan in the test set was
then assigned a risk measure based on their sub-grade. We also explored K-means clustering as an alternative clustering technique but ultimately settled on sub-grades as these buckets are much more
intuitive and more clearly defined.
While standard deviations are popular in the literature, the measure assumes normality of return distributions; as we can see in the chart below, the returns here are heavily left-skewed due
to defaults, violating basic normality assumptions.
Standard deviation, if used as a risk measure for this dataset, will therefore tend to underestimate the true downside risk of each sub-grade, as shown in the bottom graph.
Consequently, our preferred risk measure for this dataset is the expected shortfall and that's predominantly the risk measure used in our portfolio optimization and analysis.
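Expected shortfall at the 95% level is just the mean of the worst 5% of returns. A pure-Python sketch with synthetic left-skewed returns also shows how a normality-based proxy understates the tail (names and numbers are ours):

```python
import random
import statistics

def expected_shortfall(returns, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of returns (ES, a.k.a. CVaR)."""
    worst = sorted(returns)[: max(1, int(len(returns) * (1 - alpha)))]
    return sum(worst) / len(worst)

random.seed(1)
# Left-skewed synthetic returns: most loans earn ~8%, defaulters lose big
returns = [random.gauss(0.08, 0.01) if random.random() < 0.85
           else -random.uniform(0.2, 0.9) for _ in range(20_000)]

es = expected_shortfall(returns)
# A normality-based tail proxy (mean - 1.645*sd) understates the tail here
tail_proxy = statistics.mean(returns) - 1.645 * statistics.pstdev(returns)
```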
The top 100 loans sorted by predicted expected returns in the test set were seeded to the optimization routine.
Portfolio Optimization
The purpose of the optimization module is to select the optimal allocation to each loan, subject to funding and budget constraints, that maximizes portfolio return for each level of risk.
Formulaically, we have:
To prove that seeding the optimizer with loans filtered by expected returns informed by the relative probabilities of our classification model resulted in the optimal portfolio, we seeded
the optimizer using other filtering schemes, keeping the risk-aversion parameter unchanged, and compared results. We tried randomly selecting loans in the test set as well as selecting loans with the
highest predicted returns regardless of default probabilities amongst other filtering schemes. The table below shows that the expected return filter using our classifier and 2-step regression
predictions far outperforms all other metrics across both risk measures.
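The optimization routine itself is not shown in the original; a generic stand-in is a mean-variance objective, maximizing w·mu − λ·risk(w) over fully invested, long-only weights, solved here by projected gradient ascent (the method, names, and simplified diagonal risk model are ours):

```python
def project_to_simplex(v):
    """Euclidean projection onto {w : w_i >= 0, sum(w) = 1}."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        css += ui
        t = (css - 1.0) / (i + 1)
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def optimize_portfolio(mu, sigma2, lam=5.0, steps=2000, lr=0.01):
    """Maximize sum(w*mu) - lam * sum(sigma2 * w**2) over the simplex by
    projected gradient ascent (diagonal risk model: no loan correlations)."""
    n = len(mu)
    w = [1.0 / n] * n
    for _ in range(steps):
        grad = [mu[i] - 2.0 * lam * sigma2[i] * w[i] for i in range(n)]
        w = project_to_simplex([w[i] + lr * grad[i] for i in range(n)])
    return w

mu = [0.04, 0.07, 0.11, 0.13]        # predicted expected returns
sigma2 = [0.01, 0.02, 0.06, 0.12]    # per-loan risk (variance proxy)
w = optimize_portfolio(mu, sigma2)   # tilts toward high return-per-risk
```

A real implementation would replace the diagonal variance term with the expected-shortfall risk measure described above; the simplex projection is what enforces the full-investment, no-short constraints.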
To more closely simulate a real trading environment, we ran the optimization routine on a monthly basis, resulting in optimal monthly portfolios consisting only of loans issued, and therefore newly
available for investment, in that particular month. We then ran these monthly optimized portfolios sequentially through our trading simulator.
Simulating an Implementable Trading Strategy
A real-world trading strategy on LendingClub faces two challenges. The first is that the predictive model can only use the information available at the time the prediction is made. The second is
that the opportunity to invest in a loan exists only for a brief window prior to issuance. For example, we are not able to wait until 2016 and then decide we'd like to purchase a loan from 2012.
In order to address these concerns and test our forecasts more robustly, we implemented a trading simulator. We began by fitting our model using an expanding window of returns. That is, we predicted
2014 loans using data from loans that terminated before 2014, and we predicted 2015 loans using the larger dataset available at that time. We continued this way through our entire dataset of
completed loans.
We then used these fitted models to select an optimal portfolio of loans each month. For example, we reviewed all the loans originated in January 2014 and selected the 100 best, combining the
classifier with the return forecast and portfolio optimizer described above. We introduced some constraints on the tactic, such as the total external funding it was able to draw down ($1 million),
and a maximum monthly spend (to avoid growing the portfolio too fast at the start). Once the tactic had drawn down its maximum amount, loans were purchased only to reinvest cashflows received
from the loans in the portfolio.
To establish a baseline for comparison, the trading simulator also built a portfolio containing the same number of loans for each month, but where the loans were selected completely randomly from all
those available. We calculated cashflows for each portfolio and compared the resulting IRRs. The table below summarizes our results.
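The IRRs quoted below can be computed from each portfolio's monthly cash flows as the rate that zeroes the net present value; a bisection sketch (our implementation, not the authors'):

```python
def npv(monthly_rate, cashflows):
    """NPV of monthly cash flows; cashflows[0] is the initial outlay (< 0)."""
    return sum(cf / (1 + monthly_rate) ** t for t, cf in enumerate(cashflows))

def irr_annualized(cashflows, lo=-0.99, hi=1.0, tol=1e-10):
    """Annualized IRR via bisection on the monthly rate. Assumes the usual
    outlay-then-inflows pattern, so NPV decreases as the rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (1 + (lo + hi) / 2) ** 12 - 1

# $1,000 out, one $1,100 repayment after 12 months -> 10% annualized IRR
flows = [-1000.0] + [0.0] * 11 + [1100.0]
irr = irr_annualized(flows)
```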
Overall, our selected portfolio generated cashflows with an IRR of 11.9%, compared to a negative return of 2.9% on the random portfolio. On average, the model selected higher-yielding lower grade
loans, resulting in a portfolio with an average rating of D3. It's notable that the average PV of loans selected substantially outperformed the random portfolio in all three years simulated.
Conclusion and Future Work
We built a tool that can predict the probability of loan default and its returns for an investor. Based on those predictions we can then construct an optimal portfolio under realistic conditions.
It is therefore an end-to-end tool that can be used by both retail and institutional investors in their investment decisions.
Simulated out-of-sample returns can reach 10-12%, which makes LendingClub loans an interesting asset to invest in!
We also identified several directions that may further improve portfolio performance:
• implement a survival analysis approach and more accurately estimate the variance of returns for defaulted loans
• include macroeconomic data such as unemployment rate as external predictors of default probabilities
As a final step, it would be interesting to replicate this analysis on the data provided by Prosper, LendingClub's closest competitor, which also makes its data publicly available.
You Can Master Trigonometry with Maxima! - Open Source For You
You Can Master Trigonometry with Maxima!
Maxima is a descendant of Macsyma, a breed of computer algebra systems, which was developed at MIT in the late 1960s. Owing to its open source nature, it has an active user community. This is the
17th article in the Mathematics in Open Source series, in which the author deals with fundamental trigonometric expressions.
Trigonometry first gets introduced to students of Standard IX through triangles. Thereafter, students have to wade through a jungle of formulae and tables. A good student is one who can instantly
recall various trigonometric formulae. The idea here is not to be good at rote learning but rather to apply the formulae to get the various end results, assuming that you already know the formulae.
Fundamental trigonometric functions
Maxima provides all the familiar fundamental trigonometric functions, including the hyperbolic ones (see Table 1).
Note that all arguments are in radians. And here follows a demonstration of a small subset of these:
$ maxima -q
(%i1) cos(0);
(%o1) 1
(%i2) cos(%pi/2);
(%o2) 0
(%i3) cot(0);
The number 0 isn't in the domain of cot
-- an error. To debug this try: debugmode(true);
(%i4) tan(%pi/4);
(%o4) 1
(%i5) string(asin(1));
(%o5) %pi/2
(%i6) csch(0);
The number 0 isn't in the domain of csch
-- an error. To debug this try: debugmode(true);
(%i7) csch(1);
(%o7) csch(1)
(%i8) asinh(0);
(%o8) 0
(%i9) string(%i * sin(%pi / 3)^2 + cos(5 * %pi / 6));
(%o9) 3*%i/4-sqrt(3)/2
(%i10) quit();
Simplifications with special angles like %pi/10 and its multiples can be enabled by loading the ntrig package. Check the difference below before and after the package is loaded:
$ maxima -q
(%i1) string(sin(%pi/10));
(%o1) sin(%pi/10)
(%i2) string(cos(2*%pi/10));
(%o2) cos(%pi/5)
(%i3) string(tan(3*%pi/10));
(%o3) tan(3*%pi/10)
(%i4) load(ntrig);
(%o4) /usr/share/maxima/5.24.0/share/trigonometry/ntrig.mac
(%i5) string(sin(%pi/10));
(%o5) (sqrt(5)-1)/4
(%i6) string(cos(2*%pi/10));
(%o6) (sqrt(5)+1)/4
(%i7) string(tan(3*%pi/10));
(%o7) sqrt(2)*(sqrt(5)+1)/((sqrt(5)-1)*sqrt(sqrt(5)+5))
(%i8) quit();
A very common trigonometric problem is as follows: given a tangent value, find the corresponding angle. A common challenge is that for every value, the angle could lie in two quadrants. For a
positive tangent, the angle could be in the first or the third quadrant, and for a negative value, the angle could be in the second or the fourth quadrant. So, atan() cannot always calculate the
correct quadrant of the angle. How then, can we know what it is, exactly? Obviously, we need some extra information, say, the actual values of the perpendicular (p) and the base (b) of the tangent,
rather than just the tangent value. With that, the angle location could be tabulated as follows:
This functionality is captured in the atan2() function, which takes two arguments, p and b , and thus does provide the angle in the correct quadrant, as per the table above. Along with this, the
infinities of tangent are also taken care of. Here's a demo:
$ maxima -q
(%i1) atan2(0, 1); /* Zero */
(%o1) 0
(%i2) atan2(0, -1); /* Zero */
(%o2) %pi
(%i3) string(atan2(1, -1)); /* -1 */
(%o3) 3*%pi/4
(%i4) string(atan2(-1, -1)); /* 1 */
(%o4) -3*%pi/4
(%i5) string(atan2(-1, 0)); /* - Infinity */
(%o5) -%pi/2
(%i6) string(atan2(5, 0)); /* + Infinity */
(%o6) %pi/2
(%i7) quit();
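Maxima's atan2(p, b) follows the same quadrant convention as atan2 in C's math library and Python's math module, so the session above can be cross-checked outside Maxima (this Python aside is ours, not part of the original article):

```python
import math

# Same calls as the Maxima session above: atan2(p, b)
print(math.atan2(0, 1))      # 0       -> angle on the positive x axis
print(math.atan2(0, -1))     # pi      -> negative x axis
print(math.atan2(1, -1))     # 3*pi/4  -> second quadrant
print(math.atan2(-1, -1))    # -3*pi/4 -> third quadrant
print(math.atan2(-1, 0))     # -pi/2   -> tangent goes to -infinity
print(math.atan2(5, 0))      # pi/2    -> tangent goes to +infinity
```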
Trigonometric identities
Maxima supports many built-in trigonometric identities and you can add your own as well. The first one that we will look at is the set dealing with integral multiples and factors of %pi. Let's
declare a few integers and then play around with them:
$ maxima -q
(%i1) declare(m, integer, n, integer);
(%o1) done
(%i2) properties(m);
(%o2) [database info, kind(m, integer)]
(%i3) sin(m * %pi);
(%o3) 0
(%i4) string(cos(n * %pi));
(%o4) (-1)^n
(%i5) string(cos(m * %pi / 2)); /* No simplification */
(%o5) cos(%pi*m/2)
(%i6) declare(m, even); /* Will lead to simplification */
(%o6) done
(%i7) declare(n, odd);
(%o7) done
(%i8) cos(m * %pi);
(%o8) 1
(%i9) cos(n * %pi);
(%o9) - 1
(%i10) string(cos(m * %pi / 2));
(%o10) (-1)^(m/2)
(%i11) string(cos(n * %pi / 2));
(%o11) cos(%pi*n/2)
(%i12) quit();
Next is the relation between the normal and the hyperbolic trigonometric functions:
$ maxima -q
(%i1) sin(%i * x);
(%o1) %i sinh(x)
(%i2) cos(%i * x);
(%o2) cosh(x)
(%i3) tan(%i * x);
(%o3) %i tanh(x)
(%i4) quit();
By enabling the option variable halfangles, many half-angle identities come into play. To be specific, sin(x/2) gets further simplified in the (0, 2 * %pi) range, and cos(x/2) gets further simplified
in the (-%pi/2, %pi/2) range. Check out the differences, before and after enabling the option variable, along with the range modifications, in the examples below:
$ maxima -q
(%i1) string(2*cos(x/2)^2 - 1); /* No effect */
(%o1) 2*cos(x/2)^2-1
(%i2) string(cos(x/2)); /* No effect */
(%o2) cos(x/2)
(%i3) halfangles:true; /* Enabling half angles */
(%o3) true
(%i4) string(2*cos(x/2)^2 - 1); /* Simplified */
(%o4) cos(x)
(%i5) string(cos(x/2)); /* Complex expansion for all x */
(%o5) (-1)^floor((x+%pi)/(2*%pi))*sqrt(cos(x)+1)/sqrt(2)
(%i6) assume(-%pi < x, x < %pi); /* Limiting x values */
(%o6) [x > - %pi, x < %pi]
(%i7) string(cos(x/2)); /* Further simplified */
(%o7) sqrt(cos(x)+1)/sqrt(2)
(%i8) quit();
Trigonometric expansions and simplifications
Trigonometry is full of multiples of angles, the sums of angles, the products and the powers of trigonometric functions, and the long list of relations between them. Multiples and sums of angles fall
into one category. The products and powers of trigonometric functions fall in another category. It's very useful to do conversions from one of these categories to the other one, to crack a range of
simple and complex problems catering to a range of requirements from basic hobby science to quantum mechanics. trigexpand() does the conversion from multiples and sums of angles to products and
powers of trigonometric functions. trigreduce() does exactly the opposite. Here's a small demo:
$ maxima -q
(%i1) trigexpand(sin(2*x));
(%o1) 2 cos(x) sin(x)
(%i2) trigexpand(sin(x+y)-sin(x-y));
(%o2) 2 cos(x) sin(y)
(%i3) trigexpand(cos(2*x+y)-cos(2*x-y));
(%o3) - 2 sin(2 x) sin(y)
(%i4) trigexpand(%o3);
(%o4) - 4 cos(x) sin(x) sin(y)
(%i5) string(trigreduce(%o4));
(%o5) -2*(cos(y-2*x)/2-cos(y+2*x)/2)
(%i6) string(trigsimp(%o5));
(%o6) cos(y+2*x)-cos(y-2*x)
(%i7) string(trigexpand(cos(2*x)));
(%o7) cos(x)^2-sin(x)^2
(%i8) string(trigexpand(cos(2*x) + 2*sin(x)^2));
(%o8) sin(x)^2+cos(x)^2
(%i9) trigsimp(trigexpand(cos(2*x) + 2*sin(x)^2));
(%o9) 1
(%i10) quit();
In %o5 above, you might have noted that the 2s could have been cancelled for further simplification. But that is not the job of trigreduce(); for that, we have to apply the trigsimp() function, as shown in %i6. In fact, many other simplifications based on trigonometric identities are achieved using trigsimp(). Check out the %i7 to %o9 sequence for another such example.
3.6 Summarising the comparison | PRIMER-e Learning Hub
In summary:
• I recommend using PERMANOVA in PRIMER.
• I do not recommend using adonis2 in R.
• Except in very limited circumstances, adonis2 does not construct:
□ (i) correct F-ratios; or
□ (ii) correct permutation algorithms.
• The implementation in adonis2 is far too limiting (it can only be correct for one-way cases, and possibly correct for fixed factors only in fully crossed, fully balanced designs).
• Furthermore, $R^2$ values for individual terms in ANOVA models are not a sensible way of comparing their relative importance.
It would be exhausting, fruitless and probably upsetting to list all of the papers that have used adonis2 (or its predecessor, adonis) in R to perform a PERMANOVA for a complex design and have failed to notice these important problems and limitations. No-one can really be blamed for trying to use adonis2. It seems (on the face of it) like it should work. It has been used to run all sorts of designs and gets cited rather a lot. Unfortunately, the results of analyses done using adonis2 might be wrong, and the inferences drawn misleading, depending on the model/study design.
In deference to the excellent people who wrote the adonis2 routine (it's clearly a good thing that they created it), I feel certain that they (probably) never intended for this function to be used to analyse complex experimental designs with random factors, nested factors, etc. It would be helpful for the truly limited scope of adonis2 to be more plainly acknowledged somewhere in the documentation and/or description of the routine, so that end-users are not misled. Perhaps a future R package will address some of these issues.
Importantly, the PERMANOVA routine in PRIMER allows the user:
• to specify whether factors are fixed or random,
• to specify whether a factor is nested in one or more other factors,
• to test interaction terms,
• to include one or more quantitative covariates in the analysis,
• to remove individual terms from a model or to perform pooling,
• to correctly analyse:
□ fixed models, random models & mixed models
□ user-specified contrasts
□ BACI designs (before-after/control-impact),
□ asymmetrical designs (e.g., in environmental impact studies),
□ randomised blocks,
□ split plots,
□ hierarchical designs,
□ repeated measures,
□ unbalanced designs (Type I, II or III sums of squares),
□ ... and more.
(In)Stability for the Blockchain: Deleveraging Spirals and Stablecoin Attacks
We develop a model of stable assets, including non-custodial stablecoins backed by cryptocurrencies. Such stablecoins are popular methods for bootstrapping price stability within public blockchain
settings. We derive fundamental results about dynamics and liquidity in stablecoin markets, demonstrate that these markets face deleveraging feedback effects that cause illiquidity during crises and
exacerbate collateral drawdown, and characterize stable dynamics of the system under particular conditions. The possibility of such ‘deleveraging spirals’ was first predicted in the initial release
of our paper in 2019 and later directly observed during the ‘Black Thursday’ crisis in Dai in 2020. From these insights, we suggest design improvements that aim to improve long-term stability. We
also introduce new attacks that exploit arbitrage-like opportunities around stablecoin liquidations. Using our model, we demonstrate that these can be profitable. These attacks may induce volatility
in the ‘stable’ asset and cause perverse incentives for miners, posing risks to blockchain consensus. A variant of such attacks also later occurred during Black Thursday, taking the form of mempool
manipulation to clear Dai liquidation auctions at near zero prices, costing $8m USD.
1. Introduction
In 2009, Bitcoin [1] introduced a new notion of decentralized cryptocurrency and trustless transaction processing. This is facilitated by blockchain, which introduced a new way for mistrusting agents
to cooperate without trusted third parties. This was followed by Ethereum [2], which introduced generalized scripting functionality, allowing ‘smart contracts’ that execute algorithmically in a
verifiable and somewhat trustless manner. Cryptocurrencies promise notions of cryptographic security, privacy, incentive alignment, digital usability, and open accessibility while removing most
facets of counterparty risk. However, as these cryptocurrencies are, by their nature, unbacked by governments or physical assets, and the technology is quite new and developing, their prices are
subject to wild volatility, which affects their usability.
Cryptocurrency volatility
Cryptocurrencies face difficult technological, usability, and regulatory challenges to be successful long-term. Many cryptocurrency systems develop different approaches to solving these problems.
Even assuming the space is long-term successful, there is large uncertainty about the long-term value of individual systems.
The value of these systems depends on network effects: value changes in a nonlinear way as new participants join. In concrete terms, the more people who use the system, the more likely it can be used
to fulfill a given real world transaction. The success of a cryptocurrency relies on a mass of agents—e.g., consumers, businesses, and/or financial institutions—adopting the system for economic
transactions and value storage. Which systems will achieve this adoption is highly uncertain, and so current cryptocurrency positions are very speculative bets on new technology. Further,
cryptocurrency markets face limited liquidity and market manipulation. In addition, the decentralized control and privacy features of cryptocurrencies can be at odds with desires of governments,
which introduces further uncertainty around attempted interventions in the space.
These uncertainties drive price volatility, which feeds back into fundamental usability problems. It makes cryptocurrencies unusable as short-term stores of value and means of payment, which
increases the barriers to adoption. Indeed, today we see that most cryptocurrency transactions represent speculative investment as opposed to typical economic activity.
Stablecoins aim to bootstrap price stability into cryptocurrencies as a stop-gap measure for adoption. Current projects take one of two forms:
• Custodial stablecoins rely on trusted institutions to hold reserve assets off-chain (e.g., $1 USD per coin). This introduces counterparty risk that cryptocurrencies otherwise solve.
• Non-custodial (or decentralized) stablecoins create on-chain risk transfer markets via complex systems of algorithmic financial contracts backed by volatile cryptoassets.
We focus on non-custodial stablecoins and, more generally, the stable asset and risk transfer markets that they represent. Non-custodial systems are not well understood whereas custodial stablecoins
can be interpreted using existing well-developed financial literature. Further, non-custodial stablecoins operate in the public/permissionless blockchain setting, in which any agent can participate.
In this setting, malicious agents can participate in stablecoin systems. As we will see, this can introduce new economic attacks.
1.1 Non-custodial (decentralized) stablecoins
The non-custodial stablecoins that we consider create systems of contracts on-chain with the following features encoded in the protocol. We refer to these as DStablecoins.
• Risk is transferred from stablecoin holders to speculators. Stablecoin holders receive a form of price insurance whereas speculators expect a risky return from a leveraged position.
• Collateral is held in the form of cryptoassets, which backs the stable and risky positions.
• An oracle provides pricing information from off-chain markets.
• A dynamic deleveraging process balances positions if collateral value deviates too much.
• Agents can change their positions through some pre-defined process.
These systems are non-custodial (or decentralized) because the contract execution and collateral are all completely on-chain; thus they potentially inherit all of the benefits of cryptocurrencies,
such as minimization of counterparty risk. DStablecoins are variants on contracts for difference, which we describe next. The risk transfer typically works by setting up a tranche structure in which
losses (or gains) are borne by the speculators and the stablecoin holder holds an instrument like senior debt. There are also other non-collateralized (or algorithmic) stablecoins—for a discussion of
these, see [3]. We don’t consider these directly in this paper; however, we discuss in Section 7 how our model can accommodate these systems as well.
Contract for difference
Two parties enter an overcollateralized contract, in which the speculator pays the buyer the difference (possibly negative) between the current value of a risky asset and its value at contract
termination. For example, a buyer might enter 1 Ether into the contract and a speculator might enter 1 Ether as collateral. At termination, the contract Ether is used to pay the buyer the original
dollar value of the 1 Ether at the time of entry. Any excess goes to the speculator. If the contract approaches undercollateralization (if Ether price plummets), the buyer can trigger early
settlement or the speculator can add more collateral.
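The settlement logic in the worked example above can be sketched as follows. This is a toy illustration with hypothetical numbers, not any specific protocol's implementation:

```python
# Toy settlement of an overcollateralized contract for difference (CFD).
# Prices are in USD per ETH; amounts are in ETH. All figures hypothetical.

def settle_cfd(buyer_eth, speculator_eth, entry_price, exit_price):
    """Return (buyer_payout_eth, speculator_payout_eth) at termination."""
    pool = buyer_eth + speculator_eth                # total ETH locked in the contract
    owed_usd = buyer_eth * entry_price               # buyer is owed the original USD value
    buyer_payout = min(owed_usd / exit_price, pool)  # paid in ETH at the exit price
    return buyer_payout, pool - buyer_payout         # speculator keeps any excess

# ETH halves: the buyer needs 2 ETH to recover $200, wiping out the speculator.
print(settle_cfd(1.0, 1.0, 200.0, 100.0))  # -> (2.0, 0.0)
# ETH doubles: the buyer needs only 0.5 ETH; the speculator keeps 1.5 ETH.
print(settle_cfd(1.0, 1.0, 200.0, 400.0))  # -> (0.5, 1.5)
```

The `min(..., pool)` clamp is what makes undercollateralization possible: if the exit price falls far enough, the pool cannot cover the buyer's USD claim, which is why the protocol triggers early settlement or margin top-ups beforehand.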
Variants on contracts for difference
DStablecoins differ from basic contracts for difference in that:
(1) the contracts are multi-period and agents can change their positions over time;
(2) the positions are dynamically deleveraged according to the protocol; and
(3) settlement times are random and dependent on the protocol and agent decisions.
The typical mechanics of these contracts are as follows:
• Speculators lock cryptoassets in a smart contract, after which they can create new stablecoins as liabilities against their collateral up to a threshold. These stablecoins are sold to stablecoin
holders for additional cryptoassets, thus leveraging their positions.
• At any time, if the collateralization threshold is surpassed, the system attempts to liquidate the speculator’s collateral to repurchase stablecoins/reduce leverage.
• The stablecoin price target is provided by an oracle. The target is maintained by a dynamic coin supply based on an ‘arbitrage’ idea. Notably, this is not true arbitrage as it is based on
assumptions about the future value of the collateral.
□ If price is above target, speculators have increased incentive to create new coins and sell them at the ‘premium price.’
□ If price is below target, speculators have increased incentive to repurchase coins (reducing supply) to decrease leverage ‘at a discount.’
• Stablecoins are redeemable for collateral through some process. This can take the form of global settlement, in which stakeholders can vote to liquidate the entire system, or direct redemption
for individual coins. Settlement can take 24 hours – 1 week.
• Additionally, the system may be able to sell new ownership/decision-making shares as a last attempt to recapitalize a failing system—e.g., the role of MKR in Dai (see [4]).
DStablecoin risks
DStablecoins face two substantial risks:
1. Risk of market collapse; and
2. Oracle/governance manipulation.
Our model in this paper focuses on market collapse risk. We further remark on oracle/governance manipulation in Section 7.
Existing DStablecoins
At the time of initial writing in 2019, major non-custodial stablecoins included Dai, BitShares Market Pegged Assets (like bitUSD), and Steem Dollars. In the latter, Steem market cap is essentially
collateral; Steem Dollars can be redeemed for $1 USD worth of newly minted Steem, and so redemptions affect all Steem hodlers via inflation. Since then, many new stablecoins have arisen based on
similar ideas by UMA, Reflexer, and Liquity, as well as endogenous collateral stablecoins like Synthetix sUSD, Terra UST, and Celo Dollar (see [5] for further discussion). Notably, unlike custodial
stablecoins, Dai is not currently considered e-money or a payment method subject to the Payment Services Directive in the European Union, since there is no single issuer or custodian. Thus it does not
have AML/KYC requirements.
In an academic white paper, [6] proposed a variation on cryptocurrency-collateralized DStablecoin design. It standardizes the speculative positions by restricting leverage to pre-defined bounds using
automated resets. A consequence of these leverage resets is that stablecoin holders are partially liquidated from their positions during downward resets—i.e., when leverage rises above the allowed
band due to a cryptocurrency price crash. This compares with Dai, in which stablecoin holders are only liquidated in global settlement. An effect of this difference is that, in order to maintain a
stablecoin position in the short-term, stablecoin holders need to re-buy into stablecoins (at a possibly inflated price) after downward resets. Of the many designs, it is unclear which deleveraging
method would lead to a system that survives longer. This motivates us to study the dynamics of DStablecoin systems.
Non-custodial stablecoins have now experienced a wide array of volatility events, failures, and attacks. Since the initial release of this paper in 2019, Black Thursday in March 2020 saw massive
liquidation events result in a substantial depegging in Dai [7], mirroring our results in Section 3 and Section 4, and miner mempool manipulation that contributed to Dai liquidation auctions clearing
at near zero prices at a cost of $8m USD to the Maker system [8], mirroring attack surfaces we described in Section 6. Prior to this, as discussed in [9], Nubits has traded at cents on the dollar
since 2018 (Figure 1(a)), and bitUSD and Steem Dollars have broken their USD pegs periodically (Figure 1(b)). Many additional examples of stablecoin mechanism failures and exploitations occurred
through the rest of 2020 (see [5][10]). Yet, the stablecoin space has remained heated with projects such as Dai growing rapidly and many new contenders arising, including UMA, Reflexer, Celo, and
Liquity. The work in this paper has proven consequential for the progression of these projects (e.g., [11][12]).
Figure 1: Depeggings in decentralized stablecoins. (a) NuBits trades at cents on the dollar. (b) BitUSD has broken its USD peg.
1.2 Relation to prior work
Stablecoins are active cryptocurrencies, for which pre-existing models do not understand how the collateral rule enforces stability and how the interaction of different agents can affect stability.
With the notable exception of [6], rigorous mathematical work on non-custodial stablecoins is lacking. They applied option pricing theory to valuing tranches in their proposed DStablecoin design
using advanced PDE methods. In doing so, they need the simplifying assumption that DStablecoin payouts (e.g., from interest/fee payments and liquidations from leverage resets) are exogenously stable
with respect to USD. This may circularly cause stability. In reality, these payouts are made in volatile cryptocurrency (ETH). From these ETH payments, stablecoin holders can:
1. Hold ETH and so take on ETH exposure
2. Use the ETH to re-buy into stablecoin, likely at an inflated price as it endogenously increases demand after a supply contraction
3. Convert the ETH to fiat, which requires waiting for block confirmations in an exchange (possibly hours) during times when ETH is particularly volatile and paying costs for fiat conversion (fees,
potentially taxes). Notably, this is not available in all jurisdictions.
To maintain a DStablecoin position, stablecoin holders need to re-buy into DStablecoins at each reset at endogenously higher price. Stablecoin holders additionally face the risk that the size of the
DStablecoin market collapses such that the position cannot be maintained (and so ends up holding ETH). As no stable asset models exist to understand these endogenous effects, the analysis can’t be
easily extended using the traditional financial literature. Our focus in this paper is complementary to understand these endogenous stable asset effects.
[13] studied the evolution of custodial stablecoins. A few works on stablecoins have also arisen since the initial release of our paper. [14] described governance attack surfaces in non-custodial
stablecoins, which is extended with general models in [5]. [15] presented an analysis of credit risk stemming from collateral type in the Maker system. And [16][17] modeled stability in the Terra and
Celo stablecoins under different scenarios of Brownian motion without the endogenous market feedback effects we study in this paper.
In the context of central counterparty clearinghouses, the default fund contributions, margin requirements and participation incentives have been studied in, e.g., [18], [19], and [20]. The critical
question in this area is understanding the effects of a liquidation policy of a member’s portfolio in the case of a significant event. The counterpart of this in a decentralized setting is
understanding the impact of DStablecoin deleveraging on system stability.
Stablecoin holders bear some resemblance to agents in currency peg and international finance models, e.g., [21] and [22]. In these models, the market maker is essentially the government but is
modeled with mechanical behavior and is not a player in the game. For instance, in [22], devaluation is modeled by a simple exogenous threshold rule: the government abandons the peg if the net demand
for currency breaches the threshold and is otherwise committed to maintaining the peg. In contrast to currency markets, no agents are committed to maintaining the peg in DStablecoin markets. The best
we can hope is that the protocol is well-designed and that the peg is maintained with high probability through the protocol’s incentives. The role of government is replaced by decentralized
speculators, who issue and withdraw stablecoins in a way to optimize profit. A fully strategic model would be a complicated dynamic game—these tend to be intractable and, indeed, are avoided in the
currency peg literature in favor of a sequence of one period games. We enable a more endogenous modeling of speculators’ optimization problems under a variety of risk constraints. Our model is a
sequence of one-period optimization problems, in which dynamic coupling comes through the risk constraints.
DStablecoin speculators are similar to market makers in market microstructure models (e.g., [23]). Like classical market microstructure, we do have a multi-period system with multiple agents subject
to leverage constraints that take recurring actions according to their objectives. In contrast, in the DStablecoin setting, we do not have a truly stable asset that is efficiently and instantaneously
available. Instead, agents make decisions that endogenously affect the price of the ‘stable’ asset and affect the agents’ future decisions and incentives to participate in a non-stationary way. In
turn, the (in)stability results from the dynamics of these decisions.
Since the initial release of our paper in June 2019, [24] has described a complementary model of non-custodial stablecoins related to the model in this paper. That paper explores a different model of
liquidation structure that affects speculator decision-making and applies martingale methods to analytically characterize stability. In contrast, in this paper we derive stability results about a
simpler model that is more amenable to simulations, which we perform, and demonstrate stablecoin attacks that can arise from profitable bets against other agents.
1.3 This Paper
We develop a dynamic model for non-custodial stablecoins that is complex enough to take into account the feedback effects discussed above and yet remains tractable. Our model can be interpreted as a
market microstructure model in this new type of asset market.
Our model involves agents with different risk profiles; some desire to hold stablecoins and others speculate on the market. These agents solve optimization problems consistent with a wide array of
documented market behaviors and well-defined financial objectives. As is common in the literature on market microstructure and currency peg games, these agents’ objectives are myopic. These
objectives are coupled for non-myopic risk using a flexible class of rules that are widely established in financial markets; these allow us to model the effects of a range of cyclic and
counter-cyclic behaviors. The exact form of these rules is selected and self-imposed by speculators to match their desired responses and not part of the stablecoin protocol. Thus well-established
manipulation of similar rules as applied to traditional financial regulation is not a problem here. Our model goes largely beyond a one-period model. We introduce this model with supporting rationale
for design choices in Section 2.
Using our model, we make the following contributions:
We relate these results to historical stablecoin events and apply these insights to suggest design improvements that aim to improve long-term stability. Based on these insights, we also suggest that
interactions between multiple speculators and attackers may be the most interesting relationships to explore in more complex models.
2. Model
Our model couples a number of variables of interest in a risk transfer market between stablecoin holders and speculators. The stablecoin protocol dictates the logic of how agents can interact with
the smart contracts that form the system; the design of this influences how the market plays out. Many DStablecoin designs have been proposed. We set up our model to emulate a DStablecoin protocol
like Dai with global settlement, but the model is adaptable to different design choices. Note that our model is formulated with very few parameters given the problem complexity.
Our model builds on the model of traditional financial markets in [25] but is new in design by incorporating endogenous stablecoin structure. In the model, we assume that the underlying consensus
layer (e.g., blockchain) works well to confirm transactions without censorship or attack and that the system of contracts executes as intended.
Two agents participate in the market:
• The stablecoin holder seeks stability and chooses a portfolio to achieve this; and
• The speculator chooses leverage in a speculative position behind the DStablecoin.
Stablecoin holders are motivated by risk aversion, trade limitations, and budget constraints. They are inherently willing to hold cryptoassets. In the current setting, this means they are likely
either traders looking for short-term stability, users from countries with unstable fiat currencies, or users who are using cryptocurrencies to move money across borders. In the future,
cryptocurrencies may be more accepted in economic exchange. In this case, stablecoin holders may be ordinary consumers who face risk aversion and budgeting for required consumption.
Speculators are motivated by (1) access to leverage; and (2) security lending to borrow against their Ether holdings without triggering tax incidence or giving up Ether ownership. In order to begin
participating, speculators need to either have confidence in the future of cryptocurrencies, think they can make money trading the markets, or face unusually high tax rates (or other barriers) that
make security lending cheaper than outright selling assets. The model in this paper focuses on the first motivation. We propose an extension to the model that considers the second motivation.
There are two assets. For simplicity, we give these assets specific names; however, they could be abstracted to other cryptocurrencies or outside of a cryptocurrency setting.
• Ether: high risk asset whose USD market prices $p^E_t$ are exogenous; and
• DStablecoin: a ‘stable’ asset collateralized in Ether whose USD price $p^D_t$ is endogenous
Notably, a large DStablecoin system may have endogenous amplification effects on Ether price, similarly to how CDOs affected underlying assets in the 2008 financial crisis. We discuss this further in
Section 7 but leave formal modeling of this to future work.
There are several barriers for trading between crypto and fiat, which motivate our choice of assets. Most crypto-fiat pairs are through Bitcoin or Ether, which act as a gateway to other cryptoassets.
Trading to fiat can involve moving assets between a number of exchanges and can take considerable time to confirm on the blockchain. Trading to a stablecoin is comparatively simple. Trading to fiat
can also trigger more clear tax incidence. Additionally, some countries have imposed strict capital controls on trading between fiat and crypto.
Model outline
At $t=0$, the agents have endowments and prior beliefs. In each period $t$:
1. New Ether price is revealed
2. Ether expectations are updated
3. Stablecoin holder decides portfolio weights
4. Speculator, seeing demand, decides leverage
5. DStablecoin market is cleared
2.1 Stablecoin holder
The stablecoin holder starts with an initial endowment and decides portfolio weights to attain the desired stability. The following table defines the agent’s state variables.
The stablecoin holder weights its portfolio by $\mathbf{w_t}$. We denote the components as $w^E_t$ and $w^D_t$ for Ether and DStablecoin weights respectively. The stablecoin holder’s portfolio value
at time $t$ is
$\mathcal{A}_t = \bar n_t p^E_t + \bar m_t p^D_t = \bar n_{t-1} p^E_t + \bar m_{t-1} p^D_t.$
Given weights, $\bar n_t$ and $\bar m_t$ will be determined based on the stablecoin clearing price $p_t^D$.
The basic results in Section 3 hold generally for any $\mathbf{w_t}\geq 0$ (i.e., there is no shorting). In this case, $\mathbf{w_t}$ could be chosen, e.g., from Sharpe ratio optimization,
mean-variance optimization, or Kelly criterion (among others). In Section 4 and Section 5, in order to focus on the effects of speculator decisions, we simplify the stablecoin holder as exogenous
with unit price-elastic demand. In this case, DStablecoin demand is constant in dollar terms.
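Under this simplification, coin demand is simply the fixed dollar demand divided by the stablecoin's clearing price. A minimal sketch (the demand level and prices are illustrative only):

```python
# Unit price-elastic stablecoin demand: the dollar demand D is constant,
# so the demanded number of coins scales inversely with the price p_D.

def coin_demand(dollar_demand, p_d):
    """Coins demanded when total dollar demand is fixed."""
    return dollar_demand / p_d

D = 100.0  # hypothetical constant dollar demand
for p_d in (0.95, 1.00, 1.05):
    m = coin_demand(D, p_d)
    # dollar value of the demanded coins stays constant by construction
    assert abs(m * p_d - D) < 1e-9
    print(p_d, round(m, 4))
```

This is the sense in which demand is "constant in dollar terms": a depegging below $1 raises the number of coins demanded exactly enough to keep the dollar exposure fixed.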
2.2 Speculator
The speculator starts with an endowment of Ether and initial beliefs about Ether’s returns and variance and decides leverage to maximize expected returns subject to protocol and self-imposed
constraints. The following tables define variables and parameters for the speculator.
2.2.1 Ether expectations
The speculator updates expected returns $r_t$, log-returns $\mu_t$ (used for the variance estimation), and variance $\sigma_t^2$ based on observed Ether returns as follows:
\begin{aligned} r_t &= (1-\gamma)\, r_{t-1} + \gamma\, \frac{p^E_t}{p^E_{t-1}}, \\ \mu_t &= (1-\delta)\,\mu_{t-1} + \delta \log \frac{p^E_t}{p^E_{t-1}}, \\ \sigma_t^2 &= (1-\delta)\, \sigma_{t-1}^2 + \delta \Big( \log \frac{p^E_t}{p^E_{t-1}} - \mu_t\Big)^2. \end{aligned} \tag{1}
For fixed memory parameters $\gamma,\delta$ (lower memory parameter = longer memory), these are exponential moving averages consistent with the RiskMetrics approach commonly used in finance [26]. For
sufficiently stepwise decreasing memory levels and assuming i.i.d. returns, this process will converge to the true values supposing they are well-defined and finite. In reality, speculators don’t
outright know the Ether return distribution and, as we will see in the simulations, the stablecoin system dynamics occur on timescales shorter than required for convergence of expectations. Thus, we
focus on the simpler case of fixed memory parameters.
Note that $\gamma \neq \delta$ may be reasonable. Current cryptocurrency markets are not very price efficient, and so traders might reasonably take into account momentum when estimating returns while
using a wider memory for estimating covariance.
We additionally consider the case in which the speculator knows the Ether distribution outright and $\gamma=\delta=0$. This is consistent with a rational expectations standpoint but ignores how the
speculator arrives at that knowledge.
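The update rules in (1) can be sketched directly. Note that the variance update uses the already-updated mean $\mu_t$, so the mean must be updated first; the price path and memory parameters below are illustrative:

```python
import math

# Exponentially weighted (RiskMetrics-style) updates of the speculator's
# Ether expectations, following Eq. (1). Lower gamma/delta = longer memory.

def update_expectations(state, pE_prev, pE_new, gamma=0.1, delta=0.1):
    r, mu, var = state
    gross = pE_new / pE_prev          # gross return p^E_t / p^E_{t-1}
    log_ret = math.log(gross)
    r = (1 - gamma) * r + gamma * gross
    mu = (1 - delta) * mu + delta * log_ret
    # variance update uses the *updated* mu_t, as in Eq. (1)
    var = (1 - delta) * var + delta * (log_ret - mu) ** 2
    return r, mu, var

state = (1.0, 0.0, 0.0)               # neutral prior: unit gross return, zero drift/variance
prices = [100.0, 110.0, 99.0, 104.0]  # hypothetical ETH price path
for prev, new in zip(prices, prices[1:]):
    state = update_expectations(state, prev, new)
print(state)
```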
2.2.2 Optimize leverage: choose $\Delta_t$
The speculator is liable for $\mathcal{L}_t$ DStablecoins at time $t$. At each time $t$, it decides the number of DStablecoins to create or repurchase. This changes the stablecoin supply $\mathcal{L}
_t = \mathcal{L}_{t-1} + \Delta_t$. If $\Delta_t>0$, the speculator creates and sells new DStablecoin in exchange for Ether at the clearing price. If $\Delta_t<0$, the speculator repurchases
DStablecoin at the clearing price.
Strictly speaking, the speculator will want to maximize its long-term withdrawable value. At time $t$, the speculator’s withdrawable value is the value of its ETH holdings minus collateral required
for any issued stablecoins: $n_t p_t^E - \beta\mathcal{L}_t$. Maximizing this is not amenable to a myopic view, however, as maximizing the next step’s withdrawable value is only a good choice when
the speculator intends to exit in the next step.
Instead, we frame the speculator’s objective as maximizing expected equity: $n_t p_t^E - \mathbf{E}[p^D] \mathcal{L}_t$. In this, the speculator expects to be able to settle liabilities at a
long-term expected value of $\mathbf{E}[p^D]$. The market price of DStablecoin will fluctuate above and below $1 USD naturally depending on prevailing market conditions. The actual expected value is
nontrivial to compute as it depends on the stability of the DStablecoin system. For individual speculators with small market power, we argue that $\mathbf{E}[p^D]=1$ is an assumption they may
realistically make, as we discuss further below. This is additionally the value realized in the event of global settlement.
We suggest that this optimization is a candidate for ‘honest’ behavior of a speculator as it is consistent with the speculator acting on perceived arbitrage in mispricings of DStablecoin from the
peg. In essence, the speculator expects to increase (reduce) leverage ‘at a discount’ when $p_t^D$ is above (below) target. This is the typically cited mechanism by which these systems maintain their
peg and thus how the designers intend for speculators to behave. However, this assumes that $p_t^D$ is sufficiently stable/mean-reverting to $1 USD, and so this behavior may not in fact be a best response.
Aggregate vs. individual speculators
In our model, the single speculative agent, which is not a price-taker, is intended to reflect the aggregate behavior of many individual speculators, each with small market power. In a normal liquid
market, an individual speculator would be able to repurchase DStablecoins at dollar cost and walk away with the equity. By maximizing equity, the aggregate speculator considers its liabilities to be
$1 USD per DStablecoin. This may turn out to be untrue during liquidity crises as the repurchase price may be higher. In our model, speculators don’t know the probability of crises and instead
account for this in a conservative risk constraint.
Formal optimization problem
The speculator chooses $\Delta_t$ by maximizing expected equity in the next period subject to a leverage constraint:
\begin{aligned} \max_{\Delta_t} \hspace{0.5cm} & r_t\Big(n_{t-1}p^E_t + \Delta_t p^D_t(\mathcal{L}_t)\Big) - \mathcal{L}_t \\ \text{s.t.} \hspace{0.5cm} & \Delta_t \in \mathcal{F}_t \end{aligned}
where $\mathcal{F}_t$ is the feasible set for the leverage constraint. This is composed of two separate constraints: (1) a liquidation constraint that is fundamental to the protocol; and (2) a risk
constraint that encodes the speculator’s desired behavior. Both are introduced below.
If the leverage constraint is unachievable, we assume the speculator enters a ‘recovery mode’, in which it tries to maximize its chances of returning to the normal setting. In this case, it solves
the optimization using only the liquidation constraint. If the liquidation constraint is unachievable, the DStablecoin system fails with a global settlement.
2.2.3 Liquidation constraint: enforced by the protocol
The liquidation constraint is fundamental to the DStablecoin protocol. A speculator’s position undergoes forced liquidation at time $t$ if either (1) after $p_t^E$ is revealed, $n_{t-1} p^E_t < \beta
\mathcal{L}_{t-1}$, or (2) after $\Delta_t$ is executed, $n_t p_t^E < \beta \mathcal{L}_t$. The speculator aims to control against this as liquidations can occur at unfavorable prices and are
associated with fees in existing protocols (we exclude these fees from our simple model, but they can be easily added).
Define the speculator’s leverage as the $\beta$-weighted ratio of liabilities to assets
$\lambda_t = \frac{\beta \cdot \text{liabilities}}{\text{assets}}.$
The liquidation constraint is then $\lambda_t \leq 1$.
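The liquidation rule can be sketched as a small helper (our own code with made-up numbers; the value $\beta = 1.5$ is purely illustrative):

```python
def leverage(n, p_E, L, beta):
    # beta-weighted leverage: liabilities of L coins (at $1 face value)
    # weighted by beta, over collateral assets n * p_E
    return beta * L / (n * p_E)

def is_liquidated(n, p_E, L, beta):
    # Forced liquidation once the constraint lambda_t <= 1 fails
    return leverage(n, p_E, L, beta) > 1

# Made-up position: 30 ETH at $10 backing 150 coins with beta = 1.5
# gives lambda = 1.5 * 150 / 300 = 0.75 (safe); a 50% ETH crash to $5
# pushes lambda to 1.5 and triggers liquidation.
```

The example illustrates how a single large collateral drop, rather than any action by the speculator, can flip a position from safely collateralized to liquidated.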
2.2.4 Risk constraint: self-imposed speculator behavior
The risk constraint encodes the speculator’s desired behavior into the model. We assume no specific type for the risk constraint in our analytical results, which are generic. For our simulations, we
explore a variety of speculator behaviors via the risk constraint. We first consider Value-at-Risk (VaR) as an example of a constraint realistically used in markets. This is consistent with
narratives shared by Dai speculators about leaving a margin of safety to avoid liquidations. We then construct a generalization that goes well beyond VaR and allows us to explore a spectrum of
pro-cyclical and counter-cyclical behaviors encoded in the risk constraint.
Manipulation and instability resulting from similar externally-imposed VaR rules is a well-known problem in the risk management and financial regulatory literature (see e.g., [25]). This is of less
concern here as the precise parameters of the risk constraint are selected and self-imposed by speculators to approximate their own utility optimization and are not part of the DStablecoin protocol.
Further, we consider constraints that go beyond VaR. What we need to show is that our results are robust to the variety of risk constraints that speculators could select.
Example: VaR-based constraint
The VaR-based version of the risk constraint is
$\lambda_t \leq \exp(\mu_t - \alpha \sigma_t),$
where $\alpha>0$ is inversely related to riskiness. This is consistent with VaR for normal and maximally heavy-tailed symmetric return distributions with finite variance.
Let $\text{VaR}_{a,t}$ be the $a$-quantile per-dollar VaR of the speculator’s holdings at time $t$. This is the minimum loss on a dollar in an $a$-quantile event. With a VaR constraint, the
speculator aims to avoid triggering the liquidation constraint in the next period with probability $1-a$, i.e., $\mathbf{P} \Big( n_t p^E_{t+1} \geq \beta \mathcal{L}_t \Big) \geq 1-a.$ To achieve
this, the speculator chooses $\Delta_t$ such that
$\Big(n_{t-1}p^E_t + \Delta_t p_t^D(\mathcal{L}_t)\Big) (1-\text{VaR}_{a,t}) \geq \beta \mathcal{L}_t.$
This requires $\lambda_t \leq 1-\text{VaR}_{a,t}$, which bounds the probability that the liquidation constraint is violated next period and implies that it is satisfied this period.
Define $\tilde \lambda_t := \exp(\mu_t -\alpha\sigma_t)$. Then $\tilde \lambda_t$ is increasing in $\mu_t$ and decreasing in $\sigma_t$. Further, the fatter the speculator thinks the tails of the
return distribution are, the greater $\alpha$ will be, and the lesser $\tilde \lambda_t$ will be, as we demonstrate next.
VaR constraint with normal returns
If the speculator assumes Ether log returns are $(\mu_t, \sigma_t)$ normal, then $\text{VaR}_{a,t} = 1 - \exp\Big(\mu_t + \sqrt{2} \sigma_t \text{erf}^{-1}(2a-1)\Big).$ Defining $\alpha = -\sqrt{2}\,\text{erf}^{-1}(2a-1)$, which is positive for appropriately small $a$, the VaR constraint is $\lambda_t \leq 1 - \text{VaR}_{a,t} = \exp(\mu_t - \alpha \sigma_t).$
VaR constraint with heavy tails
If Ether log returns $X$ are symmetrically distributed with finite mean $\mu_t$ and finite variance $\sigma_t^2$, then for any $\alpha>1$, Chebyshev’s inequality gives us
$\mathbf{P}(X < \mu_t -\alpha\sigma_t) \leq \frac{1}{2\alpha^2}.$
For the maximally heavy-tailed case, this inequality is tight. Then for VaR quantile $a$, we can find the corresponding $\alpha$ such that $a = \frac{1}{2\alpha^2}$. The log return VaR is $\mu_t - \alpha\sigma_t$, which gives the per-dollar $\text{VaR}_{a,t} = 1-\exp(\mu_t-\alpha\sigma_t)$. Then the VaR constraint is $\lambda_t \leq \exp(\mu_t - \alpha\sigma_t)$.
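The two derivations give concrete values of $\alpha$ for a given quantile $a$. A quick sketch using only the standard library (note that $-\sqrt{2}\,\text{erf}^{-1}(2a-1) = -\Phi^{-1}(a)$, the negated normal quantile, so `NormalDist().inv_cdf` suffices; the quantile $a = 0.005$ is our own example value):

```python
from math import sqrt, exp
from statistics import NormalDist

def alpha_normal(a):
    # Normal case: alpha = -sqrt(2) * erfinv(2a - 1) = -Phi^{-1}(a)
    return -NormalDist().inv_cdf(a)

def alpha_heavy(a):
    # Tight Chebyshev bound: a = 1 / (2 * alpha**2)  =>  alpha = sqrt(1/(2a))
    return sqrt(1 / (2 * a))

def lambda_tilde(mu, sigma, alpha):
    # Risk-constraint leverage cap: lambda_t <= exp(mu - alpha * sigma)
    return exp(mu - alpha * sigma)

a = 0.005  # a 0.5% quantile (illustrative)
# Heavier tails imply a larger alpha and hence a tighter leverage cap:
# alpha_normal(0.005) ~ 2.58 while alpha_heavy(0.005) = 10.0
```

The comparison makes concrete the remark above: the fatter the perceived tails, the larger $\alpha$ and the smaller the permitted leverage $\tilde\lambda_t$.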
Generalized risk constraint
Similarly to [25], we can generalize the bound to explore a spectrum of different behaviors:
$\ln \tilde \lambda = \mu_t - \alpha \sigma_t^b,$
where $\alpha$ is an inverse measure of riskiness and $b$ is a cyclicality parameter. A positive $b$ means that $\tilde \lambda_t$ decreases with perceived risk (pro-cyclical). A negative $b$ means
that $\tilde \lambda_t$ increases with perceived risk (counter-cyclical).
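A minimal illustration of the cyclicality parameter (our own toy numbers, not calibrated to anything): with $b>0$ the cap tightens as perceived volatility rises, and with $b<0$ it loosens.

```python
from math import exp

def lambda_cap(mu, sigma, alpha, b):
    # Generalized risk constraint: ln(lambda_tilde) = mu - alpha * sigma**b
    return exp(mu - alpha * sigma ** b)

low_vol, high_vol = 0.02, 0.10  # illustrative volatility estimates
pro = [lambda_cap(0.0, s, 0.1, 1.0) for s in (low_vol, high_vol)]       # b > 0
counter = [lambda_cap(0.0, s, 0.1, -1.0) for s in (low_vol, high_vol)]  # b < 0
# Pro-cyclical (b > 0): the cap falls as sigma rises.
# Counter-cyclical (b < 0): the cap rises as sigma rises.
```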
2.3 DStablecoin market clearing
The DStablecoin market clears by setting demand = supply in dollar terms:
$w^D_t \Big(\bar n_{t-1}p^E_t + \bar m_{t-1}p^D_t(\mathcal{L}_t)\Big) = \mathcal{L}_t p^D_t(\mathcal{L}_t).$
The demand (left-hand side) comes from the stablecoin holder’s portfolio weight and asset value. Notice that while the asset value depends on $p_t^D$, the portfolio weight $w_t^D$ does not. That is,
the stablecoin holder buys with market orders based on weight. This simplification allows for a tractable market clearing; however, it is not a full equilibrium model.
We justify this choice of simplified market clearing with the following observations:
• The clearing is similar to the constant product market maker model used in the Uniswap decentralized exchange (DEX) [27].
• Sophisticated agents are known to be able to front-run DEX transactions [28]. As speculators are likely more sophisticated than ordinary stablecoin holders, in many circumstances they can see
demand before making supply decisions.
• Evidence from Steem Dollars suggests that demand need not decrease tremendously with price in the unique setting in which stable assets are not efficiently available. Steem Dollars is a stablecoin with a mechanism enforcing a price ‘floor’ but not a ‘ceiling’. Over significant stretches of time, it has traded at premiums of up to $15\times$ target.
In most of our results, the time period context is clear. To simplify notation, at a given time $t$ we drop subscripts and write $\Delta := \Delta_t$, $\mathcal{L} := \mathcal{L}_{t-1}$, $z := n_{t-1} p^E_t$ (the speculator’s asset value), $x := w^D \bar n_{t-1} p^E_t$ (dollar demand arriving from the stablecoin holder’s Ether holdings), and $y := w^D \bar m_{t-1} - \mathcal{L}_{t-1}$ (so that $-y$ is the ‘free supply’ of DStablecoin), as follows from rearranging the clearing equation.
With $\Delta > y$, which turns out to be always true as discussed later, the clearing price is
$p_t^D(\Delta) = \frac{x}{\Delta-y}.$
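This clearing price can be sanity-checked against the original clearing equation. A sketch with made-up state values, where we take $x = w^D \bar n_{t-1} p^E_t$ and $y = w^D \bar m_{t-1} - \mathcal{L}_{t-1}$ as our reading of the substitution implied by the clearing equation:

```python
def clearing_price(delta, x, y):
    # p_D(delta) = x / (delta - y), valid only for delta > y
    return x / (delta - y)

# Made-up market state
w_D, n_bar, m_bar, p_E, L_prev = 0.4, 50.0, 100.0, 2.0, 100.0
x = w_D * n_bar * p_E        # dollar demand arriving from Ether holdings
y = w_D * m_bar - L_prev     # here m_bar = L_prev, so -y = L_prev * (1 - w_D)

delta = 5.0
p = clearing_price(delta, x, y)

# Verify the original clearing condition:
# w_D * (n_bar * p_E + m_bar * p) == (L_prev + delta) * p
lhs = w_D * (n_bar * p_E + m_bar * p)
rhs = (L_prev + delta) * p
assert abs(lhs - rhs) < 1e-9
```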
As the model is defined thus far, stablecoin holders only redeem coins for collateral through global settlement. However, this assumption is easily relaxed to accommodate algorithmic or manual redemption.
3. Stable Asset Market Dynamics
We derive tractable solutions to the proposed interactions and results about liquidity and stability.
3.1 Solution to the speculator’s decision
We first introduce some basic results about the speculator’s leverage optimization problem.
Solving the leverage constraint
Proposition 1: Let $\Delta_{\min} \leq \Delta_{\max}$ be the roots of the polynomial in $\Delta$
$-\beta \Delta^2 + \Delta\Big( \tilde \lambda (z+x) - \beta(\mathcal{L} - y)\Big) - \tilde\lambda zy + \beta\mathcal{L} y.$
Assuming $\Delta > y$,
• If $\Delta_{\min},\Delta_{\max} \in \mathbb{R}$, then $[\Delta_{\min},\Delta_{\max}]\cap(y,\infty)$ is the feasible set for the leverage constraint.
• If the roots are not real, then the constraint is unachievable.
(See Proof in Appendix A)
Setting $\tilde \lambda = 1$ gives the expression for the liquidation constraint alone.
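The feasible interval can be computed by solving the quadratic directly. A sketch with made-up state values (at either real root, the leverage constraint $\beta(\mathcal{L}+\Delta) = \tilde\lambda\,(z + \Delta\, p^D(\Delta))$ binds exactly; the intersection with $(y,\infty)$ is left to the caller):

```python
from math import sqrt

def leverage_roots(beta, lam, L, x, y, z):
    """Roots of -beta*D^2 + D*(lam*(z+x) - beta*(L-y)) - lam*z*y + beta*L*y.
    Returns (d_min, d_max), or None if the constraint is unachievable."""
    a = -beta
    b = lam * (z + x) - beta * (L - y)
    c = -lam * z * y + beta * L * y
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    r1 = (-b + sqrt(disc)) / (2 * a)
    r2 = (-b - sqrt(disc)) / (2 * a)
    return min(r1, r2), max(r1, r2)

# Made-up state
beta, lam, L, x, y, z = 1.5, 0.9, 100.0, 40.0, -60.0, 200.0
roots = leverage_roots(beta, lam, L, x, y, z)
if roots:
    for d in roots:
        p = x / (d - y)
        # Constraint binds at each root: beta*(L + d) == lam*(z + d*p)
        assert abs(beta * (L + d) - lam * (z + d * p)) < 1e-6
```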
The condition $\Delta > y$ makes sense for two reasons. First, if $\Delta < y$ then $p^D_t < 0$. Second, as we show below, the limit $\lim_{\Delta \rightarrow y^+} p^D_t = \infty$. Thus, if we start
in the previous step under the condition $\Delta > y$, then the speculator will never be able to pierce this boundary in subsequent steps. We further discuss the implications of this condition later.
Solving the leverage optimization
Proposition 2: Assume that the speculator’s constraint is feasible and let $[\Delta_{\min}, \Delta_{\max}] \cap (y, \infty)$ be the feasible region. Define $r:=r_t$, let $\Delta^* = y + \sqrt{-yrx}$,
and define
$f(\Delta) = r\Delta \frac{x}{\Delta - y} - \Delta.$
Then the solution to the speculator’s optimization problem is
• $\Delta^*$ if $\Delta^* \in [\Delta_{\min}, \Delta_{\max}] \cap (y, \infty)$
• $\Delta_{\min}$ if $\Delta^*<\Delta_{\min}$
• $\Delta_{\max}$ if $\Delta^*>\Delta_{\max}$
(See Proof in Appendix A)
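Proposition 2 amounts to an unconstrained optimum clipped to the feasible interval. A sketch with made-up values ($\Delta^*$ is the stationary point of $f$, well defined since $y < 0$; for brevity we ignore the additional intersection with $(y,\infty)$, which the example interval already satisfies):

```python
from math import sqrt

def optimal_delta(r, x, y, d_min, d_max):
    """Speculator's choice: the unconstrained maximizer of
    f(D) = r*D*x/(D - y) - D, clipped to [d_min, d_max]."""
    d_star = y + sqrt(-y * r * x)   # stationary point; requires y < 0
    return min(max(d_star, d_min), d_max)

# Made-up values
r, x, y = 1.05, 40.0, -60.0
d = optimal_delta(r, x, y, d_min=-20.0, d_max=30.0)
# d_star = -60 + sqrt(60 * 1.05 * 40) ~ -9.8, interior to the interval
```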
3.2 Maintenance condition for the stable asset market
The next result describes a bound on the speculator’s ability to maintain the market. This bound takes the form of
(capital available to enter the market) - (a lower bound on required collateral),
which must be sufficiently high for the system to be maintainable.
Proposition 3: The feasible set for the speculator’s liquidation constraint is empty when
$\Big(\tilde \lambda(x+z) - \beta\mathcal{L} w^D \Big)^2 < 4\beta \tilde\lambda \mathcal{L}xw^E$
(See Proof in Appendix A)
In Proposition 3, $\beta\mathcal{L}w^D \geq 0$ is interpreted as a lower bound on the capital required to maintain the DStablecoin market into the next period (i.e., the collateral required for the
minimum size of the DStablecoin market), $\tilde\lambda \in [0,1]$, and $x+z \geq 0$ is the capital available to enter the DStablecoin market from both the supply and demand sides. The inequality
then states that the difference between the capital available to enter the market and the lower bound maintenance capital must be sufficiently high for the market to be maintainable by the
speculator. The condition $\Delta > y$ rules out the case of a negative difference.
3.3 Deleveraging effects, limits to market liquidity
Limits to the speculator’s ability to decrease leverage
The next result presents a fundamental limit to how quickly the speculator can reduce leverage by repurchasing DStablecoins, given the modeled market structure. Note that this limit applies even if
the speculator can bring in additional capital. The term $-y = \mathcal{L}(1-w^D)$ represents the ‘free supply’ of DStablecoin available for exchange, which can be increased by a positive $\Delta$.
Proposition 4: The speculator with asset value $z$ cannot decrease DStablecoin supply at $t$ more than
$\Delta^- := \frac{z}{z+x}y.$
Further, even with additional capital, the speculator cannot decrease the DStablecoin supply at $t$ by more than $y$.
(See Proof in Appendix A)
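The bound in Proposition 4 follows from spending the entire asset value $z$ on repurchases at the clearing price: setting the repurchase cost $-\Delta\, p^D(\Delta) = z$ and solving gives $\Delta^- = \frac{z}{z+x}y$ (negative, since $y<0$). A numerical sketch with made-up values:

```python
def max_deleverage(z, x, y):
    # Solve -Delta * x / (Delta - y) = z for Delta: Delta = z*y / (z + x)
    return z * y / (z + x)

# Made-up values
z, x, y = 200.0, 40.0, -60.0
d_minus = max_deleverage(z, x, y)     # = 200 * (-60) / 240 = -50.0
p = x / (d_minus - y)                 # clearing price at that repurchase
assert abs(-d_minus * p - z) < 1e-9   # cost of repurchase exhausts z
# Even with unlimited capital, |Delta| < |y| = free supply,
# since p -> infinity as Delta -> y from above.
```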
Deleveraging affects collateral drawdown through liquidity crises
The result leads to a DStablecoin market price effect from leverage reduction. This can trigger a deleveraging spiral: a feedback loop between leverage reduction and drying liquidity. In it, the speculator repurchases DStablecoin to reduce leverage at increasing prices as liquidity dries up, since repurchases push up $p_t^D$ when outside demand is unchanged. At higher prices, more
collateral needs to be sold to achieve deleveraging, leaving relatively less in the system. Subsequent deleveraging, whether voluntary or through liquidation, becomes more difficult as the price
effects compound.
Whether or not a spiraling effect occurs will depend on the demand behavior of stablecoin holders. The action of the stablecoin holder may actually exacerbate this effect: during extreme Ether price
crashes, stablecoin holders will tend to increase their DStablecoin demand in a ‘flight to safety’ move. Table 1 illustrates an example scenario of a deleveraging spiral in a simplified setting with
constant unit demand elasticity and in which the speculator’s risk constraint is the liquidation constraint. Similar results hold under other constant demand elasticities. The system starts in a
steady state. Ether price declines then trigger three waves of liquidations, forcing the speculator to liquidate her collateral to deleverage at rising costs.
If Ether prices continue to go down, the deleveraging spiral is only fixed if (1) more money comes into the collateral pool to create more DStablecoins; or (2) people lose faith in the system and no
longer want to hold DStablecoins, which can cause the system to fail.
There is no guarantee that (1) always happens.
This liquidity effect on DStablecoin price makes sense because the stablecoin (as long as it’s working) should be worth more than the same dollar amount of ETH during a downturn because the
stablecoin comes with additional protection. If the speculator is forced to buy back a sizeable amount of the coin supply, it will have to do so at a premium price.
One might think the spiral effect is good for stablecoin holders. As we explore in Section 6, this can be the case for a short-term trade. However, as we will see, the speculator’s ability to
maintain a stable system may deteriorate during these sorts of events, as it has less control over, or less willingness to control, the coin supply. Deleveraging effects can siphon off collateral value, which can be detrimental to the system in the long term.
This suggests the question: do alternative non-custodial designs suffer similar deleveraging problems? We compare to an alternative design described in [6]. In this design, the stablecoin is
restricted to pre-defined leverage bounds, at which algorithmic ‘resets’ partially liquidate both stablecoin holder and speculator positions at $1 USD price. While this quells the price effect on
collateral, it shifts the deleveraging risk from speculator to stablecoin holder. The stablecoin holder is liquidated at $1 USD price but, to maintain a stablecoin position, must re-buy into a smaller market at an inflated price. Of the many designs, it is unclear which deleveraging method would lead to a system that survives longer.
Results explain real market data
A preliminary analysis of Dai market data suggests that our results apply. Figure 2a shows the Dai price appreciate in Nov-Dec 2018 during multiple large supply decreases. This is consistent with an
early phase of a deleveraging spiral. Figure 2b shows trading data from multiple DEXs over January 2019 – February 2019: price spikes occur in the data reportedly from speculator liquidations [29].
This provides empirical evidence that liquidity is indeed limited for lowering leverage in Dai markets. Further, as discussed in the next section, Dai empirically trades below target in many normal periods.
Figure 2(a): Model Results explain data from Dai market. Dai deleveraging feedback in November 2018 – December 2018 (image from coinmarketcap.com).
Figure 2(b): Model Results explain data from Dai market. Dai normally trades below target with spikes in price due to liquidations (image from dai.stablecoin.science).
Since releasing the initial version of this paper in June 2019, massive liquidation events around Black Thursday in March 2020 provide additional strong evidence of deleveraging effects in the Dai
market. Figure 3(a) depicts a $\sim 50\%$ ETH price crash on March 12, 2020, which precipitated a cascade of cryptocurrency liquidations. Figure 3(b) depicts the price effects of these liquidations on
Dai prices on DEXs. Speculators deleveraging during this event had to pay premiums of $\sim 10\%$ and face consistent premiums $>2\%$ weeks into the aftermath. Concurrently, Maker was affected by
global mempool flooding on Ethereum. This additionally contributed to Dai liquidation auctions clearing at near zero prices, which may in fact have amplified the deleveraging feedback effects.
Altogether, Dai traded at significant premiums over this time despite Maker being in a much riskier state in terms of collateral and liquidations. See [30] and [8] for further discussion of this event.
Figure 3(a): Black Thursday in March 2020. ∼ 50% ETH price crash (image from OnChainFX).
Figure 3(b): Black Thursday in March 2020. Liquidation price effect on Dai DEX trades (image from dai.stablecoin.science).
4. Stability results
We now characterize stable price dynamics of DStablecoins when the leverage constraint is non-binding. For this section, we make the following simplifications to focus on speculator behavior:
• The market has fixed dollar demand at each $t$: $w^D_t \mathcal{A}_t = \mathcal{D}$. This is consistent with the stablecoin holder having unit-elastic demand, or having an exogenous constraint to
put a fixed amount of wealth in the stable asset.
• Speculator’s expected Ether return is a constant $r_t = \hat r>1$. This means the speculator always wants to fully participate in the market and is consistent with $\gamma=0$.
This amounts to setting $x = \mathcal{D}$ and $y=-\mathcal{L}$. Now the DStablecoin market clearing price is $p^D_t = \frac{\mathcal{D}}{\mathcal{L}_t}.$ The leverage constraint (assuming $\mathcal{L} + \Delta > 0$) becomes
$-\beta\Delta^2 + \Delta\big(\tilde\lambda(z+\mathcal{D}) - 2\beta\mathcal{L}\big) + \mathcal{L}(\tilde\lambda z - \beta\mathcal{L}) \geq 0.$
The speculator’s maximization objective becomes $\hat r\Delta \frac{\mathcal{D}}{\mathcal{L}+\Delta} - \Delta,$ which gives
$\Delta^* = -\mathcal{L} + \sqrt{\mathcal{L}\mathcal{D}\hat r}.$
While we prove a stability result in this simplified setting, we believe the results can be extended beyond the assumption of constant unit-elastic demand.
4.1 Stability if leverage constraint is non-binding
Proposition 5: Assume $w_t^D \mathcal{A}_t = \mathcal{D}$ (DStablecoin dollar demand) and $r_t = \hat r$ (speculator’s expected Ether return) remain constant. If the leverage constraint is inactive
at time $t$, then the DStablecoin return is
$\frac{p^D_t}{p^D_{t-1}} = \sqrt{\frac{\mathcal{L}}{\mathcal{D}\hat r}}.$
(See Proof in Appendix A)
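Under these simplifications the supply recursion is $\mathcal{L}_t = \mathcal{L}_{t-1} + \Delta^* = \sqrt{\mathcal{L}_{t-1}\mathcal{D}\hat r}$, which lets us check Proposition 5 (and, ahead of it, the closed form of Theorem 1) numerically. A sketch with our own illustrative parameters:

```python
from math import sqrt

D, r_hat = 100.0, 1.2       # illustrative demand and expected ETH return
L = D                       # start at the $1 target: L_0 = D
prices = [D / L]
for t in range(1, 6):
    L_prev = L
    L = sqrt(L * D * r_hat)              # L_t = L_{t-1} + Delta^*
    prices.append(D / L)
    # Proposition 5: each period's return is sqrt(L_{t-1} / (D * r_hat))
    assert abs(prices[-1] / prices[-2] - sqrt(L_prev / (D * r_hat))) < 1e-9
    # Theorem 1 closed form: L_t = D * r_hat ** ((2**t - 1) / 2**t)
    assert abs(L - D * r_hat ** ((2 ** t - 1) / 2 ** t)) < 1e-9
# The supply converges to D * r_hat and the price to 1/r_hat, below target.
```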
Supposing that $\mathcal{D}\approx \mathcal{L}$ (i.e., the previous price was close to the $1 USD target) and the constraint is inactive, Proposition 5 tells us that the DStablecoin behaves stably
like the payment of a coupon on a bond.
Consider estimators for DStablecoin log returns $\bar \mu_t$ and volatility $\bar \sigma_t$ computed in a similar way to Ether expectations in Equation 1. When the leverage constraint is non-binding,
DStablecoin log returns remain $\bar \mu_t \approx 0$, the contribution to volatility at time $t$ is $\ln \frac{p_t^D}{p_{t-1}^D} - \bar \mu_t \approx 0$, and the DStablecoin tends toward a steady
state with stable price and zero variability. The next theorem formalizes this result to describe stable dynamics of price and the volatility estimator under the condition that the system doesn’t
breach the speculator’s leverage threshold.
Theorem 1: Assume $w_t^D \mathcal{A}_t = \mathcal{D}$ (DStablecoin demand) and $r_t = \hat r$ (speculator’s expected Ether return) remain constant. Let $\mathcal{L}_0=\mathcal{D}$ and $\bar \mu_0, \bar \sigma_0$ be given. If the leverage constraint remains inactive through time $t$, then
$\mathcal{L}_t = \mathcal{D}\hat{r}^{\frac{2^t-1}{2^t}}, \hspace{1cm} \ln \frac{p^D_t}{p^D_{t-1}} = -2^{-t} \ln \hat r,$
$\bar \mu_t = \begin{cases} (1-\delta)^t \bar \mu_0 - \delta \frac{(1-\delta)^t-2^{-t}}{2(1-\delta)-1} \ln \hat r, & \text{if } \delta \neq 1/2 \\ 2^{-t}\Big( \bar \mu_0 - \frac{1}{2} t \ln \hat r \Big), & \text{if } \delta = 1/2 \end{cases}$
$\bar\sigma_t^2 = \begin{cases} \sum_{k=1}^t (1-\delta)^{t-k}\delta \Big( (1-\delta)^k \bar \mu_0 - \frac{(1-\delta)^k -2^{-k+1}(1-\delta)}{2(1-\delta)-1}\ln \hat r \Big)^2 + (1-\delta)^t\bar\sigma_0^2, & \text{if } \delta \neq 1/2 \\ 2^{-t} \sum_{k=1}^t 2^{-k-1} \Big( (k/2-1)\ln \hat r - \bar \mu_0\Big)^2 + 2^{-t} \bar\sigma_0^2, & \text{if } \delta=1/2 \end{cases}$
Further, assuming the constraint continues to be inactive and that $\delta \leq \frac{1}{2}$, the system converges exponentially to the steady state $\mathcal{L}_t \rightarrow \mathcal{D}\hat r$, $\bar \mu_t \rightarrow 0$, $\bar\sigma_t^2 \rightarrow 0$.
(See Proof in Appendix A)
Notice that if the leverage constraint in the system is reached, we can still treat the system as a reset of $\bar\mu_0$ and $\bar\sigma_0$ when we reach a point at which the constraint is no longer
binding. While the system subsequently remains without a binding constraint, we again converge to a steady state starting from the new initial conditions.
Interest rates and trading below $1 USD
A consequence of Theorem 1 is that the DStablecoin will trade below target during times in which Ether expectations are high. This is empirically seen in Figure 2(b). An interest rate charged to
speculators can balance the market (the ‘stability fee’ in Dai). This can temper expectations by effectively reducing $r$ in Theorem 1. In the stable steady state, setting the interest rate to offset
the average expected ETH return will achieve the price target. However, this is practically difficult as $r$ changes over time and is difficult to measure accurately. It also depends on holding
periods of speculators. It is an open question how to target these fees in a way that maintains long-term stability.
4.2 Instability if leverage constraint is binding
When the speculator’s leverage constraint is binding, DStablecoin price behavior can be more extreme. We argue informally that this can lead to high volatility in our model. The probability
distribution for the leverage constraint to be binding in the next step has a kink at the boundary of the leverage constraint. In particular, it becomes increasingly likely that the leverage
constraint is binding in a subsequent step due to deleveraging effects described previously. Note that feedback of large liquidations on Ether price, if added to the model, will add to this effect.
We show such instability computationally in Figure 4a in simulation results. In this figure, the shape of the inactive histogram reflects the speculator’s willingness to sell at a slight discount
when the leverage constraint is non-binding due to the constant $\hat r$ assumption.
We relax this assumption in Figure 4b, which shows the effects on volatility of different speculator memory parameters. This figure is a heat map/2D histogram. A histogram over $y$-values is depicted
in the third dimension (color: light=high density, dark=low density) for each $x$-value. Each histogram depicts realized volatilities across 10k simulation paths using the simulation setup introduced
in the next section and the given memory parameter ($x$-value). Horizontal lines depict selected percentiles in these histograms. The dotted line depicts the historical level of Ether volatility for comparison.
Figure 4(a): DStablecoin volatility, 10k simulation paths of length 1000. Histogram of DStablecoin returns when leverage constraint is binding vs. non-binding with constant r̂.
Figure 4(b): DStablecoin volatility, 10k simulation paths of length 1000. Heat map of volatility under different speculator γ=δ memory parameters.
In Figure 4b, volatility is bounded away from 0 even in non-binding leverage constraint scenarios; the distance increases with the memory parameter. This happens because $r$ updates faster with a
higher memory parameter. As the speculator’s objective then changes at each step, the steady state itself changes. Thus we expect some nonzero volatility, although it remains low in most cases.
In not-so-rare cases, however, volatility can be on the order of magnitude of actual Ether volatility in these simulations. As seen in Figure 5, this result is robust to a wide range of choices for
the speculator’s risk constraint. This suggests that DStablecoins perform well in median cases, but are subject to heavy tailed volatility.
Figure 5(a): Heatmaps of DStablecoin volatility for different speculator risk management behaviors. Ether returns∼t-distr(df=3,μ=0).
Figure 5(b): Heatmaps of DStablecoin volatility for different speculator risk management behaviors. Ether returns∼t-distr(df=3,μ=r[0]).
Figure 5(c): Heatmaps of DStablecoin volatility for different speculator risk management behaviors. Ether returns∼normal(μ=0).
5. Simulation Results
We now explore simulation results from the model considering a wide range of choices for the speculator’s risk constraint. Unless otherwise noted, the simulations use the following parameter set with
a simplified constant demand assumption ($\mathcal{D}=100$) and a t-distribution with df=3 to simulate Ether log returns. This carries over the simplified model from Section 4, although other choices
are also amenable to simulation. Cryptocurrency returns are well known for having very heavy tails. This choice gives us these heavy tails with finite variance. Note, however, that this doesn’t
capture path dependence of Ether returns. We instead assume Ether returns in each period are independent. We run simulations on 10k paths of 1000 steps (days) each. This is enough time to look at
short-term failures and dynamics over time. The simulation code is available with full details at https://github.com/aklamun/Stablecoin_Deleveraging.
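A minimal sketch of one simulated Ether path under these assumptions (our own simplified loop, not the repository code; the return scale is illustrative, not calibrated): heavy-tailed log returns are drawn from a Student-t with 3 degrees of freedom, built from the standard library via the normal/chi-square ratio.

```python
import random
from math import sqrt, exp

def t_sample(df, rng):
    # Student-t via N(0,1) / sqrt(chi2_df / df); chi2_df ~ Gamma(df/2, scale=2)
    return rng.gauss(0, 1) / sqrt(rng.gammavariate(df / 2, 2) / df)

rng = random.Random(0)       # fixed seed for reproducibility
scale = 0.05                 # illustrative daily log-return scale
p_E = 100.0
path = [p_E]
for _ in range(1000):        # 1000 steps = one simulated path of days
    p_E *= exp(scale * t_sample(3, rng))
    path.append(p_E)
# At each step, the realized price feeds the speculator's optimization
# and the market clearing described above.
```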
Note that our simulations study daily movements. We choose this time step to examine these systems under reasonable computational requirements. More realistic simulations might study intraday
movements. One plausible scenario of a Dai freeze is if the price feed moves too far too fast intraday, so that speculators don’t have enough time to react before liquidations are triggered and keepers (who perform actual liquidations) are unable to handle the avalanche of liquidations. As Dai’s price feed faces an hourly delay, hourly time steps are a natural choice
for follow-up simulations. This said, daily time steps can actually be reasonable due to a behavioral trend in Dai data: most Dai speculators realistically don’t track their positions with very high
frequency as supported by overall high liquidation rates.
5.1 Speculator behavior affects volatility
We compare DStablecoin performance under several speculator behaviors encoded in the risk constraint: a risk neutral speculator, the VaR-based constraint, and the generalized constraint with pro-cyclical and counter-cyclical parameter choices.
Figure 5 compares the effects on volatility of these behavioral constraints under various Ether return distributions. These figures are heatmaps/2D histograms similar to that in Figure 4b. The
results suggest that DStablecoins face significant tail volatility (on the order of Ether volatility) even under comparatively ‘nice’ assumptions on Ether return distributions, such as with
significant upward drift (Figure 5b) and a normal distribution (Figure 5c). Figure 7 depicts relative (% difference) mean-squared difference of simulated volatility for the different risk management
methods vs. a risk neutral speculator. The mean-squared difference is large, suggesting that the speculator’s risk management method has a large effect on volatility.
The results suggest how speculator behavior can affect DStablecoin volatility within the model. Stricter pro-cyclical risk management (e.g., VaR) on the part of the (single) speculator can lead to increased DStablecoin volatility without improving the safety of the system. Whether counter-cyclical (the constraint allows increased leverage during downturns) or pro-cyclical (the constraint forces decreased
leverage during downturns), the resulting DStablecoin volatility is connected with how narrow the feasible region for the constraint becomes. A risk neutral speculator, which has the widest feasible
region for the constraint, leads to the lowest volatility. Stricter risk management serves to reduce the feasible region. Note that these results may be different if there are multiple types of
speculators, for instance some that are cyclic and others that are countercyclic.
Figure 4b further suggests that a higher speculator memory parameter (lower memory) tends to increase volatility in typical cases. This makes sense as high memory parameters can lead to noise chasing
on the part of the speculator. Note that keeping the speculator’s expected Ether returns and variance constant is equivalent to setting a static risk constraint.
5.2 Stable asset failure is dominated by collateral asset returns
We define the DStablecoin’s failure (or stopping) time to be either (1) when the speculator’s liquidation constraint is unachievable; or (2) when the DStablecoin price remains below $0.5 USD. In
these cases, a global settlement would be reasonable, leaving DStablecoin holders with Ether holdings with high volatility in subsequent periods.
Figure 6 compares the effects on failure time of these behavioral risk constraints. The stopping time distributions appear comparable across a wide range of selections for the speculator’s risk
constraint. They are additionally comparable across the memory parameters studied above. Figure 7 depicts relative mean-squared difference of simulated stopping times for the different risk
management methods vs. a risk neutral speculator. In calculating the mean-squared difference, we only include cases in which the failure is realized within the simulation. The mean-squared difference
is small (1-2 orders of magnitudes smaller than for volatility), providing additional evidence that the stopping time is largely independent of the speculator’s risk management. In particular, a
large proportion of failure events would not have been prevented by different speculator risk management within the model.
Figure 6(a): Heatmaps of DStablecoin failure times for different speculator risk management behaviors. Ether returns∼t-distr(df=3,μ=0).
Figure 6(b): Heatmaps of DStablecoin failure times for different speculator risk management behaviors. Ether returns∼normal(μ=0).
Figure 7: Relative mean-squared difference (MSD) of simulated volatility and stopping time for given speculator strategy vs. risk neutral strategy. Different lines represent different output
(volatility or stopping time) and different return distribution assumptions for the simulations.
DStablecoin failure probabilities appear to be dominated by Ether returns as opposed to speculator behavior. The results suggest that DStablecoins may not be long-term stable, even under
comparatively ‘nice’ assumptions for Ether return distributions. To avoid failure, they would essentially rely on more speculator capital entering the system during downturns.
6. Stablecoin Attacks
Attacking a DStablecoin is different from traditional currency attacks. The focus is not on breaking the willingness of a central bank to maintain a peg. It instead involves manipulating the
interaction of agents. We show that stablecoin design can enable profitable trades against stability that attack the system. These come from the existence of profitable trades around liquidations and
the ability of miners to reorder and censor transactions to extract value.
6.1 Expanded Model: Adding an Attacker
We consider an expanded model under the fixed outside demand setting of the previous section. In the expansion, we consider an attacker, who can speculatively enter/exit the DStablecoin market. The
attacker can buy $\delta$ dollar-value of DStablecoin at some time $t$ with the goal of selling it at a later time $s$ for $\delta + \varepsilon$. These occurrences change the demand structure: $\
mathcal{D}_t = \mathcal{D} + \delta$, $\mathcal{D}_s = \mathcal{D} - (\delta + \varepsilon)$.
6.2 Profitable bets on liquidations
Table 2 illustrates an example scenario for a profitable bet on liquidations. The attacker injects $\delta = 1$ in demand at $t=1$, which acquires $1.0008$ DStablecoins at $p_1^D$. In $t=3$, after
the liquidation, the attacker is then able to extract $\delta +\varepsilon = 1.083$ from selling the DStablecoin. This yields a return of $8.3\%$. This is akin to a short squeeze on existing
speculators. It takes advantage of the fact that liquidations occur at DStablecoin market rate, which in turn affects the market rate.
Table 2: Example scenario of a profitable bet on liquidations.
The attacker can do better by choosing $\delta,\varepsilon$ to maximize $\varepsilon$ subject to $\frac{\delta + \varepsilon}{p_2^D} \leq \frac{\delta}{p_o^D}.$ Choosing $\delta=4.5, \varepsilon=0.59$
(not optimal) yields a return of $13\%$. The attacker could also spread out $\delta$ over a longer period of time to achieve lower purchase prices.
From a practical perspective, the optimization is sensitive to misestimation of demand elasticity. While Dai has hit prices as high as $1.37 historically, it hasn’t typically reached prices above
$1.09. Thus smaller bets (relative to supply) may be safer. Regardless, these can be large opportunities in large systems. In addition, outside of this model, real implementations create arbitrage of
$5-13\%$ to automate liquidations.
6.3 Attacks
Attack 1:
An attacker bets on an ETH decline and manipulates the market to trigger and profit from spiraling liquidations. This uses the short squeeze-like trades in the previous example. It can also be
supplemented with a bribe to miners to freeze collateral top-ups. The attacker could also enter as a new speculator at the high DStablecoin prices after the attack and thus leverage up at a discount.
Outside of the model, the attack may have a negative effect on the long-term DStablecoin demand due to the induced volatility. This can be further beneficial to the attacker, who can then also
deleverage in the future at a discount.
Attack 2:
The attacker is also a miner and reorganizes the recent transaction history (such as by initiating a fork) to be on the receiving end of arbitrage opportunities from liquidations. For instance,
following an ETH decline, the miner could trigger and profit from spiraling liquidations. In a fork, the attacker creates a new timeline that inherits the ETH price trajectory (via oracle
transactions). The attacker can then censor speculator transactions (e.g., collateral top-ups) to trigger new liquidations and extract profit around all liquidations, which are guaranteed in the
timeline. If the stablecoin system is large, the miner extractable value can be large (and is additive with other sources of extractable value). This creates the perverse incentive for miners to
perform this attack if the attack rewards are greater than lost mining rewards. This is similar to the time-bandit attack in [28].
In Attack 1, the attacker takes on market risk as the payoff relies on a future ETH decline and liquidation. It is a speculative attack that can induce volatility in the stablecoin. In Attack 2, the
attacker’s payoffs are guaranteed if the attack fork is successful. These payoffs incentivize blockchain consensus attack. A possible equilibrium is for miners to collude and share this value.
These attacks occur in a permissionless setting, in which agents can enter/exit at any time with a degree of anonymity. While in traditional finance, market manipulation rules can be enforced
legally, in decentralized finance, enforcement is only possible to the extent that it can be codified within the protocol and incentive structure. We leave to future study a full exploration of these
incentive structures in a game theoretic setting based on foundations for blockchain forking models set in, e.g., [31].
Since the initial release of this paper, this attack surface around stablecoin liquidations was exploited in related ways to Attack 2. In Attack 2, a miner reorganizes the recent history to extract
profit from arbitrage opportunities from liquidations. In reality on Black Thursday, mempool manipulation contributed to the clearing of $8m USD of Dai liquidation auctions at near zero prices [8].
We discuss some preliminary ideas toward mitigating attack potential. Liquidations could be spread over a longer time period. This could potentially lessen deleveraging spirals by smoothing demand
and increase the costs to a forking attack. However, it presents a trade-off in that slow liquidations come with higher risks to the stablecoin becoming under-collateralized. We also suggest tying
oracle prices and DEX transactions to recent block history so that a reorganization attack can’t easily inherit price and exchange history. Practically, however, this may be difficult to tune in a
way that’s not disruptive as small forks happen normally.
7. Discussion
In general, it is impossible to build a stablecoin without significant risks. As speculators participate by making leveraged bets, there is always an undiversifiable cryptocurrency risk. However, a
stablecoin can aim to be an effective store of value assuming the cryptocurrency market as a whole is not undermined. In this case, it is conceivable to sustain a dollar peg if the stablecoin
survives transitory extreme events. That is, to achieve long-term probabilistic stability, a stablecoin should maintain a high probability of survival.
Failure risks
DStablecoins are complex systems with substantial failure risks. Our model demonstrates that they can work well in mild settings, but may have high volatility outside of these settings. As we explore
in this paper, the market can collapse due to feedback effects on liquidity and volatility from deleveraging effects during crises. These effects can exacerbate collateral drawdown. Surviving these
events may rely on bringing in increasing amounts of new capital to expand the DStablecoin supply during such crises. In these events speculators may not always be willing and able to take these new
risky positions. Indeed, there are many examples of speculative markets drying up during extreme market movements. As we explore below, continued stability during these events additionally relies on
new capital entering the system in a well-behaved manner as profitable attacks are possible.
As suggested by our simulations, stablecoin holders face the direct tail risk of cryptocurrencies. If the market loses liquidity, there is no guarantee that forced liquidation of speculators’
collateral will be possible within reasonable pricing limits. Further, volatile cryptocurrency markets can, in unlikely events, move too fast for speculators to adapt their positions. In these cases,
stablecoin holders can only truly rely on the cryptocurrency value from global settlement.
The DStablecoin design also relies on trusted oracles to provide real world price data, which could be subject to manipulation. In MakerDAO’s Dai, for instance, oracles are chosen by MKR token
holders, who vote on system parameters. This opens a potential 51% attack, in which enough speculators buy up MKR tokens, change the system to use oracles that they manipulate, and trigger global
settlement at unfavorable rates to stablecoin holders while pocketing the difference themselves when they recover their excess collateral. A hint of manipulation in oracles or large acquisitions of
MKR could potentially trigger market instability issues on its own.
Note that Dai has protections from oracle attacks. First, there is a threshold of maximum price change and an hourly delay on new prices taking effect. This means that emergency oracles have time to
react to an attack. Second, at current prices 51% of MKR is substantially more expensive than the ETH collateral supply. However, this second point does not have to be true in general–at least unless
Dai holders otherwise bid up the price of MKR for their own security. The value of MKR is linked to expectations around Dai growth as fees paid in the system are used to reduce MKR supply. At some
point, the expectation may not be enough to lift MKR value above collateral on its own. This raises the question of whether fees should be used to reduce MKR supply at all. Alternatively, MKR value
could be completely based on the potential value of a 51% attack, which may also grow with Dai growth, and the value of fees could be put to different uses, as we discuss further below.
A good fee mechanism may quell deleveraging spirals
Dai imposes fees on speculators when they liquidate positions (e.g., liquidation penalty, stability fee, penalty ratio). These can amplify deleveraging effects by increasing deleveraging costs and
disincentivizing new capital from entering the system during crises. An alternative design with automatic counter-cyclic fees could enhance stability by reducing feedback effects. For instance, fees
could be collected while the system is performing well, but these fees could be removed (or made negative) automatically during liquidity crises in order to limit feedback effects and remove
disincentives to bringing new capital into the system.
Speculators in Dai can pay back liabilities at any time and come and go from the system, which raises concerns about herd behavior in crises. A herd trying to deleverage can trigger a deleveraging
spiral. Dynamic fees tuned to inflow/outflow could additionally disincentivize herd behavior to deleverage at the same time.
An alternative ‘collateral of last resort’ idea in Dai
In Dai, MKR serves a certain ‘last resort’ role in addition to governance. If there is a collateral shortfall, then new MKR is minted and sold to cover Dai liabilities making up the shortfall. This
may not always be possible as the MKR market can similarly face illiquidity and the market cap may not be high enough to cover shortfalls. In some settings, MKR holders might actually have an
incentive to trigger a global settlement early before MKR would be inflated. A Dai shutdown would have some effect on the price of MKR, but the cost may be small if MKR holders expect a successful
relaunch of Dai after the crisis. An early shutdown is not ideal for Dai holders, as they will want to hold the stable asset for longer during extreme events. In addition to incentive alignment being
unclear in MKR’s ‘last resort’ role, the invocation of the role only helps cover the aftermath of a crisis (an existing shortfall) as opposed to quelling the effects that cause the crises.
We propose an alternative ‘last resort’ role of governance tokens that instead aims to quell deleveraging spirals. This could be achieved by automatically positioning the MKR supply as system
collateral against which Dai can be minted to expand supply in crises. To illustrate, if there is a massive deleveraging by speculators, leading to excess demand for Dai and an inflated Dai price,
then new Dai could be automatically minted against the MKR supply as collateral to help balance the market. In this way, a deleveraging spiral is damped: should a new wave of speculator deleveraging
be triggered, it will not compound the price effect from the past wave. System fee revenue could also be put to this use.
Uses of limited fee revenue
Dai produces limited fee revenue, most of which rewards MKR investors. There is additionally a Dai savings rate that rewards Dai holders using fee revenue and serves as another tool to balance the
Dai market (e.g., to boost demand for Dai when the price is below target). There is an inherent trade-off in using fee revenue, however. A Dai savings rate uses this revenue to improve stability in
relatively normal settings in which a higher fee itself serves to balance the market. Alternatively, fee revenue can be channeled to an emergency fund that lessens the severity of crises–for instance
as suggested above. These fees and their potential uses can be incorporated into our model to compare the effects of different design choices.
Our results suggest tools and indicators that can warn about volatility in DStablecoins. We can find proxies for the free supply, estimate the price impact of liquidations, and track the entrance of
new capital into speculative positions. We can connect this information with model results to estimate the probability of liquidity problems given the current state. This information is also useful
in valuing token positions in these systems (e.g., Dai, MKR, and the speculator’s leveraged position).
Some exchanges have bundled select stablecoins into a single market that ensures 1-to-1 trading (e.g., [32]). In this case, exchanges are essentially providing insurance to their users against
stablecoin failures. These arrangements could lead to a run on exchanges in the event that some stablecoins fail. It is unclear if these exchanges are subject to regulation to protect users against
this, and it is further unclear if such regulations would be sufficient to account for risks in stablecoins. Our model provides insight into the risks (to exchanges and users) if such arrangements in
the future include non-custodial stablecoins.
Future directions
We suggest expansions to our model to explore wider settings.
• Incorporate more speculator decisions, such as locking and unlocking collateral and holding different assets, accommodating speculators with security lending motivation. This makes the
speculator’s optimization problem multi-dimensional. In this expanded setting, speculators may make more long-term strategic decisions considering whether tomorrow they would have to buy back
stablecoins and at what price.
• Consider multiple speculators with different utility functions who participate in the DStablecoin market. In this expanded setting, we can consider the conditions under which new capital may
enter the system and formally study the economic attack described above and the effects of external incentives.
• Incorporate additional assets, such as a custodial stablecoin that faces counterparty risk. This would allow us to study long-term movements between stablecoins in the space and learn about
systemic effects that could be triggered by counterparty failures. This is further relevant in evaluating systems like Maker’s multi-collateral Dai. However, this comes with a trade-off of a new
counterparty risk that is very hard to measure. In particular, it’s not just custodian default risk, but also risk of targeted interventions on centralized assets. Such interventions (e.g., from
a government who wants to shut down Dai) could be highly correlated with cryptocurrency downturns as that is when the system is naturally weakest.
• Incorporate endogenous feedback of liquidations on Ether price, which becomes relevant if the DStablecoin system becomes large relative to the Ether market. This is similarly important for
endogenous collateral stablecoins like Synthetix sUSD and Terra UST, in which a system equity-like asset is used as collateral (see [5]).
Additionally, our existing model can be adapted to analyze DStablecoins with different design characteristics. For instance,
• DStablecoins with more general collateral settlement, in which stablecoin holders can individually redeem stablecoins for collateral. This is possible, for instance, in bitUSD and Steem Dollars,
and more recently in Celo Dollars. In this case, the stablecoin acts as a perpetual option to redeem collateral, and stablecoin volatility will be additionally related to the settlement terms.
• DStablecoins without speculator agents (e.g., Steem Dollars, in which the whole marketcap of Steem acts as collateral, or Celo Dollars, in which Celo reserves act as collateral). In these
systems, stablecoin issuance is automated with the rest of the protocol. Our model can be adapted by removing speculator decisions and modeling the growth of collateral from block rewards and
growth of stablecoin from other processes.
• Some non-collateralized algorithmic stablecoins. We believe this setting can also be interpreted in our model by thinking of implicit collateral that ends up describing user faith in the system
(see [5]). The underlying mechanics would be similar, simply recreating ‘out of thin air’ the value of the underlying asset as opposed to building on top of the value of an existing asset. The
stability of the system ultimately still relies on how people perceive this value over time similarly to how perceived value of Ether changes.
We thank David Easley, Steffen Schuldenzucker, Christopher Chen, Akaki Mamageishvili, Peter Zimmerman, Sergey Ivliev, Tomasz Stanczak, Sid Shekhar, as well as the participants of the ECB P2P
Financial Systems (2019) workshop, Crypto Valley Conference (2019), and Crypto Economics Security Conference (2019) for their valuable feedback. This paper is based on work supported by NSF CAREER
award #1653354. AK thanks Lykke, Binance, and Amherst College for additional financial support.
A. Derivation of Results
Proposition 1
PROOF. In each period $t$, we determine the leverage constraint by setting $\tilde \lambda = \lambda$ and solving for $\Delta$. Using the formulation of $p^D_t$ from the market clearing, we have the
following equation for $\Delta$:
$\tilde \lambda \Big(z + \Delta \frac{x}{\Delta - y}\Big) = \beta(\mathcal{L} + \Delta).$
Given $\Delta>y$, this transforms to the quadratic equation for $\Delta$
$-\beta \Delta^2 + \Delta\Big( \tilde \lambda (z+x) - \beta(\mathcal{L} - y)\Big) - \tilde\lambda zy + \beta\mathcal{L} y =0.$
This is a downward facing parabola. The speculator’s leverage constraint is satisfied when the polynomial is positive. The roots, if real, bound the feasible region of the speculator’s constraint.
Due to the requirement that $\Delta > y$, the feasible set is given by $[\Delta_{\min}, \Delta_{\max}] \cap (y, \infty)$. When there are no real roots, the polynomial is never positive, and so the
constraint is unachievable.
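As a quick algebraic sanity check, the quadratic above is exactly $(\Delta - y)$ times the constraint slack $\tilde\lambda\big(z + \Delta\frac{x}{\Delta-y}\big) - \beta(\mathcal{L}+\Delta)$, so the two share a sign for $\Delta > y$. A short numeric check of this identity (all parameter values below are arbitrary illustrations, not taken from the paper):

```python
import math

# Arbitrary illustrative parameter values (not from the paper).
lam, beta, z, x, y, L = 1.0, 1.5, 5.0, 2.0, -1.0, 2.0

def quadratic(d):
    # The quadratic from the proof of Proposition 1.
    return -beta*d**2 + d*(lam*(z + x) - beta*(L - y)) - lam*z*y + beta*L*y

def constraint_slack(d):
    # lambda-tilde * (z + Delta * x/(Delta - y)) minus beta * (L + Delta).
    return lam*(z + d*x/(d - y)) - beta*(L + d)

# For any Delta != y, quadratic(Delta) == (Delta - y) * slack(Delta),
# so for Delta > y the quadratic is positive exactly where the
# speculator's constraint is satisfied.
for d in [-0.5, 0.0, 0.8, 2.0, 3.0]:
    assert abs(quadratic(d) - (d - y)*constraint_slack(d)) < 1e-9
```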
Proposition 2
PROOF. By Proposition 1, $[\Delta_{\min}, \Delta_{\max}] \cap (y, \infty)$ is indeed the feasible region. Incorporating the market clearing, the speculator decides $\Delta$ in each period $t$ by
\begin{aligned} \max \hspace{0.5cm} & r\Big(z + \Delta \frac{x}{\Delta - y}\Big) - \mathcal{L} - \Delta \\ \text{s.t.} \hspace{0.5cm} & \Delta \in [\Delta_{\min}, \Delta_{\max}] \cap (y, \infty) \end{aligned}
This optimization is solvable in closed form by maximizing over critical points. Maximizing the objective is equivalent to maximizing
$f(\Delta) = r\Delta \frac{x}{\Delta - y} - \Delta.$
We first consider the case of $\Delta$ approaching $y$ from above and show that this boundary is not relevant in the maximization. The limit is
$\lim_{\Delta \rightarrow y^+} f(\Delta) = -\infty.$
To see this, note that $\mathcal{L}_{t-1} ~=~ \bar m_{t-1} ~\geq~ w^D_t \bar m_{t-1}$, and so in order to have $\mathcal{L}_t = w^D_t \bar m_{t-1}$, we must have $\Delta<0$. Thus the sign of the term
that tends to infinity is negative. The limit is $-\infty$ because the price for the speculator to buy back DStablecoins goes to $\infty$.
To find the critical points of $f$, we set the derivative equal to zero:
$\frac{df}{d\Delta} = -\frac{\Delta^2 - 2\Delta y + y(rx +y)}{(\Delta-y)^2}=0$
Assuming $\Delta \neq y$, the solutions are the roots to the quadratic $\Delta^2 - 2y\Delta + y(rx +y)=0$. Notice that the axis of this parabola is at $\Delta=y$. When there are two real solutions,
then exactly one of them will be $>y$. Given $y\leq 0$ and $x\geq 0$ and noting $r\geq 0$, a real solution always exists and the relevant critical point is
$\Delta^* = y + \sqrt{-yrx}.$
If it is feasible, $\Delta^*$ is the solution to the speculator’s optimization problem. If $\Delta^*$ is not feasible, then we need to choose along the boundary. The possible cases are as follows.
Suppose $\Delta^* < \Delta_{\min}$. Then $\Delta_{\min}$ is feasible since $\Delta^*>y$ implies $\Delta_{\min}>y$. Since $f$ is monotone decreasing to the right of $\Delta^*$, $f(\Delta_{\min})>f(\
Delta_{\max})$, and so $\Delta_{\min}$ is the solution.
Suppose $\Delta^* > \Delta_{\max}$. By our assumption that the constraint is feasible, we have that $\Delta_{\max}$ is feasible. Since $f$ is monotone decreasing to the left of $\Delta^*$ on the
feasible region, $f(\Delta_{\max})>f(\Delta_{\min})$, and so $\Delta_{\max}$ is the solution.
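The critical point can be sanity-checked numerically: with arbitrary illustrative values of $r$, $x$ and $y<0$, a grid search over the feasible half-line $(y,\infty)$ recovers $\Delta^* = y + \sqrt{-yrx}$:

```python
import math

# Arbitrary illustrative values (y < 0, as in the proof).
r, x, y = 1.1, 2.0, -3.0

def f(d):
    # The reduced objective from the proof of Proposition 2.
    return r*d*x/(d - y) - d

d_star = y + math.sqrt(-y*r*x)

# Grid over (y, infinity), avoiding the singularity at Delta = y where
# f -> -infinity.
grid = [y + 1e-3 + k*1e-3 for k in range(20000)]
best = max(grid, key=f)
assert abs(best - d_star) < 1e-2      # grid maximizer matches Delta*
assert f(d_star) >= f(best) - 1e-9    # Delta* is at least as good
```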
Proposition 3
PROOF. The speculator’s leverage constraint is unachievable when the quadratic has no real solutions or when all real solutions are $<y$. The first case occurs when
$\Big(\tilde \lambda (z+x) - \beta(\mathcal{L}-y)\Big)^2 + 4\beta(-\tilde \lambda zy + \beta \mathcal{L} y) < 0.$
Noting that $y = -w^D \mathcal{L}$ and $\mathcal{L} - y = \mathcal{L}(2-w^D)$ and expanding and simplifying terms yields
$\beta \tilde\lambda \mathcal{L} \Big( 2zw^D + 2x(2-w^D)\Big) - (\beta\mathcal{L}w^D)^2 > \Big(\tilde \lambda(x+z)\Big)^2$
Completing the square by subtracting $4\beta\tilde\lambda\mathcal{L} x(1-w^D)$ from each side then gives the result.
Proposition 4
PROOF. Setting $z=-\Delta p_t^D = -\Delta \frac{x}{\Delta-y}$ gives the lower bound $\Delta^- := \frac{z}{z+x}y>y$.
Note that $\bar m_t = \mathcal{L}_t$, and so $y = \mathcal{L}(w^D - 1) = -w^E \mathcal{L} \leq 0.$ The term $w^D_t \bar m_{t-1}$ presents a lower bound on the size of the DStablecoin market in the
next step from the demand side, and so the speculator can’t decrease the size of the market faster than $y$, even with additional capital beyond $z$. As shown above, $\Delta \rightarrow y^+$
coincides with $p^D_t \rightarrow \infty$. The speculator pays increasingly large amounts to buy back more DStablecoins as liquidity dries in the market.
Proposition 5
PROOF. With inactive constraint, $\mathcal{L}_t = \sqrt{\mathcal{L}\mathcal{D}\hat r}$, $p^D_t = \frac{\mathcal{D}}{\sqrt{\mathcal{L}\mathcal{D}\hat r}} = \sqrt{\frac{\mathcal{D}}{\mathcal{L}\hat r}}
$, and $\frac{p^D_t}{p^D_{t-1}} = \frac{\sqrt{\frac{\mathcal{D}}{\mathcal{L}\hat r}}}{\frac{\mathcal{D}}{\mathcal{L}}} = \sqrt{\frac{\mathcal{L}}{\mathcal{D}\hat r}}.$
Theorem 1
PROOF. It is straightforward to verify $\mathcal{L}_t = \mathcal{D}\hat{r}^{\frac{2^t-1}{2^t}}$ by induction using $\mathcal{L}_t = \sqrt{\mathcal{L}_{t-1} \mathcal{D} \hat r}$. Then
$\frac{p_t^D}{p_{t-1}^D} = \sqrt{\frac{\mathcal{L}_{t-1}}{\mathcal{D}\hat r}} = \sqrt{\frac{\mathcal{D}\hat{r}^{\frac{2^{t-1}-1}{2^{t-1}}}}{\mathcal{D}\hat r}} = \hat{r}^{\frac{1}{2}\Big(\frac{2^
{t-1}-1}{2^{t-1}}-1\Big)} = \hat{r}^{-2^{-t}}.$
And so $\ln \frac{p_t^D}{p_{t-1}^D} = -2^{-t} \ln \hat r$.
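Both the closed form for $\mathcal{L}_t$ and the log-return formula are easy to confirm numerically by iterating the recursion (the values of $\mathcal{D}$ and $\hat r$ below are arbitrary; the recursion is seeded at $\mathcal{L}_0 = \mathcal{D}$, the $t=0$ case of the closed form):

```python
import math

# Arbitrary illustrative values; L_0 = D is the t = 0 instance of the
# closed form L_t = D * rhat**((2**t - 1) / 2**t).
D, rhat = 10.0, 0.7
L = D
for t in range(1, 12):
    L_prev = L
    L = math.sqrt(L_prev * D * rhat)              # L_t = sqrt(L_{t-1} D rhat)
    closed = D * rhat**((2**t - 1) / 2**t)
    assert abs(L - closed) < 1e-9
    # ln(p_t / p_{t-1}) = 0.5 * ln(L_{t-1} / (D rhat)) = -2^{-t} ln(rhat)
    log_ratio = 0.5 * math.log(L_prev / (D * rhat))
    assert abs(log_ratio - (-2**(-t)) * math.log(rhat)) < 1e-12
```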
Next, as $\bar\mu_t = (1-\delta)\bar\mu_{t-1} + \delta \ln \frac{p_t^D}{p_{t-1}^D}$, it is straightforward to verify by induction that
$\bar\mu_t = (1-\delta)^t \bar\mu_0 - \delta \ln \hat r \sum_{k=1}^t 2^{-k}(1-\delta)^{t-k}.$
Case I:
$\delta = 1/2$. The series in $\bar\mu_t$ becomes
$\sum_{k=1}^t 2^{-k}(1-\delta)^{t-k} = \sum_{k=1}^t 2^{-k} 2^{-(t-k)} = \sum_{k=1}^t 2^{-t} = \frac{t}{2^t}.$
Then we have $\bar\mu_t = 2^{-t}\Big( \bar\mu_0 - \frac{1}{2}t \ln \hat r\Big)$. The first term $\rightarrow 0$ since $0\leq \delta < 1$. The second term $\rightarrow 0$ by L’Hopital’s rule. Thus $\
bar\mu_t \rightarrow 0$ as $t\rightarrow \infty$.
The contributing term to volatility at time $t$, after substituting and simplifying terms, is
$\ln \frac{p_t^D}{p_{t-1}^D} - \bar\mu_t = \frac{t/2-1}{2^t}\ln \hat r - 2^{-t} \bar\mu_0.$
Then DStablecoin volatility evolves according to
\begin{aligned} \bar\sigma_t^2 &= (1-\delta)\bar\sigma_{t-1}^2 + \delta\Big(\ln \frac{p^D_t}{p^D_{t-1}} - \bar\mu_t\Big)^2 \\ &= \sum_{k=1}^t (1-\delta)^{t-k} \delta \Big(\ln \frac{p_k^D}{p_{k-1}^D} -\bar\mu_k\Big)^2 + (1-\delta)^t \bar\sigma_0^2 \\ &= \sum_{k=1}^t 2^{-(t-k)} \delta \Big( \frac{k/2-1}{2^k}\ln \hat r - 2^{-k} \bar\mu_0 \Big)^2 + 2^{-t} \bar\sigma_0^2 \\ &= \sum_{k=1}^t 2^{-(t-k)} \delta 2^{-2k} \Big( (k/2-1)\ln \hat r - \bar\mu_0\Big)^2 + 2^{-t} \bar\sigma_0^2 \\ &= 2^{-t} \sum_{k=1}^t 2^{-k-1} \Big( (k/2-1)\ln \hat r - \bar\mu_0\Big)^2 + 2^{-t} \bar\sigma_0^2. \end{aligned}
The second line follows from straightforward induction. As $t\rightarrow\infty$, the series converges from exponential decay. Then both terms $\rightarrow 0$ because of the factor of $2^{-t}$. Thus $
\bar\sigma_t^2 \rightarrow 0$.
Case II:
$\delta \neq 1/2$. The series in $\bar\mu_t$ is a geometric progression
\begin{aligned} \sum_{k=1}^t 2^{-k}(1-\delta)^{t-k} &= \sum_{k=1}^t (1-\delta)^t \Big(2(1-\delta)\Big)^{-k} \\ &= \frac{(1-\delta)^t\Big( \big(2(1-\delta)\big)^{-1} - \big(2(1-\delta)\big)^{-t-1}\Big)}{1- \big(2(1-\delta)\big)^{-1}} \\ &= \frac{(1-\delta)^t - 2^{-t}}{2(1-\delta)-1} \end{aligned}
Then we have $\bar\mu_t = (1-\delta)^t \bar\mu_0 - \delta \frac{(1-\delta)^t-2^{-t}}{2(1-\delta)-1} \ln \hat r$, which converges to 0 as $t\rightarrow\infty$.
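This closed form can likewise be checked against the EWMA recursion $\bar\mu_t = (1-\delta)\bar\mu_{t-1} + \delta \ln \frac{p_t^D}{p_{t-1}^D}$ with $\ln \frac{p_t^D}{p_{t-1}^D} = -2^{-t}\ln\hat r$ (the values of $\delta$, $\hat r$, and $\bar\mu_0$ below are arbitrary illustrations):

```python
import math

# Arbitrary illustrative values with delta != 1/2.
delta, rhat, mu0 = 0.3, 0.8, 0.05
mu = mu0
for t in range(1, 30):
    # EWMA update with log-return -2^{-t} ln(rhat).
    mu = (1 - delta)*mu + delta*(-2**(-t))*math.log(rhat)
    closed = ((1 - delta)**t * mu0
              - delta * ((1 - delta)**t - 2**(-t)) / (2*(1 - delta) - 1)
              * math.log(rhat))
    assert abs(mu - closed) < 1e-12

assert abs(mu) < 1e-3   # converging to zero, as claimed
```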
The contributing term to volatility at time $t$, after substituting and simplifying terms, is
$\ln \frac{p_t^D}{p_{t-1}^D} - \bar\mu_t = (1-\delta)^t \bar\mu_0 - \frac{(1-\delta)^t -2^{-t+1}(1-\delta)}{2(1-\delta)-1}\ln \hat r.$
The DStablecoin volatility evolves according to
\begin{aligned} \bar\sigma_t^2 &= \sum_{k=1}^t (1-\delta)^{t-k} \delta \Big(\ln \frac{p_k^D}{p_{k-1}^D} -\bar\mu_k\Big)^2 + (1-\delta)^t \bar\sigma_0^2 \\ &= \sum_{k=1}^t (1-\delta)^{t-k}\delta \Big(
(1-\delta)^k \bar\mu_0 - \frac{(1-\delta)^k -2^{-k+1}(1-\delta)}{2(1-\delta)-1}\ln \hat r \Big)^2 + (1-\delta)^t \bar\sigma_0^2. \\ \end{aligned}
Note that because $(1-\delta) \geq 1/2$, we have
\begin{aligned} |(1-\delta)^t - 2^{-t+1}(1-\delta)| &\leq (1-\delta)^t + 2^{-t+1}(1-\delta) \\ &\leq 2(1-\delta)^t. \end{aligned}
Thus we have
\begin{aligned} \bar\sigma_t^2 &\leq (1-\delta)^t \sum_{k=1}^t \frac{\delta}{(1-\delta)^{k}} \Big( (1-\delta)^k \bar\mu_0 + \frac{2(1-\delta)^k}{2(1-\delta)-1}\ln \hat r \Big)^2 + (1-\delta)^t \bar\sigma_0^2 \\ &= (1-\delta)^t \sum_{k=1}^t (1-\delta)^{k}\delta \Big( \bar\mu_0 + \frac{2}{2(1-\delta)-1}\ln \hat r \Big)^2 + (1-\delta)^t \bar\sigma_0^2. \end{aligned}
As $t\rightarrow\infty$, the series converges from exponential decay. Then both terms $\rightarrow 0$ because of the factor of $(1-\delta)^t$. Thus $\bar\sigma_t^2 \rightarrow 0$.
Usage: 3dCM [options] dset
Output = center of mass of dataset, to stdout.
Note: by default, the output is (x,y,z) values in RAI-DICOM
coordinates. But as of Dec, 2016, there are now
command line switches for other options (see -local*
-mask mset :Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be averaged from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-automask :Generate the mask automatically.
-set x y z :After computing the CM of the dataset, set the
origin fields in the header so that the CM
will be at (x,y,z) in DICOM coords.
-local_ijk :Output values as (i,j,k) in local orientation.
-roi_vals v0 v1 v2 ... :Compute center of mass for each blob
with voxel value of v0, v1, v2, etc.
This option is handy for getting ROI
centers of mass.
-all_rois :Don't bother listing the values of ROIs you want
the program will find all of them and produce a
full list.
-Icent :Compute Internal Center. For some shapes, the center can
lie outside the shape. This option finds the location
of the center of the voxel closest to the center of mass.
It will be the same or similar to the center of mass
if the CM lies within the volume. It will necessarily lie
on an edge voxel if the CM lies outside the volume.
-Dcent :Compute Distance Center, i.e. the center of the voxel
that has the shortest average distance to all the other
voxels. This is much more computationally expensive than
the CM or Icent centers.
-rep_xyz_orient RRR :when reporting (x,y,z) coordinates, use the
specified RRR orientation (def: RAI).
NB: this does not apply when using '-local_ijk',
and will not change the orientation of the dset
when using '-set ..'.
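For intuition, 3dCM's default output is just the intensity-weighted mean voxel coordinate (restricted to a mask, if one is given). A minimal NumPy sketch of that computation on a hypothetical toy volume, reported in local (i,j,k) indices as with -local_ijk:

```python
import numpy as np

# Hypothetical toy volume standing in for a dataset.
vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 1.0        # a symmetric 3x3x3 block of unit intensity
mask = vol > 0                  # stand-in for a '-mask'/'-automask' result

idx = np.argwhere(mask)         # (i, j, k) index of every voxel in the mask
weights = vol[mask]
# Center of mass = intensity-weighted mean of the voxel indices.
cm_ijk = (idx * weights[:, None]).sum(axis=0) / weights.sum()
print(cm_ijk)                   # [2. 2. 2.] -- the block's center
```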
Source code for stonesoup.deleter.error

"""Contains collection of error based deleters"""
from typing import Sequence

import numpy as np

from ..base import Property
from .base import Deleter


class CovarianceBasedDeleter(Deleter):
    """Track deleter based on covariance matrix size.

    Deletes tracks whose state covariance matrix (more specifically its trace)
    exceeds a given threshold.
    """

    covar_trace_thresh: float = Property(doc="Covariance matrix trace threshold")
    mapping: Sequence[int] = Property(
        default=None,
        doc="Track state vector indices whose corresponding "
            "covariances' sum is to be considered. Defaults to "
            "None, whereby the entire track covariance trace is "
            "considered.")

    def check_for_deletion(self, track, **kwargs):
        """Check if a given track should be deleted.

        A track is flagged for deletion if the trace of its state covariance
        matrix is higher than :py:attr:`~covar_trace_thresh`.

        Parameters
        ----------
        track : Track
            A track object to be checked for deletion.

        Returns
        -------
        bool
            `True` if track should be deleted, `False` otherwise.
        """
        diagonals = np.diag(track.state.covar)
        if self.mapping:
            track_covar_trace = np.sum(diagonals[self.mapping])
        else:
            track_covar_trace = np.sum(diagonals)

        if track_covar_trace > self.covar_trace_thresh:
            return True
        return False
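The deletion rule itself is easy to exercise in isolation; the sketch below (hypothetical covariance and threshold values, plain NumPy rather than the Stone Soup classes) mirrors the trace test in check_for_deletion:

```python
import numpy as np

# Hypothetical 4-state track covariance and threshold.
covar = np.diag([4.0, 0.5, 9.0, 0.5])
covar_trace_thresh = 10.0
mapping = [0, 2]                 # e.g. consider only the position variances

diagonals = np.diag(covar)
full_trace = np.sum(diagonals)             # 14.0 -> exceeds the threshold
masked_trace = np.sum(diagonals[mapping])  # 13.0 -> also exceeds it

# In both configurations this track would be flagged for deletion.
print(full_trace > covar_trace_thresh, masked_trace > covar_trace_thresh)
# True True
```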
Pandas Percentile: Calculate Percentiles of a Dataframe
Calculating percentiles is a crucial aspect of data analysis, as it allows comparing individual data points to the overall distribution of a dataset. One of the key functions that Pandas provides is
the ability to compute percentiles flexibly and efficiently using the quantile function.
In Pandas, the quantile() function allows users to calculate various percentiles within their DataFrame with ease. By specifying the desired percentile value, or even an array of percentile values,
analysts can immediately identify key values within their dataset and draw important insights.
In this comprehensive guide, we’ll delve into the depths of understanding how to calculate percentiles in Pandas DataFrames. We’ll start with a detailed explanation of how percentiles can be
calculated using Pandas. We’ll also provide practical examples and use cases to help you apply these concepts in your own data analysis tasks.
How to Calculate Percentiles and Quantiles in Pandas
The Pandas library provides a useful function quantile() for working with percentiles and quantiles in DataFrames. This helps in understanding the central tendency and dispersion of the dataset.
In this section, we’ll discuss the quantile() method, its parameters, and an alternative solution using NumPy.
1. Quantile Method
The quantile() function in Pandas is used to calculate quantiles for a given Pandas Series or DataFrame. Let’s look at its syntax.
DataFrame.quantile(q, axis, numeric_only, interpolation)
• q: The quantile to calculate, a float (or list of floats) between 0 and 1.
• axis (Optional): The axis to calculate the percentile along. 0 (default) for columns and 1 for rows.
• numeric_only (Optional): When set to True, calculates the percentile values on numeric-only columns or rows. [True|False]
• interpolation (Optional): The interpolation method to use when the percentile falls between two values. [linear, lower, higher, midpoint, nearest]
You can use this function to calculate either a single percentile value or an array of multiple percentile values. For example, let’s calculate the 75th percentile of the DataFrame below:
import pandas as pd
data = {'Col1': [1,2,3,4,5,6,7,8,9,10], 'Col2': [11,12,13,14,15,16,17,18,19,20]}
df = pd.DataFrame(data)
seventy_fifth_percentile = df.quantile(0.75)
This code will return the 75th percentile of both columns in the df DataFrame. You can change the 0.75 to any value between 0 and 1 to calculate other percentiles as well.
You can also specify multiple percentiles at once by passing a list of quantiles within the range of 0 to 1. For example:
import pandas as pd
# Creating a sample DataFrame
data = {'A': [10, 20, 30, 40, 50], 'B': [15, 25, 35, 45, 55]}
df = pd.DataFrame(data)
# Calculate multiple percentiles
percentiles = [0.25, 0.5, 0.75]
result = df.quantile(percentiles)
This code will produce the following output, which shows the 25th, 50th, and 75th percentiles for each column of the DataFrame:
A B
0.25 20.0 25.0
0.50 30.0 35.0
0.75 40.0 45.0
It is important to remember that the quantile() function returns a DataFrame when calculating multiple percentiles. Let’s look at some of the other arguments that the quantile() function takes:
2. Interpolation Method
When a quantile falls between two data points, Pandas allows you to choose from different interpolation methods: ‘linear’, ‘lower’, ‘higher’, ‘midpoint’, and ‘nearest’. By default, Pandas uses linear interpolation.
Here’s a brief explanation of each method:
• linear: This method performs linear interpolation between the two closest data points.
• lower: Returns the lower of the two closest data points.
• higher: Returns the higher of the two closest data points.
• midpoint: Returns the midpoint of the two closest data points.
• nearest: Returns the closest data point.
For example, if you want to recalculate the 75th percentile using the ‘higher‘ interpolation method, you can modify the previous code snippet like this:
seventy_fifth_percentile = df.quantile(0.75, interpolation='higher')
Col1 8
Col2 18
Name: 0.75, dtype: int64
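More generally, the five interpolation methods can be compared side by side. The following is a quick sketch (not from the original article) using the 1–10 column from earlier, where the 75th percentile falls between 7 and 8:

```python
import pandas as pd

s = pd.Series(range(1, 11))  # the values 1..10
# The 75th percentile lands at position 0.75 * (10 - 1) = 6.75 in the sorted
# data, i.e. between 7 and 8, so each method resolves it differently.
for method in ["linear", "lower", "higher", "midpoint", "nearest"]:
    print(method, s.quantile(0.75, interpolation=method))
# linear 7.75, lower 7, higher 8, midpoint 7.5, nearest 8
```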
Numeric Columns Only
When set to True, the numeric_only argument restricts the calculation to numeric columns or rows, automatically excluding axes with non-numerical values like strings, dates, etc.
Here’s an example:
import pandas as pd
data = {'col1': [1, 2, 3, 4], 'col2': [5, 6, 7, 8], 'col3': ['a', 'b', 'c', 'd']}
df = pd.DataFrame(data)
df.quantile(0.6, numeric_only=True)
This would return the following output, excluding the non-numeric ‘col3‘:
col1 2.8
col2 6.8
Name: 0.6, dtype: float64
The axis parameter specifies whether the given quantile will be calculated column-wise or row-wise. Its default value is 0 which calculates it column-wise.
Let’s calculate the previous example row-wise:
df.quantile(0.6, numeric_only=True, axis=1)
0 3.4
1 4.4
2 5.4
3 6.4
Name: 0.6, dtype: float64
This returns the 60th percentile values for each row in the DataFrame.
3. Using NumPy
You can also use the NumPy library to calculate percentiles and quantiles. NumPy provides a percentile function, which can be used like this:
import numpy as np
data_values = np.array([1,2,3,4,5,6,7,8,9,10])
seventy_fifth_percentile = np.percentile(data_values, 75)
This code snippet calculates the 75th percentile of the given dataset using NumPy’s percentile() function.
In conclusion, Pandas and NumPy both provide efficient ways to calculate percentiles and quantiles. Choosing between the two depends on your specific needs and preferences, as well as which library
is more commonly used in your project.
How to Generate Summary Statistics and Descriptive Information Using Pandas
In this section, we’ll discuss how to calculate summary statistics and descriptive information using Pandas in Python. We’ll explore the describe and groupby methods.
1. Describe Method
The describe() method is a powerful function in Pandas, used to generate summary statistics on a dataset. This includes measures such as count, mean, standard deviation, min, max, and specific
percentiles of data. You can use it with both numeric and object data types, and the output will vary depending on the input provided.
Here’s an example of the describe() method applied to a DataFrame:
import pandas as pd
data = {'col1': [1, 2, 3, 4], 'col2': [5, 6, 7, 8]}
df = pd.DataFrame(data)
df.describe()
This would produce the following output:
col1 col2
count 4.0 4.0
mean 2.5 6.5
std 1.290994 1.290994
min 1.0 5.0
25% 1.75 5.75
50% 2.5 6.5
75% 3.25 7.25
max 4.0 8.0
2. Groupby Method
You can also calculate summary statistics for specific groups using the groupby function. This can be helpful when you want to analyze data based on categories or specific conditions.
Here’s an example:
import pandas as pd
data = {'Category': ['A', 'A', 'B', 'B'], 'Value': [1, 2, 3, 4]}
df = pd.DataFrame(data)
grouped_df = df.groupby('Category').describe()
This would yield the following output:
A count 2.0
mean 1.5
std 0.707107
min 1.0
25% 1.25
50% 1.5
75% 1.75
max 2.0
B count 2.0
mean 3.5
std 0.707107
min 3.0
25% 3.25
50% 3.5
75% 3.75
max 4.0
The groupby() function groups the rows by each unique value in the ‘Category‘ column, and describe() is then applied to each group.
How to Calculate Percentile Rank Using Pandas
The Percentile Rank is a value that tells us the percentage of values in a dataset that are equal to or below a certain value. We can calculate it using the rank() function in Pandas.
The syntax is as follows:
df['rank_column_name'] = df['column_name'].rank(pct=True)
• rank_column_name: name of the new column that will display the percentile rank.
• column_name: Name of the DataFrame column to be ranked.
• pct: If set to True, it will display the rank in percentage(%) format.
The function also takes in parameters like the numeric_only argument. Let’s go through a few examples that demonstrate how to calculate percentiles using the Pandas library.
Example 1
In this example, we’ll calculate the percentile rank of a given column in a Pandas DataFrame. Let’s consider a sample dataset containing student scores.
To calculate the percentile rank of the ‘Score’ column, you can use the following code snippet:
import pandas as pd
data = {"Student": ["A", "B", "C", "D", "E"],
"Score": [75, 85, 90, 60, 95]}
df = pd.DataFrame(data)
df["percent_rank"] = df["Score"].rank(pct=True)
Student Score percent_rank
0 A 75 0.4
1 B 85 0.6
2 C 90 0.8
3 D 60 0.2
4 E 95 1.0
The resulting DataFrame includes an additional column, “percent_rank” that displays the percentile ranks of the scores.
Example 2
In this example, we’ll group data by categories and then calculate percentile ranks within those groups.
We have a dataset with scores and their respective categories. Here’s how to calculate percentile ranks within each group using the groupby() function in Pandas:
import pandas as pd
data = {"Category": ["Hot", "Cold", "Hot", "Cold", "Hot", "Cold"],
"Score": [80, 90, 70, 95, 85, 89]}
df = pd.DataFrame(data)
df["percent_rank"] = df.groupby("Category")["Score"].transform("rank", pct=True)
Category Score percent_rank
0 Hot 80 0.666667
1 Cold 90 0.666667
2 Hot 70 0.333333
3 Cold 95 1.000000
4 Hot 85 1.000000
5 Cold 89 0.333333
The DataFrame now includes the percentile rank of each score within its own category.
Note: If you want to remove this new column, you can easily use any one of the drop commands.
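As a quick illustration (a sketch, not from the article), drop(columns=...) removes the helper column again:

```python
import pandas as pd

df = pd.DataFrame({"Score": [75, 85, 90], "percent_rank": [1 / 3, 2 / 3, 1.0]})
df = df.drop(columns=["percent_rank"])  # remove the helper column
print(df.columns.tolist())  # ['Score']
```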
How to Visualize Percentiles With the Seaborn Library
If you’d like a more visual approach to working with percentiles and distributions, consider using the Seaborn library in conjunction with Pandas. Seaborn is a Python data visualization library built
on top of Matplotlib, offering various statistical graphing options.
A popular function in Seaborn for displaying percentiles is the boxplot(). Here’s an example of using Seaborn to create a box plot showcasing the distribution of scores in each category:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
data = {"Category": ["Hot", "Cold", "Hot", "Cold", "Hot", "Cold"],
"Score": [80, 90, 70, 95, 85, 89]}
df = pd.DataFrame(data)
sns.boxplot(x="Category", y="Score", data=df)
plt.show()
This code produces a box plot in which the box represents the interquartile range (IQR), covering the 25th to 75th percentiles, and the whiskers denote the minimum and maximum values within 1.5 times the IQR. Outliers are also indicated if they exist.
You can learn more about this in this video on how to create enhanced box plots in Power BI using Python.
Final Thoughts
In conclusion, having a thorough mastery of the Pandas percentile calculation using the quantile function is essential for anyone involved in data analysis.
By utilizing the quantile() function provided by Pandas, you can easily identify the distribution and patterns within your data, gaining a deeper understanding of its characteristics. With its
ability to perform complex calculations in just a few lines of code, this powerful tool can enhance data-driven decision-making and streamline the entire data analysis process.
Whether you’re working with financial data, survey results, or any other domain, the ability to compute percentiles will empower you to make informed decisions and uncover valuable information!
If you enjoyed reading this, you can also check out this article on How to Export a Pandas DataFrame to Excel in Python.
Frequently Asked Questions
What is Pandas percentile?
In Pandas, a percentile is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations falls. For example, the 20th percentile is the
value (or score) below which 20% of the observations may be found. The Pandas function quantile() is used to find this value. The argument to quantile() is a number between 0 and 1, where 0.25
represents the 25th percentile, 0.5 represents the median (or 50th percentile), and so on.
Does the quantile() function work on non-numeric data types?
No, the quantile() function in Pandas does not work on non-numeric data types. Percentiles and quantiles are statistical measures that are defined in terms of ordered numerical data, so they don’t
make sense for non-numeric data types like strings or dates. However, you can use it on boolean data, as booleans are treated as 0s and 1s (False and True respectively).
Can the quantile() function be applied to Panda Series?
Yes, the quantile() function can be applied to a Pandas Series. A Series in Pandas is a one-dimensional labeled array capable of holding any data type. If your Series contains numeric data, you can
use the quantile() function to compute the value at a specific percentile. For example, if s is a Pandas Series, you can find the 25th percentile with s.quantile(0.25). This will return the value
below which 25% of the data in the Series falls.
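A minimal sketch of the Series case described above:

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40, 50])
q1 = s.quantile(0.25)
# Position 0.25 * (5 - 1) = 1 lands exactly on the second value,
# so no interpolation is needed here.
print(q1)  # 20.0
```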
How to find the 25 percentile in Pandas?
To find the 25th percentile (also known as the first quartile) in a Pandas DataFrame or Series, you can use the quantile() function with 0.25 as the argument. For instance, if you have a DataFrame df
and you want to find the 25th percentile of the column ‘column_name‘, you would use df[‘column_name’].quantile(0.25). This will return the value at the 25th percentile for the specified column.
Mastering Data Analytics with Matplotlib in Python | {"url":"https://blog.enterprisedna.co/pandas-percentile-calculate-percentiles-of-a-dataframe/","timestamp":"2024-11-05T16:24:41Z","content_type":"text/html","content_length":"512250","record_id":"<urn:uuid:2506c5eb-ecc7-4c1a-9675-526a0ad2bfe9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00269.warc.gz"} |
Proof (Truth) - Psynso
A proof is sufficient evidence or argument for the truth of a proposition. The concept arises in a variety of areas, with both the nature of the evidence or justification and the criteria for
sufficiency being area-dependent. In the area of oral and written communication such as conversation, dialog, rhetoric, etc., a proof is a persuasive perlocutionary speech act, which demonstrates the
truth of a proposition. In any area of mathematics defined by its assumptions or axioms, a proof is an argument establishing a theorem of that area via accepted rules of inference starting from those
axioms and other previously established theorems. The subject of logic, in particular proof theory, formalizes and studies the notion of formal proof. In the areas of epistemology and theology, the
notion of justification plays approximately the role of proof, while in jurisprudence the corresponding term is evidence, with burden of proof as a concept common to both philosophy and law.
In most areas, evidence is drawn from experience of the world around us, with science obtaining its evidence from nature, law obtaining its evidence from witnesses and forensic investigation, and so
on. A notable exception is mathematics, whose evidence is drawn from a mathematical world begun with postulates and further developed and enriched by theorems proved earlier.
As with evidence itself, the criteria for sufficiency of evidence are also strongly area-dependent, usually with no absolute threshold of sufficiency at which evidence becomes proof. The same
evidence that may convince one jury may not persuade another. Formal proof provides the main exception, where the criteria for proofhood are ironclad and it is impermissible to defend any step in the
reasoning as “obvious”; for a well-formed formula to qualify as part of a formal proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous
well-formed formulae in the proof sequence.
Proofs have been presented since antiquity. Aristotle used the observation that patterns of nature never display the machine-like uniformity of determinism as proof that chance is an inherent part of
nature. On the other hand, Thomas Aquinas used the observation of the existence of rich patterns in nature as proof that nature is not ruled by chance. Augustine of Hippo provides a good case study
in early uses of informal proofs in theology. He argued that given the assumption that Christ had risen, there is resurrection of the dead and he provided further arguments to prove that the death of
Jesus was for the salvation of man.
Proofs need not be verbal. Before Galileo, people took the apparent motion of the Sun across the sky as proof that the Sun went round the Earth. Suitably incriminating evidence left at the scene of a
crime may serve as proof of the identity of the perpetrator. Conversely, a verbal entity need not assert a proposition to constitute a proof of that proposition. For example, a signature constitutes
direct proof of authorship; less directly, handwriting analysis may be submitted as proof of authorship of a document. Privileged information in a document can serve as proof that the document’s
author had access to that information; such access might in turn establish the location of the author at certain time, which might then provide the author with an alibi. | {"url":"https://psynso.com/proof-truth/","timestamp":"2024-11-05T10:19:47Z","content_type":"text/html","content_length":"115348","record_id":"<urn:uuid:25cefdab-0dc4-480e-9096-f2478a1166cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00585.warc.gz"} |
Elena Kosygina : Excited random walks
The idea behind excited random walks (ERWs), roughly speaking, is to take a well-known underlying process (such as, for example, simple symmetric random walk (SSRW)) and modify its transition
probabilities for the "first few" visits to every site of the state space. These modifications can be deterministic or random. The resulting process is not markovian, and its properties can be very
different from those of the underlying process. I shall give a short review of some of the known results for ERW (with SSRW as underlying process) on the d-dimensional integer lattice and then
concentrate on a specific model for d=1. For this model we can give a complete picture including functional limit theorems.
Comments Disabled For This Video | {"url":"https://www4.math.duke.edu/media/watch_video.php?v=3efd953bac919548da734a29c1d776ab","timestamp":"2024-11-11T10:28:10Z","content_type":"text/html","content_length":"47183","record_id":"<urn:uuid:79bc47d0-6e5a-4421-8c1c-39fbb19cbd82>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00594.warc.gz"} |
Add 3 Numbers by Making 10 First Worksheets (First Grade, printable)
Printable “3 addends” worksheets for first & second grade:
Add 3 numbers (make 10) (eg. 5 + 3 + 5)
Add 3 single digit numbers (eg. 4 + 7 + 8)
3 Addends Word Problems
Online Worksheets
3 Addends, Sprint 1
3 Addends, Sprint 2
Add 3 Numbers by Making a 10 First Worksheets
First Grade math worksheets to help students practice making a 10 first when adding 3 numbers or addends.
How to use the make ten strategy when adding 3 numbers?
Try to look for number pairs or number bonds that make up 10 and add that first to get a ten before adding the third number.
For example, to add 5 + 2 + 8, combine 2 and 8 to make 10, then add 5: 5 + (2 + 8) = 5 + 10 = 15.
(Have a look at these worksheets if you need to revise the number pairs of 10. Have a look at these worksheets if you need to practice adding to a ten.)
Have a look at this video if you need help using the Make 10 strategy when adding 3 numbers.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Add 3 Numbers by Making a 10 First Worksheets.
More Add 3 Numbers by a Making 10 First Worksheets
Add the numbers. Try to make a 10 first.
Add 3 Numbers by Making 10 First Worksheet #1
(Answers on the second page)
Add 3 Numbers by Making 10 First Worksheet #2
(Answers on the second page)
Add 3 Numbers by Making 10 First #1 (Interactive)
Add 3 Numbers by Making 10 First #2 (Interactive)
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | {"url":"https://www.onlinemathlearning.com/add-3-numbers-make-10-first.html","timestamp":"2024-11-05T03:23:06Z","content_type":"text/html","content_length":"38311","record_id":"<urn:uuid:2d265876-2a91-4325-9b83-bbe6cf870fc7>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00641.warc.gz"} |
Ordinal Numbers Pdf Download - OrdinalNumbers.com
Ordinal Numbers Pdf Download – A limitless number of sets can be easily enumerated using ordinal numerals as a tool. They can also be used to generalize ordinal numbers. 1st The foundational idea of
mathematics is the ordinal. It is a numerical number that indicates where an object is in a list. The ordinal number … Read more | {"url":"https://www.ordinalnumbers.com/tag/ordinal-numbers-pdf-download/","timestamp":"2024-11-02T11:26:07Z","content_type":"text/html","content_length":"45948","record_id":"<urn:uuid:e810cf3c-b545-4146-875f-3f2af7066283>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00584.warc.gz"} |
What is Pythagoras Theorem? Formula, Example, Application
Like many other fundamental concepts, the Pythagoras theorem has its own relevance in mathematics: it describes the relation between the sides of a right-angled triangle. The theorem is attributed to the famous Greek mathematician Pythagoras.
This formula states the relationship between base, perpendicular and hypotenuse of a right angled triangle. Let’s discuss the same in detail.
Pythagoras Theorem
According to the Pythagoras Theorem, “In a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides, which are known as the base and the perpendicular”.
Let’s us understand the concept.
A right triangle has three sides: the hypotenuse, the perpendicular, and the base. The side opposite the 90-degree angle is called the hypotenuse, which is also the longest side.
So, according to Pythagoras theorem:
$(\text{Hypotenuse})^2 = (\text{Base})^2 + (\text{Perpendicular})^2$
In the right triangle ABC, right-angled at B, side AC is the hypotenuse (the longest side), side BC serves as the base, and side AB is the perpendicular.
The base and perpendicular can be used interchangeably.
So, according to Pythagoras Theorem
$(AC)^2 = (BC)^2 + (AB)^2$
So, this Pythagorean Theorem builds a relationship between all the three sides of a right angled triangle.
We can use this theorem to find the unknown side of a triangle when two of its sides are given.
Let us take some examples:
Example 1
Question. Find the hypotenuse of a right triangle if the base is 3 cm and perpendicular is 4 cm ?
Solution: According to Pythagoras theorem
$(\text{Hypotenuse})^2 = (\text{Base})^2 + (\text{Perpendicular})^2$
$= (3)^2 + (4)^2$
$= 9 + 16$
$= 25$
Hypotenuse = $\sqrt{25}$ = 5
Therefore, the hypotenuse of the triangle is 5 cm.
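The same computation can be done in Python; math.hypot implements exactly this formula (a quick sketch, not part of the original article):

```python
import math

# Hypotenuse for base 3 and perpendicular 4, as in Example 1:
hypotenuse = math.hypot(3, 4)  # equivalent to math.sqrt(3**2 + 4**2)
print(hypotenuse)  # 5.0
```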
Example 2
Question: Find the base of a right triangle if the hypotenuse of the triangle is 17 cm and the perpendicular is 15 cm?
Solution: As it’s a right-angled triangle, we can apply the Pythagoras theorem:
$(\text{Hypotenuse})^2 = (\text{Base})^2 + (\text{Perpendicular})^2$
By putting the given values in the above relation, we get:
$(17)^2 = (\text{Base})^2 + (15)^2$
$289 = (\text{Base})^2 + 225$
$289 - 225 = (\text{Base})^2$
$(\text{Base})^2 = 64$
Base = $\sqrt{64}$ = 8
Therefore, the base of the triangle is 8 cm.
So, this is how we can use the Pythagoras theorem to find the unknown side of a right triangle. The theorem is not only used in geometry; it appears in real-life scenarios as well.
Real Life Applications of Pythagoras Theorem
• We can use Pythagoras theorem to check whether a triangle is a right triangle or not.
• In ocean studies, Pythagorean formula is used to calculate the speed of the sound waves in water/sea.
• Metrological department and aerospace industry uses this theorem to determine the sound source and its range.
• In navigation this theorem is used to find the shortest distance between given points.
• In construction, architecture and planning, this theorem is used to calculate the slope of roof, dam work, drainage system, etc .
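The first application listed above — checking whether three side lengths form a right triangle — can be sketched in Python (an illustrative helper, not from the source):

```python
import math

def is_right_triangle(a, b, c):
    """Return True if side lengths a, b, c satisfy the Pythagorean relation.

    The longest side is treated as the hypotenuse; math.isclose allows
    for floating-point side lengths.
    """
    x, y, z = sorted((a, b, c))
    return math.isclose(x * x + y * y, z * z)

print(is_right_triangle(3, 4, 5))    # True
print(is_right_triangle(8, 15, 17))  # True
print(is_right_triangle(2, 3, 4))    # False
```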
Read More – Mathematics Questions
View More – Useful links for Your Child’s Development
Unleash the Power of visualization to break tough concepts
Wanna be the next Maths wizard? Discover the new way of learning concepts with real-life Visualization techniques and instant doubt resolutions. | {"url":"https://telgurus.co.uk/what-is-pythagoras-theorem/","timestamp":"2024-11-08T21:20:27Z","content_type":"text/html","content_length":"119403","record_id":"<urn:uuid:53c9ee07-4c38-443c-ae4c-3d680a01eb91>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00100.warc.gz"} |
What is Dirichlet boundary conditions?
Brian Rogers 2022-10-28 • comments off
What is Dirichlet boundary conditions?
The boundary is usually denoted as ∂C. In a two-dimensional domain described by x and y, a typical Dirichlet boundary condition would be f(x, y) = g(x, y), where (x, y) ∈ ∂C. Here the function g may not only depend on x and y, but also on additional independent variables, e.g., the time t.
What are Dirichlet and Neumann conditions?
In thermodynamics, Dirichlet boundary conditions consist of surfaces (in 3D problems) held at fixed temperatures. Neumann boundary conditions. In thermodynamics, the Neumann boundary condition
represents the heat flux across the boundaries.
What is Dirichlet boundary value problem?
In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region that takes prescribed values on
the boundary of the region.
How do you solve wave equations with Neumann boundary conditions?
The (Neumann) boundary conditions are u_x(0, t) = u_x(L, t) = 0. Writing u(x, t) = X(x)T(t), these become u_x(0, t) = X'(0)T(t) = 0 and u_x(L, t) = X'(L)T(t) = 0. Since we don't want T to be identically zero, we get X'(0) = 0 and X'(L) = 0. The terms of the resulting series solution have the form (α_n cos(knπt/L) + β_n (L/(knπ)) sin(knπt/L)) cos(nπx/L).
What is meant by Dirichlet?
What is the Dirichlet model?
The Dirichlet model describes patterns of repeat purchases of brands within a product. category. It models simultaneously the counts of the number of purchases of each brand over. a period of time,
so that it describes purchase frequency and brand choice at the same time.
What is Dirichlet formula?
In many situations, the dissipation formula which assures that the Dirichlet integral of a function u is expressed as the sum of -u(x)[Δu(x)] seems to play an essential role, where Δu(x) denotes the
(discrete) Laplacian of u. This formula can be regarded as a special case of the discrete analogue of Green’s Formula.
What are boundary conditions in differential equations?
In mathematics, in the field of differential equations, a boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. A solution
to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
What are Dirichlet Neumann and Robbins boundary conditions?
It is possible to describe the problem using other boundary conditions: a Dirichlet boundary condition specifies the values of the solution itself (as opposed to its derivative) on the boundary,
whereas the Cauchy boundary condition, mixed boundary condition and Robin boundary condition are all different types of …
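As a small numerical illustration (an assumed example, not from the answer above), Dirichlet values enter a finite-difference discretization through the right-hand side of the linear system. Here we solve u''(x) = 0 on [0, 1] with Dirichlet conditions u(0) = 0 and u(1) = 1, whose exact solution is u(x) = x:

```python
import numpy as np

n = 5                # number of interior grid points
h = 1.0 / (n + 1)
# Tridiagonal matrix for the second difference u_{i-1} - 2*u_i + u_{i+1}.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 1:
        A[i, i + 1] = 1.0
b = np.zeros(n)
b[0] -= 0.0          # Dirichlet value u(0) = 0 moved to the right-hand side
b[-1] -= 1.0         # Dirichlet value u(1) = 1 moved to the right-hand side
u_inner = np.linalg.solve(A, b)
# Interior values should match the exact solution u(x) = x at x = h, 2h, ..., nh.
```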
What is Dirichlet and Neumann boundary conditions?
1. Dirichlet boundary conditions specify the value of the function on a surface. 2. Neumann boundary conditions specify the normal derivative of the function on a surface. 3. Robin boundary conditions specify a linear combination of the function and its normal derivative.
Where is Dirichlet distribution used?
Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models.
How accurate is kinetic inRide? Using the inRide pod and a magnet in the resistance unit roller, we take speed at the wheel and translate that into power… | {"url":"https://rf-onlinegame.com/what-is-dirichlet-boundary-conditions/","timestamp":"2024-11-02T23:37:00Z","content_type":"text/html","content_length":"58335","record_id":"<urn:uuid:b06215f7-188d-491f-8beb-bec7e68882b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00231.warc.gz"} |
The Stacks project
Definition 55.3.1. A numerical type $T$ is given by
\[ n, m_ i, a_{ij}, w_ i, g_ i \]
where $n \geq 1$ is an integer and $m_ i$, $a_{ij}$, $w_ i$, $g_ i$ are integers for $1 \leq i, j \leq n$ subject to the following conditions
1. $m_ i > 0$, $w_ i > 0$, $g_ i \geq 0$,
2. the matrix $A = (a_{ij})$ is symmetric and $a_{ij} \geq 0$ for $i \not= j$,
3. there is no proper nonempty subset $I \subset \{ 1, \ldots , n\} $ such that $a_{ij} = 0$ for $i \in I$, $j \not\in I$,
4. for each $i$ we have $\sum _ j a_{ij}m_ j = 0$, and
5. $w_ i | a_{ij}$.
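Conditions (1)–(5) are all finitely checkable. Here is an illustrative checker in Python (a sketch, not part of the Stacks project; 0-based indices; condition (3) becomes a graph-connectedness test on the off-diagonal support of A):

```python
import numpy as np

def is_numerical_type(m, A, w, g):
    """Check conditions (1)-(5) of the definition of a numerical type."""
    m, A, w, g = (np.asarray(x) for x in (m, A, w, g))
    n = len(m)
    # (1) positivity conditions
    if not (np.all(m > 0) and np.all(w > 0) and np.all(g >= 0)):
        return False
    # (2) A symmetric with nonnegative off-diagonal entries
    if not np.array_equal(A, A.T) or np.any(A[~np.eye(n, dtype=bool)] < 0):
        return False
    # (4) sum_j a_ij m_j = 0 for each i
    if np.any(A @ m != 0):
        return False
    # (5) w_i divides a_ij
    if np.any(A % w[:, None] != 0):
        return False
    # (3) connectedness: no proper nonempty I with a_ij = 0 for i in I, j not in I
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if j != i and A[i, j] != 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

# A minimal example: two components meeting with intersection number 1.
print(is_numerical_type([1, 1], [[-1, 1], [1, -1]], [1, 1], [0, 0]))  # True
print(is_numerical_type([1, 1], [[0, 0], [0, 0]], [1, 1], [0, 0]))    # False (fails condition (3))
```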
The tag you filled in for the captcha is wrong. You need to write 0C6Y, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/0C6Y","timestamp":"2024-11-07T15:58:30Z","content_type":"text/html","content_length":"33985","record_id":"<urn:uuid:7ed3191d-3c30-4a96-856d-2e6b63f4ecfa>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00473.warc.gz"} |
Generating Z-Scores in R - Statistical Analysis Tutorial | BridgeText - Online Dissertation Writing Service and Help
For a normally distributed variable, a z score assigns a number to each data point based on its distance, in standard deviations, from the mean. For example, if the mean of variable iq is 100, with
sd 15, then an iq of 100 has a z score of 0, an iq of 85 has a z score of -1, and an iq of 115 has a z score of 1. In this blog, we’ll show you how to use R code to create z scores for each value in
a normal distribution.
Generate Normally Distributed Data
First, let’s generate a normally distributed variable, iq, with 10,000 observations, a mean of 100, and a standard deviation of 15.
iq2 <- rnorm(10000, mean=100, sd=15)
iq <- round(iq2)
You can confirm that R has generated an IQ variable with a mean very close to 100 and a standard deviation very close to 15. You can also generate a histogram to visually confirm the normality of IQ.
Create z Scores
Now try the following code, which will take every value of IQ, subtract the mean from it, and divide it by the standard deviation, leading to the generation of z scores.
iqmean <- mean(iq)
iqsd <- sd(iq)
z <- (iq-iqmean)/iqsd
iqdata <- data.frame(iq, z)
head(iqdata, n=20)
Confirm that each IQ score now has a z score associated with it; the head() call above prints the first 20 rows of the data frame.
You used iq2, iqmean, and iqsd to help you get to the z scores, so you needn’t include these variables in your data frame.
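For readers outside R, the same standardization can be cross-checked in plain Python. This is a hypothetical sketch, not part of the original tutorial; the variable names simply mirror the R code above:

```python
import random
import statistics

random.seed(0)
# Mirror the R example: 10,000 draws from N(mean=100, sd=15), rounded.
iq = [round(random.gauss(100, 15)) for _ in range(10_000)]

iqmean = statistics.mean(iq)
iqsd = statistics.stdev(iq)
z = [(x - iqmean) / iqsd for x in iq]

# By construction the z scores have mean ~0 and standard deviation ~1.
print(round(statistics.mean(z), 4), round(statistics.stdev(z), 4))
```

In R itself, scale(iq) performs the same centering and scaling in a single call.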
BridgeText can help you with all of your statistical analysis needs.
| {"url":"https://www.bridgetext.com/generating-z-scores-in-r","timestamp":"2024-11-02T09:24:33Z","content_type":"application/xhtml+xml","content_length":"33890","record_id":"<urn:uuid:aeb82d3a-5ba3-4b13-bd80-f0406b0b0077>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00895.warc.gz"}
Jayce Getz : An approach to nonsolvable base change
In the 1970's, inspired by the work of Saito and Shintani, Langlands gave a definitive treatment of base change for automorphic representations of the general linear group in two variables along
prime degree extensions of number fields. To give some idea of the depth and utility of his work, one need only remark that some consequences of it were crucial in Wiles' proof of Fermat's last
theorem. In this talk we will report on work in progress on base change for automorphic representations of GL(2) along nonsolvable Galois extensions of number fields. We will attempt to explain this
assuming only a little algebraic number theory.
• Category: Presentations
• Duration: 01:14:50
• Date: February 8, 2012 at 11:55 AM
• Views: 211
| {"url":"https://www4.math.duke.edu/media/watch_video.php?v=3b660e2cff7c4381a26cb1bc59a62c5e","timestamp":"2024-11-11T18:15:55Z","content_type":"text/html","content_length":"46984","record_id":"<urn:uuid:682b79d1-dfed-4311-be60-2b093d418b18>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00168.warc.gz"}
Are there guarantees for the reliability and accuracy of algorithms in C programming solutions provided for aerospace simulations for unmanned aerial vehicle (UAV) operations? | Hire Someone To Do C Programming Assignment
Are there guarantees for the reliability and accuracy of algorithms in C programming solutions provided for aerospace simulations for unmanned aerial vehicle (UAV) operations? The answer is good. The
key is precisely that they are independent of any practical constraints on the design of the code. With minimal assumptions for every model system, a prototype UAV (with a prototype of its own) and
its implementation follow the path given in the Section 3 of this outline paper. The previous paper focused on two important theoretical issues beyond simply the computational burden of designing an
initial simulation — the specification of the initial conditions for the production process and its implementation — and whether there are any trade-offs in their specification. It was underlined
here that these two conditions of formalism should be formally verified. The problem addressed in this paper is, however, not simply conceptual. There are as yet, no general guarantees of the speed
of construction and implementation for an aircraft design — there are no guarantees for the quality of the solution. The main obstacle is, however, that when a simulation of an actual space craft is
performed, there is some additional cost and additional potential biases in the design of the spacecraft, not taken into account when estimating a cost-conscious attitude and aerodynamic drag to
ensure the craft uses the correct nominal dimension of the runway model. Consequently such an evaluation of the maneuverability of space craft is required to help specify their proper height and
length. This is not to say that we are comparing methods designed to measure not-at-herself damage, but rather that the design of the simulation code does not contain any set of limitations and
limitations for which a mathematical design can be expected to be exact. However the following discussion is adapted from the proposal of [Krishnamapramanam, R. L. (1989) Improving the Simulation of
Spacecraft Design, in Proceedings of the Palazeti Society C-71, 1413-1420.) It turns out that the best results offer no guarantee concerning the fact that the code in question is in fact similar to
that of the real design presented in this paper. The mainAre there guarantees for the reliability and accuracy of algorithms in C programming solutions provided for aerospace simulations for unmanned
aerial vehicle (UAV) operations? Are there features available for the development of algorithms that help with Air Force engineers in developing military software? Are there features beyond the
capability of existing C-programs that could be used with these low-profile models? And what are the pros and cons of evaluating these features with algorithms developed for a certain unmanned
vehicle? The authors of this article have some experience establishing a self-contained program including the usage of C programming for the modeling a high-profile tool that was developed to
simulate a UAV’s landing, search or aerial chase using the CPL version 2 for the analysis and training of C programming software. “Data integration” seems to be one of the features that exists for
our program in our source code and therefore we don’t hold a copy of it in our repository as our personal files are only in the repository. But just prior to getting started with this
program is the program written by our own author, Scott Dyer, in October 2007 as part of an audit report for our company Spacecraft Technologies as to the degree of inaccuracy. For you to gain
confidence in that analysis if you use the source code isn’t to make the difference of the software, therefore, it’s probably better to not use the C code for its benefit. In C-programs we have to
take to the test environment to define data sets (see this issue for more details here) that were derived from the source code. How many C programs are there to construct the project you outline? How
many programs are you working on (we have a list of you)? What is the number of running code? In software analysis software it is a lot harder to find solutions that fit the requirements better than
those that simply are not usable themselves.
There are many other ways to express our code in C and to take the time needed to make recommendations for your software. You just need to work out how to identify the potential issues there.Are
there guarantees for the reliability and accuracy of algorithms in C programming solutions provided for aerospace simulations for unmanned aerial vehicle (UAV) operations? C++ Solutions Overview Some
C++ algorithms cannot lead to fair algorithms but should lead to robust algorithms for large unmanned operations. Aerospace Solution Given an algorithm which cannot be trusted, there are four
possible mechanisms to guarantee good quality: Fail, by the algorithm itself, the algorithm will not be trusted — if it is trusted, it will always win. Fail, although the error can be bad, if it are
very small. If you are experienced in trying certain algorithms, you risk too many failures, and you would rather be wrong than just letting the algorithm. Aerospace Solution When thinking about the
safety of unmanned operations in space you think about what makes a proper and safe operation possible. Here are pointers and additional examples. Aerospace solution An example of a Aerospace
solution is to use a two sided triangle. Although the angle of the triangle is less than 5 degrees, it is not necessary to change the angle of the triangle at all. Here are examples of Aerospace
solutions: Aerospace solution So if you see a triangle of radius 6, they say that it should be 2π from your measurement. However if you want to avoid getting a wrong measurement and if the angle of
your triangle is less than 5, it should be 3π (and you can say smaller). If you see an irrational angle, you don’t want to be wrong; a good ratio rule would be such that if you get a circle
of radius 50, a circle of radius 100 and a circle of diameter 800, you want to be away from your measurement. Whereis Aerospace solution Aerospace solution is a computer software wrapper, which
computes the distance from the measuring point to the given location. The algorithm has the following properties: You have to remember the distance coordinate; its precise value depends on the
current location; it depends on the measurement. The area of the triangle is 10 inches so you must not exceed 20 inches at any distance. The measurement is 90 degrees as it will approximate the point
at which you have reached that location. The measurements are made in a discrete space of points with the same coordinates, and at the given distance it gives an estimate (in our case 80 degrees).
This information is similar to a distance formula, because the area of a triangle is constant. The boundary of the triangle is 80 degrees.
The process is called an estimation process. There a certain starting point from which you create the measurement and you want to be sure you are there when the measurement is made. Another process
for trying to estimate the measurements is to start an algorithm from which the distance measurement can be made since its estimate is known. You then modify the measurement by using each coordinate.
The resulting distance is approximately 20. If all you need | {"url":"https://chelponline.com/are-there-guarantees-for-the-reliability-and-accuracy-of-algorithms-in-c-programming-solutions-provided-for-aerospace-simulations-for-unmanned-aerial-vehicle-uav-operations","timestamp":"2024-11-14T14:24:31Z","content_type":"text/html","content_length":"172617","record_id":"<urn:uuid:2eb98113-b391-4c37-9a42-c906847893d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00467.warc.gz"} |
Why Standard Macro Models, DSGEs, Crash During Crises | naked capitalism
Lambert: Just when you need them the most….
By David F. Hendry, Director, Program in Economic Modeling, Institute for New Economic Thinking at the Oxford Martin School, Grayham E. Mizon, Professor of Economics and Fellow of Nuffield College,
Oxford University. Originally published at VoxEU.
In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the
available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or
expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.
Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’.
Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events
unfolded can assist in planning for the future.
The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking
linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate,
extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’
models – or DSGE models.
DSGE models
DSGE models play a prominent role in the suites of models used by many central banks (e.g. Bank of England 1999, Smets and Wouters 2003, and Burgess et al. 2013). The supposedly ‘structural’ Bank of
England’s Quarterly Model (BEQM) broke down during the Financial Crisis, and has since been replaced by another system built along similar lines where the “behaviour of the central organising model
should be consistent with the theory underpinning policymakers’ views of the monetary transmission mechanism (Burgess et al. 2013, p.6)”, a variant of the claimed “trade-off between ‘empirical
coherence’ and ‘theoretical coherence’” in Pagan (2003).
Many of the theoretical equations in DSGE models take a form in which a variable today, say incomes (denoted as y[t]), depends inter alia on its ‘expected future value’. (In formal terms, this is
written as E[t][y[t+1]], where the ‘t’ after the ‘E’ indicates the date at which the expectation is formed, and the ‘t+1’ after the ‘y’ indicates the date of the variable). For example, y[t] may be
the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes some form of stationarity is achieved by de-trending.^1
Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the
mathematical basis of a DSGE model fails when distributions shift (Hendry and Mizon 2014). This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents
are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.
The key is the difference between intrinsic and extrinsic unpredictability. Intrinsic unpredictability is the standard economic randomness – a random draw from a known distribution. Extrinsic
unpredictability is an ‘unknown unknown’ so that the conditional and unconditional probabilities of outcomes cannot be accurately calculated in advance.^2
Extrinsic Unpredictability and Location Shifts
Extrinsic unpredictability derives from unanticipated shifts of the distributions of economic variables at unpredicted times. Of these, location shifts (changes in the means of distributions) have
the most pernicious effects. The reason is that they lead to systematically biased expectations and forecast failure. Figure 1 records UK unemployment over 1860–2011, with some of the major
historical shifts highlighted.
Figure 1 Location shifts over 1860–2011 in UK unemployment, with major historical events
Four main epochs can be easily discerned in Figure 1:
• A business-cycle era over 1860–1914
• World War I and the inter-war period to 1939 with much higher unemployment
• World War II and post-war reconstruction till 1979 with historically low levels
• A more turbulent period since, with much higher and more persistent unemployment levels
As Figure 2, panel (a) confirms for histograms and non-parametric densities, both the means and variances have shifted markedly across these epochs. Panel b shows distributional shifts in UK price
inflation over the same time periods. Most macroeconomic variables have experienced abrupt shifts, of which the Financial Crisis and Great Recession are just the latest exemplars.
Figure 2 Density shifts over four epochs in UK unemployment and price inflation
Extrinsic unpredictability and economic analyses
Due to shifts in the underlying distributions, all expectations operators must be three-way time dated: one way for the time the expectation was formed, one for the time of the probability
distribution being used, and the third for the information set being used. We write this sort of expectation as EDε[t][ε[t+1]|I[t−1]], where ε[t+1] is the random variable we care about, Dε[t](·) is
the distribution agents use when forming the expectation, and I[t−1] is the information set available when the expectation is formed. This more general formulation allows for a random variable being
unpredictable in its mean or variance due to unanticipated shifts in its conditional distribution.
Conditional Expectations
The importance of three-way dating can be seen by looking at how one can fall into a trap by ignoring it. For example, conditional expectations are sometimes ‘proved’ to be unbiased by arguments like
the following. Start with the assertion that next quarter’s income equals expected future income plus an error term whose value is not known until next quarter. By definition of a conditional
expectation, the mean of the error must be zero. (Formally the expectation is denoted as E[y[t+1]|I[t]] where I[t] is the information set available today.)
Econometric models of inflation – such as the new-Keynesian Phillips curve in Galí and Gertler (1999) – typically involve unknown expectations like E[y[t+1]|I[t]]. The common procedure is to replace
them by the actual outcome y[t+1] – using the argument above to assert that the actual and expected can only differ by random shocks that have means of zero. The problem is that this deduction
assumes that there has been no shift in the distribution of shocks. In short, the analysis suffers from the lack of a date on the expectations operator related to the distribution (Castle, Doornik,
Hendry and Nymoen 2014).
The basic point is simple. We say an error term is intrinsically unpredictable if it is drawn from, for example, a normal distribution with mean µ[t] and a known variance. If the mean of the
distribution cannot be established in advance, then we say the error is also extrinsically unpredictable. In this case, the conditional expectation of the shock needs not have mean zero for the
outcome at t+1. The forecast is being made with the ‘wrong’ distribution – a distribution with mean µ[t], when in fact the mean is µ[t+1]. Naturally, the conditional expectation formed at t is not an
unbiased predictor of the outcome at t+1.
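The bias described above is easy to demonstrate numerically. The sketch below is a hypothetical illustration (not from the original article): outcomes are drawn after an unanticipated location shift from µ[t] = 0 to µ[t+1] = 3, while the forecaster still conditions on the old mean.

```python
import random
import statistics

random.seed(1)
mu_t, mu_t1, sigma = 0.0, 3.0, 1.0   # unanticipated shift in the mean

# Outcomes at t+1 are drawn from the *new* distribution...
outcomes = [random.gauss(mu_t1, sigma) for _ in range(100_000)]
# ...but the forecast is formed with the *old* distribution's mean.
forecast_errors = [y - mu_t for y in outcomes]

bias = statistics.mean(forecast_errors)
print(bias)  # close to mu_t1 - mu_t = 3, not zero
```

The forecast error has mean µ[t+1] − µ[t] rather than zero, which is exactly the systematic bias that location shifts induce.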
Implications for DSGE models
It seems unlikely that economic agents are any more successful than professional economists in foreseeing when breaks will occur, or divining their properties from one or two observations after they
have happened. That link with forecast failure has important implications for economic theories about agents’ expectations formation in a world with extrinsic unpredictability. General equilibrium
theories rely heavily on ceteris paribus assumptions – especially the assumption that equilibria do not shift unexpectedly. The standard response to this is called the law of iterated expectations.
Unfortunately, as we now show, the law of iterated expectations does not apply inter-temporally when the distributions on which the expectations are based change over time.
The Law of Iterated Expectations Facing Unanticipated Shifts
To explain the law of iterated expectations, consider a very simple example – flipping a coin. The conditional probability of getting a head tomorrow is 50%. The law of iterated expectations says
that one’s current expectation of tomorrow’s probability is just tomorrow’s expectation, i.e. 50%. In short, nothing unusual happens when forming expectations of future expectations. The key step in
proving the law is forming the joint distribution from the product of the conditional and marginal distributions, and then integrating to deliver the expectation.
The critical point is that none of these distributions is indexed by time. This implicitly requires them to be constant. The law of iterated expectations need not hold when the distributions shift.
To return to the simple example, the expectation today of tomorrow’s probability of a head will not be 50% if the coin is changed from a fair coin to a trick coin that has, say, a 60% probability of
a head.
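The trick-coin failure can be simulated in a few lines. This is a hypothetical sketch; the 60% figure is the article's own example:

```python
import random

random.seed(2)
n = 200_000

# Today's iterated expectation of tomorrow's head-probability,
# formed under the assumption that the coin stays fair:
expected = 0.5

# Overnight the coin is swapped for a trick coin with P(head) = 0.6.
realized = sum(random.random() < 0.6 for _ in range(n)) / n

print(expected, round(realized, 3))  # the iterated expectation misses by ~0.1
```

Under a constant distribution the two numbers would agree; once the distribution shifts, today's expectation of tomorrow's expectation is simply wrong.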
In macroeconomics, there are two sources of updating the distribution.
• The first concerns conditional distributions where new information shifts the conditional expectation (i.e., Ey[t][y[t+1]|y[t−1]] shifts to Ey[t][y[t+1]|y[t]]).
Much of the economics literature (e.g. Campbell and Shiller 1987) assumes that such shifts are intrinsically unpredictable since they depend upon the random innovation to information that becomes
known only one period later.^3
• The second occurs when the distribution used to form today’s expectation Ey[t][.] shifts before tomorrow’s expectation Ey[t+1][.] is formed.
The point is that the new distributional form has to be learned over time, and may have shifted again in the meantime.^4 The mean of the current and future distributions (µ[t] and µ[t+1]) need to be
estimated. This is a nearly intractable task for agents – or econometricians – when distributions are shifting.
Using artificial data from a bivariate generating process where the parameters are known, Figure 3, panels a–c, show essentially that the same V-shape can be created by changing many different
combinations of the parameters for the dynamics, the intercepts, and the causal links in the two equations, where panel d shows their similarity to the annualized % change in UK GDP over the Great Recession.
For example, the intercept in the equation for the variable shown in panel a was unchanged, but was changed 10-fold in panel c. A macro economy can shift from many different changes, and as in Figure
3, economic agents cannot tell which shifted till long afterwards, even if another shift has not occurred in the meantime.
Figure 3 Near-identical location shifts despite changes in many different parameter combinations
The derivation of a martingale difference sequence from ‘no arbitrage’ in, for example, Jensen and Nielsen (1996) also explicitly requires no shifts in the underlying probability distributions. Once
that is assumed, one can deduce the intrinsic unpredictability of equity price changes and hence market (informational) efficiency. Unanticipated shifts also imply unpredictability, but need not
entail efficiency. Informational efficiency does not follow from unpredictability per se, when the source is extrinsic rather than intrinsic. Distributional shifts occur in financial markets, as
illustrated by the changing market-implied probability distributions of the S&P500 in the Bank of England Financial Stability Report (June 2010).
In other arenas, ‘location shifts’ (i.e. shifts in the distribution’s mean) can play a positive role in clarifying both causality, as demonstrated in White and Kennedy (2009), and testing ‘super
exogeneity’ before policy interventions (Hendry and Santos 2010). Also, White (2006) considers estimating the effects of natural experiments, many of which involve large location shifts. Thus, while
more general theories of the behaviour of economic agents and their methods of expectations formation are required under extrinsic unpredictability, and forecasting becomes prone to failure, large
shifts can also help reveal the linkages between variables.
Unanticipated changes in underlying probability distributions – so-called location shifts – have long been the source of forecast failure. Here, we have established their detrimental impact on
economic analyses involving conditional expectations and inter-temporal derivations. As a consequence, dynamic stochastic general equilibrium models are inherently non-structural; their mathematical
basis fails when substantive distributional shifts occur.
1 Burgess et al. (2013, p.A1) assert that this results in ‘detrended, stationary equations’, though no evidence is provided for stationarity. Technically, while the ‘t’ in the operator E[t] usually
denotes the period when the expectation is formed, it also sometimes indicates the date of information available when the expectation was formed. The distribution over which the expectation is formed
is implicitly timeless – the ‘t’ does not refer to the date of some time-varying underlying probability distribution.
2 This is akin to the concept in Knight (1921) of his ‘unmeasurable uncertainty’: unexpected things do happen.
3 To give a practical example, consider an unknown pupil who takes a series of maths tests. While the score on the first test is an unbiased predictor of future scores, the teacher learns about the
pupil’s inherent ability with each test. The teacher’s expectations of future test performance will almost surely change in one direction or the other. After the first test, however, one doesn’t know
whether expectations will rise or fall.
4 Even if the distribution, denoted f[t+1](y[t+1]|y[t]), became known one period later, the issue arises since: Ey[t+1][y[t+1]|y[t]] − Ey[t][y[t+1]|y[t−1]] equals Ey[t+1][y[t+1]|y[t]] − Ey[t+1][y[t+1]|y[t−1]] + (Ey[t+1][y[t+1]|y[t−1]] − Ey[t][y[t+1]|y[t−1]]), which equals ν[t] + (µ[t+1] − µ[t]).
18 comments
1. New Economic Thinking ?!?
1. I know this may not seem daring enough to you, but since every central bank (along with lots of other official forecasters) relies on DSGE models, this is basically saying the emperor has no
clothes. But it’s doing it in economese, which means the message may reach the intended audience, but to the layperson, it reads like typical impenetrable (by design) Serious Economist work.
1. But the emperor is also corrupt, even when they know what theories are right. For example, they still base their policy on the Phillips Curve even though they misuse it and misunderstand
it [ intentionally? ]
2. So in other words if they can’t be persuaded they’re wrong by reality, maybe theory can get through to them. But you still have the problem the article describes, the Expectation of Expectations
of Economic Model Correctness at t+1 is a function of observations today and expectations of correctness at t+1. Since observations today are considered irrelevant to correctness both today and
at t+1, and the expectation of future correctness is a probability distribution with a mean at 100% and standard deviation of zero, there’s not much wiggle room (no pun intended, get it: wiggle/
variation/dispersion/variance/skew/etc. OK it’s not funny) it’s hard to be optimistic. Oh well, at least the authors didn’t mention Paul Krugman once! They get 3 stars just for that.
3. Necessary for understanding, and assumed, so therefore not provided, is the formal definition of expectation or conditional expectation (the notation here is the ‘E’ in the article) as taken from
probability theory. If you already have that then I expect (pun intended) that the rest of the article isn’t so bad, although requiring a bit of work.
4. It is inevitable DSGE models don’t work in crashes.
How is it you can write a complete article about modelling the future and not even mention Chaos Theory or non-linear feedback? Non-linear feedback, chaos, most certainly modifies underlying
probability distributions.
A crash is a chaotic event, a strange attractor or a catastrophic event. It is possible to predict such an event will occur, it is not possible to predict when, nor is it possible to predict the
state of the system after the catastrophic event.
Not only is the economic math flawed, that is incomplete, but it is also trivial to a significant degree.
1. well, nobody’s perfect. at least they mentioned martingales and didn’t once mention Krugman. He’s the most talked about man on the planet. How did that happen? You’d think it would be
somebody like Adele, even though she’s not a man. Why talk about Professor Krugman? I have no idea, that’s why I don’t. But Adele, that’s another issue entirely. Martingales are another thing
somebody can legitimately talk about. You’d think they were birds of some kind, just going by the name on its own. All these guys really need to start over, frankly. Get out the coffee and
the blackboard and say “OK, let’s pretend we’re going to make sense. What would we do?”
1. That was clear immediately after the bubble burst – because virtually all economists (the exceptions can be counted on, literally, one hand) either ignored or, worse, denied it. Even
though, as Dean Baker repeatedly pointed out, it was plain as the nose on your face.
Baker never quite admitted it, but that meant the whole profession – no, that’s the wrong word: the whole field is invalid.
You don’t start over with the same guys. You start over with the 5 or 6 who got it right, and you can scrap a large portion of the math, which I strongly suspect is fake.
2. One needs quite a lot of data to estimate whether a crash is a chaotic or stochastic event, as I recall –unless there are some new high-power/low-data tests out there, which might well be,
several orders of magnitude more observations at the relevant time interval than exist for macroeconomies. I think that’s the biggest reason that chaos modelling never took off, because there
certainly was interest about 20 years ago.
1. A recent article on chaos theory in economics, which has a nice survey of the area:
Note that these authors’ empirical example is a daily exchange rate series from which they used about 7000 observations for testing. Also, these tests are only for the possibility that
chaos exists in the series. Being able to make even a short term forecast from a possibly chaotic series is something else again.
2. I’d assert they are all chaotic. Never stochastic, because of the feedback and non-linearity (Humans, greed and fear in the feedback loop)
3. Synoia, Thanks. +1,000 (Also, I love artistic depictions of chaos/fractals.)
Vulnerability to errors presented by unforeseeability or intentional disregard of material exogenous factors (i.e., “Assume them away!” or ideological views re fraud, criminality, etc.),
selection of variables (correlation vs causation), number and weighting of variables, iterative compounding errors in multivariate analysis over time.
Modeling is driven in part by a very human desire for certainty and predictability, as well as to support policy biases. For example, I suspect the models B-squared used as Fed chairman as
support for QE-ZIRP policy do not show the compounding magnitude of the imbalances that develop over time stemming from the policy, or maybe they do and there was an “I’ll be gone, you’ll be
gone” attitude in play.
All this is not to say that economic modeling is without value over short-term time horizons. Just that the results need to be considered with a healthy degree of skepticism, and policies
formed and implemented, and affairs conducted with prudent respect for “unforeseen” risks.
5. Technical question: does all the “math” actually add anything, or is it hand-waving?
Because it sure looked like decoration to me – but my math tolerance is a bit limited.
1. Haha, that is what Deidre McCloskey argues, that in most economics papers, the critical part of the argument is actually in the narrative, and the mathed-up parts are trivial. But economists
convince themselves that their use of math = rigor.
1. Classic problem, that is, difference between accuracy and precision.
I learnt about accuracy and precision at university, and never really grasped the full implication until it hit me in the face one day. We had 9 digit digital frequency meters (precision
1 part in 10 to the 9th) with an accuracy of 1 in 10 to the 5th.
The DSGE models are precise when chaos is not evident (and thus of little use except to keep management happy).
The models are never accurate. Nor are they precise over chaotic events.
One could run massive Monte Carlo simulations of chaotic events; however, how would one pick the correct answer from the multitude of results before the event?
Or, there is no way to calibrate the models.
A living example of this is US Foreign Policy. It's full of non-linear events which yield strange results. Or it's full of wishful thinking which proves false when reality (Putin) happens.
Better to use a dartboard. Less expensive, especially if someone else is buying the beer.
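The point about Monte Carlo runs of chaotic systems can be illustrated with a standard toy model (the logistic map, not any actual DSGE model): two initial conditions differing by one part in 10^9, comparable to the "precision" of the frequency meters mentioned above, diverge into completely unrelated trajectories within a few dozen iterations, so averaging many runs tells you nothing about which trajectory will actually occur.

```python
# Toy illustration (not a DSGE model): chaotic divergence in the
# logistic map x -> r*x*(1-x) at r = 4, a standard chaotic regime.
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)           # one "measurement"
b = logistic_trajectory(0.2 + 1e-9)    # same, perturbed by 1 part in 1e9
gap = [abs(x - y) for x, y in zip(a, b)]

# The tiny initial gap roughly doubles each step (the Lyapunov exponent
# at r = 4 is ln 2), so within ~60 steps the two runs are uncorrelated.
print(gap[0], max(gap))
```

After 60 steps the gap has saturated to order one, which is why picking "the" answer out of an ensemble of such runs is hopeless.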
6. “Unanticipated changes in underlying probability distributions – so-called location shifts – have long been the source of forecast failure.”
In other words, we have no clue concerning anything.
1. No, physics, chemistry, and even much of biology are both precise and accurate. There are exceptions – cosmology, for instance, is mostly speculation inspired by the latest data, which
often contradict the last batch of speculation.
Economics is another matter, both because it covers a chaotic system and also, even more important, because it isn’t science; it’s political ideology, lightly disguised – the real purpose of
all the math. Plus a large dollop of wishful thinking.
Personally, I loved Econ 210. The basic market mechanism is just applied feedback theory. Tricky in practice, but valid. Then the corruption sets in…
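The "applied feedback theory" reading of the market mechanism can be sketched as a toy tâtonnement loop (an illustrative model with assumed linear demand and supply curves, not anything from the comment itself): price rises when demand exceeds supply and falls otherwise, and with a small enough gain the loop settles at the clearing price.

```python
# Toy negative-feedback price adjustment (tatonnement); purely illustrative.
def demand(p):
    return 100.0 - 2.0 * p   # assumed linear demand curve

def supply(p):
    return 3.0 * p           # assumed linear supply curve

def clear(p=1.0, gain=0.05, steps=500):
    for _ in range(steps):
        p += gain * (demand(p) - supply(p))  # excess demand drives price
    return p

p_star = clear()
# Equilibrium where 100 - 2p = 3p, i.e. p = 20.
print(round(p_star, 6))
```

With gain 0.05 the update is a contraction (p maps to 0.75p + 5), so the loop converges; crank the gain up and the same loop overshoots and oscillates, which is the "tricky in practice" part.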
7. Terrific responses by Synoia and Chauncey Gardiner. I second Chauncey Gardiner’s statement regarding the value of economic modeling over short-time horizons and think that means that in the short
run, economic models can be both accurate and precise. This is largely due to the fact that most economists are running similar versions of the model and that leads to a self-fulfilling prophecy
(until it doesn’t), at least when dealing with financial economic models.
Modification of funelim part of Platform 8.19
Hi, following the tutorials, we modified Equations so that funelim also rewrites the equation and simplifies the goal. From what I can see, my version, which seems up to date (coq-equations 1.3+8.19), does not seem to include it. Will it be in the release for the Coq Platform 8.19? Would that be possible?
We are very close to a release of Coq Platform (the quickchick issue currently being discussed is the last remaining blocker) - I guess it will be at the end of next week. A smooth version replacement shouldn't be a problem, but you need to state this in the tracker issue for Equations (https://github.com/mattam82/Coq-Equations/issues/585).
(That is reopen the issue and state the version you want).
Last updated: Oct 13 2024 at 01:02 UTC
From Geometry to Behavior: An Introduction to Spatial Cognition - Free Computer, Programming, Mathematics, Technical Books, Lecture Notes and Tutorials
From Geometry to Behavior: An Introduction to Spatial Cognition
• Title: From Geometry to Behavior: An Introduction to Spatial Cognition
• Author(s): Hanspeter A. Mallot
• Publisher: The MIT Press (January 23, 2024); eBook (Creative Commons Edition)
• License(s): CC BY-NC-ND
• Paperback: 328 pages
• eBook: PDF
• Language: English
• ISBN-10: 0262547112
• ISBN-13: 978-0262547116
Book Description
An overview of the mechanisms and evolution of spatial cognition, integrating evidence from psychology, neuroscience, cognitive science, and computational geometry. The volume is also relevant to the epistemology of spatial knowledge in the philosophy of mind.
About the Authors
• Hanspeter A. Mallot is a Senior Professor of Cognitive Neuroscience, University of Tübingen.
SciPost Submission Page
Geodesic geometry of 2+1-D Dirac materials subject to artificial, quenched gravitational singularities
by S. M. Davis, M. S. Foster
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Seth Davis · Matthew Foster
Submission information
Preprint Link: scipost_202107_00036v2 (pdf)
Date accepted: 2022-06-10
Date submitted: 2022-05-04 23:07
Submitted by: Davis, Seth
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Condensed Matter Physics - Theory
Approach: Theoretical
The spatial modulation of the Fermi velocity for gapless Dirac electrons in quantum materials is mathematically equivalent to the problem of massless fermions on a certain class of curved spacetime
manifolds. We study null geodesic lensing through these manifolds, which are dominated by curvature singularities, such as nematic singularity walls (where the Dirac cone flattens along one
direction). Null geodesics lens across these walls, but do so by perfectly collimating to a local transit angle. Nevertheless, nematic walls can trap null geodesics into stable or metastable orbits
characterized by repeated transits. We speculate about the role of induced one-dimensionality for such bound orbits in 2D dirty d-wave superconductivity.
Author comments upon resubmission
We thank the referees for their thoughtful analysis of our work and their proposals for its improvement.
We divide the referees’ concerns and suggestions into two categories: “major” will denote concerns requiring additional nontrivial content in the paper, while “minor” will refer to content questions
or small changes in the manuscript. The concerns we regard as major are the following
1. S.A.J.’s request for an estimate of conductivity,
2. S.A.J.’s request for more information on the relation of the fully-quantum picture to the geodesic flow
3. The request by referee 2 to discuss the applicability of our results and methods to important spacetime metrics in cosmology.
We first respond to these major concerns:
1. Quoting S.A.J.:
"The eventual goal of any mathematical modeling is to compute a measurable quantity. The (null) geodesics studied in this paper are not directly measurable. A simple estimate of conductivity and the
role of attractors and collimator singularities in conductivity is missing."
"What is missing in this paper, is the implications of collimination and absorption of geodesics in transport properteis? What can be implied e.g. about the conductivity tensor for the nematic and
isotropic singularities?"
We agree that the eventual computation of a measurable quantity (e.g., conductivity) is the ultimate goal, and a numerical treatment of diffusion exponents and of the effects of QGD on conductivity remains a
future research goal. Indeed, early numerical results show that weak QGD give diffusive behavior of geodesics, with average distance from the origin scaling as the square root of time, as expected.
We hope to determine the effects of singularity proliferation on this as QGD disorder strength is enhanced to O(1); however, accurately extracting the effects of the singularities has turned out to
be a surprisingly non-trivial technical issue due to an unexpected phenomenon where geodesics can become trapped in extremely high-frequency meta-stable orbits. Treating these correctly seems
to require an extremely small time-step. Over the time scales needed to extract the asymptotic diffusive character of QGD, these trajectories cannot be ignored. Further, throwing these trajectories
out would ignore important data on the true effects of singularities on diffusion of geodesics. While we hope to overcome this issue in the future, the difficulties we encountered here have led us to
publish this first work focusing on the interesting elementary geometric effects of the singularities and geodesics, without a heavy discussion of numerical simulation results.
We note that effects of weak geodesic scattering in kinetic theory have been considered before in other context [1], but since our focus is on the effects of the singularities formed at strong
disorder, we feel a treatment like this would be too far from our central message to include in the current manuscript.
For these reasons, we would like to avoid getting into a discussion of the influence of QGD singularities on conductance, and leave this topic for future work.
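The diffusive check described above (average distance from the origin scaling as the square root of time) can be illustrated generically. The sketch below uses plain 2D random walks as a stand-in for weakly scattered geodesics; it demonstrates only the exponent-fitting procedure, not the authors' actual geodesic simulation.

```python
import math
import random

# Stand-in for weakly scattered trajectories: 2D random walks.
# We estimate the exponent a in  <|r(t)|> ~ t^a  by a log-log fit;
# diffusion corresponds to a ~ 0.5.
random.seed(0)

def mean_distance(steps, walkers=2000):
    total = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / walkers

times = [50, 100, 200, 400]
dists = [mean_distance(t) for t in times]

# Least-squares slope in log-log coordinates.
lt = [math.log(t) for t in times]
ld = [math.log(d) for d in dists]
n = len(times)
mt, md = sum(lt) / n, sum(ld) / n
num = sum((a - mt) * (b - md) for a, b in zip(lt, ld))
den = sum((a - mt) ** 2 for a in lt)
slope = num / den
print(slope)  # close to 0.5 for diffusive behavior
```

The technical difficulty the authors describe is precisely that trapped high-frequency orbits contaminate such fits unless they are resolved with a very small time step.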
2. Quoting S.A.J.:
"As pointed out in the introduction, the null geodesics play a role in shaping the quantum mechanical wave function. Authors have meticulously computed null geodesics for interesting solvable toy
models. But there is no comment/thoughts about possible roles of these (classical) solutions for the corresponding quantum mechanical problem."
"Classically geodesics of isotropic singularities are absorbed by the singularities. Would that quantum mechanically correspond to possible bound state? If yes, does it imply that isotropic
singularities are more efficient in forming Anderson insulator?"
We strongly agree that the discussion of geodesic dynamics as a proxy for semiclassics raises the question of how well the null geodesics capture the fully-quantum mechanical behavior. While we
explain in the introduction that null geodesics give the bicharacteristics of the curved-space Dirac equation and that this is canonically associated with a ‘geometric optics’ limit in the general
wave equation literature, we agree that it would be nice to analyze the fully-quantum picture on the toy models for which we are able to solve the geodesic dynamics.
To this end, we add a new section to our paper. Sec. 6 (6 pages) now presents fully-quantum calculations for all of the toy models considered in Sec. 5, and qualitatively compares the solutions with
the associated geodesic flows. In two cases where we can make analytical statements about the asymptotic quantum dynamics, we find near full agreement with the geodesic solution. In the cases where
quantum dynamics aren’t tractable, we instead calculate the wave functions and energy spectra explicitly. We find that in some cases, the quantum spectrum mirrors the geodesic behavior. For example:
the linear dreibein wall model has only bound-state geodesics, and we find that its spectrum consists of discrete bound-state wavefunctions. On the other hand, the tanh wall model hosts both bound
and unbound geodesics, and its quantum spectrum contains a transition from discrete bound states to a continuum of scattering states. Sec. 6 gives a more in-depth commentary on these features.
To specifically answer the question about isotropic singularities: we find that in the isotropic power-law models we study, the spectrum consists of a continuum of scattering states, as opposed to
discrete bound states like we find in other models. While it is perhaps surprising that (for \alpha > 1) the geodesics all form bound orbits but the spectrum consists only of scattering states, we
are able to show for the \alpha = 2 example that for an arbitrary initial wavefunction, the probability density asymptotically collects at the singularity, mirroring the geodesics. Since the
relationship between bound states, quantum dynamics, and geodesic flow seems to be quite complex, we don’t make any statement about possible connections to Anderson localization.
Sec. 6 is supported by two additional appendices providing calculational details; Appendix D (3 pages) merely provides details to understand the derivations of energy eigenstates in Sec. 6. Appendix
E (2 pages) provides the proof of our statement that in the \alpha = 2 isotropic power-law model, quantum density asymptotically approaches the singularity at the origin.
We stress that while this adds some substantial length to the paper, and while some of these results are quite interesting, these new results all play the supporting role of allowing comparison of
our primary results (geodesics) with some fully-quantum results, as requested. We do not think this foray into QM changes the scope, mission, context, or conclusions of our paper.
3. Quoting Referee 2:
"Although the primary motivation for the paper was dictated by the potential applicability of the results in the context of quantum materials, it could be of interest in the gravity context as well.
For instance, the authors could have analyzed whether some known metrics (e.g., for black holes, cosmological horizons, etc.) belong to the class considered in the paper, providing thus new links
between gravity and condensed matter theory. Perhaps, it will be done and published elsewhere."
We agree that applications of gravity analogies to condensed matter systems could possibly be reversed in a way that hopefully lets condensed matter shed light on topics in cosmology.
In this case, essentially all of the geometric features we find follow from temporal flatness [Eq.(8)], including both the nature of the singularities and their collimating effects on geodesics. The
temporal flatness condition, which fixes the time-time dreibein in terms of the metric determinant, arises as a necessary condition for metrics whose Hamiltonians can be realized as Dirac-type Hamiltonians in flat space. We do not expect important cosmological metrics to have this feature: for example, the Schwarzschild metric is not temporally flat. Further, this puts some naïve limits on the ability of Hamiltonian systems in flat space to simulate many cosmologically important spacetime metrics.
In addition to temporal flatness, our results also rely heavily on time-independence of the metric, (2+1)-dimensionality, and time-space block-diagonality, which all further remove our current
results from cosmological relevance.
We’ve added a comment on these limitations and connections to our introduction.
Now we address minor concerns:
4. Quoting S.A.J.:
" In Eq. (27), the conservation of E/m is associated with the global Killing vector (1,0,0)^T. Can it be explicitly derived by writing down the Killing equation? What I am wondering at this stage is
that, are there any other Killing vectors in addition to the one that gives the conservation of E/m (energy)?"
The (1,0,0)^T global Killing vector follows from the metric’s independence on time – it is a general fact that if the spacetime metric is independent of a coordinate xk, then the unit-vector eµ = δµk
is a global Killing vector associated with the manifold’s translation symmetry in the xk coordinate. So the existence of the global Killing vector isn’t a nontrivial result.
Our comment about the Killing vector is a technical aside that isn’t really used in the paper, so we have restructured these remarks to clarify.
Since we don’t generally make assumptions about the dependence of the disorder vectors in the xy-plane, there are no additional global Killing vectors that apply to the general model. However, global Killing vectors may be found in our toy models where translational or rotational invariance is present, and these will correspond to conservation of momentum or angular momentum, respectively. Since our treatment there is based on a direct solution of the geodesic equation, we don’t bother introducing the formal Killing vector analysis.
5. Quoting S.A.J:
"Rewriting the equation of motion in terms of t, rather than the affine parameter s in Eq. (3), seems to obscure the fact that energy is conserved as there appear quadratic dissipation-looking terms.
Can the authors help the reader and the present referee to understand how the dissipation-looking terms at the end do no harm to the conservation of E/m by adding a brief calculation of intuitive
Indeed, the re-parametrization of the geodesic equation by the global time coordinate introduces friction/dissipation terms. We emphasize that this is a general feature of global-time
re-parametrization – see the discussion in Wikipedia (section 3): https://en.wikipedia.org/wiki/Geodesics_in_general_relativity .
As discussed above in point (4), the geodesics have energy as a “constant of motion”. However, what this means is essentially limited to Eq.(26). When we re-parametrize in terms of global coordinate
time, E plays no role in determining the trajectory of the null geodesic. [This is not true for massive geodesics – see Eq.(32)]. This is analogous to the familiar situation regarding the
trajectories of light beams in GR; no knowledge of their wavelength enters a classical calculation.
More importantly, the friction/dissipation terms play a key role in the geodesics dynamics, especially in allowing the capture of geodesics by isotropic singularities. As discussed in Sec. 4.1, the
geodesic equations (written in terms of the affine parameter, not coordinate time) for the purely isotropic model can be mapped onto a Hamiltonian dynamics system. In this picture, it would naively
seem that geodesic capture by a potential well would be impossible due to conservation of energy. Indeed, if one solves the geodesic equations in the affine parametrization, they will obtain
solutions that pass right through the singularity in finite ‘proper time’ (measured in the affine parameter – proper time doesn’t technically exist for massless particles). When these geodesics are
reparametrized in the global time coordinate, the point where the geodesic actually crosses the singularity is sent to infinite time. This is analogous to the well-known situation in GR where to an
observer outside an event horizon, a light beam can never actually cross an event horizon – the actual collision is time-dilated to infinity. So conservation of energy in the affine parametrization
is time-dilated away when we move to a description in terms of the global time coordinate.
In this paper we work in the other limit where we first reparametrize the geodesics equation in terms of t and only then solve for the geodesics. In this order of operations, we find that the
friction terms in the reparametrized GE provide the ‘dissipation’ necessary for a geodesic to come to rest at a singularity.
We’ve clarified the relevant discussions in Secs. 3.3, 4.1, and 5.1.
6. Quoting S.A.J:
"How essential is the temporal-flatness condition in collimation property of nematic singularities and/or attractive property of isotropic singularities?"
The collimation property follows from the form of the geodesic equation in Eq.(33-35), in which temporal flatness has been used extensively to re-write the geodesic equation in terms of the disorder vectors (see Appendix A).
While it seems possible to construct collimating singularities in a non-temporally flat spacetime, we don’t expect the geometric features found here to generalize.
7. Quoting S.A.J:
"The motivation for the geometrical models of velocity modulated Dirac equation comes from high-Tc superconducting compounds. Also there is a brief mention of graphene and twisted bilayer graphene. In these examples the gravitational (geometrical) coupling can be encoded into spatial-spatial components of the stress tensor."
It has been recently proposed in [arxiv:2108.08183] that in 8Pmmn borophene, substitution of boron atoms with carbon atoms substantially affects the tilt of the resulting Dirac cone. Since the
tilting of the Dirac cone always induces a velocity anisotropy (like nematicity), how likely are these compounds to realize possible QGD in a non-superconducting phase? This will correspond to modulation of the spatio-temporal components of the velocity. Does the random modulation of tilt velocity relax the "temporal flatness" condition?
Indeed, tilted-cone scenarios, including 3D Weyl semimetals and the 8Pmmn borophene example mentioned provide examples of Hamiltonians in condensed matter that are equivalent to massless Dirac
Fermions on curved-space manifolds. Further, these systems do satisfy the temporal flatness condition. If the tilt can be randomized by coupling to various forms of disorder or engineered through
some set of control parameters, then these systems are very adjacent to our discussion.
However, the form of the geodesic equation we work with is particular to (2+1)-D metrics with spatial-spatial components and temporal flatness, so it isn’t clear if any of the geometry we find here
generalizes to spacetime manifolds outside the scope we consider here. We leave a study of time-space mixing manifolds for future work.
We thank the referee for making us aware of this work. We have added a discussion of this class of materials to our introduction and included a citation to the paper in the comment.
8. Quoting S.A.J:
"All over sectoin 1.1 the concept of "Null geodesics" has been used repeatedly, which is a crucial concept to understand the paper. Perhaps it will help the readers to define it right at the
We have added a comment on the physical role of null geodesics to the introduction, at the first mention of the concept. We have also added a mathematical definition of geodesic mass to Sec. 3,
immediately after the introduction of the geodesic equation.
9. Quoting S.A.J:
"Fig. 9B seems too crowded to follow examples of geodesics that collide or do not collide with the singularities. Maybe it helps the readers to make one curve of each category bolder than the others
to assist the readers to follow at least two bold geodesics."
After some consideration of how to best clarify Fig.9B, we decided the best way was to add a clarifying comment in the caption. All geodesics are captured for launch angle in (-π/4, 3π/4), and all
geodesics escape for launch angle in (3π/4, 7π/4). The confusion in the figure comes from the fact that for orbits launched near the critical angle, the decay/escape is very slow. Once the reader is
explicitly told that all orbits are monotonically escaping/decaying, we think the figure is more clear.
1. J. P. Dahlhaus, C.-Y. Hou, A. R. Akhmerov, and C. W. J. Beenakker, Geodesic scattering by surface deformations of a topological insulator, Phys. Rev. B 82, 085312 (2010).
List of changes
Please refer to "author comments" section, where changes are discussed alongside referee comments.
Published as SciPost Phys. 12, 204 (2022)
The authors have satisfactorily addressed my comments/questions/suggestions. I am happy to recommend the paper for publication in its present form.
Getting around a lower bound for the minimum Hausdorff distance
We consider the following geometric pattern matching problem: find the minimum Hausdorff distance between two point sets under translation with L1 or L∞ as the underlying metric. Huttenlocher, Kedem and Sharir have shown that this minimum distance can be found by constructing the upper envelope of certain Voronoi surfaces. Further, they show that if the two sets are each of cardinality n then the complexity of the upper envelope of such surfaces is Ω(n^3). We examine the question of whether one can get around this cubic lower bound, and show that under the L1 and L∞ metrics, the time to compute the minimum Hausdorff distance between two point sets is O(n^2 log^2 n).
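For reference, the quantity in question can be stated directly in code. The brute-force sketch below (hypothetical helper names; O(n²) per evaluation, with a crude translation grid rather than the paper's O(n² log² n) algorithm) just shows what "minimum Hausdorff distance under translation with L∞" means:

```python
def hausdorff_linf(A, B):
    """Undirected Hausdorff distance between 2D point sets under L-infinity."""
    def directed(P, Q):
        # For each point of P, distance to its nearest neighbor in Q;
        # take the worst case over P.
        return max(min(max(abs(px - qx), abs(py - qy)) for qx, qy in Q)
                   for px, py in P)
    return max(directed(A, B), directed(B, A))

def min_hausdorff_under_translation(A, B, step=0.5, span=4.0):
    """Crude grid search over translations of A; illustrative only."""
    ts = []
    t = -span
    while t <= span:
        ts.append(t)
        t += step
    best = float("inf")
    for dx in ts:
        for dy in ts:
            shifted = [(x + dx, y + dy) for x, y in A]
            best = min(best, hausdorff_linf(shifted, B))
    return best

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(2.0, 0.0), (3.0, 0.0)]
print(hausdorff_linf(A, B))                   # 2.0
print(min_hausdorff_under_translation(A, B))  # 0.0: shifting A by (2, 0) matches B
```

The paper's contribution is replacing the inner grid search and envelope construction with a segment-tree-based method that beats the Ω(n³) envelope complexity.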
• Algorithm
• Approximate matching
• Geometric pattern matching
• Optimization
• Segment tree
ASJC Scopus subject areas
• Computer Science Applications
• Geometry and Topology
• Control and Optimization
• Computational Theory and Mathematics
• Computational Mathematics
Financial Ratio Analysis: List of Financial Ratios
• November 25, 2021
• Posted by: admin
• Category: Bookkeeping
To correctly implement ratio analysis to compare different companies, consider only analyzing similar companies within the same industry. In addition, be mindful how different capital structures and
company sizes may impact a company’s ability to be efficient. Using financial ratios can also give you an idea of how much risk you might be taking on with a particular company, based on how well it
manages its financial obligations. You can use these ratios to select companies that align with your risk tolerance and desired return profile. That results in an interest coverage ratio of 4, which
means the company has four times more earnings than interest payments. Equity ratio is a measure of solvency based on assets and total equity.
• For any value investor, you must conduct a level of analysis that is quantitative and removes any emotional component to your investment strategy.
• Then, a company analyzes how the ratio has changed over time (whether it is improving, the rate at which it is changing, and whether the company wanted the ratio to change over time).
• The term “ratio” conjures up complex and frustrating high school math problems, but that need not be the case.
• This need can arise in an emergency situation or in the normal course of business.
Liquidity ratios include the current ratio, quick ratio, and working capital ratio. The inventory turnover ratio indicates the speed at which a company’s inventory of goods was sold during the past
year. The low fixed asset turnover ratio is dragging down total asset turnover.
Operating-Margin Ratio
A company may be thrilled with this financial ratio until it learns that every competitor is achieving a gross profit margin of 25%. Ratio analysis is incredibly useful for a company to better understand
how its performance compares to similar companies. The interest coverage ratio measures the company’s ability to pay interest. We can calculate it by dividing earnings before interest and tax (EBIT)
by interest expense.
Example 13
Assume that a company’s cost of goods sold for the year was $280,000 and its average inventory cost for the year was $70,000. Therefore, its inventory turnover ratio was 4 times during the year ($280,000 / $70,000).
It compares a company’s stock price to its earnings on a per-share basis. It can help investors determine a stock’s potential for growth. The total-debt-to-total-assets ratio is used to determine how
much of a company is financed by debt rather than shareholder equity. There are significant limitations on the use of financial ratios. First, the information used for a ratio is as of a specific
point in time or reporting period, which may not be indicative of long-term trends.
What are Accounting Ratios?
Basically, the P/E tells you how much investors are willing to pay for $1 of earnings in that company. Remember, lenders typically have the first claim on a company’s assets if it’s required to
liquidate. Generally, ratios are used in combination to gain a fuller picture of a company. Using a particular ratio as a comparison tool for more than one company can shed light on the less risky or
most attractive. Additionally, for a view of past performance, an investor can compare a ratio for certain data today to historical results derived from the same ratio.
This can help them to determine which might be a lower-risk investment. XYZ company has $8 million in current assets, $2 million in inventory and prepaid expenses, and $4 million in current
liabilities. That means the quick ratio is 1.5 ($8 million – $2 million / $4 million).
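The worked numbers in this article (an interest coverage of 4, an inventory turnover of 4, and XYZ's quick ratio of 1.5) reduce to one-line formulas. The sketch below simply restates them as code; the helper names are illustrative, not from any accounting library, and the EBIT/interest figures for coverage are made-up placeholders since the text gives only the resulting ratio.

```python
def interest_coverage(ebit, interest_expense):
    # Earnings before interest and tax divided by interest owed.
    return ebit / interest_expense

def inventory_turnover(cogs, average_inventory):
    # Cost of goods sold divided by average inventory for the period.
    return cogs / average_inventory

def quick_ratio(current_assets, inventory_and_prepaids, current_liabilities):
    # Liquid assets (excluding inventory and prepaid expenses) over
    # current liabilities.
    return (current_assets - inventory_and_prepaids) / current_liabilities

print(interest_coverage(400_000, 100_000))           # 4.0 (placeholder figures)
print(inventory_turnover(280_000, 70_000))           # 4.0, as in Example 13
print(quick_ratio(8_000_000, 2_000_000, 4_000_000))  # 1.5, XYZ company
```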
Inventory Turnover Ratio
Efficiency ratios measure how well the business is utilizing its assets and liabilities to create deals and earn profits. They compute the utilization of inventory, machinery utilization, and
turnover of liabilities, as well as the use of equity. I created this writeup nearly solely for this analysis of this exact ratio. I love the analysis to
find a true value of a dividend growth stock on a go-forward basis.
glgetmaterial(3g) [osx man page]
GLGETMATERIAL(3G) GLGETMATERIAL(3G)
glGetMaterialfv, glGetMaterialiv - return material parameters
void glGetMaterialfv( GLenum face,
GLenum pname,
GLfloat *params )
void glGetMaterialiv( GLenum face,
GLenum pname,
GLint *params )
face Specifies which of the two materials is being queried. GL_FRONT or GL_BACK are accepted, representing the front and back materi-
als, respectively.
pname Specifies the material parameter to return. GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS, and GL_COLOR_INDEXES
are accepted.
params Returns the requested data.
glGetMaterial returns in params the value or values of parameter pname of material face. Six parameters are defined:
GL_AMBIENT params returns four integer or floating-point values representing the ambient reflectance of the material. Integer
values, when requested, are linearly mapped from the internal floating-point representation such that 1.0 maps to the
most positive representable integer value, and -1.0 maps to the most negative representable integer value. If the
internal value is outside the range [-1, 1], the corresponding integer return value is undefined. The initial value
is (0.2, 0.2, 0.2, 1.0)
GL_DIFFUSE params returns four integer or floating-point values representing the diffuse reflectance of the material. Integer
values, when requested, are linearly mapped from the internal floating-point representation such that 1.0 maps to the
most positive representable integer value, and -1.0 maps to the most negative representable integer value. If the
internal value is outside the range [-1, 1], the corresponding integer return value is undefined. The initial value
is (0.8, 0.8, 0.8, 1.0).
GL_SPECULAR params returns four integer or floating-point values representing the specular reflectance of the material. Integer
values, when requested, are linearly mapped from the internal floating-point representation such that 1.0 maps to the
most positive representable integer value, and -1.0 maps to the most negative representable integer value. If the
internal value is outside the range [-1, 1], the corresponding integer return value is undefined. The initial value
is (0, 0, 0, 1).
GL_EMISSION params returns four integer or floating-point values representing the emitted light intensity of the material. Integer
values, when requested, are linearly mapped from the internal floating-point representation such that 1.0 maps to
the most positive representable integer value, and -1.0 maps to the most negative representable integer value. If
the internal value is outside the range [-1, 1], the corresponding integer return value is undefined. The initial
value is (0, 0, 0, 1).
GL_SHININESS params returns one integer or floating-point value representing the specular exponent of the material. Integer
values, when requested, are computed by rounding the internal floating-point value to the nearest integer value. The
initial value is 0.
GL_COLOR_INDEXES params returns three integer or floating-point values representing the ambient, diffuse, and specular indices of the
material. These indices are used only for color index lighting. (All the other parameters are used only for RGBA
lighting.) Integer values, when requested, are computed by rounding the internal floating-point values to the
nearest integer values.
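As a sketch (not actual OpenGL code), the linear integer mapping described above can be illustrated in a few lines; the function name, the 32-bit width, and the rounding convention for intermediate values are assumptions of this sketch, since only the two endpoints are fixed by the description:

```python
def float_to_material_int(f, bits=32):
    """Linearly map an internal float so that 1.0 -> most positive
    and -1.0 -> most negative representable signed integer.
    (Behavior outside [-1, 1] is undefined per the description above.)"""
    imax = 2 ** (bits - 1) - 1   # most positive representable integer
    imin = -(2 ** (bits - 1))    # most negative representable integer
    # Linear map fixing the two endpoints; the rounding of intermediate
    # values is an assumption of this sketch, not specified above.
    return round(f * (imax - imin) / 2 + (imax + imin) / 2)

print(float_to_material_int(1.0))   # most positive 32-bit integer
print(float_to_material_int(-1.0))  # most negative 32-bit integer
```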
If an error is generated, no change is made to the contents of params.
GL_INVALID_ENUM is generated if face or pname is not an accepted value.
GL_INVALID_OPERATION is generated if glGetMaterial is executed between the execution of glBegin and the corresponding execution of glEnd.
In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an
unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a
better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle
distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems of a first-order theory rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
• Hilbert style. Every line is an unconditional tautology (or theorem).
• Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
□ Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
□ Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules,
relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the
elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a
typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced.
This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and
are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculus systems, LK and LJ
, were introduced in 1934/1935 by Gerhard Gentzen^[1] as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main
Theorem" (Hauptsatz) about LK and LJ was the cut-elimination theorem,^[2]^[3] a result with far-reaching meta-theoretic consequences, including consistency. Gentzen further demonstrated the power and
flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic, in surprising response to Gödel's
incompleteness theorems. Since this early work, sequent calculi, also called Gentzen systems,^[4]^[5]^[6]^[7] and the general concepts relating to them, have been widely applied in the fields of
proof theory, mathematical logic, and automated deduction.
Hilbert-style deduction systems
One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e., which things may appear as the conclusion of a (sub)proof. The simplest judgment
form is used in Hilbert-style deduction systems, where a judgment has the form
${\displaystyle B}$
where ${\displaystyle B}$ is any formula of first-order logic (or whatever logic the deduction system applies to, e.g., propositional calculus or a higher-order logic or a modal logic). The theorems
are those formulas that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulas and judgments; we make one here solely for comparison with the
cases that follow.
The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the
deduction theorem. This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction.
Natural deduction systems
In natural deduction, judgments have the shape
${\displaystyle A_{1},A_{2},\ldots ,A_{n}\vdash B}$
where the ${\displaystyle A_{i}}$ 's and ${\displaystyle B}$ are again formulas and ${\displaystyle n\geq 0}$ . In other words, a judgment consists of a list (possibly empty) of formulas on the
left-hand side of a turnstile symbol "${\displaystyle \vdash }$ ", with a single formula on the right-hand side,^[8]^[9]^[10] (though permutations of the ${\displaystyle A_{i}}$ 's are often
immaterial). The theorems are those formulae ${\displaystyle B}$ such that ${\displaystyle \vdash B}$ (with an empty left-hand side) is the conclusion of a valid proof. (In some presentations of
natural deduction, the ${\displaystyle A_{i}}$ s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.)
The standard semantics of a judgment in natural deduction is that it asserts that whenever^[11] ${\displaystyle A_{1}}$ , ${\displaystyle A_{2}}$ , etc., are all true, ${\displaystyle B}$ will also
be true. The judgments
${\displaystyle A_{1},\ldots ,A_{n}\vdash B}$
${\displaystyle \vdash (A_{1}\land \cdots \land A_{n})\rightarrow B}$
are equivalent in the strong sense that a proof of either one may be extended to a proof of the other.
Sequent calculus systems
Finally, sequent calculus generalizes the form of a natural deduction judgment to
${\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k},}$
a syntactic object called a sequent. The formulas on left-hand side of the turnstile are called the antecedent, and the formulas on right-hand side are called the succedent or consequent; together
they are called cedents or sequents.^[12] Again, ${\displaystyle A_{i}}$ and ${\displaystyle B_{i}}$ are formulas, and ${\displaystyle n}$ and ${\displaystyle k}$ are nonnegative integers, that is,
the left-hand-side or the right-hand-side (or neither or both) may be empty. As in natural deduction, theorems are those ${\displaystyle B}$ where ${\displaystyle \vdash B}$ is the conclusion of a
valid proof.
The standard semantics of a sequent is an assertion that whenever every ${\displaystyle A_{i}}$ is true, at least one ${\displaystyle B_{i}}$ will also be true.^[13] Thus the empty sequent, having
both cedents empty, is false.^[14] One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought
of as an (inclusive) "or". The sequents
${\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k}}$
${\displaystyle \vdash (A_{1}\land \cdots \land A_{n})\rightarrow (B_{1}\lor \cdots \lor B_{k})}$
are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent.
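The intended semantics of a sequent, and its equivalence to the single implication above, can be spot-checked by brute force over truth assignments. The encoding below (formulas as Python predicates over an assignment dict) is my own illustration, not part of any standard presentation:

```python
from itertools import product

def sequent_holds(antecedent, succedent, env):
    """Semantics of A_1,...,A_n |- B_1,...,B_k under assignment env:
    whenever every A_i is true, at least one B_j is true."""
    return (not all(a(env) for a in antecedent)) or any(b(env) for b in succedent)

# Sample formulas over x and y (an arbitrary test case of my choosing).
A1 = lambda e: e['x']
A2 = lambda e: e['x'] or e['y']
B1 = lambda e: e['y']
B2 = lambda e: e['x'] and e['y']

for vals in product((False, True), repeat=2):
    env = dict(zip('xy', vals))
    # sequent reading == the single formula (A1 ∧ A2) -> (B1 ∨ B2)
    formula = (not (A1(env) and A2(env))) or (B1(env) or B2(env))
    assert sequent_holds([A1, A2], [B1, B2], env) == formula
    # the empty sequent (both cedents empty) is false in every assignment
    assert sequent_holds([], [], env) is False
```

Note how the empty antecedent makes `all(...)` vacuously true and the empty succedent makes `any(...)` false, which is exactly why the empty sequent is false.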
At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the
comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as
${\displaystyle \vdash \lnot A_{1}\lor \lnot A_{2}\lor \cdots \lor \lnot A_{n}\lor B_{1}\lor B_{2}\lor \cdots \lor B_{k}}$
(at least one of the As is false, or one of the Bs is true)
or as
${\displaystyle \vdash \lnot (A_{1}\land A_{2}\land \cdots \land A_{n}\land \lnot B_{1}\land \lnot B_{2}\land \cdots \land \lnot B_{k})}$
(it cannot be the case that all of the As are true and all of the Bs are false).
In these formulations, the only difference between formulas on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the
constituent formulas. This means that a symmetry such as De Morgan's laws, which manifests itself as logical negation on the semantic level, translates directly into a left–right symmetry of
sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨).
Many logicians feel that this symmetric presentation offers a deeper insight in the structure of the logic than other styles of proof system, where the classical duality of negation is not as
apparent in the rules.
Distinction between natural deduction and sequent calculus
Gentzen asserted a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic
natural deduction system NJ was somewhat ugly.^[15] He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus
system LK.^[16] He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as it also did in the case of classical logic (LK versus NK).^[17] Then
he said that in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz").^[18]
Origin of word "sequent"
The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper.^[1] Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as
'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."^[19]
Proving logical formulas
A rooted tree describing a proof finding procedure by sequent calculus
Reduction trees
Sequent calculus can be seen as a tool for proving formulas in propositional logic, similar to the method of analytic tableaux. It gives a series of steps that allows one to reduce the problem of
proving a logical formula to simpler and simpler formulas until one arrives at trivial ones.^[20]
Consider the following formula:
${\displaystyle ((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)}$
This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol ${\displaystyle \vdash }$ :
${\displaystyle \vdash ((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)}$
Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion.^[21] Hence one moves to the following sequent:
${\displaystyle (p\rightarrow r)\lor (q\rightarrow r)\vdash (p\land q)\rightarrow r}$
Again the right hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
${\displaystyle (p\rightarrow r)\lor (q\rightarrow r),(p\land q)\vdash r}$
Since the arguments in the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
${\displaystyle (p\rightarrow r)\lor (q\rightarrow r),p,q\vdash r}$
This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent to two, where we now have to prove each separately:
${\displaystyle p\rightarrow r,p,q\vdash r}$
${\displaystyle q\rightarrow r,p,q\vdash r}$
In the case of the first judgment, we rewrite ${\displaystyle p\rightarrow r}$ as ${\displaystyle \lnot p\lor r}$ and split the sequent again to get:
${\displaystyle \lnot p,p,q\vdash r}$
${\displaystyle r,p,q\vdash r}$
The second sequent is done; the first sequent can be further simplified into:
${\displaystyle p,q\vdash p,r}$
This process can always be continued until there are only atomic formulas in each side. The process can be graphically described by a rooted tree, as depicted on the right. The root of the tree is
the formula we wish to prove; the leaves consist of atomic formulas only. The tree is known as a reduction tree.^[20]^[22]
The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is
accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
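This acceptance condition for leaves is a one-liner in code; the function name is mine, chosen for illustration:

```python
def leaf_axiom(left_atoms, right_atoms):
    """A sequent of atomic symbols is accepted axiomatically iff
    at least one symbol on the right also appears on the left."""
    return bool(set(left_atoms) & set(right_atoms))

print(leaf_axiom(['p', 'q'], ['r', 'p']))  # shared atom p: accepted
print(leaf_axiom(['p'], ['r']))            # no shared atom: rejected
```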
Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely
change the order of the arguments in each side; Γ and Δ stand for possible additional arguments.^[20]
The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line.^[23]
Left rules:
${\displaystyle L\land {\text{rule: }}\quad {\cfrac {\Gamma ,A\land B\vdash \Delta }{\Gamma ,A,B\vdash \Delta }}}$
${\displaystyle L\lor {\text{rule: }}\quad {\cfrac {\Gamma ,A\lor B\vdash \Delta }{\Gamma ,A\vdash \Delta \qquad \Gamma ,B\vdash \Delta }}}$
${\displaystyle L\rightarrow {\text{rule: }}\quad {\cfrac {\Gamma ,A\rightarrow B\vdash \Delta }{\Gamma \vdash \Delta ,A\qquad \Gamma ,B\vdash \Delta }}}$
${\displaystyle L\lnot {\text{rule: }}\quad {\cfrac {\Gamma ,\lnot A\vdash \Delta }{\Gamma \vdash \Delta ,A}}}$
Right rules:
${\displaystyle R\land {\text{rule: }}\quad {\cfrac {\Gamma \vdash \Delta ,A\land B}{\Gamma \vdash \Delta ,A\qquad \Gamma \vdash \Delta ,B}}}$
${\displaystyle R\lor {\text{rule: }}\quad {\cfrac {\Gamma \vdash \Delta ,A\lor B}{\Gamma \vdash \Delta ,A,B}}}$
${\displaystyle R\rightarrow {\text{rule: }}\quad {\cfrac {\Gamma \vdash \Delta ,A\rightarrow B}{\Gamma ,A\vdash \Delta ,B}}}$
${\displaystyle R\lnot {\text{rule: }}\quad {\cfrac {\Gamma \vdash \Delta ,\lnot A}{\Gamma ,A\vdash \Delta }}}$
Axiom: ${\displaystyle \Gamma ,A\vdash \Delta ,A}$
Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left
side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: The formula has been decomposed.
Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a
split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols. Thus this system is sound and complete for classical
propositional logic.
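The reduction procedure just described is short enough to implement directly. The sketch below (the tuple encoding and function names are my own, not from any standard library) decides provability of a propositional formula by applying the reduction rules until only atomic sequents remain, then checking the leaf condition; it terminates because each rule removes one connective:

```python
def provable(left, right):
    """Decide the sequent left |- right by backward application of the
    reduction rules; formulas are tuples such as ('imp', P, Q)."""
    for side, seq in (('L', left), ('R', right)):
        for i, f in enumerate(seq):
            if f[0] == 'atom':
                continue
            rest = seq[:i] + seq[i + 1:]
            op = f[0]
            if side == 'L':
                if op == 'and':   # L∧ rule: split the conjunction
                    return provable(rest + [f[1], f[2]], right)
                if op == 'or':    # L∨ rule: branch on both disjuncts
                    return provable(rest + [f[1]], right) and \
                           provable(rest + [f[2]], right)
                if op == 'imp':   # L→ rule: branch
                    return provable(rest, right + [f[1]]) and \
                           provable(rest + [f[2]], right)
                if op == 'not':   # L¬ rule: move across the turnstile
                    return provable(rest, right + [f[1]])
            else:
                if op == 'and':   # R∧ rule: branch on both conjuncts
                    return provable(left, rest + [f[1]]) and \
                           provable(left, rest + [f[2]])
                if op == 'or':    # R∨ rule: split the disjunction
                    return provable(left, rest + [f[1], f[2]])
                if op == 'imp':   # R→ rule: assume premise, prove conclusion
                    return provable(left + [f[1]], rest + [f[2]])
                if op == 'not':   # R¬ rule: move across the turnstile
                    return provable(left + [f[1]], rest)
    # Leaf: only atoms remain; accept iff some atom occurs on both sides.
    return any(f in left for f in right)

p, q, r = ('atom', 'p'), ('atom', 'q'), ('atom', 'r')
# the worked example: ((p→r) ∨ (q→r)) → ((p∧q) → r)
example = ('imp', ('or', ('imp', p, r), ('imp', q, r)),
                  ('imp', ('and', p, q), r))
print(provable([], [example]))         # a tautology
print(provable([], [('imp', p, q)]))   # not a tautology
```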
Relation to standard axiomatizations
Sequent calculus is related to other axiomatizations of classical propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard
Hilbert system): Every formula that can be proven in these has a reduction tree. This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use
of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below. The only inference rule in the systems mentioned above is modus
ponens, which is implemented by the cut rule.
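To make the last point concrete: given $\vdash A$ and $\vdash A\rightarrow B$, modus ponens can be simulated with two cuts. This derivation sketch uses the LK rule names introduced in the next section:

```latex
% Given \vdash A and \vdash A \to B, derive \vdash B:
\begin{align*}
&A \vdash A                 && (I)\\
&B \vdash B                 && (I)\\
&A,\; A \to B \vdash B      && ({\to}L)\ \text{from the two lines above}\\
&A \to B,\; A \vdash B      && (\mathit{PL})\\
&A \vdash B                 && (\mathit{Cut})\ \text{with } \vdash A \to B\\
&\vdash B                   && (\mathit{Cut})\ \text{with } \vdash A
\end{align*}
```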
The system LK
This section introduces the rules of the sequent calculus LK (standing for logistischer klassischer Kalkül) as introduced by Gentzen in 1934.^[24] A (formal) proof in this calculus is a finite sequence of
sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below.
Inference rules
The following notation will be used:
• ${\displaystyle \vdash }$ known as the turnstile, separates the assumptions on the left from the propositions on the right
• ${\displaystyle A}$ and ${\displaystyle B}$ denote formulas of first-order predicate logic (one may also restrict this to propositional logic),
• ${\displaystyle \Gamma ,\Delta ,\Sigma }$ , and ${\displaystyle \Pi }$ are finite (possibly empty) sequences of formulas (in fact, the order of formulas does not matter; see § Structural rules),
called contexts,
□ when on the left of the ${\displaystyle \vdash }$ , the sequence of formulas is considered conjunctively (all assumed to hold at the same time),
□ while on the right of the ${\displaystyle \vdash }$ , the sequence of formulas is considered disjunctively (at least one of the formulas must hold for any assignment of variables),
• ${\displaystyle t}$ denotes an arbitrary term,
• ${\displaystyle x}$ and ${\displaystyle y}$ denote variables.
• a variable is said to occur free within a formula if it is not bound by quantifiers ${\displaystyle \forall }$ or ${\displaystyle \exists }$ .
• ${\displaystyle A[t/x]}$ denotes the formula that is obtained by substituting the term ${\displaystyle t}$ for every free occurrence of the variable ${\displaystyle x}$ in formula ${\displaystyle
A}$ with the restriction that the term ${\displaystyle t}$ must be free for the variable ${\displaystyle x}$ in ${\displaystyle A}$ (i.e., no occurrence of any variable in ${\displaystyle t}$
becomes bound in ${\displaystyle A[t/x]}$ ).
• ${\displaystyle WL}$ , ${\displaystyle WR}$ , ${\displaystyle CL}$ , ${\displaystyle CR}$ , ${\displaystyle PL}$ , ${\displaystyle PR}$ : These six stand for the two versions of each of three
structural rules; one for use on the left ('L') of a ${\displaystyle \vdash }$ , and the other on its right ('R'). The rules are abbreviated 'W' for Weakening (Left/Right), 'C' for Contraction,
and 'P' for Permutation.
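The side condition that the term must be free for the variable can be made concrete in code. The toy sketch below (the encoding is my own, and terms are restricted to variables for brevity) computes $A[t/x]$ and refuses substitutions that would capture:

```python
def free_vars(f):
    """Free variables of a formula in a toy encoding:
    ('pred', name, args), ('not', f), ('and'/'or'/'imp', f, g),
    ('forall'/'exists', var, body)."""
    kind = f[0]
    if kind == 'pred':
        return set(f[2])
    if kind in ('forall', 'exists'):
        return free_vars(f[2]) - {f[1]}
    if kind == 'not':
        return free_vars(f[1])
    return free_vars(f[1]) | free_vars(f[2])

def subst(f, x, t):
    """A[t/x]: replace free occurrences of variable x by variable t,
    raising an error when t would be captured by a quantifier."""
    kind = f[0]
    if kind == 'pred':
        return ('pred', f[1], tuple(t if v == x else v for v in f[2]))
    if kind in ('forall', 'exists'):
        y, body = f[1], f[2]
        if y == x:
            return f  # x is bound here: no free occurrences inside
        if y == t and x in free_vars(body):
            raise ValueError('t is not free for x in A (capture)')
        return (kind, y, subst(body, x, t))
    if kind == 'not':
        return ('not', subst(f[1], x, t))
    return (kind, subst(f[1], x, t), subst(f[2], x, t))

A = ('forall', 'y', ('pred', 'p', ('x', 'y')))
print(subst(A, 'x', 'z'))  # substitutes under the quantifier safely
```

Substituting y for x in the same formula raises, since the new occurrence of y would be bound by the quantifier.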
Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite direction, from axioms to theorems. Thus they are exact
mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added.
Axiom Cut
${\displaystyle {\cfrac {\qquad }{A\vdash A}}\quad (I)}$ ${\displaystyle {\cfrac {\Gamma \vdash \Delta ,A\qquad A,\Sigma \vdash \Pi }{\Gamma ,\Sigma \vdash \Delta ,\Pi }}\quad ({\mathit {Cut}})}$
Left logical rules Right logical rules
${\displaystyle {\cfrac {\Gamma ,A\vdash \Delta }{\Gamma ,A\land B\vdash \Delta }}\quad ({\land }L_{1})}$   ${\displaystyle {\cfrac {\Gamma \vdash A,\Delta }{\Gamma \vdash A\lor B,\Delta }}\quad ({\lor }R_{1})}$
${\displaystyle {\cfrac {\Gamma ,B\vdash \Delta }{\Gamma ,A\land B\vdash \Delta }}\quad ({\land }L_{2})}$   ${\displaystyle {\cfrac {\Gamma \vdash B,\Delta }{\Gamma \vdash A\lor B,\Delta }}\quad ({\lor }R_{2})}$
${\displaystyle {\cfrac {\Gamma ,A\vdash \Delta \qquad \Gamma ,B\vdash \Delta }{\Gamma ,A\lor B\vdash \Delta }}\quad ({\lor }L)}$   ${\displaystyle {\cfrac {\Gamma \vdash A,\Delta \qquad \Gamma \vdash B,\Delta }{\Gamma \vdash A\land B,\Delta }}\quad ({\land }R)}$
${\displaystyle {\cfrac {\Gamma \vdash A,\Delta \qquad \Sigma ,B\vdash \Pi }{\Gamma ,\Sigma ,A\rightarrow B\vdash \Delta ,\Pi }}\quad ({\rightarrow }L)}$   ${\displaystyle {\cfrac {\Gamma ,A\vdash B,\Delta }{\Gamma \vdash A\rightarrow B,\Delta }}\quad ({\rightarrow }R)}$
${\displaystyle {\cfrac {\Gamma \vdash A,\Delta }{\Gamma ,\lnot A\vdash \Delta }}\quad ({\lnot }L)}$   ${\displaystyle {\cfrac {\Gamma ,A\vdash \Delta }{\Gamma \vdash \lnot A,\Delta }}\quad ({\lnot }R)}$
${\displaystyle {\cfrac {\Gamma ,A[t/x]\vdash \Delta }{\Gamma ,\forall xA\vdash \Delta }}\quad ({\forall }L)}$   ${\displaystyle {\cfrac {\Gamma \vdash A[y/x],\Delta }{\Gamma \vdash \forall xA,\Delta }}\quad ({\forall }R)}$
${\displaystyle {\cfrac {\Gamma ,A[y/x]\vdash \Delta }{\Gamma ,\exists xA\vdash \Delta }}\quad ({\exists }L)}$   ${\displaystyle {\cfrac {\Gamma \vdash A[t/x],\Delta }{\Gamma \vdash \exists xA,\Delta }}\quad ({\exists }R)}$
Left structural rules Right structural rules
${\displaystyle {\cfrac {\Gamma \vdash \Delta }{\Gamma ,A\vdash \Delta }}\quad ({\mathit {WL}})}$ ${\displaystyle {\cfrac {\Gamma \vdash \Delta }{\Gamma \vdash A,\Delta }}\quad ({\mathit {WR}})}$
${\displaystyle {\cfrac {\Gamma ,A,A\vdash \Delta }{\Gamma ,A\vdash \Delta }}\quad ({\mathit {CL}})}$   ${\displaystyle {\cfrac {\Gamma \vdash A,A,\Delta }{\Gamma \vdash A,\Delta }}\quad ({\mathit {CR}})}$
${\displaystyle {\cfrac {\Gamma _{1},A,B,\Gamma _{2}\vdash \Delta }{\Gamma _{1},B,A,\Gamma _{2}\vdash \Delta }}\quad ({\mathit {PL}})}$   ${\displaystyle {\cfrac {\Gamma \vdash \Delta _{1},A,B,\Delta _{2}}{\Gamma \vdash \Delta _{1},B,A,\Delta _{2}}}\quad ({\mathit {PR}})}$
Restrictions: In the rules ${\displaystyle ({\forall }R)}$ and ${\displaystyle ({\exists }L)}$ , the variable ${\displaystyle y}$ must not occur free anywhere in the respective lower sequents.
An intuitive explanation
The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile ${\
displaystyle \vdash }$ . In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulas. The two exceptions to this general scheme are the axiom
of identity (I) and the rule of (Cut).
Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule ${\displaystyle ({\land }L_{1})}$ . It says that,
whenever one can prove that ${\displaystyle \Delta }$ can be concluded from some sequence of formulas that contain ${\displaystyle A}$ , then one can also conclude ${\displaystyle \Delta }$ from the
(stronger) assumption that ${\displaystyle A\land B}$ holds. Likewise, the rule ${\displaystyle ({\lnot }R)}$ states that, if ${\displaystyle \Gamma }$ and ${\displaystyle A}$ suffice to conclude ${\displaystyle \Delta }$ , then from ${\displaystyle \Gamma }$ alone one can either still conclude ${\displaystyle \Delta }$ or ${\displaystyle A}$ must be false, i.e. ${\displaystyle \lnot A}$ holds.
All the rules can be interpreted in this way.
For an intuition about the quantifier rules, consider the rule ${\displaystyle ({\forall }R)}$ . Of course concluding that ${\displaystyle \forall {x}A}$ holds just from the fact that ${\displaystyle
A[y/x]}$ is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulas), then one may assume
that ${\displaystyle A[y/x]}$ holds for any value of y. The other rules should then be pretty straightforward.
Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case
the rules can be read bottom-up; for example, ${\displaystyle ({\land }R)}$ says that, to prove that ${\displaystyle A\land B}$ follows from the assumptions ${\displaystyle \Gamma }$ and ${\
displaystyle \Sigma }$ , it suffices to prove that ${\displaystyle A}$ can be concluded from ${\displaystyle \Gamma }$ and ${\displaystyle B}$ can be concluded from ${\displaystyle \Sigma }$ ,
respectively. Note that, given some antecedent, it is not clear how this is to be split into ${\displaystyle \Gamma }$ and ${\displaystyle \Sigma }$ . However, there are only finitely many
possibilities to be checked since the antecedent by assumption is finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both
${\displaystyle A}$ and ${\displaystyle B}$ , one can construct a proof for ${\displaystyle A\land B}$ .
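The finitely-many-splits observation is easy to make concrete; the sketch below (names are mine) enumerates every way to distribute an antecedent between the two contexts:

```python
from itertools import product

def antecedent_splits(antecedent):
    """All ways to distribute an antecedent between the contexts
    Gamma and Sigma when applying a two-premise rule bottom-up.
    (With the permutation rules available, any such split is usable.)"""
    for mask in product((False, True), repeat=len(antecedent)):
        gamma = [f for f, m in zip(antecedent, mask) if not m]
        sigma = [f for f, m in zip(antecedent, mask) if m]
        yield gamma, sigma

# For n formulas there are exactly 2**n candidate splits to check.
print(sum(1 for _ in antecedent_splits(['A', 'B', 'C'])))  # 8
```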
When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula ${\displaystyle A}$ can be concluded
and this formula may also serve as a premise for concluding other statements, then the formula ${\displaystyle A}$ can be "cut out" and the respective derivations are joined. When constructing a
proof bottom-up, this creates the problem of guessing ${\displaystyle A}$ (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus
in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat
redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective
${\displaystyle \not \leftarrow }$ that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left–right symmetric.
Example derivations
Here is the derivation of "${\displaystyle \vdash A\lor \lnot A}$ ", known as the Law of excluded middle (tertium non datur in Latin).
${\displaystyle A\vdash A}$ by ${\displaystyle (I)}$
${\displaystyle \vdash \lnot A,A}$ by ${\displaystyle (\lnot R)}$
${\displaystyle \vdash A\lor \lnot A,A}$ by ${\displaystyle (\lor R_{2})}$
${\displaystyle \vdash A,A\lor \lnot A}$ by ${\displaystyle (PR)}$
${\displaystyle \vdash A\lor \lnot A,A\lor \lnot A}$ by ${\displaystyle (\lor R_{1})}$
${\displaystyle \vdash A\lor \lnot A}$ by ${\displaystyle (CR)}$
Next is the proof of a simple fact involving quantifiers. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable
cannot be used in substitution in the rules ${\displaystyle (\forall R)}$ and ${\displaystyle (\exists L)}$ .
${\displaystyle p(x,y)\vdash p(x,y)}$ by ${\displaystyle (I)}$
${\displaystyle \forall x\left(p(x,y)\right)\vdash p(x,y)}$ by ${\displaystyle (\forall L)}$
${\displaystyle \forall x\left(p(x,y)\right)\vdash \exists y\left(p(x,y)\right)}$ by ${\displaystyle (\exists R)}$
${\displaystyle \exists y\left(\forall x\left(p(x,y)\right)\right)\vdash \exists y\left(p(x,y)\right)}$ by ${\displaystyle (\exists L)}$
${\displaystyle \exists y\left(\forall x\left(p(x,y)\right)\right)\vdash \forall x\left(\exists y\left(p(x,y)\right)\right)}$ by ${\displaystyle (\forall R)}$
For something more interesting we shall prove ${\displaystyle \left(\left(A\rightarrow \left(B\lor C\right)\right)\rightarrow \left(\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\rightarrow \lnot A\right)\right)}$ . It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving.
Written as a finite sequence of sequents, each derived by the indicated rule from the earlier lines, the derivation is:
1. ${\displaystyle B\vdash B}$ by ${\displaystyle (I)}$
2. ${\displaystyle B\vdash B,C}$ by ${\displaystyle ({\mathit {WR}})}$ from 1
3. ${\displaystyle C\vdash C}$ by ${\displaystyle (I)}$
4. ${\displaystyle C\vdash B,C}$ by ${\displaystyle ({\mathit {WR}})}$ from 3
5. ${\displaystyle B\lor C\vdash B,C}$ by ${\displaystyle ({\lor }L)}$ from 2 and 4
6. ${\displaystyle B\lor C\vdash C,B}$ by ${\displaystyle ({\mathit {PR}})}$ from 5
7. ${\displaystyle B\lor C,\lnot C\vdash B}$ by ${\displaystyle ({\lnot }L)}$ from 6
8. ${\displaystyle \lnot A\vdash \lnot A}$ by ${\displaystyle (I)}$
9. ${\displaystyle \left(B\lor C\right),\lnot C,\left(B\rightarrow \lnot A\right)\vdash \lnot A}$ by ${\displaystyle ({\rightarrow }L)}$ from 7 and 8
10. ${\displaystyle \left(B\lor C\right),\lnot C,\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\vdash \lnot A}$ by ${\displaystyle ({\land }L_{1})}$ from 9
11. ${\displaystyle \left(B\lor C\right),\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right),\lnot C\vdash \lnot A}$ by ${\displaystyle ({\mathit {PL}})}$ from 10
12. ${\displaystyle \left(B\lor C\right),\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right),\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\vdash \lnot A}$ by ${\displaystyle ({\land }L_{2})}$ from 11
13. ${\displaystyle \left(B\lor C\right),\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\vdash \lnot A}$ by ${\displaystyle ({\mathit {CL}})}$ from 12
14. ${\displaystyle \left(\left(B\rightarrow \lnot A\right)\land \lnot C\right),\left(B\lor C\right)\vdash \lnot A}$ by ${\displaystyle ({\mathit {PL}})}$ from 13
15. ${\displaystyle A\vdash A}$ by ${\displaystyle (I)}$
16. ${\displaystyle \vdash \lnot A,A}$ by ${\displaystyle ({\lnot }R)}$ from 15
17. ${\displaystyle \vdash A,\lnot A}$ by ${\displaystyle ({\mathit {PR}})}$ from 16
18. ${\displaystyle \left(\left(B\rightarrow \lnot A\right)\land \lnot C\right),\left(A\rightarrow \left(B\lor C\right)\right)\vdash \lnot A,\lnot A}$ by ${\displaystyle ({\rightarrow }L)}$ from 17 and 14
19. ${\displaystyle \left(\left(B\rightarrow \lnot A\right)\land \lnot C\right),\left(A\rightarrow \left(B\lor C\right)\right)\vdash \lnot A}$ by ${\displaystyle ({\mathit {CR}})}$ from 18
20. ${\displaystyle \left(A\rightarrow \left(B\lor C\right)\right),\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\vdash \lnot A}$ by ${\displaystyle ({\mathit {PL}})}$ from 19
21. ${\displaystyle \left(A\rightarrow \left(B\lor C\right)\right)\vdash \left(\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\rightarrow \lnot A\right)}$ by ${\displaystyle ({\rightarrow }R)}$ from 20
22. ${\displaystyle \vdash \left(\left(A\rightarrow \left(B\lor C\right)\right)\rightarrow \left(\left(\left(B\rightarrow \lnot A\right)\land \lnot C\right)\rightarrow \lnot A\right)\right)}$ by ${\displaystyle ({\rightarrow }R)}$ from 21
These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile,
such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of
multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and
derivations outside the sequent calculus, whereas LK embeds it within the system itself.
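The multiset reading can be made concrete with a small proof-search sketch. For classical propositional logic, the decomposition rules below are all invertible, so naive backward search over multiset sequents is a decision procedure; no permutation (or, here, contraction) rule appears. The encoding and function names are illustrative and my own, not from the article:

```python
# Naive proof search for classical propositional sequents, reading both
# sides of a sequent as multisets (Python lists, order ignored), so no
# explicit permutation rule is needed.  Formulas are encoded as strings
# (atoms) or tuples: ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).

def provable(left, right):
    """Return True iff the sequent  left ⊢ right  is derivable."""
    left, right = list(left), list(right)
    # Axiom (I): some atom occurs on both sides.
    if any(isinstance(f, str) and f in right for f in left):
        return True
    # Left rules: decompose the first compound formula in the antecedent.
    for i, f in enumerate(left):
        if not isinstance(f, str):
            rest = left[:i] + left[i + 1:]
            if f[0] == 'not':
                return provable(rest, right + [f[1]])
            if f[0] == 'and':
                return provable(rest + [f[1], f[2]], right)
            if f[0] == 'or':   # (∨L): two premises sharing the context
                return (provable(rest + [f[1]], right) and
                        provable(rest + [f[2]], right))
            if f[0] == 'imp':  # (→L)
                return (provable(rest, right + [f[1]]) and
                        provable(rest + [f[2]], right))
    # Right rules: decompose the first compound formula in the succedent.
    for i, f in enumerate(right):
        if not isinstance(f, str):
            rest = right[:i] + right[i + 1:]
            if f[0] == 'not':
                return provable(left + [f[1]], rest)
            if f[0] == 'and':
                return (provable(left, rest + [f[1]]) and
                        provable(left, rest + [f[2]]))
            if f[0] == 'or':
                return provable(left, rest + [f[1], f[2]])
            if f[0] == 'imp':
                return provable(left + [f[1]], rest + [f[2]])
    return False  # only atoms remain and no axiom applies
```

Each recursive call removes one connective, so the search terminates; invertibility of the rules makes the search complete for this fragment.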
Relation to analytic tableaux
For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau.^[25]
Structural rules
The structural rules deserve some additional discussion.
Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels,
then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have
either wheels or wings).
Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, instead of sequences, one could also consider sets.
The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics.
Properties of the system LK
This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement ${\displaystyle A}$ follows semantically from a set of premises ${\displaystyle \Gamma }$ ${\displaystyle (\Gamma \vDash A)}$ if and only if the sequent ${\displaystyle \Gamma \vdash A}$ can be derived by the above rules.^[26]
In the sequent calculus, the rule of cut is admissible. This result is also referred to as Gentzen's Hauptsatz ("Main Theorem").^[2]^[3]
The above rules can be modified in various ways:
Minor structural alternatives
There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized without changing what sequents the system derives.
First of all, as mentioned above, the sequents can be viewed to consist of sets or multisets. In this case, the rules for permuting and (when using sets) contracting formulas are unnecessary.
The rule of weakening becomes admissible if the axiom (I) is changed to derive any sequent of the form ${\displaystyle \Gamma ,A\vdash A,\Delta }$ . Any weakening that appears in a derivation can
then be moved to the beginning of the proof. This may be a convenient change when constructing proofs bottom-up.
One may also change whether rules with more than one premise share the same context for each of those premises or split their contexts between them: For example, ${\displaystyle ({\lor }L)}$ may be
instead formulated as
${\displaystyle {\cfrac {\Gamma ,A\vdash \Delta \qquad \Sigma ,B\vdash \Pi }{\Gamma ,\Sigma ,A\lor B\vdash \Delta ,\Pi }}.}$
Contraction and weakening make this version of the rule interderivable with the version above, although in their absence, as in linear logic, these rules define different connectives.
One can introduce ${\displaystyle \bot }$ , the absurdity constant representing false, with the axiom:
${\displaystyle {\cfrac {}{\bot \vdash \quad }}}$
Or if, as described above, weakening is to be an admissible rule, then with the axiom:
${\displaystyle {\cfrac {}{\Gamma ,\bot \vdash \Delta }}}$
With ${\displaystyle \bot }$ , negation can be subsumed as a special case of implication, via the definition ${\displaystyle (\lnot A)\iff (A\to \bot )}$ .
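Under the definition ${\displaystyle \lnot A:\iff (A\to \bot )}$ , the negation rules become derivable. As a sketch (not a derivation from the original text), ${\displaystyle ({\lnot }R)}$ is recovered from right weakening (WR) followed by ${\displaystyle ({\to }R)}$ :

```latex
% Deriving (¬R) from (WR) and (→R), with ¬A defined as A → ⊥:
{\cfrac {{\cfrac {\Gamma ,A\vdash \Delta }{\Gamma ,A\vdash \bot ,\Delta }}\;(WR)}
        {\Gamma \vdash (A\to \bot ),\Delta }}\;({\to }R)
```

The conclusion ${\displaystyle \Gamma \vdash (A\to \bot ),\Delta }$ is exactly ${\displaystyle \Gamma \vdash \lnot A,\Delta }$ under the definition; ${\displaystyle ({\lnot }L)}$ can be recovered analogously from ${\displaystyle ({\to }L)}$ and the ${\displaystyle \bot }$ axiom.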
Substructural logics
Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK (i.e., they have fewer
theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer
science and artificial intelligence.
Intuitionistic sequent calculus: System LJ
Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic.^[27] To this end, one has to restrict to sequents with at most one formula on the
right-hand side,^[28] and modify the rules to maintain this invariant. For example, ${\displaystyle ({\lor }L)}$ is reformulated as follows (where C is an arbitrary formula):
${\displaystyle {\cfrac {\Gamma ,A\vdash C\qquad \Gamma ,B\vdash C}{\Gamma ,A\lor B\vdash C}}\quad ({\lor }L)}$
The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties.
In fact, the only rules in LK that need to be restricted to single-formula consequents are ${\displaystyle ({\to }R)}$ , ${\displaystyle ({\lnot }R)}$ (which can be seen as a special case of ${\displaystyle ({\to }R)}$ , as described above) and ${\displaystyle ({\forall }R)}$ . When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are derivable in LJ, while the rules ${\displaystyle ({\to }R)}$ and ${\displaystyle ({\forall }R)}$ become
${\displaystyle {\cfrac {\Gamma ,A\vdash B\lor C}{\Gamma \vdash (A\to B)\lor C}}}$
and (when ${\displaystyle y}$ does not occur free in the bottom sequent)
${\displaystyle {\cfrac {\Gamma \vdash A[y/x]\lor C}{\Gamma \vdash (\forall xA)\lor C}}.}$
These rules are not intuitionistically valid.
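To see why, here is an illustration (not in the original text): instantiating the first of these rules with ${\displaystyle B:=\bot }$ and ${\displaystyle C:=A}$ turns an intuitionistically derivable premise into the law of excluded middle:

```latex
% Instantiating  Γ, A ⊢ B ∨ C  /  Γ ⊢ (A → B) ∨ C
% with Γ empty, B := ⊥, C := A:
{\cfrac {A\vdash \bot \lor A}{\vdash (A\to \bot )\lor A}}
% i.e. ⊢ ¬A ∨ A, which is not intuitionistically provable,
% although the premise A ⊢ ⊥ ∨ A clearly is.
```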
Notes
1. ^ ^a ^b Gentzen 1934, Gentzen 1935.
2. ^ ^a ^b Curry 1977, pp. 208–213, gives a 5-page proof of the elimination theorem. See also pages 188, 250.
3. ^ ^a ^b Kleene 2009, pp. 453, gives a very brief proof of the cut-elimination theorem.
4. ^ Curry 1977, pp. 189–244, calls Gentzen systems LC systems. Curry's emphasis is more on theory than on practical logic proofs.
5. ^ Kleene 2009, pp. 440–516. This book is much more concerned with the theoretical, metamathematical implications of Gentzen-style sequent calculus than applications to practical logic proofs.
6. ^ Kleene 2002, pp. 283–312, 331–361, defines Gentzen systems and proves various theorems within these systems, including Gödel's completeness theorem and Gentzen's theorem.
7. ^ Smullyan 1995, pp. 101–127, gives a brief theoretical presentation of Gentzen systems. He uses the tableau proof layout style.
8. ^ Curry 1977, pp. 184–244, compares natural deduction systems, denoted LA, and Gentzen systems, denoted LC. Curry's emphasis is more theoretical than practical.
9. ^ Suppes 1999, pp. 25–150, is an introductory presentation of practical natural deduction of this kind. This became the basis of System L.
10. ^ Lemmon 1965 is an elementary introduction to practical natural deduction based on the convenient abbreviated proof layout style System L based on Suppes 1999, pp. 25–150.
11. ^ Here, "whenever" is used as an informal abbreviation for "for every assignment of values to the free variables in the judgment".
12. ^ Shankar, Natarajan; Owre, Sam; Rushby, John M.; Stringer-Calvert, David W. J. (2001-11-01). "PVS Prover Guide" (PDF). User guide. SRI International. Retrieved 2015-05-29.
13. ^ For explanations of the disjunctive semantics for the right side of sequents, see Curry 1977, pp. 189–190, Kleene 2002, pp. 290, 297, Kleene 2009, p. 441, Hilbert & Bernays 1970, p. 385,
Smullyan 1995, pp. 104–105 and Gentzen 1934, p. 180.
14. ^ Buss 1998, p. 10
15. ^ Gentzen 1934, p. 188. "Der Kalkül NJ hat manche formale Unschönheiten."
16. ^ Gentzen 1934, p. 191. "In dem klassischen Kalkül NK nahm der Satz vom ausgeschlossenen Dritten eine Sonderstellung unter den Schlußweisen ein [...], indem er sich der Einführungs- und
Beseitigungssystematik nicht einfügte. Bei dem im folgenden anzugebenden logistischen klassischen Kalkül LK wird diese Sonderstellung aufgehoben."
17. ^ Gentzen 1934, p. 191. "Die damit erreichte Symmetrie erweist sich als für die klassische Logik angemessener."
18. ^ Gentzen 1934, p. 191. "Hiermit haben wir einige Gesichtspunkte zur Begründung der Aufstellung der folgenden Kalküle angegeben. Im wesentlichen ist ihre Form jedoch durch die Rücksicht auf den
nachher zu beweisenden 'Hauptsatz' bestimmt und kann daher vorläufig nicht näher begründet werden."
19. ^ Kleene 2002, p. 441.
20. ^ ^a ^b ^c Applied Logic, Univ. of Cornell: Lecture 9. Last Retrieved: 2016-06-25
21. ^ "Remember, the way that you prove an implication is by assuming the hypothesis."—Philip Wadler, on 2 November 2015, in his Keynote: "Propositions as Types". Minute 14:36 /55:28 of Code Mesh
video clip
22. ^ Tait WW (2010). "Gentzen's original consistency proof and the Bar Theorem" (PDF). In Kahle R, Rathjen M (eds.). Gentzen's Centenary: The Quest for Consistency. New York: Springer. pp. 213–228.
23. ^ Jan von Plato, Elements of Logical Reasoning, Cambridge University Press, 2014, p. 32.
24. ^ Andrzej Indrzejczak, An Introduction to the Theory and Applications of Propositional Sequent Calculi (2021, chapter "Gentzen's Sequent Calculus LK"). Accessed 3 August 2022.
25. ^ Smullyan 1995, p. 107
26. ^ Kleene 2002, p. 336, wrote in 1967 that "it was a major logical discovery by Gentzen 1934–5 that, when there is any (purely logical) proof of a proposition, there is a direct proof. The
implications of this discovery are in theoretical logical investigations, rather than in building collections of proved formulas."
27. ^ Gentzen 1934, p. 194, wrote: "Der Unterschied zwischen intuitionistischer und klassischer Logik ist bei den Kalkülen LJ und LK äußerlich ganz anderer Art als bei NJ und NK. Dort bestand er in
Weglassung bzw. Hinzunahme des Satzes vom ausgeschlossenen Dritten, während er hier durch die Sukzedensbedingung ausgedrückt wird." English translation: "The difference between intuitionistic and
classical logic is in the case of the calculi LJ and LK of an extremely, totally different kind to the case of NJ and NK. In the latter case, it consisted of the removal or addition respectively
of the excluded middle rule, whereas in the former case, it is expressed through the succedent conditions."
28. ^ M. Tiomkin, "Proving unprovability", pp. 22–26. In Proceedings of the Third Annual Symposium on Logic in Computer Science, July 5–8, 1988 (1988), IEEE. ISBN 0-8186-0853-6.
External links
• Proof Theory (Sequent Calculi) in the Stanford Encyclopedia of Philosophy
• "Sequent calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• A Brief Diversion: Sequent Calculus
• Interactive tutorial of the Sequent Calculus
How is Keq related to molality? | Socratic
1 Answer
The concentrations of reactants and products contained in equilibrium constants (Keq) may be expressed in terms of either molality or molarity, depending on the reference state used to determine the
value of Keq.
The value of Keq can be calculated from the equation
${K}_{e q} = \exp \left(\frac{- \Delta {G}^{0}}{R T}\right)$
where $\Delta {G}^{0}$ is the standard change in Gibbs Free Energy of the reaction under standard conditions. Standard conditions can be chosen as molarity (with a reference state of 1M) or molality
(with a reference state of 1m), and the numerical value of Keq will be different depending on which reference state is chosen.
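As a quick illustration of the formula (the numerical value of ΔG° below is assumed for the example, not taken from the answer):

```python
import math

# Keq = exp(-ΔG° / (R·T)), with R in J/(mol·K) and ΔG° in J/mol.
R = 8.314        # molar gas constant, J/(mol·K)
T = 298.15       # room temperature, K
dG0 = -20_000.0  # assumed standard Gibbs energy change, J/mol

Keq = math.exp(-dG0 / (R * T))
# A negative ΔG° gives Keq > 1, i.e. products are favored at equilibrium;
# here Keq is on the order of 3e3.
```

Note that changing the reference state (1 M vs. 1 m) changes ΔG°, and hence the numerical value of Keq, which is the point made above.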
For dilute aqueous solutions at room temperature, the difference between molarity and molality is very slight, so in this case there is no practical difference between the values of Keq for these two choices of reference state. But for the most accurate work, it is necessary to find out which reference state was used in the calculation of Keq, and to use the corresponding units.
For nonaqueous solutions, the difference between the values of Keq for the two reference states can be significant, so in this case it really is necessary to know how Keq was determined.